
RE: network controls are necessary



Sorry for the delayed response.

> > Erik Nordmark wrote:
> > For instance, would it make sense to push
> > all of the rules into the hosts
> 
> Be careful, Craig might have a heart attack just at the thought of doing
> it. There would be terrible security consequences. I don't see how this
> could be anything else than pushing subnet-specific rules to the hosts
> that belong to the subnet.
> 
> > or is the set of rules so large that a host-based
> > solution would need to cache subsets of the rules
> > on demand?
> 
> It's not a matter of size but complexity.
> 
> First, the hosts do not have the intelligence that routers have. For
> example, how long is it going to take before all hosts implement the
> equivalent of a route-map? Are you going to enable an IP phone to
> process BGP communities? Pushing all the rules to hosts would imply
> either:

I did not suggest that hosts should run BGP; that would be completely silly.
But IPv6 hosts are supposed to have a source address selection policy table
according to a draft in the IPv6 WG (soon to be an RFC).
If the exit routing policy can be expressed with a few rules, it would
essentially amount to additional rules in that table (plus a protocol by
which the hosts can learn those rules).
Hence my questions on the list (so far unanswered) about reasonable sizes
for the exit router selection policy/routes.
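
To give a feel for what "a few additional rules in that table" could look
like, here is a rough Python sketch of a host picking a source address
against a small policy table. The prefixes, labels, precedences and function
names are illustrative assumptions of mine, not values taken from the draft.

import ipaddress

# Hypothetical host-side policy table in the spirit of the default address
# selection draft: each entry maps a prefix to a precedence and a label;
# source/destination pairs with matching labels are preferred. The entries
# below are made-up examples.
POLICY_TABLE = [
    # (prefix,                              precedence, label)
    (ipaddress.ip_network("::1/128"),             50, 0),
    (ipaddress.ip_network("2001:db8:a::/48"),     45, 1),  # "prefer exit A"
    (ipaddress.ip_network("2001:db8:b::/48"),     40, 2),  # "prefer exit B"
    (ipaddress.ip_network("::/0"),                40, 1),
]

def lookup(addr):
    """Longest-prefix match of addr against the policy table."""
    addr = ipaddress.ip_address(addr)
    best = None
    for prefix, precedence, label in POLICY_TABLE:
        if addr in prefix:
            if best is None or prefix.prefixlen > best[0].prefixlen:
                best = (prefix, precedence, label)
    return best

def select_source(candidates, destination):
    """Prefer the candidate source whose label matches the destination's
    label, falling back to the highest precedence."""
    _, _, dst_label = lookup(destination)
    def rank(src):
        _, precedence, src_label = lookup(src)
        return (src_label == dst_label, precedence)
    return max(candidates, key=rank)

print(select_source(["2001:db8:a::10", "2001:db8:b::10"], "2001:db8:a::1"))

The point is only that the lookup is a simple table walk, nothing like
running a routing protocol on the host.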
 
> a) The hosts to have the same routing capabilities as routers currently
> have.

Sorry, I don't follow the logic. The hosts don't route. Hosts with multiple
source locators that talk to a node with multiple destination locators just
do a selection, e.g. when creating a new connection.
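
As a rough illustration of that selection step (purely a sketch;
order_pairs and try_connect are placeholders I made up, not part of any
specification):

from itertools import product

# A host with several source locators opening a connection to a peer with
# several destination locators just tries (source, destination) pairs in
# some preference order -- no routing protocol involved.
def order_pairs(src_locators, dst_locators):
    # Placeholder preference: keep the given order; a real host would
    # apply rules such as the policy table sketched above.
    return list(product(src_locators, dst_locators))

def connect(src_locators, dst_locators, try_connect):
    """try_connect(src, dst) is assumed to return True on success."""
    for src, dst in order_pairs(src_locators, dst_locators):
        if try_connect(src, dst):
            return (src, dst)
    raise OSError("no working locator pair")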

> b) The rules have to be modified to accommodate dumber hosts.
> 
> Second, part of the configuration of a router is the router's location,
> which is implied. Pushing policies to the hosts would require a coupling
> between the topological database and the routing policies that is beyond
> what most large companies typically have (not to mention that it is
> questionable that such a monster could be maintained).
> 
> All in all, the sad truth is that multiple addresses per host do
> not fly in the large organization. What is not flying either is policy
> that is not at the edge for egress traffic (stateful firewall issues,
> etc). We have had multiple posts about this on mh.

I'm not talking about today; I'm talking about a future solution that
separates identifiers and locators end-to-end, e.g. using Pekka Nikander's
proposed HIP approach. In that case the administrative cost of the
additional locators would be close to zero, and you could get reasonable
security.
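
For illustration only, the identifier/locator split boils down to keeping a
mapping like the following per peer. The class and method names are invented
for this sketch; the actual HIP machinery additionally binds the identifier
to a public key and authenticates locator changes.

# Upper layers bind to a stable host identifier; a per-peer mapping
# translates that identifier to whichever locator is currently usable.
class LocatorMap:
    def __init__(self):
        self._locators = {}  # host identifier -> list of locators

    def add(self, host_id, locators):
        self._locators[host_id] = list(locators)

    def rehome(self, host_id, new_locators):
        # Renumbering or failover: the locators change, the identifier
        # does not, so associations keyed on host_id survive.
        self._locators[host_id] = list(new_locators)

    def current_locator(self, host_id):
        return self._locators[host_id][0]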


> There is nothing fundamentally wrong in having multihoming schemes that
> use several addresses, but this should be virtualized at the edge of the
> network, which is basically what MHAP does.

I think the edge-based approach is also important to explore in parallel
with a host-based scheme.
Getting a handle on the security aspects of the edge solutions (starting
with a threat analysis) would be quite useful IMHO.

  Erik