
RE: The state of IPv6 multihoming development



Greg Maxwell wrote:
> ...
> Continental boundaries are the only ones that make sense.
> Any tighter and there is too much movement and interconnection.
> 
> Continental boundaries also make sense because they could 
> be aligned with the addressing authorities.
> 
> But they don't buy us much. At best they would reduce the 
> routing table by a factor of 4 or so, probably less.
> 
> Here's the thing: if a simple linear reduction like that will 
> suffice, then let's just flood the DFZ and let hardware 
> improvements take care of it. If we do not believe it will 
> suffice, then we must have a solution that does not increase 
> the routing tables for networks that do not have a direct 
> relationship with the multihomed network.
> 
> Let's work on answering these questions: Is it appropriate for 
> network operators to have to carry the burden created by 
> other people's multihoming? And can we expect routers to handle 
> the future growth of the IPv6 Internet without aggregation?
> 
> I thought that was already answered by the heavily 
> aggregation-centric approach of IPv6. Perhaps not. If we 
> decide that yes, they can, then it's simple: let's just declare 
> that the IPv4 way is the way to go. If not, then we should not 
> waste time talking about aggregation-breaking schemes.

The current provider-aggregate approach is very focused on the needs of
the service provider at the expense of the needs of the enterprise.
While this does create a very scalable system, it also creates one that
doesn't solve the real problems (more correctly, it only solves a subset
of them). I will agree that to a large degree the only thing that makes
sense today is continental aggregation, but I don't think that gets us
very far. It is a great first step, so we should take it, but we need a
plan that goes further. Those are the kinds of issues I was talking
about in my USE draft.

> 
> My thought on scaling: I don't think that it's unreasonable 
> to say that straightforwardly enhanced versions of today's 
> routing hardware/software could handle a million routes. But 
> what happens when we want links that move many tens or hundreds 
> of gigabits a second... and we're forced into using optical 
> packet-level switching?
> 
> In the stuff I offered up using pure transport-level 
> multihoming, the number of routes a network had would be a 
> purely linear function of the sum of 
> (customers + peers + transit providers). An unpeered lower-tier 
> network's routing table would be close to 1:1 with the number 
> of customers it has. I can easily see this making or 
> breaking the possibility of 'optical routing' until there are 
> major advances in photonic computing devices.
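
To make that scaling claim concrete, here is a toy back-of-the-envelope
program in C comparing a per-network table under that model against a
flat million-route DFZ. The customer, peer, and transit counts are made
up for illustration, not measurements from any real network:

    #include <stdio.h>

    /* Toy comparison of routing table sizes: the transport-level
     * multihoming model above (table size ~ customers + peers +
     * transit providers) versus a flat DFZ where every multihomed
     * site injects a route.  All counts are hypothetical. */
    int main(void)
    {
        long customers = 5000;      /* hypothetical lower-tier ISP */
        long peers     = 20;
        long transits  = 3;

        long local_table = customers + peers + transits;
        long flat_dfz    = 1000000; /* the "million routes" figure above */

        printf("transport-multihoming table: %ld routes\n", local_table);
        printf("flat DFZ table:              %ld routes\n", flat_dfz);
        printf("reduction factor:            %.0fx\n",
               (double)flat_dfz / (double)local_table);
        return 0;
    }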

Changing the model so that the transport layer deals with multihoming is
a very long-term effort. It is not that it is a particularly difficult
task, but that human nature comes into play here, so it simply takes a
substantial amount of time. When SCTP is well publicized, taught in every
university course as the proper way to create connections, and those who
have been taught to do so are finally in charge of software development
projects, then we might start to see a shift. Basically, you have to
promote/retire out the entrenched development community that is under
pressure to deliver and will always fall back on what it was taught and
knows works.
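
For reference, here is a minimal sketch of what that transport-level
multihoming looks like at the sockets layer, assuming the Linux
lksctp-tools API (compile with -lsctp). The 2001:db8:: addresses are
documentation placeholders standing in for one address from each of two
hypothetical upstream providers:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <netinet/sctp.h>

    int main(void)
    {
        int sd = socket(AF_INET6, SOCK_STREAM, IPPROTO_SCTP);
        if (sd < 0) { perror("socket"); exit(1); }

        /* One local address per provider; SCTP carries both in the
         * association and can fail over between the two paths. */
        struct sockaddr_in6 addrs[2];
        memset(addrs, 0, sizeof(addrs));
        addrs[0].sin6_family = AF_INET6;
        addrs[0].sin6_port   = htons(5000);
        inet_pton(AF_INET6, "2001:db8:a::1", &addrs[0].sin6_addr);
        addrs[1].sin6_family = AF_INET6;
        addrs[1].sin6_port   = htons(5000);
        inet_pton(AF_INET6, "2001:db8:b::1", &addrs[1].sin6_addr);

        if (sctp_bindx(sd, (struct sockaddr *)addrs, 2,
                       SCTP_BINDX_ADD_ADDR) < 0) {
            perror("sctp_bindx");
            exit(1);
        }
        if (listen(sd, 8) < 0) { perror("listen"); exit(1); }

        puts("SCTP endpoint reachable via both providers");
        return 0;
    }

The point of the model is visible even in this sketch: the failover
state lives entirely in the endpoints, so no extra routes are injected
into anyone else's table.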

Tony