
RE: The state of IPv6 multihoming development



On Wed, 23 Oct 2002, Tony Hain wrote:

> Iljitsch van Beijnum wrote:
> > ... If we give people the tools, they can
> > decide on how to use them. That's why we need geographically
> > aggregatable PI address space.
> >
>
> More correctly, that is why we need an aggregatable PI address space.
> The fact that people generally put up abstraction barriers that align
> with geography is orthogonal and an artifact of human nature. Since they
> will tend to want to do that anyway, we can leverage that with geo-PI
> allocations, as long as they fit the fundamental requirement of being
> aggregatable.

Continental boundaries are the only ones that make sense. Any tighter
and there is too much movement and interconnection.

Continental boundaries also make sense because they could be aligned
with the addressing authorities.

But they don't buy us much. At best they would reduce the routing
table by a factor of 4 or so, probably less.
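
Here's the back-of-envelope behind that factor (a minimal sketch; the
even split of routes across regions and the route counts are my
assumptions, not measurements):

# How much does continental aggregation buy? Assume routes split
# evenly across regions, and each region's prefixes collapse into a
# single aggregate when viewed from outside that region.

def table_size(total_routes, regions):
    # A router carries full detail for its own region plus one
    # aggregate per other region.
    return total_routes // regions + (regions - 1)

total = 200000  # hypothetical global route count
for regions in (4, 5, 6):
    size = table_size(total, regions)
    print(regions, size, round(total / size, 1))
    # 4 regions -> ~50000 routes, i.e. roughly a factor-of-4 cut

Whatever the exact region count, it's a fixed linear divisor, which is
the point: the table still grows in step with the global route count.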

Here's the thing: if a simple linear reduction like that will suffice,
then let's just flood the DFZ and let hardware improvements take care of
it. If we do not believe it will suffice, then we must have a solution
that does not increase the routing tables of networks that have no
direct relationship with the multihomed network.

Let's work on answering these questions: Is it appropriate for network
operators to have to carry the burden created by other people's
multihoming? And can we expect routers to handle the future growth of
the IPv6 internet without aggregation?

I thought that was already answered by the heavily aggregation-centric
approach of IPv6. Perhaps not. If we decide yes, they can, then it's
simple: let's just declare that the IPv4 way is the way to go. If not,
then we should not waste time talking about aggregation-breaking
schemes.

My thought on scaling: I don't think it's unreasonable to say that
simply enhanced versions of today's routing hardware/software could
handle a million routes. But what happens when we want links that move
many tens or hundreds of gigabits a second, and we're forced into using
optical packet-level switching?
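
For a sense of what a million routes costs, a rough sketch (the
per-entry size is an assumption for illustration, not a figure from any
real router):

# Ballpark FIB memory for a million-route table. Real hardware
# varies widely (TCAM vs. trie-in-DRAM, etc.); 64 bytes/entry is
# my assumption covering prefix, next hop, and bookkeeping.

routes = 1000000
bytes_per_entry = 64  # assumed
fib_megabytes = routes * bytes_per_entry / (1024.0 * 1024.0)
print(round(fib_megabytes), "MB of fast-path lookup state")  # ~61 MB

Tens of megabytes of lookup state is manageable in electronic memory;
holding and searching anything like that at packet rates in the optical
domain is another matter entirely.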

In the scheme I offered up using pure transport-level multihoming, the
number of routes a network carries would be a purely linear function of
the sum of (customers + peers + transit providers). An unpeered
lower-tier network's routing table would be close to 1:1 with the number
of customers it has. I can easily see this making or breaking the
possibility of 'optical routing' until there are major advances in
photonic computing devices.
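
A minimal sketch of that model (the function and the example counts are
hypothetical, just restating the relationship above):

# Under pure transport-level multihoming, a network's table tracks
# only its direct relationships -- other people's multihoming never
# leaks in.

def route_count(customers, peers, transits):
    # Purely linear in direct connectivity.
    return customers + peers + transits

# An unpeered lower-tier network: close to 1:1 with its customers.
print(route_count(customers=500, peers=0, transits=2))     # 502

# A large transit network still scales with its own connectivity,
# not with the global count of multihomed sites.
print(route_count(customers=20000, peers=40, transits=0))  # 20040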