RE: The state of IPv6 multihoming development
On Wed, 23 Oct 2002, Greg Maxwell wrote:
> We're still in the game of having a cost set onto the outsiders.. sure,
> now we've flattened the nonlinearity of price (hardware to contain a
> million routes is more complex than a group of 100k-route routers with
> similar total routing ability).
> Is this really about the limits of top end scaling?
> I always considered it to be about making routing more O(1)-ish with
> respect to multihomed networks you don't have a business relationship
> with, rather than O(n*mumble)..
O(1)? :-)
Well, maybe there is such a beast. When I studied software engineering I
was told no sorting algorithm could do better than O(n*log(n)). But that
lower bound only applies to comparison-based sorts: some digging produced
radix sort, which was used in the good old days of the Hollerith machines
and, since it buckets keys digit by digit instead of comparing them,
scales O(n) for fixed-width keys. So just because people think it can't
be done doesn't mean it can't be done.
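
For the curious, here's a minimal LSD radix sort sketch in Python (the
key width and radix are arbitrary choices for illustration, nothing
routing-specific):

    def radix_sort(keys, key_bits=32, radix_bits=8):
        # LSD radix sort for non-negative fixed-width integers. Each pass
        # stably buckets the keys on one digit, so the total work is
        # O(n * key_bits/radix_bits) -- O(n) for fixed key width, and no
        # key is ever compared against another.
        mask = (1 << radix_bits) - 1
        for shift in range(0, key_bits, radix_bits):
            buckets = [[] for _ in range(1 << radix_bits)]
            for k in keys:
                buckets[(k >> shift) & mask].append(k)
            keys = [k for bucket in buckets for k in bucket]
        return keys

    print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
    # -> [2, 24, 45, 66, 75, 90, 170, 802]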
> The multinational geospatial distribution is a non-issue imho.. there
> is too much clustering.. In places where multihoming load will matter..
Why is that a bad thing?
> that's where the majority of the networks are.... I think it's
> reasonable to say that if a router can handle X routes (i.e. all of the
> US, all of Asia, or all of Europe), asking it to handle 4X routes is
> not a big deal.
Agree. That's why flat routing within a continent isn't worth the
hassle, while flat routing within a country/state/province or a metro
area (+ aggregating away everything on the outside) would be.
> I don't believe that it is reasonable to ask providers to zone their
> networks any smaller than continents.
Is it reasonable to ask people to not multihome? Is it reasonable to ask
providers to buy bigger routers because the routing table explodes? If
we give people the tools, they can decide on how to use them. That's why
we need geographically aggregatable PI address space.
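
As a toy sketch of what that buys (the /36 "metro aggregate" and the /48
site prefixes below are invented documentation addresses, not a real
allocation scheme): inside the metro area each multihomed site is a
separate route, outside it everything collapses into one.

    import ipaddress

    # Hypothetical geographically assigned PI space: one covering
    # aggregate per metro area, one /48 per multihomed site inside it.
    metro = ipaddress.ip_network("2001:db8:1000::/36")
    sites = [ipaddress.ip_network(p) for p in (
        "2001:db8:1000::/48", "2001:db8:1001::/48", "2001:db8:1002::/48")]

    # Routers inside the region carry one route per site (flat routing);
    # routers outside carry just the single covering aggregate.
    assert all(s.subnet_of(metro) for s in sites)
    print(f"inside: {len(sites)} routes, outside: 1 route ({metro})")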
On the other hand, we could just throw in the towel, accept a large DFZ
and start optimizing for that. 1 KB per prefix is way too much. If we
can get this down to 10 bytes on average (in theory it could be one bit
per path: reachable or unreachable) and double our memory and slightly
more than double our CPU power every two years, we're at 2^4 * (1024 /
10) * 114k = ~200M routes in 2010's equivalent of a 7200. Not too
shabby.
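
For anyone who wants to check that arithmetic (using only the numbers
above: ~114k routes today, 1 KB shrunk to 10 bytes per prefix, memory
doubling every two years through 2010):

    # Back-of-the-envelope check of the ~200M figure.
    doublings = (2010 - 2002) // 2       # 4 memory doublings -> 2^4 = 16x
    compression = 1024 / 10              # ~102x less memory per prefix
    routes_today = 114_000               # roughly today's DFZ
    print(int(2**doublings * compression * routes_today))  # 186777600, call it 200M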