
Re: Regionally aggregatable address space for multihoming



On Mon, 11 Jun 2001, Brian E Carpenter wrote:

> This is my problem with this proposal, as it has been with every geo addressing
> proposal over the last 5 years.

> Because of this fact (you have to increase the area covered until it is
> big enough to be flat-routable), I think that every geo area would
> reproduce exactly the problem of entropy and default-route-table bloat
> that we have in today's tiny little Internet.

When the number of routes per area grows, the areas themselves will likely
shrink, if only for this very reason. I'm not saying geographic aggregation
will magically make all the problems go away. However, it will buy us two or
three orders of magnitude, which is nothing to sneeze at.

> So I have little confidence
> that this solution is scaleable to the 10 billion node Internet.

The number of nodes is not very relevant: you can fit 10 billion many, many
times inside the /48 a single customer would get (a /48 leaves 80 bits, or
about 1.2 * 10^24 addresses). All that matters is the number of routes and the
resources needed to transmit, process and store each route. In IPv4,
aggregation isn't used to its full potential, for historical and address
depletion reasons (besides some ignorance) and because of multihoming. In IPv6
it will be much simpler to aggregate aggressively, because there is little
need to give ISPs only small blocks of address space and have them come back
for more as they grow.

So the main threat to the routing table is multihomed networks. It would be
nice to know how many people or businesses will be multihomed in the future,
but there are no real figures. We only know that the number is still
relatively small but rising fast. I think we can get away with assuming an
upper limit on multihoming of 1% of the population, which is something like
10% of all businesses. Obviously anything that does better than 1% would be
great, but if we can achieve 1% we'll have bought a lot of time to think
about the other 99%.

With a world population of 10 billion looming, a 1% multihoming rate would
mean 100 million routes to multihomed networks. This is too much for any
single router. However, if we can break up the world into 100 regions, this
comes down to only 1 (or rather 2) million per region, which is still a lot
but manageable, if not now then at least in the near future.

Combined with more efficient ways to store and transmit routes, this should
work for a long time to come.

The current routing paradigm is to store a lot of information about each
individual route, since aggregation makes sure there aren't very many similar
routes. But in a few years (decades), when 1% of New York City is multihomed,
this means more than 200k routes (to 100k destinations) within a fairly small
geographic region. If this market is serviced by 20 ISPs, this means 10,000
routes per ISP, each taking more than 100 bytes of memory, even though these
routes are likely to be nearly identical.
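
For concreteness, the same New York figures in conventional per-route terms
(the numbers are straight from the paragraph above):

  # Baseline: conventional per-route storage in the New York example.
  destinations = 100000                 # multihomed networks in the region
  routes = 2 * destinations             # ~200k routes
  isps = 20
  routes_per_isp = routes // isps       # 10,000 routes per ISP
  bytes_per_route = 100                 # conservative per-route memory cost
  print(routes_per_isp * bytes_per_route)
  # -> 1000000, i.e. on the order of a megabyte per ISP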

If we assign all these addresses out of a single block of 131072 /48s and each
ISP then attaches a bitmap to the aggregated /31 to announce which of those
/48s are connected to it, it only takes somewhat over 100 bytes for the /31
plus a 16 kilobyte bitmap, for a total of ~16500 bytes instead of more than a
megabyte. Another ISP that peers with all of them would only get some 330
kilobytes of routing information instead of 20 MB.
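
A minimal sketch of what such a bitmap announcement could look like (the
encoding here is made up purely for illustration; only the sizes matter):

  # One aggregate /31 covers 2**17 = 131072 /48s; one bit per /48
  # says whether that customer is connected to this ISP or not.
  PREFIXES_PER_BLOCK = 2**17
  bitmap = bytearray(PREFIXES_PER_BLOCK // 8)     # 16384 bytes

  def mark_connected(index):
      # set the bit for /48 number 'index' within the block
      bitmap[index // 8] |= 1 << (index % 8)

  for i in (0, 5, 131071):                        # a few connected customers
      mark_connected(i)

  announcement = 100 + len(bitmap)                # ~100 bytes of /31 route + bitmap
  per_route = 10000 * 100                         # 10,000 conventional routes
  print(announcement, per_route)                  # ~16484 vs ~1000000 bytes per ISP
  print(20 * announcement, 20 * per_route)        # ~330 KB vs 20 MB for a full peer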

My idea for geographical aggregation can be deployed in the current IPv4/BGP4
Internet without breaking anything. The only thing that has to be done is to
divvy up the world into regions and assign address space to multihomed
networks within the same region from an aggregatable block. Networks can then,
if and when they want, start to filter out routes from blocks far away without
the risk of suboptimal routing to multihomed networks closer by, with the
potential benefit of new route storage paradigms in the future.

Iljitsch van Beijnum