Re: A tunneling proposal



On Mon, 16 Jul 2001, xxvaf wrote:

> > Is there a known limit to the number of routes in the global routing
> > table? I know bad things started to happen at about 4000 and 10000, but
> > obviously those problems have been solved. Routers run just fine with 100,000
> > routes at present, and unless I'm mistaken, the most common types of router
> > have CPUs and memory that are well below what most of us have in our desk top
> > PCs.

> The veracity of "Routers run just fine with 100,000 routes..." depends on
> how you define "fine".

A few weeks ago, someone posted a survey on the list. Nobody answered "we
can't handle the huge number of routes" on it.

> Modern routers don't fall over with 100,000 routes in them. But initial table
> load and BGP convergence times when paths change are both a lot longer than
> many would like.

Ok, but is that enough reason to throw out the current way of multihoming?

> > On top of that, each route takes a LOT of memory: 240 bytes for the routing 
> > table and for each peer route in the BGP table in a Cisco.

> The size of individual routing state entries in modern routers has been the
> subject of a great deal of optimization over the years. Don't expect to see
> it improved by an order of magnitude or even by a factor of two. 

Really? It seems to me that this has not changed since I first started
running BGP in 1995.

Optimizations have been done under the current CIDR paradigm, where every
route is unique and carries a lot of additional information (such as
communities) with it. If we optimize instead for very many nearly identical
routes, the results could be very different.

> Memory size is not the principal issue; memory speed and routing table update
> bandwidth are.

You have a point on the table updates. A solution for that could be to
immediately forward only information about routes going down, since this is
presumably an operation that can be made inexpensive, while rate-limiting new
route announcements.

I don't believe memory speed is a reason to keep the routing table at its
current size: doing a binary search on 10,000,000 rather than 100,000 items
only takes 24 steps rather than 17. If this is a problem for 10,000,000
routes, 100,000 routes is too much as well.
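The step counts above follow from the worst case of binary search being ceil(log2(n)) comparisons over n entries; a quick check:

```python
# Worst-case comparisons for a binary search over n sorted entries.
import math


def search_steps(n):
    return math.ceil(math.log2(n))


print(search_steps(100_000))     # 17
print(search_steps(10_000_000))  # 24
```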

> > I think it's worth it to look at this, because with CIDR it is pretty much
> > impossible to efficiently route traffic: many locations are hidden behind
> > aggregates.

> If global routing state size resumes a hyper-exponential growth pattern that
> exceeds Moore's law, the problem will get worse faster than CPUs increase
> in speed.

Do we have any indication that the global routing table is growing faster
than the Moore's Law rate?

> Eliminating CIDR will guarantee hyper-exponential growth - for all
> the talk about how CIDR has "failed" in that growth is continuing, the growth
> rate would be far, far worse without it - aggregation has successfully
> "hidden" at least an order of magnitude of growth.

I'm not saying "CIDR is dead, everybody stop aggregating". But CIDR has
reached the limits of its capabilities. Just look at the routing table vs the
number of AS numbers: for every assigned ASN there are five routes. This has
very little to do with the evil multihomers, since they have little reason to
announce more than a single route.
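The five-to-one ratio above is simple arithmetic: with roughly 100,000 routes (the figure quoted earlier in this thread), five routes per assigned ASN implies on the order of 20,000 assigned AS numbers. The ASN count here is an assumed round number for illustration, not a measured one:

```python
# Routes per assigned ASN, using the thread's ~100,000-route figure and an
# assumed ~20,000 assigned AS numbers.
routes = 100_000
assumed_asns = 20_000
print(routes / assumed_asns)  # 5.0
```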

> Look back at old archives of the IETF and other lists to read some of Noel
> Chiappa's and others' writings on the mathematics of network topology and
> addressing. If the "addresses" used by routing don't follow the underlying
> network topology, excessive state is introduced. For multihoming to really
> work, it needs to use topologically-significant addressing. That suggests
> that the "multi" in "multihoming" also implies multiple addresses, with
> something like SCTP to handle them intelligently.

1. Forget SCTP: if we want this, we should build it into TCP rather than
   switch transport layers to a protocol that has just one desirable feature,
   throwing away 20 years of experience with the most successful protocol in
   history.

2. The current way of multihoming works much better for the multihomed
   network; what incentive do they have to go with multiple addresses?

Iljitsch van Beijnum