
Re: [RRG] Consensus check: mapping granularity



In an e-mail of 21.03.2008 01:54:56 Western European Standard Time, brian.e.carpenter@gmail.com writes:
> quote from draft-carpenter-idloc-map-cons-01.txt:
> 4.6.  Scale

>    We want no arbitrary scaling limits.  However, proposed scaling
>    targets are 10 to 100 billion Stacks (which scales the Identifier
>    Namespace), and 10 million sites.  Although the latter does not
>    directly scale the Internet's Locator Namespace, it indicates the
>    worst-case granularity of the routing table for that Locator
>    Namespace.  If we don't do better than random allocation of address
>    blocks to sites, we will end up with 10 million routing table
>    entries.

These weren't consensus numbers. Since then, Tony has convinced me
that we need to allow mapping down to individual hosts as well
as aggregates (so my answer to his question is Yes), but we probably
need to increase the likely size of the map from 10 million
to 100 million. I think these changes are compatible, but note my
belief that aggregates as small as a SOHO network would be
very rare in the map. Others disagree, and that changes the
numbers substantially.

    Brian
10 million, 100 million, or even billions - these figures wouldn't frighten me, because the only right solution is one that scales logarithmically. My solution would not only shrink the table size and the update churn, but consequently also the workload per router. Hence it could take advantage of the idle time, e.g. by pursuing goals similar to those of the rtgwg with respect to multipath and detours.
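To make the orders of magnitude concrete, here is a back-of-the-envelope sketch (purely my own illustration; the logarithmic structure and its constant factor are assumptions, not any concrete proposal) comparing a flat map with one entry per site against state that grows only with log2 of the number of sites:

# Illustrative only: a flat map needs one entry per site (linear growth),
# while a hypothetical logarithmically scaling structure needs state
# proportional to log2(number of sites).
import math

for sites in (10_000_000, 100_000_000, 10_000_000_000):
    flat = sites                               # one map/routing entry per site
    logarithmic = math.ceil(math.log2(sites))  # ~24, 27, 34
    print(f"{sites:>14,} sites: flat = {flat:>14,} entries, "
          f"O(log N) ~ {logarithmic} units of state")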
 
Quote------------
 
> Doesn't the issue persist independent of the characteristics based on
> which a path gets selected?  Independent of *how* a path gets selected,
> you need to decide *who* selects it (or who selects which part of it).

    Well, that's true of course. But nothing can change the fact
    that the originating host chooses the source address and destination
    address that the packet starts out with, and all subsequent choices
    depend on that.

         Brian
End of quote -----------------
 
Example: Suppose the destination is "just behind the globe", i.e. roughly on the opposite side of it. Then you may have a West-route, a North-route, an East-route and a South-route. The ingress may make a choice, and yes, all subsequent choices may not only depend on that choice but should also COMPLY with it! Sure, the respective assignment may have to be conveyed, but first of all it has to be enabled! Multihoming is just the last-hop instance of the West-/North-/East-/South-route issue.
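A tiny sketch of what I mean by complying with the ingress choice (my own illustration, with made-up router names and directions, not any concrete protocol): the ingress stamps the packet with one of the candidate directions, and every later hop forwards according to that stamp instead of re-deciding independently.

# Illustrative sketch only (hypothetical names): the ingress picks one of the
# candidate directions towards the destination, and every subsequent hop
# forwards consistently with that choice.
ROUTES = {                                       # per-hop, per-destination options
    ("R1", "D"): {"West": "R2", "North": "R5"},
    ("R2", "D"): {"West": "R3"},
    ("R3", "D"): {"West": "D"},
    ("R5", "D"): {"North": "R6"},
    ("R6", "D"): {"North": "D"},
}

def forward(ingress, dst):
    choice = sorted(ROUTES[(ingress, dst)])[0]   # ingress decides once ...
    hop, path = ingress, [ingress]
    while hop != dst:
        hop = ROUTES[(hop, dst)][choice]         # ... later hops COMPLY with it
        path.append(hop)
    return choice, path

print(forward("R1", "D"))                        # ('North', ['R1', 'R5', 'R6', 'D'])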
 
------
There are more issues which have never been discussed here:
Support of (better models of) multicast and mp2mp!
The irrationality behind using orthogonal models for intra- and inter-domain routing.
The fundamental deficiencies of distance vector.
 
------
Incremental deployability:
Like "ships in the night", as has been mentioned, several mechanisms may coexist:
Seafarers used to watch the stars. Using the compass and watching the magnetic needle came in parallel.
Car navigation did not start out with CDs that covered the entire continent either.
All of these methods are independend from each other, can evolve, and also, may fallback if necessary.
My solution was argued down to be not incrementally deployable. Just the opposite is the case and can be compared with the progress in automobile navigation where at first the navigation quality was also minor (e.g. was lost itself), so that more often the driver had to fallback using other methods (maps, direction signs).  
Here,  at the beginning when a rather small number of routers takes part in the new protocol, switching back to the current method will be necessary although the packet hasn't yet approached the destination very well. But that will be improved the more routers are going to participate. In a half-half scenario, falling back to current BGP however means, to benefit just from the REGIONAL update churn! The global update churn is only required for today's routing.
 
Yes, a new additional mechanism gets its deployment incentives from providing better service (the compass needle can also be watched on cloudy nights, car navigation makes driving comfortable, ...).
I mentioned quite a few such additional aspects above. There are even more (e.g. there really is no IPv4 depletion problem), and I learn more and more the longer I follow this mailing-list discussion.
 
Heiner

extremely, it would thereby be small. There is no prefix aggregation problem if you aggregate the area instead (remember: Rekhter's law offers two alternative choices).
 
 
 
I have heard that a Dijkstra computation can already be run within a second on a graph of 10,000 nodes.
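For what it's worth, here is a quick and completely untuned sketch of my own (graph size, degree and weights are arbitrary assumptions) to check that order of magnitude with a heap-based Dijkstra:

# Quick check of the "Dijkstra on 10,000 nodes in about a second" claim:
# build a random graph and time a heap-based single-source Dijkstra run.
import heapq, random, time

N, DEG = 10_000, 8                 # assumed node count and average out-degree
random.seed(1)
adj = [[(random.randrange(N), random.randint(1, 100)) for _ in range(DEG)]
       for _ in range(N)]

def dijkstra(src):
    dist = [float("inf")] * N
    dist[src] = 0
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue               # stale heap entry, already improved
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

t0 = time.perf_counter()
dijkstra(0)
print(f"Dijkstra over {N:,} nodes took {time.perf_counter() - t0:.3f} s")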