
RE: draft-kurtis-multihoming-longprefix comments



Tony Li wrote:
> ...
> You're suggesting abstraction that grows with distance.  You 
> may recall that I proposed that we do exactly this in 1992 
> with continental aggregation.  I was shouted down.

That doesn't mean it was a bad idea, just ahead of its time. :)

> 
> This only works because in almost all cases, continents are 
> internally connected and the number of exceptions is quite 
> small.  Further, over time, as we wire the planet, even these 
> exceptions will disappear.
> 
> So yes, I have no objections to continental scale aggregation.  
> Unfortunately, that is not a technique that recurses down to the 
> micro level.  We need to work with the connectivity that is 
> naturally in place and cannot force fit it to suit.
> 
> To take your street address metaphor farther, we should not 
> insist on building bridges across the Grand Canyon just to 
> insure that a zip code is contiguous.

The contiguous requirement only applies to short prefixes. In cases
where a region cannot be made contiguous, longer prefixes are
necessary. The fact that it is not perfect does not mean the whole
approach is invalid, just that it carries a set of engineering
trade-offs like all the others.
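
To make the mechanism concrete, here is a minimal Python sketch of how
a longer exception prefix overrides its short covering aggregate under
ordinary longest-prefix-match forwarding. The prefixes and next hops
are invented stand-ins, not values from the draft:

    import ipaddress

    # Hypothetical table: one short covering aggregate plus one longer
    # exception prefix for the non-contiguous piece of the region.
    table = {
        ipaddress.ip_network('2600::/12'):      'next hop A (aggregate)',
        ipaddress.ip_network('2604:a800::/24'): 'next hop B (exception)',
    }

    def lookup(addr):
        """Return the next hop for the longest matching prefix."""
        addr = ipaddress.ip_address(addr)
        matches = [net for net in table if addr in net]
        if not matches:
            return 'no route'
        return table[max(matches, key=lambda net: net.prefixlen)]

    print(lookup('2600::1'))       # matched only by the /12 aggregate
    print(lookup('2604:a800::1'))  # the longer /24 exception wins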

For the sake of discussion, let's assume it is technically, financially,
or politically infeasible to bridge the Grand Canyon, and that there is
a significant population of multi-homed sites on both sides. This would
lead to all circuits on the north rim routing through Salt Lake City,
while the south rim routes through Phoenix (ignoring the obvious cross
connect through Las Vegas or LA). Assume further that continental-scale
aggregation is done in San Jose, Chicago, and D.C.

Using the draft I have been working on, there are 18 /24's that would
need a longer-prefix exception list.
See: http://www.tndh.net/~tony/ietf/GrandCanyon.htm
Most of that space would be cleanly described by /26's, with the
conflicts in turn cleanly broken down by /28's ... The routing table
grows with each exception region, but not at the rate of multi-homed
sites. At the same time, an announcement only needs to be present if
there is a site in the space it covers.
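
As a hedged sketch of that pruning rule (standard-library Python, with
a made-up region and imaginary site addresses rather than values from
the draft): carve a conflicted /24 into /26's and announce a /26 only
when a multi-homed site actually sits inside it.

    import ipaddress

    region = ipaddress.ip_network('2604:a800::/24')  # stand-in conflicted /24
    sites = [ipaddress.ip_address('2604:a800::1'),   # imaginary multi-homed
             ipaddress.ip_address('2604:a8c0::1')]   # sites in the region

    # Announce a /26 only if some site falls inside it; empty space
    # generates no announcement at all.
    announced = [subnet for subnet in region.subnets(new_prefix=26)
                 if any(site in subnet for site in sites)]
    for net in announced:
        print('announce', net)  # prints 2 of the 4 possible /26's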

The prefix x6C0::/12, covering the box bounded by Santa Maria (west,
120w), Santa Ines (south, 30n), Denver (east, 105w), and the northern
border of Wyoming (north, 45n), associates with the San Jose
aggregation point. If divided into /14's, the cut would run through the
middle of the canyon about 32km west of Grand Canyon Village. The Glen
Canyon section would be divided about 55km north of Utah's southern
border. This leaves the entire /14 containing Fresno, Boise, and most
of Nevada contiguous. The other three are mostly contiguous, needing
only ~50 /28's to cover the exceptions.
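
The geometry behind those cuts can be sketched as repeated bisection of
the covering box, each extra prefix bit halving one dimension. A rough
Python illustration, assuming longitude is bisected first and latitude
second (the actual bit ordering in the draft may differ); with the box
above, the first cut falls at 112.5w (~32km west of Grand Canyon
Village) and the second at 37.5n (~55km north of the Utah border):

    def geo_bits(lon, lat, west=-120.0, east=-105.0,
                 south=30.0, north=45.0, nbits=2):
        """Extra prefix bits selecting a /14 within the /12 box."""
        bits = []
        for i in range(nbits):
            if i % 2 == 0:                   # even bits halve longitude
                mid = (west + east) / 2.0    # first cut: 112.5w
                if lon < mid:
                    east, bit = mid, 0
                else:
                    west, bit = mid, 1
            else:                            # odd bits halve latitude
                mid = (south + north) / 2.0  # second cut: 37.5n
                if lat < mid:
                    north, bit = mid, 0
                else:
                    south, bit = mid, 1
            bits.append(bit)
        return bits

    print(geo_bits(-112.14, 36.06))  # Grand Canyon Village -> [1, 0]
    print(geo_bits(-119.77, 36.75))  # Fresno               -> [0, 0]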

The point is that only providers connecting to San Jose would need to
know the detail. Regional providers in the rest of the world would only
see the x6C0::/12 announcement, if that; they might only be looking at
the covering ::/6 that identifies the section. Clearly there are other
rivers covering more distance, so this is not claimed to be a worst
case, just an exercise to show what would be necessary to address the
premise.
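
A toy illustration of that visibility difference, reusing the invented
prefixes from the sketches above: the San Jose table carries the
exception detail, while a remote table holding only the aggregate still
resolves the same destination, just less specifically.

    import ipaddress

    def best_match(table, addr):
        """Longest matching prefix in the table, or None."""
        addr = ipaddress.ip_address(addr)
        matches = [net for net in table if addr in net]
        return max(matches, key=lambda net: net.prefixlen) if matches else None

    san_jose = [ipaddress.ip_network(p) for p in
                ('2600::/12', '2604:a800::/26', '2604:a8c0::/26')]
    remote = [ipaddress.ip_network('2600::/12')]  # aggregate only

    dst = '2604:a8c0::1'
    print('San Jose forwards on', best_match(san_jose, dst))  # the /26
    print('elsewhere forwards on', best_match(remote, dst))   # just the /12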

You claim it can't recurse down to the micro level. I disagree, because
it will recurse on whatever scale makes economic sense. When there are
too many exception entries, it becomes economically viable to split the
space by putting in more exchanges. The economic driver is the reason
for the current set of exchanges; historically those have been driven
by circuit costs, but in an environment of capacity glut the economics
change. Either way, you can't avoid flat routing in some subset of the
routers; it is just a trade-off between how many routers need to know
the detail vs. the interconnectivity required to mask that detail.

You further claim we have to work with the connectivity that is in
place, which is demonstrably false. Otherwise we would still be routing
through FIX-E&W via government nets or NSF Regionals (or IMPs if we go
further back). Connectivity evolves over time to match the current
state of the players and the economic limitations. While we can't
dictate a connectivity model, we can define technologies that work best
with one or two models while retaining the ability to handle
exceptions. This is exactly how we migrated from government-sponsored
to commercial networks. It is also the case that even if we found a
'perfect' technology for the current interconnectivity, in 5 years'
time that model wouldn't be perfect anymore, because the connectivity
underneath it as well as the policies for traffic management would have
changed. If we can agree on the defining characteristic separating the
basement multi-homer from the global enterprise network, we can define
different connectivity models for each.

My approach is not intended to be a grand one-size-fits-all technology,
and the cases where PA should be used instead are discussed in the
drafts. It is simply an approach that leverages current BGP deployments
and can be scaled in different parts of the world to whatever level
makes economic sense. In the spirit of globalization via the Internet,
it explicitly ignores artificial political borders. That may be a good
or bad thing, depending on your views about political correctness. ;)

Tony