Re: updating GSE for the new millennium

Peter,

On Wednesday, April 30, 2003, at 11:42 PM, Peter Tattam wrote:
> Additionally, I think it is a bad idea to relabel the source, as the label
> supplied is unlikely to be the best path. The other end needs to determine
> this.
Agreed.

>> Perhaps I misunderstand.  If the original values are restored before
>> delivery at the final destination, the TCP/UDP checksums would compute
>> correctly and IPSec would not be affected.
> For a TCP connection, the src/dest part of the checksum can be cached. One
> just has to agree on when to include the src/dest pair in the checksum.
> Another reason for determining whether the packet is clean or dirty. A minor
> optimization, that's all. Don't forget that in IPv6 we don't do IP header
> checksumming any more.
Right. I'm not convinced this would be necessary, but I understand what you're saying.
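
Purely as a sketch (all names mine): if a box did adjust the transport checksum instead of restoring the original addresses, it would just be the standard RFC 1624 incremental update below. Caching the src/dest contribution per connection, as you suggest, amounts to precomputing the per-word terms in the loop.

    #include <stdint.h>
    #include <stddef.h>

    /* Sketch only: incrementally update a TCP/UDP checksum after an
     * IPv6 address feeding the pseudo-header has been rewritten
     * (RFC 1624 method).  oldw/neww hold the old and new address as
     * eight 16-bit words; "check" is the checksum field as received. */
    static uint16_t
    checksum_adjust(uint16_t check, const uint16_t *oldw,
                    const uint16_t *neww, size_t nwords)
    {
        uint32_t sum = (uint16_t)~check;      /* undo final complement */
        for (size_t i = 0; i < nwords; i++) {
            sum += (uint16_t)~oldw[i];        /* remove old word */
            sum += neww[i];                   /* add new word */
        }
        while (sum >> 16)                     /* fold carries back in */
            sum = (sum & 0xffff) + (sum >> 16);
        return (uint16_t)~sum;
    }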

> We need to search for protocols that depend on the addresses in the header
> being intact and work out whether the packet needs to be reconstructed or
> cached values can be used instead.
This would be the same set that is sensitive to NAT.

> If we are considering very high speeds, this could be important. Messing
> with the packet in transit could be a performance hit, and if we can get
> away without doing it, all the better.
As the rewriting occurs only at the edges, I don't believe extreme performance is that critical (relative to the core).

> This is the really tough bit. Maintaining an accurate and secure database
> of mappings is non-trivial. Each site has its own specific view of the
> network topology, which makes this quite different from the DNS system.
Sorry, I don't follow. The mapping table (in my view) is independent of topology -- it is a simple key/value pair where the key is the site identifier and the value is the set of one or more aggregation locators provided by the ISPs serving that site. The border device at the source fetches the appropriate value based on the first 48 bits of the destination address and rewrites the destination address with (one of) the aggregation locator(s).

Where a destination is multi-homed, the policy determining which of the aggregation locators is chosen would be administratively defined by the source site's administrator (although I can imagine some sort of preference information being provided by the destination in the mapping table).
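
To make that concrete, here is roughly how I picture the rewrite step at the source site's border, as a C sketch; lookup() stands in for whatever mechanism maintains the table, and all names and bounds are illustrative:

    #include <stdint.h>
    #include <string.h>

    #define MAX_LOCATORS 4                  /* illustrative bound */

    /* One mapping-table entry: the key is the 48-bit site identifier,
     * the value a preference-ordered set of aggregation locators. */
    struct map_entry {
        uint8_t site_id[6];                 /* first 48 bits of dest */
        uint8_t locator[MAX_LOCATORS][6];   /* ISP-provided locators */
        int     nlocators;
    };

    /* Stand-in for the table-maintenance mechanism (push or pull). */
    extern const struct map_entry *lookup(const uint8_t site_id[6]);

    /* At the source site's border: rewrite the high 48 bits of the
     * destination with the locator chosen by local policy (here,
     * trivially, the most preferred one). */
    int rewrite_dest(uint8_t dst[16])
    {
        const struct map_entry *e = lookup(dst);  /* key = dst[0..5] */
        if (e == NULL || e->nlocators == 0)
            return -1;                      /* no mapping known */
        memcpy(dst, e->locator[0], 6);
        return 0;
    }

(The destination site's border would do the inverse, restoring the original high bits before delivery, per the earlier point about checksums and IPSec.)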

I gather your view is different?

With regard to maintaining the mapping table, I see two generic ways of doing this: either pushing data out, like routing protocols, or pulling data in on demand, like the DNS. Both are (obviously) tractable, and both have advantages and disadvantages. For obvious reasons I like the DNS model (not necessarily the DNS itself), but I see this as a side (albeit important) issue relative to the underlying architecture.
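
For what it's worth, the pull model I have in mind is nothing more than the usual resolve-on-demand-and-cache pattern; cache_get/cache_put/resolve_remote below are assumed helpers for illustration, not a protocol proposal:

    #include <stdint.h>
    #include <time.h>

    struct map_entry;                       /* as sketched earlier */

    /* Assumed helpers: a local TTL-expiring cache, plus an on-demand
     * query to whatever service holds the authoritative mappings. */
    extern struct map_entry *cache_get(const uint8_t site_id[6]);
    extern void cache_put(const uint8_t site_id[6],
                          struct map_entry *e, time_t ttl);
    extern struct map_entry *resolve_remote(const uint8_t site_id[6]);

    /* Pull model: resolve on first use, then serve from cache until
     * the TTL expires, exactly the DNS stub-resolver pattern.  The
     * push model would instead flood updates so the "cache" is
     * always complete and cache_get() never misses. */
    struct map_entry *map_lookup(const uint8_t site_id[6])
    {
        struct map_entry *e = cache_get(site_id);
        if (e == NULL) {
            e = resolve_remote(site_id);    /* on-demand fetch */
            if (e != NULL)
                cache_put(site_id, e, 300); /* illustrative TTL, sec */
        }
        return e;
    }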

> The only other issue I wonder about is using a single box to do translations
> at the site boundary. Because such a box is an insertion, it could be a
> single point of failure, and could well have scalability issues.
Inasmuch as the existing border router is a single point of failure or potential bottleneck, yes. In my view, the mapping/re-mapping functionality could easily be integrated into the site border router. A separate box would also make sense; note that if it is a separate box, you can have more than one.

Rgds,
-drc