
Re: Transport multihoming



On Thu, 24 Oct 2002, Iljitsch van Beijnum wrote:

> Greg, Peter,
> 
> As I see it, the reason to have the multihoming functionality inside one
> or more transport protocols is that the transport layer has end-to-end
> knowledge that makes it possible to make better multihoming decisions.

My recent thoughts are that it need not be tied directly to the TCP protocol,
but can instead be done at the IP layer of the host stack.  However, the TCP and
IP layers should be aware of each other, in much the same way that PMTU
discovery is facilitated.  If done that way, there would be benefits from
caching the multihoming information and sharing it across several connections.
Also, because of the strong aggregation, the cached information could build a
multihoming hint tree more efficiently than a flat list of IP addresses would.
For example, if a major link from one aggregator showed a problem, that would
provide a hint that all aggregates from that provider were inaccessible, and
the stack could use this intelligence in advance.

> 
> Would it be possible to have a modified TCP talk to a non-modified TCP
> through some kind of "mudem" (multihomer/demultihomer), without loss of
> the core multihoming functionality, and without the "mudem" having to
> keep long-term state?

Depends on what you mean by long-term.  Please clarify.

This is perhaps close to what I alluded to by a decoupling process.  What is
fundamental to a reliable working solution using my concepts is that if the
prefix replacement is decoupled, it must still be done securely, so that
protocols like TCP, which may depend on address immutability, have an ironclad
guarantee that the address selection is valid.
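As a rough sketch of what "valid" could mean here (everything below is
hypothetical: the `AUTHORIZED_PREFIXES` set stands in for whatever secure
binding of prefixes to a host the real mechanism would provide), a replacement
would only be accepted between prefixes known to belong to the same end host,
and would never touch the host part of the address:

```python
import ipaddress

# Assumed: the set of prefixes securely bound to this end host.
AUTHORIZED_PREFIXES = {
    ipaddress.ip_network("2001:db8:a::/48"),
    ipaddress.ip_network("2001:db8:b::/48"),
}

def replace_prefix(addr, new_prefix):
    """Swap the routing prefix of addr, rejecting unauthorized replacements."""
    addr = ipaddress.ip_address(addr)
    new_prefix = ipaddress.ip_network(new_prefix)
    old = next((p for p in AUTHORIZED_PREFIXES if addr in p), None)
    if old is None or new_prefix not in AUTHORIZED_PREFIXES:
        raise ValueError("prefix replacement not authorized")
    assert old.prefixlen == new_prefix.prefixlen
    # Keep the host part (interface identifier) intact; swap only the prefix.
    host_bits = int(addr) & ((1 << (128 - old.prefixlen)) - 1)
    return ipaddress.ip_address(int(new_prefix.network_address) | host_bits)

print(replace_prefix("2001:db8:a::1", "2001:db8:b::/48"))  # 2001:db8:b::1
```

The check against the authorized set is the part that would need the secure
control protocol discussed below; the bit manipulation itself is trivial.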

It is clear that moving the process out of the host kernel environment into an
ancillary processor environment would imply using some kind of secure control
protocol to ensure that addresses are dealt with correctly, and this adds a
degree of complication which I believe to be excessive.

My conclusion, then, is that it should be done on the host.  This also has the
advantage of distributing the processing and memory requirements for such a
protocol away from the core of the Internet, which fits the requirement of
scalability.  Careful thought needs to be given to huge servers that are likely
to be servicing tens of thousands of connections, though, as the multihoming
process could add some burden to servers of that calibre.  It is perhaps sites
like these that could benefit from decoupling the multihoming process in a
controlled manner.

I strongly suggest that such multihoming be restricted to prefix replacement
only, and not arbitrary address replacement, as there will be significant
advantage in exploiting the implied tree structure imposed by the strong
aggregation.

> 
> This would create a much more attractive deployment path as people can
> choose to either upgrade hosts or put them behind a box to provide the
> multihoming functionality.
> 
> It also makes it possible to move the multihoming decision making to a
> place where it can be better controlled if this is desired.

Yes, I see your reasoning, but only if the immutability of endpoint addressing
and the prevention of connection hijacking can be guaranteed.

> 
> Iljitsch (or just call me dr. Frankenstein)
> 
> 
> 

Peter

--
Peter R. Tattam                            peter@trumpet.com
Managing Director,    Trumpet Software International Pty Ltd
Hobart, Australia,  Ph. +61-3-6245-0220,  Fax +61-3-62450210