
Re: [RRG] Are we solving the wrong problem?



Yes, I completely agree. I considered this approach in the TAMARA concept as "vectorized TCP".
There is a strong reason to follow this path: since the Internet is still a (broken) end-to-end network, only the end points can have exhaustive information on the relative performance of different paths *for free*. More precisely, by end points I mean the TCP code on the end hosts, because it obviously already does the calculations and keeps the data. Any entity in the middle can only guess.
Finally, if end-point TCP stacks already make congestion-control decisions, why shouldn't they also make some routing decisions?
This vectorized TCP / real SCTP approach complements prefix-bunch architectures well.

On 19/02/2008, Mark Handley <M.Handley@cs.ucl.ac.uk> wrote:
So, what happens if we stop trying to hide the multihoming.  Take a
server at this multi-homed site and give it two IP addresses, one from
each provider's aggregated prefix.  Now we modify TCP to use both
addresses *simultaneously* - this isn't the same as SCTP, which
switches between the two.  The client sets up a connection to one
address, but in the handshake learns about the other address too.  Now
it runs two congestion control loops, one with each of the server's IP
addresses.  Packets are shared between the two addresses by the two
congestion control loops - if one congestion-controlled path goes
twice as fast as the other, twice as many packets go that way.
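[The proportional sharing described above can be sketched as follows. This is a minimal illustration in Python, not part of any proposal in this thread; the function name and the use of the congestion window as the rate signal are my own assumptions.]

```python
# Sketch (hypothetical): share packets across two paths in proportion
# to each path's congestion window, so that a path whose loop runs
# twice as fast carries twice as many packets.

def share_packets(total_packets, cwnd_a, cwnd_b):
    """Split total_packets between paths A and B proportionally to
    each path's congestion window."""
    total = cwnd_a + cwnd_b
    packets_a = round(total_packets * cwnd_a / total)
    return packets_a, total_packets - packets_a

# If path A's window is twice path B's, A carries twice the packets.
a, b = share_packets(30, cwnd_a=20, cwnd_b=10)
print(a, b)  # -> 20 10
```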

OK, so what is the emergent behaviour?  The traffic self-load balances
across the two links.  If one link becomes congested, the remaining
traffic moves to the other link automatically.  This is quite unlike
conventional congestion control, which merely spreads the traffic out
in time - this actually moves the traffic away from the congested path
towards the uncongested path.  Traffic engineering in this sort of
scenario just falls out for free without needing to involve routing at
all.  And more advanced traffic engineering is possible using local
rate-limiting on one path to move traffic away from that link towards
the other.  Again, this falls out without stressing routing.
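[The self-load-balancing behaviour can be illustrated with a toy simulation. This is my own sketch, not code from the thread: it runs one uncoupled AIMD loop per path (grow the window by one per round without loss, halve it on loss) and ignores the coupling a real design would need, but it shows traffic shifting away from the path with the higher loss rate.]

```python
import random

def simulate(loss_a, loss_b, rounds=10000, seed=1):
    """Toy model: one AIMD congestion-control loop per path.  Each
    round, a path either grows its window by 1 (no loss) or halves it
    (loss).  Returns the long-run share of traffic carried by path A."""
    random.seed(seed)
    cwnd = {"a": 10.0, "b": 10.0}
    sent = {"a": 0.0, "b": 0.0}
    loss = {"a": loss_a, "b": loss_b}
    for _ in range(rounds):
        for p in ("a", "b"):
            sent[p] += cwnd[p]
            if random.random() < loss[p]:
                cwnd[p] = max(1.0, cwnd[p] / 2)
            else:
                cwnd[p] += 1.0
    return sent["a"] / (sent["a"] + sent["b"])

# Equal loss rates: traffic splits roughly evenly.
# Congest path A (higher loss): its share of traffic drops.
print(simulate(0.01, 0.01))  # ~0.5
print(simulate(0.05, 0.01))  # well below 0.5
```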

Now, there's quite a bit more to it than this (for example, it's great
for mobile devices that want to use multiple radios simultaneously),
but there are also still quite a lot of unanswered questions.  For
example, how much does this solve backbone traffic engineering
problems?  The theory says it might.  I'm working on a document that
discusses these issues in more depth.  But I think the general idea
should be clear - with backwards-compatible changes to the transport
layer and using multiple aggregatable IP addresses for each
multi-homed system, we ought to be able to remove some of the main
drivers of routing stress from the Internet.  That would then leave us
to tackle the real routing issues in the routing protocols.

I hope this makes some sort of sense,

Mark

archive: <http://psg.com/lists/rrg/> & ftp://psg.com/pub/lists/rrg



--
Victor