RE: Path-switch-triggered congestion (Was: RE: Extending shim6 to use multiple locator pairs simultaneously)
On 03/23/06 at 3:45pm +0200, john.loughney@nokia.com wrote:
> I think you are seriously missing the point. Congestion control is
> used to protect the network. It isn't about link bandwidth, per se.
>
> TCP uses slow-start to probe the network to ensure it doesn't start
> sending at a higher rate than the network can support. Link knowledge
> is not enough. There is ongoing work in TCP to try to fix this, so
> that a new TCP connection could start sending at a higher rate if
> there is some path knowledge: Quick-Start for TCP and IP.
> http://www.ietf.org/internet-drafts/draft-ietf-tsvwg-quickstart-02.txt
Sure. And if TCP wants to do Quick Start or anything else on a rehoming
event, I'd say that's fine. I just don't think that we can say what
higher-layer protocols MUST do in response to a lower-layer rehoming
event. The whole point of shim6 is that it remains transparent to upper
layers, for the sake of compatibility.
> But think about it - if there is a rehoming event because one network
> path became unavailable, there's a possibility that many hosts will
> rehome. If they rehome to a network that is already congested and
> continue sending at a high rate, there will be severe network
> congestion, which might cause major network problems.
Sure, but that's the case today as well. If I'm running an IGP with two
unequal-bandwidth links, and the high-bandwidth link goes down, all the
traffic from all those hosts will rehome to the slower link. This will
cause congestion until congestion avoidance halves the sending rates
enough times to bring throughput down to what the slower link can
carry. I don't think a shim6 rehoming
event is any different, so I don't think any special action is required to
avoid violating the "do no harm" principle.
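
To put a rough number on it, here's a back-of-the-envelope sketch in
Python (it assumes one multiplicative-decrease halving per loss event,
and ignores slow start and retransmission timeouts):

    import math

    # Halvings needed to get from the old rate down to the new one:
    # roughly log2 of the bandwidth ratio.
    old_rate = 1e9     # 1 Gbps before the rehoming event
    new_rate = 28.8e3  # 28.8 kbps afterwards
    print(math.log2(old_rate / new_rate))  # ~15.1, i.e. ~15 loss events

So even a drastic capacity drop gets absorbed after roughly fifteen
loss events.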
> I see the only options to be:
>
> 1) after rehoming, TCP will need to run slow-start, in order to probe
> the network.
> 2) have an alternate mechanism, like Quick Start, in order to quickly
> determine the proper sending rate after a rehoming event.
I think those are optimizations. Since TCP isn't necessarily aware of a
rehoming event (unless it knows how to talk to shim6 via the API), I
think that simply letting congestion avoidance deal with the new path is a
viable option as well.
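
If an implementation did want to do option 1 above, I'd imagine
something like the following minimal sketch. The on_rehome() hook is
hypothetical; nothing like it is specified in the shim6 API today:

    MSS = 1460  # sender's maximum segment size, in bytes

    class TcpSender:
        def __init__(self):
            self.cwnd = 2 * MSS        # congestion window
            self.ssthresh = 64 * 1024  # slow-start threshold

        def on_rehome(self):
            # Option 1: treat the new path as unknown and re-probe it
            # from scratch, as if this were a brand-new connection.
            self.ssthresh = max(self.cwnd // 2, 2 * MSS)
            self.cwnd = 2 * MSS        # back to the initial window

But absent such a hook, plain congestion avoidance will still converge
on the new path, just more slowly.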
-Scott
> PS - there is some work that has been looking at modifying TCP to deal
> better with different kinds of connectivity changes:
>
> https://datatracker.ietf.org/public/idindex.cgi?command=id_detail&id=12091
> https://datatracker.ietf.org/public/idindex.cgi?command=id_detail&id=10165
>
>
>
> >-----Original Message-----
> >From: ext Iljitsch van Beijnum [mailto:iljitsch@muada.com]
> >Sent: 23 March, 2006 09:57
> >To: Scott Leibrand
> >Cc: Loughney John (Nokia-NRC/Helsinki); erik.nordmark@sun.com;
> >marcelo@it.uc3m.es; shim6@psg.com
> >Subject: Re: Path-switch-triggered congestion (Was: RE:
> >Extending shim6 to use multiple locator pairs simultaneously)
> >
> >On 22-mrt-2006, at 23:30, Scott Leibrand wrote:
> >
> >>> Current behavior is that TCP doesn't really take into account
> >>> radical changes in link capacity. There is some ongoing work in
> >>> TSVWG on how to handle extreme congestion events, but this isn't
> >>> really a solved problem.
> >
> >> But how much of a problem is this in reality? Say you are
> >> transmitting at 1 Gbps (1,000,000 kbps), and get switched to a 28.8
> >> kbps modem link. Since each loss event halves the sending rate, and
> >> the available bandwidth has dropped by a factor of approximately
> >> 2^15, it should take approximately 15 dropped packets to throttle
> >> back the sending rate sufficiently.
> >
> >That's with congestion avoidance, which kicks in for individual
> >lost packets. If you lose a lot of them, you're back in slow start.
> >
> >But you're not considering delay and window size. If you're
> >doing 1 Gbps over a somewhat long distance (let's say 40 ms
> >round-trip delay), then you need a ~5 MB window. If you were
> >to pump such a large window into 28 kbps, you'd saturate that
> >link for 23 minutes...
> >
> >Even assuming a regular 64k window (which would limit your
> >data transfer with 40 ms RTT to 13 Mbps), you'd be saturating
> >that 28k link for 18 seconds.
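
Those numbers check out; here's a quick sanity check in Python,
assuming the full window is in flight when the path switches:

    rtt = 0.040    # 40 ms round-trip delay
    slow = 28.8e3  # 28.8 kbps modem link, in bits/s

    bdp = 1e9 * rtt / 8            # 1 Gbps path: ~5,000,000-byte window
    print(bdp * 8 / slow)          # ~1389 s, i.e. ~23 minutes to drain

    win64k = 65535                 # classic 64 KB window, no scaling
    print(win64k * 8 / rtt / 1e6)  # caps throughput at ~13.1 Mbps
    print(win64k * 8 / slow)       # ~18.2 s to drain into the modem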
> >
> >> IMO this is not a show-stopping problem, as TCP will throttle back
> >> fairly quickly, and the impact should be limited to approximately
> >> the depth of the slow link's buffer. That's not to say it's not
> >> worth addressing (in the TSVWG), but it doesn't seem to me like
> >> something that should hold up shim6...
> >
> >Well, let's just do some experimenting when we have implementations.
> >