On 22-Mar-2006, at 23:30, Scott Leibrand wrote:
Current behavior is that TCP doesn't really take into account radical changes in link capacity. There is some ongoing work in TSVWG on how to handle extreme congestion events, but this isn't really a solved problem.
But how much of a problem is this in reality? Say you are transmitting at 1 Gbps (1,000,000 kbps), and get switched to a 28.8 kbps modem link. Since each packet lost halves the sending rate, and you've reduced your throughput by a factor of approximately 2^15, it should take approximately 15 dropped packets to throttle back the sending rate sufficiently.
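(As a rough back-of-the-envelope check of that 2^15 figure, here is a small Python sketch; it assumes an idealized model where every single loss simply halves the sending rate, which real TCP doesn't do exactly, and the variable names are just illustrative.)

    import math

    # Idealized assumption: every loss halves the sending rate, so we
    # count how many halvings it takes to get from 1 Gbps down to a
    # 28.8 kbps modem.
    old_rate = 1_000_000_000   # bits/s, 1 Gbps
    new_rate = 28_800          # bits/s, 28.8 kbps

    ratio = old_rate / new_rate
    print(f"rate ratio: {ratio:,.0f}x")                # ~34,722x, i.e. ~2^15
    print(f"halvings needed: {math.log2(ratio):.1f}")  # ~15.1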
That's with congestion avoidance, which kicks in for individual lost packets. If you lose a lot of them you're in slow start.
But you're not considering delay and window size. If you're doing 1 Gbps over a somewhat long distance (let's say 40 ms round-trip delay) then you need a ~5 MB window. If you were to pump such a large window into a 28.8 kbps link you'd saturate that link for 23 minutes...
Even assuming a regular 64k window (which would limit your data transfer with 40 ms RTT to 13 Mbps) you'd be saturating that 28k link for 18 seconds.
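For reference, the same numbers worked out in a quick (and equally idealized) Python sketch; it only does the bandwidth-delay arithmetic and ignores buffering, retransmissions and everything else real TCP does:

    # Bandwidth-delay product and drain-time arithmetic from the two
    # paragraphs above (idealized, ignores actual TCP dynamics).
    rtt   = 0.040            # 40 ms round-trip time, in seconds
    modem = 28_800           # 28.8 kbps link, in bits/s

    bdp = 1_000_000_000 * rtt / 8                                 # bytes to fill 1 Gbps
    print(f"1 Gbps x 40 ms window: {bdp / 1e6:.0f} MB")           # ~5 MB
    print(f"drain time at 28.8k: {bdp * 8 / modem / 60:.0f} min") # ~23 min

    win = 65_535                                                  # classic 64 KB window
    print(f"64 KB window at 40 ms RTT: {win * 8 / rtt / 1e6:.1f} Mbps")  # ~13 Mbps
    print(f"drain time at 28.8k: {win * 8 / modem:.0f} s")               # ~18 s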
IMO this is not a show-stopping problem, as TCP will throttle back fairly quickly, and the impact should be limited to approximately the depth of the slow link's buffer. That's not to say it's not worth addressing (in the TSVWG), but it doesn't seem to me like something that should hold up shim6...
Well, let's just do some experimenting when we have implementations.