
Re: [RRG] Why delaying initial packets matters



Lars,

On Feb 12, 2008, at 12:54 AM, Lars Eggert wrote:
> Sure, and the concern isn't that apps and transports can't deal with these events when they happen. However, dealing with these events usually has some negative performance impact.

Yes. And if the negative performance impact is sufficiently great, software and/or protocols and/or expectations are modified. Or at least they have been in the past.
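
For concreteness, here is roughly what one of these events costs today, as a toy Python calculation. The 3 second initial TCP retransmission timeout and the 50 ms round-trip time are assumptions about a typical current stack, not measurements of any proposal:

    # Toy illustration: the first packet (a TCP SYN) of a new connection is
    # dropped while the network fetches state for its destination.
    # Both numbers below are assumptions, not measurements.
    RTT = 0.050       # assumed round-trip time to the destination, in seconds
    SYN_RTO = 3.0     # assumed initial TCP retransmission timeout (RFC 2988 era)

    setup_normal = RTT                 # SYN answered on the first attempt
    setup_after_drop = SYN_RTO + RTT   # first SYN lost; sender waits out the timer

    print(setup_normal, setup_after_drop)   # 0.05 3.05 -- roughly a 60x difference

Nobody's transport breaks, but a user waiting three extra seconds for the first connection to a new site certainly notices.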

> The question is if any of the proposed layer-3 extensions increase the probability of these events happening to a degree where user-perceived performance is significantly impacted.

> And the answer may well be "no, they don't",

I'm sure they do.  Once again and with feeling: TANSTAAFL.

I'm a bit confused. I thought we were talking here about modifying a key component of the underlying Internet architecture for the long term. Such a modification will have both positive and negative impacts. The question isn't whether or not there will be an impact; rather, the question is (or should be) whether the benefit outweighs the cost.

Conceptually, as far as I can tell, we have a tradeoff. All things being equal and relative to each other:

1) push-based systems will

a) not significantly increase latency/packet loss
b) be less scalable
c) allow less dynamicity

2) pull-based systems will

a) increase latency/packet loss, at least for the first packet of a new 'flow'
b) be more scalable
c) permit more dynamicity

Does anyone disagree with these assumptions?
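
To put rough numbers behind (1a) and (2a), here is a back-of-the-envelope sketch in Python. The figures (100 ms to pull a mapping on a miss, 1 ms to forward once a mapping is known) and the cache model are assumptions invented purely for illustration, not properties of any particular proposal:

    # Toy model of push vs. pull mapping resolution. All numbers are made up.
    RESOLUTION_DELAY = 0.100   # assumed cost to pull a mapping on a cache miss
    FORWARD_DELAY = 0.001      # assumed per-packet cost once a mapping is known

    def push_latency(packets):
        """Push: every mapping is pre-installed, so no packet ever waits."""
        return [FORWARD_DELAY for _ in packets]

    def pull_latency(packets, cache):
        """Pull: a cache miss on the first packet of a flow adds the
        resolution delay (or, in a drop-based design, loses the packet)."""
        latencies = []
        for dst in packets:
            if dst not in cache:
                cache.add(dst)   # fetch and cache the mapping
                latencies.append(RESOLUTION_DELAY + FORWARD_DELAY)
            else:
                latencies.append(FORWARD_DELAY)
        return latencies

    flow = ["prefix-X"] * 10     # ten packets to one new destination
    print(round(sum(push_latency(flow)), 3))         # 0.01 -- no packet delayed
    print(round(sum(pull_latency(flow, set())), 3))  # 0.11 -- dominated by packet #1

The flip side, (1b)/(2b) and (1c)/(2c), doesn't show up in a toy like this: the push column pays for its flat latency in the size of the pre-installed table and in the churn required to keep it current.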

> but it'd be good to back that statement up with data.

I agree data should be collected and loudly applaud the efforts of Dino et al. in writing actual code (I personally don't trust mathematical models, as long ago I saw one too many models demonstrating how ATM was a perfect answer, regardless of the question). However, from a conceptual point of view, I don't see how we can get away from the tradeoffs I mention above. Whether you find option (1) or option (2) acceptable will likely depend largely on the assumptions you make going in -- we can all come up with scenarios that are unacceptable for either option.

From my perspective, looking at how the Internet has evolved over time, scalability and dynamic behavior have been areas in which we've been bitten time and time again. We continue to see increased deaggregation. We continue to see increased growth. We continue to see increased dynamic behavior. We also see increased bandwidth, cheaper memory, faster processors, etc. I don't see these changes reversing or even slowing down over time. Thus, it seems to me we should anticipate and optimize for these changes instead of optimizing for the way things have been.

However, with all that soapboxing out of the way, I freely admit I'm not half as bright as most of the people on this list and it is likely I'm missing something fundamental. I'd be appreciative if someone could explain it to me (using small words)...

Regards,
-drc

