
Re: Comments on draft-crocker-mast-proposal-00.txt



On Saturday, Sep 6, 2003, at 14:32 Europe/Amsterdam, Spencer Dawkins wrote:

But is it realistic to expect to deploy a technology in IPv4 that
uses up additional address space?

Dave keeps saying "a proposal, not a specification", but as I read the
MAST proposal, it doesn't use up additional address space - Dave,
can you give me a clue here?

Wouldn't a host need two or more addresses to use MAST? (I must admit I haven't read it in detail but the general principles are similar to several earlier proposals.) Anyway, I would hate to see IPv4-specific problems (NAT...) get in the way of an IPv6 solution.


If you really want to be cool, _use_ the different paths
concurrently. I'm convinced that as soon as we've got the basic multiaddressing mechanisms in place, load balancing single TCP sessions over multiple paths will be the next big thing.

In principle, I agree.

The problem I have is that I'm working with orders-of-magnitude
nominal bandwidth differences between interfaces in my application (50 Kb/s for GPRS, to 11 Mb/s for 802.11, to 100 Mb/s for 802.3).
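A quick back-of-the-envelope calculation (using the nominal link rates mentioned above, not real goodput) shows how little the slow link contributes:

```python
# Nominal rates from the discussion above (link-layer rates, not goodput)
gprs = 50e3       # 50 kb/s GPRS
wifi = 11e6       # 11 Mb/s 802.11b
ethernet = 100e6  # 100 Mb/s 802.3

# Relative throughput gain from adding GPRS to each faster link
gain_over_wifi = gprs / wifi * 100
gain_over_eth = gprs / ethernet * 100
print(f"GPRS adds {gain_over_wifi:.2f}% to 802.11, {gain_over_eth:.3f}% to 802.3")
```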

You encapsulate IP in 802.3? How does that work for you?


PS. :-)

The increase in throughput from using two different interfaces simultaneously gets lost in the noise.

Right. That is, unless someone else does the same thing and talks to your ethernet address using her GPRS address and the other way around...


For applications that want/need more than 10 or 100 Mbps, running over 50 kbps is useless most of the time anyway. So when you fire up the microwave but forget to close the door and the wireless stops functioning, you really don't want your download to continue over GPRS, which is both too slow to be useful and too expensive.

If you have a box with three gig-E interfaces, doing a transfer to
another box with a 10-gig-E interface, using three interfaces simultaneously could be pretty noticeable, of course. And pretty phat.

Well, I was thinking more along the lines of a cable + ADSL setup, but I'll take 3 x GE if I can get it. :-)


If you track the RTTs and queue sizes carefully, you should be able to select the path with the lowest delay for a certain packet size if you have a low-bandwidth, low-delay line and a high-bandwidth, high-delay one. This would be very useful in setups that include a satellite path.
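The selection rule above could be sketched roughly like this (all names and numbers are illustrative assumptions, not from the draft): estimate the time to deliver a packet over each path as queue drain time plus serialization plus one-way propagation, and pick the minimum.

```python
# Hypothetical sketch: choose the path minimizing expected delivery time
# for a given packet size, from measured RTT and current queue backlog.

def expected_delay(pkt_bytes, path):
    """Queue drain + serialization time, plus propagation (approximated as RTT/2)."""
    bits = (path["queued_bytes"] + pkt_bytes) * 8
    return bits / path["bandwidth_bps"] + path["rtt_s"] / 2

# Illustrative numbers: a slow terrestrial line vs. a fast satellite path
paths = {
    "dsl": {"bandwidth_bps": 1e6,  "rtt_s": 0.030, "queued_bytes": 0},
    "sat": {"bandwidth_bps": 10e6, "rtt_s": 0.550, "queued_bytes": 0},
}

def best_path(pkt_bytes):
    return min(paths, key=lambda p: expected_delay(pkt_bytes, paths[p]))

print(best_path(64))       # small packet: the low-delay line wins
print(best_path(500_000))  # large burst: the high-bandwidth path wins
```

For tiny packets the RTT term dominates, so the low-delay line wins; for large bursts the serialization term dominates and the fat satellite pipe wins, which is exactly the trade-off described above.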

I was involved in a project where we needed to deliver real-time traffic of widely varying speeds (ranging from a single packet a second to upwards of 10 Mbps) to three remote systems over TCP. (Don't ask.) We were supposed to load balance over the three remote systems for resiliency reasons. So we implemented a system that distributed the data in round-robin fashion. This was pretty bad, because it was fairly common for one system to become very slow for a while. When that happened, our performance went down compared to using only a single system! So we started fiddling with all socket options known to man, and on each write selected the destination for which the least amount of data was still buffered for transmission. This worked amazingly well: we could now use all three destinations to their full capacity and react to changes almost instantly. (At the cost of some end-to-end reordering of the data.)
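The "send to the least-buffered destination" scheme could be sketched as below. The buffer occupancy is simulated here; on Linux one could query the real TCP send-queue depth with the SIOCOUTQ ioctl, though how the original project measured it isn't specified in the mail.

```python
# Sketch of least-buffered-destination load balancing: on each write,
# pick the destination with the least data still queued for transmission.
# A destination that slows down accumulates backlog and simply stops
# winning pick(), so traffic shifts to the fast destinations automatically.

class LeastBufferedBalancer:
    def __init__(self, destinations):
        # bytes still buffered (queued, unsent) per destination
        self.buffered = {d: 0 for d in destinations}

    def pick(self):
        return min(self.buffered, key=self.buffered.get)

    def send(self, data):
        dest = self.pick()
        self.buffered[dest] += len(data)  # would be socket.send() here
        return dest

    def drained(self, dest, nbytes):
        # called as the network drains a destination's queue
        self.buffered[dest] = max(0, self.buffered[dest] - nbytes)

lb = LeastBufferedBalancer(["a", "b", "c"])
print(lb.send(b"x" * 1000))  # goes to an empty destination
lb.drained("a", 1000)        # queue drains as data goes out on the wire
```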

For critical environments, I could see load balancing as a way of
providing better feedback about alternative path failures ("you're down to one path, which is still working, but if that path fails, it's all over"). Maybe some of the nice OPS people could express an opinion about whether real
customers need this capability?

I consider myself mostly an ops person, and the thing I like about this is that it takes the "pick an address and pray" factor out of the equation: you always get the full available speed. If you want more input on this, you may want to ask the SCTP crowd; they have requirements along the lines you mention, but I don't think they implement load balancing.


Iljitsch