
Re: Identification, layering, transactions and universal connectivity



On Saturday, Aug 2, 2003, at 00:56 Europe/Amsterdam, Tony Li wrote:

> |    - Work on the address selection problem.

> I don't want to discourage all of the metaphysics, but I would
> like to mumble a bit about this part of the problem.  What are
> the boundaries of the solution space for this particular area?
> Ultimately, in a utopian world, we'd like an oracle that gives
> us the perfect pair of addresses to use for any given communication.
> These addresses would not only provide reachability, but would
> provide optimality in routing, latency etc.

> Back in realityland, we have no such oracle.
Not something that magically knows which address pairs are good and which are bad, no. But it should be possible to leverage information that exists already or discover information when needed.

> If we want to
> make an intelligent decision, we need to have data to base that
> decision upon.  Without the data, possibly via indirect access,
> we are guessing, tho that's not necessarily a bad thing.  See
> Ethernet.
Ethernet is not a bad thing???

I think blindly guessing is pretty pathetic, especially if you consider that in large sites, many other hosts may be doing the same thing a little earlier or a little later. And the same host will very likely have done the same thing before.

It's not so much that we must desperately try to find the best address pair. BGP doesn't give us that either. But it's essential that we're able to weed out the very bad pairs quickly. It would be very helpful if we had some hints in the mapping system for this, such as "this is a low bandwidth/expensive address, only use it as a backup" or "this is an address with partial connectivity (i.e., globally unique but not globally routable), so only use it when you know it's reachable".
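
To make that a bit more concrete, here is a rough sketch in Python of what a host could do with hints like these. The attribute names (backup_only, partial_connectivity) and the known_reachable set are made up for illustration, not a proposal for an actual mapping system format:

    from itertools import product

    def usable_pairs(local_addrs, remote_addrs, known_reachable=frozenset()):
        """Yield (local, remote) address pairs, worst candidates weeded out.

        Each address is a dict like:
          {"addr": "2001:db8:1::1", "backup_only": False,
           "partial_connectivity": False}
        """
        candidates = []
        for local, remote in product(local_addrs, remote_addrs):
            # Drop partially connected destinations unless we already know
            # (from experience or configuration) that we can reach them.
            if remote.get("partial_connectivity") and \
               remote["addr"] not in known_reachable:
                continue
            # Keep low bandwidth/expensive "backup" addresses, but sort
            # them to the end so they're only tried when nothing else works.
            penalty = int(local.get("backup_only", False)) \
                    + int(remote.get("backup_only", False))
            candidates.append((penalty, local["addr"], remote["addr"]))
        for _, local, remote in sorted(candidates):
            yield (local, remote)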

> The data that we would like to have includes topological connectivity,
> including policy constraints, available bandwidth, latency, QoS
> availability, etc. ad nauseam.   Where is that data today?
> Some of it is in the routing subsystem.  Most of it is not
> collected today.  Much of it is real-time information, so
> not maintaining a 'current' copy would make the information
> worthless.  And the overhead of storing the information and
> distributing it is pretty clearly daunting.
80/20 rule?

> Instead of trying to swallow this huge mountain of data, we
> have a very simple alternative: perform experiments.  By
> simply trying the addresses that we get from our addressing
> system and actually sending the packets, we are, in effect,
> gathering the data that we want to make a decision in the
> first place.
Yes. But the question is: do we want to evaluate connectivity properties for half a dozen round trips before we do anything, do we choose something and go with it until we notice it doesn't work so well, or do we combine these by starting the communication immediately using one address pair and then evaluating connectivity in the background?
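
The "evaluate before we do anything" option is simple enough to sketch. Something like this (Python; sequential and blocking just to keep it short, and obviously real code wouldn't burn a throwaway TCP connection per pair):

    import socket
    import time

    def probe_pair(local_addr, remote_addr, port, timeout=2.0):
        """Return the connect time for one address pair, or None if it fails.

        Assumes local_addr and remote_addr are the same address family.
        """
        family = socket.AF_INET6 if ":" in remote_addr else socket.AF_INET
        s = socket.socket(family, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.bind((local_addr, 0))          # force the source address
            start = time.monotonic()
            s.connect((remote_addr, port))
            return time.monotonic() - start
        except OSError:
            return None                      # this pair doesn't work right now
        finally:
            s.close()

    def rank_by_experiment(pairs, port):
        """Probe every candidate pair and return the working ones, fastest first."""
        results = [(probe_pair(l, r, port), l, r) for (l, r) in pairs]
        return sorted((t, l, r) for (t, l, r) in results if t is not None)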

> This also has the advantage that we can vary the amount of
> effort that a particular application/host puts into selection.
> Some hosts that only require reachability can do simple
> linear search.  Other hosts can do parallel searches of the
> possible connections, plus do periodic evaluations to see if
> there are more optimal alternatives over time.
How much magic do we need in the application to enable this differentiation?
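
To illustrate the combined approach from my question above (go with the first pair immediately, evaluate in the background, and re-evaluate periodically for hosts that care), here is a rough sketch that reuses the rank_by_experiment() helper from the previous sketch. How an existing connection actually moves over to a better pair is left out on purpose:

    import threading

    class PairSelector:
        """Start with the first candidate pair; look for better ones in the
        background.  Hosts that only need reachability can pass
        reevaluate_every=None and get a single background probe run."""

        def __init__(self, pairs, port, reevaluate_every=30.0):
            self.pairs = list(pairs)
            self.port = port
            self.best = self.pairs[0]        # use the first guess immediately
            self.reevaluate_every = reevaluate_every
            self._lock = threading.Lock()
            threading.Thread(target=self._evaluate, daemon=True).start()

        def current_pair(self):
            with self._lock:
                return self.best

        def _evaluate(self):
            ranked = rank_by_experiment(self.pairs, self.port)
            if ranked:
                with self._lock:
                    self.best = (ranked[0][1], ranked[0][2])
            # Periodically check whether a better alternative has appeared.
            if self.reevaluate_every is not None:
                timer = threading.Timer(self.reevaluate_every, self._evaluate)
                timer.daemon = True
                timer.start()

The application then just asks current_pair() when it opens a connection; all the probing effort stays below it, and how much effort that is can differ per host.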

> The network control plane does not change from the functions
> that we have available today.  In other words, the problem
> is already solved.
???