
RE: (multi6) control / load balancing of ingress traffic.



Tony Hain writes:

| The current requirements draft tries to give each group what they
| want, even though the totally network centric and the totally host
| centric approaches are mutually exclusive. Until we have agreement on
| what we want for an IPv6 solution, no progress will be made.

You are quite right: without clear and solid requirements,
we could end up with an architectural approach which does not
meet people's real needs.

Hence the work on the requirements document as a key part of
the current charter.   Please feel free to suggest improvements
to the editor & the list, and, as is the case when dealing with
busy operator types, if you feel a packet has been lost,
retry.  :-)  Unfortunately, while the editors are clever people,
they are not psychic or all-knowing, so the document likely
will improve with people's shared insights. 

I am not certain there is a clear consensus that the document
is finished; however, as discussed in SLC, we will do a WG
last call as a motivation tool if people don't chime in
with their ideas on the list and/or to the editors.

(I've just moved countries this week & there is little that is more
distracting than buying and selling properties in different parts
of the world.  Add that to my and Thomas's busy schedules (he is an
AD too...), and I guess you might forgive us for not having done
the WG last call already.)

| Maybe the problem is the assumption that a single approach can be found
| for very different functional requirements. 

That need not be the case.   If one considers the address format
bits of the IPv6 address as a way to "hack in" an OSI AFI-like
semantic, one could end up with different addresses being
routed by separate (even S.I.N., ships-in-the-night) routing
systems.  So if there turns out to be a real need for multiple
parallel IPv6 routing systems, there is at least one way of
accomplishing it, which makes the question of which address
to use potentially interesting.

Note that we do this already for IPv4: unicast and multicast
routing systems are in some places completely independent,
and in some places fairly tightly related, and a packet is
exposed to one system or the other based on the setting of
the first few bits of the address.
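
Purely as an illustrative sketch of that "first few bits" test
(in Python, not any particular router's implementation), the
unicast/multicast split amounts to checking for the 1110 prefix,
i.e. 224.0.0.0/4:

    import ipaddress

    def is_multicast(addr):
        """True if the first four bits are 1110 (224.0.0.0/4), i.e.
        the packet would be handed to the multicast routing system
        rather than the unicast one."""
        first_octet = int(ipaddress.IPv4Address(addr)) >> 24
        return (first_octet & 0xf0) == 0xe0

    for a in ("224.0.0.2", "192.0.2.1"):
        print(a, "multicast" if is_multicast(a) else "unicast")

(ipaddress.IPv4Address(addr).is_multicast performs the same test.)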

An application doing the equivalent of "ping ntp.mcast.net" or
"ping all-routers.mcast.net" with a long-ish ttl might or might
not expect the possibility of multiple responses, for example.
Consider that there is no protocol reason why a DNS query
on any given domain name could not give back *both* a unicast and
a multicast IPv4 address.  The application may have the smarts
to choose one versus the other, or it may get "assistance" (e.g. an error)
from the operating system.
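
Again only as a sketch of the choice the application would face,
assuming a resolver willing to hand back such a mixed answer
(the names are just the ones above, nothing more):

    import socket, ipaddress

    def resolve_and_classify(name):
        """Resolve a name and split the A records into unicast vs
        multicast -- the kind of decision the application (or the
        OS on its behalf) would then have to make."""
        addrs = {ai[4][0] for ai in
                 socket.getaddrinfo(name, None, socket.AF_INET)}
        uni = [a for a in addrs
               if not ipaddress.IPv4Address(a).is_multicast]
        multi = [a for a in addrs
                 if ipaddress.IPv4Address(a).is_multicast]
        return uni, multi

    # e.g. resolve_and_classify("ntp.mcast.net") -- which list to
    # use is then the application's (or the OS's) problem.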

Likewise, suppose the DNS returns both an address which will be
routed by the current CIDR system (with attendant protocols like
BGP4) and an address which will be routed by a completely different
one (which may have a different addressing structure and set
of semantics).   An application may choose to use one or the
other based on its own needs, or may lean on the operating system
for assistance, or may choose one at random.  The packet in flight
would be handled by a "FIB" of the appropriate type by forwarding
components along the way.  These FIBs in turn are essentially
tables constructed after processing information learned from
routing protocols.   Different protocols accumulate different RIBs,
which in turn may produce very different FIBs.   Abstractly speaking.

(RIBs and FIBs are discussed in detail in e.g. RFC 1771).
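
To make that concrete in toy form -- all of the prefixes,
interface names, and the particular bit split below are invented
for illustration only, not a proposal -- forwarding against
per-routing-system FIBs might look roughly like:

    import ipaddress

    # Two independent routing systems, each with a FIB built from
    # its own RIB.  Prefixes and nexthops here are made up.
    FIBS = {
        "cidr":  {"2001:db8::/32": "if0", "::/0": "if1"},  # BGP4-fed
        "other": {"fc00::/7": "if2"},               # some other system
    }

    def select_fib(dst):
        """Pick a routing system from the leading address bits, the
        way IPv4 already separates unicast from multicast."""
        if ipaddress.IPv6Address(dst) in ipaddress.ip_network("fc00::/7"):
            return "other"
        return "cidr"

    def lookup(dst):
        """Longest-prefix match within the FIB chosen for this address."""
        addr, best = ipaddress.IPv6Address(dst), None
        for prefix, nexthop in FIBS[select_fib(dst)].items():
            net = ipaddress.ip_network(prefix)
            if addr in net and (best is None or net.prefixlen > best[0]):
                best = (net.prefixlen, nexthop)
        return best[1] if best else None

    print(lookup("2001:db8::1"))   # "if0", via the CIDR-style FIB
    print(lookup("fc00::1"))       # "if2", via the other system's FIB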

So, a question is: should it be a requirement that we have ONE
single routing system for everyone using IPv6?   Are there
site-multihoming requirements that fall out of the possibility of
having several interdomain routing protocols?

(Wearing my IRTF Routing RG hat, I'd like to hear "no" to the
first question :-) )

	Sean.