
Re: A new spin on multihoming: multihoming classes.



On Sun, 9 Sep 2001, Daniel Senie wrote:

> >That's one or two bits out of 128. Considering that 16 bits are wasted in
> >the IEEE -> EUI-64 conversion, I don't see the problem in IPv6.

> Though the use of MAC address is now an optional thing, due to privacy
> concerns. Yes, there's a large block of addresses. The concern is that we
> not give away too many bits for one thing or another. We also have the
> issue that all providers to a site will need to supply prefixes that
> essentially match (i.e. same sizes from each, so that things overlay
> cleanly). The RIRs would need to adjust their policies to permit this.

I haven't followed the latest developments, but I think the plan is to
give each organization its own /48 and each network, however small (even
a dial-up user), its own /64. So the blocks are always the same size,
which helps. Don't underestimate the huge address space of IPv6: even if
you're multihoming within each TLA (which costs 16 bits), a /48 per
multihomer leaves room for 4 billion multihomers in each TLA.
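
To illustrate with a quick back-of-the-envelope check in Python (just a
sketch; the 16 bits are the 3-bit format prefix plus the 13-bit TLA ID
of the aggregatable unicast format):

    tla_bits = 16                 # format prefix (3) + TLA ID (13)
    site_prefix_len = 48          # every organization gets a /48
    sites_per_tla = 2 ** (site_prefix_len - tla_bits)
    print(sites_per_tla)          # 4294967296: ~4 billion /48s per TLA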

> >In IPv4, things are different, of course. Still, a single host could have
> >several addresses if this serves an important function, I think.

> It does, but would 2 or 3 different upstreams each provide a block of
> addresses large enough for a web server farm? At what point is the
> consumption of addresses considered more expensive than handing the site an
> AS number, and allowing them to advertise their blocks?

As it is now, the only way to be sure you won't be filtered is to
announce at least a /20. A /20 holds 4096 addresses, so at three
addresses per server that works out to roughly 1300 triple-homed web
servers...

[Load balancing with NAT]

> One of my concerns is that these boxes (available today) will look more
> attractive to many people than a migration to IPv6. Indeed, if a site can
> get multiple providers to supply enough address space, then for many cases
> this is an efficient and cost-effective way to get into multihoming.
> Certainly such boxes could be made to play in the IPv6 space as well, or do
> 6to4 in the course of their work.

Essentially you're just making renumbering easier with these boxes: they
don't help much when a line goes down. I wouldn't call this multihoming.
I'm certainly not a big fan of NAT, but it does help preserve address
space in IPv4. Load balancing using NAT is an even bigger hack than
regular NAT and has little to do with multihoming: you're trying to
balance the load over the servers, not over the lines.
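
As a minimal sketch of what such a box does (in Python, with made-up
addresses): each new inbound connection to the public address gets
rewritten to one of the real servers, round robin. This balances the
servers nicely, but if the line carrying 192.0.2.1 dies, the whole farm
is unreachable:

    import itertools

    public_ip = '192.0.2.1'                  # the one advertised address
    servers = itertools.cycle(['10.0.0.11',  # real servers behind the
                               '10.0.0.12',  # NAT box (made-up addresses)
                               '10.0.0.13'])

    def translate(dst_ip):
        # rewrite the destination of a new inbound connection
        return next(servers) if dst_ip == public_ip else dst_ip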

> >I do not propose to store data that needs to be changed often in the DNS.

> Unfortunately, one of the things this type of multihoming lends itself to
> is load balancing manipulation. Altering the order of the addresses in
> responses (i.e. doing something other than round robin) is a pretty
> effective way of altering balancing. Setting TTLs very low makes this
> possible. If we go down this path, people WILL be setting their TTLs very
> low.

Only if we depend on the DNS to deliver nothing but the "best" address.
What I propose is that we use the DNS to deliver all addresses, and find
out separately which ones work and which don't. This way, there is no
harm in keeping a broken address in the DNS, and the TTL doesn't have to
be especially low.
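
A sketch of what I mean in Python (the host name and port would be
whatever the application uses): the client asks the DNS for all the
addresses and simply tries them in turn, so a dead address costs at most
a timeout:

    import socket

    def connect_any(host, port):
        err = None
        for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
                host, port, type=socket.SOCK_STREAM):
            s = socket.socket(family, socktype, proto)
            s.settimeout(5)
            try:
                s.connect(sockaddr)          # first working address wins
                return s
            except OSError as e:
                err = e
                s.close()
        raise err or OSError('no addresses for ' + host)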

But even if people use low TTLs for things like load balancing, that's
not necessarily evil. Only the A records for the server in question need
to have a low TTL; the rest of the DNS tree can have a regular TTL. This
means the resolving name server can contact the destination name server
directly, so only the endpoints experience the higher load. Presumably,
the resources needed for a single DNS query/reply will be dwarfed by the
subsequent "real" communication that takes place.

> Some will argue for ignoring the TTLs and caching longer, but that'll
> create significant problems for many applications, and likely increase
> support costs in the long run.

I don't see the problem... Why would a low TTL for "leaf" records be such
a big problem that people would want to break the protocol? Obviously we
want high TTLs for NS records and such.

From another message:

> Be a bit careful here, or you fall into a trap that some of the DNS load
> balancer vendors fall into. You care about the distance/latency between
> the RESOLVER and the target machine, NOT between the RESOLVER's Name Server
> and the target's name server. Several load balancer products ensure the
> latter case is not a problem; however, they do not address the issue of
> proximity of the name server doing the lookup on behalf of the
> user/service/whatever.

By "resolver" do you mean the resolver library on the host that wants to
set up a session?

The latency between a host and the caching name server it uses must be as
small as possible, since applications such as the web query the caching
name server a lot, and the user is usually waiting during that time.
Also, in many cases this information is not cached locally on the host,
so the same information may be requested many times.

> The recursive name servers used when making requests are NOT required to
> be topologically close to the machine requesting the lookup.

They are in my book.  :-)

> I bring this up just so we don't go down any paths that could lead to
> bad assumptions about DNS resolution as a cure-all.

Yes, we should be careful with the domain name system. On the other hand,
we don't want to prematurely close off paths that could lead to the
multihoming Valhalla...

Iljitsch