
Re: A new spin on multihoming: multihoming classes.



At 01:11 PM 9/9/01, Iljitsch van Beijnum wrote:
>On Sun, 9 Sep 2001, Daniel Senie wrote:
>
> > >That's one or two bits out of 128. Considering that 16 bits are wasted in
> > >the IEEE -> EUI-64 conversion, I don't see the problem in IPv6.
>
> > Though the use of MAC address is now an optional thing, due to privacy
> > concerns. Yes, there's a large block of addresses. The concern is that we
> > not give away too many bits for one thing or another. We also have the
> > issue that all providers to a site will need to supply prefixes that
> > essentially match (i.e. same sizes from each, so that things overlay
> > cleanly). The RIRs would need to adjust their policies to permit this.
>
>I haven't followed the latest developments, but I think the plan is to
>give each organization its own /48 and each network, however small (even a
>dial-up user) its own /64. So the blocks are always the same size, that
>helps. Don't underestimate the huge address space of IPv6: even if you're
>multihoming within each TLA (costing 16 bits) with a /48 there is room for
>4 billion multihomers.
>
> > >In IPv4, things are different, of course. Still, a single host could have
> > >several addresses if this serves an important function, I think.
>
> > It does, but would 2 or 3 different upstreams each provide a block of
> > addresses large enough for a web server farm? At what point is the
> > consumption of addresses considered more expensive than handing the site an
> > AS number, and allowing them to advertise their blocks?
>
>As it is now, you can only be sure that you won't be filtered with a /20.
>That means 1300 triple-homed web servers...

No, you misunderstand.

With a multi-port NAT setup, you use ISP-provided space from each provider. 
There's no BGP, no ASN, no advertisements, and no filtering, because you're 
within someone's large aggregate block on each of the links.
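
To make the idea concrete, here's a rough sketch in Python of the
link-selection logic such a box performs. Everything here is invented
for illustration: the link names and prefixes are made up (documentation
address space), and a real product does this per-packet in its
forwarding path, not on Python dicts.

    # Toy model of multi-provider NAT link selection. Provider names,
    # prefixes, and the health flags are all hypothetical.

    import random

    # One ISP-assigned block per upstream link (documentation space).
    PROVIDER_POOLS = {
        "link-a": "198.51.100.0/28",
        "link-b": "203.0.113.0/28",
    }

    link_up = {"link-a": True, "link-b": True}

    def pick_link():
        """Choose a working upstream for a NEW outbound connection."""
        healthy = [name for name, up in link_up.items() if up]
        if not healthy:
            raise RuntimeError("all upstream links are down")
        return random.choice(healthy)

    def translate(conn):
        """Bind a new connection to a link; its public source address
        comes from that provider's own pool, so no BGP is involved."""
        link = pick_link()
        conn["link"] = link
        conn["public_pool"] = PROVIDER_POOLS[link]
        return conn

    if __name__ == "__main__":
        link_up["link-a"] = False      # simulate a line outage
        print(translate({"dst": "192.0.2.80", "dport": 80}))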


>[Load balancing with NAT]
>
> > One of my concerns is that these boxes (available today) will look more
> > attractive to many people than a migration to IPv6. Indeed, if a site can
> > get multiple providers to supply enough address space, then for many cases
> > this is an efficient and cost-effective way to get into multihoming.
> > Certainly such boxes could be made to play in the IPv6 space as well, or do
> > 6to4 in the course of their work.
>
>Essentially you're just making renumbering easier with these boxes: they
>don't help much when a line goes down. I wouldn't call this multihoming.

Wrong. These boxes work QUITE WELL in the event of a line outage. Outbound 
connections from the site that were in progress over the dead link obviously 
die, but new connections will go out only over the functioning links. With 
reasonable DNS-based load balancing and short TTLs, inbound connectivity 
will be relatively unaffected as well.
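
Here's a minimal sketch of the inbound side, purely illustrative: the
addresses are documentation space and the health check is stubbed out,
where a real box would tie this into its link monitoring.

    # Illustrative only: answer with A records only for links believed
    # up, with a short TTL so clients re-ask soon after an outage.

    SITE_ADDRESSES = {                 # one address per upstream (made up)
        "link-a": "198.51.100.10",
        "link-b": "203.0.113.10",
    }
    TTL = 60                           # seconds; short, so failover is fast

    def healthy_links():
        # Stub: a real implementation would probe upstream gateways,
        # watch interface state, and so on.
        return ["link-b"]              # pretend link-a just died

    def answer_query(name):
        """Build the A records for an inbound lookup of `name`."""
        return [(name, TTL, "A", SITE_ADDRESSES[l]) for l in healthy_links()]

    print(answer_query("www.example.com"))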

>I'm certainly not a big fan of NAT, but it does help preserve address
>space in IPv4. Load balancing using NAT is an even bigger hack than
>regular NAT and has little to do with multihoming: you're trying to
>balance the load over the servers, not over the lines.

I'm not a big fan of NAT either, despite having implemented it and done a 
fair bit of writing about it, but it CAN be useful, and this "multihoming" 
approach using NAT is fairly workable. Products that do this are already 
on the market.


> > >I do not propose to store data that needs to be changed often in the DNS.
>
> > Unfortunately, one of the things this type of multihoming lends itself to
> > is load balancing manipulation. Altering the order of the addresses in
> > responses (i.e. doing something other than round robin) is a pretty
> > effective way of shifting the balance. Setting TTLs very low makes this
> > possible. If we go down this path, people WILL be setting their TTLs very
> > low.
>
>If we depend on the DNS to only deliver the "best" address. What I propose
>is that we use the DNS to deliver all addresses and find out which ones
>work and which don't separately. This way, there is no harm in keeping a
>broken address in the DNS and the TTL doesn't have to be especially low.

Ah, but if you give out more than one address, specifically manipulate the 
order of the entries given out, and keep the TTLs low, it's possible to 
get a degree of load balancing at the same time.
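
A toy sketch of that ordering manipulation (the weights are invented;
it works because many resolvers and applications try addresses in the
order received):

    # Return ALL the addresses, but bias which one is listed first.
    # Addresses and weights below are made up for illustration.

    import random

    ADDRESSES = ["198.51.100.10", "203.0.113.10", "192.0.2.10"]
    WEIGHTS   = [5, 3, 1]              # e.g. proportional to link capacity

    def ordered_answer():
        """Weighted shuffle: full record set, load-biased ordering."""
        remaining = list(zip(ADDRESSES, WEIGHTS))
        out = []
        while remaining:
            weights = [w for _, w in remaining]
            i = random.choices(range(len(remaining)), weights=weights)[0]
            out.append(remaining.pop(i)[0])
        return out

    print(ordered_answer())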


>But even if people use low TTLs for things like load balancing, that's not
>necessarily evil. Only the A records for the server in question need to
>have a low TTL, the rest of the DNS tree can have a regular TTL. This means
>the resolving name server can directly contact the destination name
>server, so only the endpoints experience the higher load. Presumably, the
>resources needed for a single DNS query/reply will be dwarfed by the
>subsequent "real" communication that takes place.

Assuming the DNS queries and the actual traffic are headed to the same 
neighborhood, which admittedly is the most common case.


> > Some will argue for ignoring the TTLs and caching longer, but that'll
> > create significant problems for many applications, and likely increase
> > support costs in the long run.
>
>I don't see the problem... Why would a low TTL for "leaf" records be such
>a big problem that people will want to break protocols? Obviously we want
>high TTLs for NS records and such.

Agreed, people shouldn't do these things, but some appear to. Some 
applications cache DNS data themselves, for example, which is a concern. 
Such caching should be limited to the TTLs specified, but will applications 
bother? A better resolver API would probably be a good thing.
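
As a sketch of what "limited to the timeouts specified" could look
like in an application, consider the following. Note the assumption:
the standard getaddrinfo() interface doesn't expose the record's TTL
(exactly why a better resolver API would help), so the TTL below is a
made-up constant.

    # TTL-aware lookup cache, instead of caching results forever.
    # ASSUMED_TTL is a placeholder; a better resolver API would hand
    # back the real TTL from the DNS response.

    import socket
    import time

    _cache = {}                        # name -> (expires_at, addresses)
    ASSUMED_TTL = 60                   # seconds; assumption, see above

    def lookup(name, port=80):
        now = time.monotonic()
        hit = _cache.get(name)
        if hit and hit[0] > now:
            return hit[1]              # still fresh, reuse it
        infos = socket.getaddrinfo(name, port, proto=socket.IPPROTO_TCP)
        addrs = [ai[4][0] for ai in infos]
        _cache[name] = (now + ASSUMED_TTL, addrs)
        return addrs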


>From another message:
>
> > Be a bit careful here, or you fall into a trap that some of the DNS load
> > balancer vendors fall into. You care about the distance/latency between
> > the RESOLVER and the target machine, NOT between the RESOLVER's Name Server
> > and the target's name server. Several load balancer products ensure the
> > latter case is not a problem, however, they do not address the issue of
> > proximity of the name server doing the lookup on behalf of the
> > user/service/whatever.
>
>By resolver you mean the resolver library on the host that wants to set up
>a session?

Yes.


>The latency between a host and the caching name server it uses must be as
>small as possible, since applications such as WWW need to get at the
>caching name server a lot, and usually the user is waiting during that
>time. Also, in many cases this information is not cached locally, so the
>same information may be requested many times.

Web browsers cache DNS info for at least short periods. If they didn't, 
browsing over a dialup would probably be painfully slow. There are 
instances where the name servers will be far away. While having the name 
server close speeds things up for the user or server making the request, 
having that topological closeness ALSO be used to try to determine the 
best site for load balancing and other such games is a concern.


> > The recursive name servers used when making requests are NOT required to
> > be topologically close to the machine requesting the lookup.
>
>They are in my book.  :-)

Hope you don't have to use VPNs much. That's one of several cases where the 
name servers can wind up being VERY far away.


> > I bring this up just so we don't go down any paths that could lead to
> > bad assumptions about DNS resolution as a cure-all.
>
>Yes, we should be careful with the domain name system. On the other hand,
>we don't want to prematurely close off paths that could lead to the
>multihoming Valhalla...

My concern is that we not prematurely choose paths that may lead to other 
troubles. DNS has been our kitchen sink for a long time, and while it has 
held up well, some of the ways it's being used are problematic, and the 
situation could get worse. Let's consider these approaches, but carefully.


-----------------------------------------------------------------
Daniel Senie                                        dts@senie.com
Amaranth Networks Inc.                    http://www.amaranth.com