
Re: [RRG] Elegance and the rejection of SHIM6 host-based multihoming



On 24 Sep 2008, at 5:24, Robin Whittle wrote:

But how could an application protocol know it is running over SHIM6
and therefore decide it doesn't need to use keepalives?

Applications don't need keepalives.

I am confused by this but it is probably not worth pursuing.

:-)

Life is easier if you just go with the flow and don't try to control everything. As an app, when you have data, just send it: either it will get there or it won't. Sending keepalives doesn't make the former more likely, nor does it accurately predict the latter.

With Ivip, there would be no such delay or extra traffic for testing
connectivity etc., because the end-user-provided multihoming
monitoring system would already have detected the problem and fixed
it by changing the mapping.

How does Ivip know I have an IMAP session that has been idle for 20 minutes?

I still prefer the idea of the end-user having their own,
potentially highly customised, outage detection system with their
own preferred decision making system to restore connectivity with a
single action

Right, that's exactly what REAP does.  :-)

Yes, that is correct for shim6 as it's currently defined. But you could easily (ok, maybe not so easily) add a mapping system to shim6, and then it COULD support all of this.

Then it wouldn't so much resemble SHIM6 as Ivip's option for ITR
functions in the sending hosts!

The shim6 control messages could be reused with very few changes.

Get over it, this is a result of choices made in the IPv6 design a long
time ago, regardless of the presence of shim6.

But those choices only make sense if SHIM6 is widely adopted.

Many hosts that run IPv6 will also run some kind of IPv4, so that means that they'll have at least an IPv6 link local address, a global IPv6 address and an IPv4 address. So apps must be prepared to cycle through all addresses and know which one to use for referrals anyway, regardless of whether the host has multiple global IPv6 addresses.
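The cycling described above can be sketched as follows. This is a minimal, illustrative example (the function name and timeout are my own, not anything from the thread): the application simply walks the list that getaddrinfo() returns, which may mix IPv6 and IPv4 addresses, and uses the first one that connects.

```python
import socket

def connect_any(host, port, timeout=5):
    """Try every address getaddrinfo() returns for host (IPv6 and/or
    IPv4, in whatever order the resolver prefers) until one connects.
    Returns a connected socket, or raises the last error seen."""
    last_err = None
    for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            s = socket.socket(family, socktype, proto)
            s.settimeout(timeout)
            s.connect(sockaddr)
            return s
        except OSError as err:
            last_err = err
    raise last_err or OSError("no addresses found for %s" % host)
```

A dual-stack host resolving a name like "localhost" may get ::1 first; if nothing is listening there, the loop falls through to 127.0.0.1, which is exactly the cycling behavior apps already need.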

rather than giving up on this and expecting hosts to
maintain reliable communications with other hosts in an environment
of multiple IP addresses, any one of which could become unusable
without prior notice.

This is current practice today because DHCP servers may give you a different IPv4 address at any time. When that happens applications like Apple's Mail simply reconnect to the server without interrupting the user's work.
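The "silent reconnect" pattern such applications use might look like the following sketch (the class and its retry parameter are hypothetical, for illustration only): when a send fails because the underlying connection died, for instance after an address change, the client redials instead of surfacing the error to the user.

```python
import socket

class ReconnectingClient:
    """Illustrative client that transparently re-establishes its TCP
    connection when a send fails, e.g. after the local address changed."""

    def __init__(self, host, port):
        self.host, self.port = host, port
        self.sock = socket.create_connection((host, port))

    def send(self, data, retries=1):
        for attempt in range(retries + 1):
            try:
                self.sock.sendall(data)
                return
            except OSError:
                if attempt == retries:
                    raise  # give up; caller sees the error
                self.sock.close()
                # Redial: the server simply sees a fresh connection,
                # and the user's work is not interrupted.
                self.sock = socket.create_connection((self.host, self.port))
```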

Other apps need some kind of manual restart, though.

Since when are referrals reliable in the first place? They're a big, big
mess and the IETF hasn't even considered starting a cleanup effort.

1 - A specifying C to B by way of C's DNS name.

That doesn't happen in practice because home users have no control over their DNS names.

2 - A specifying C to B by way of C's potentially multiple
   addresses, of which A may only know one.

Current protocols are unable to do this.

Both are a lot more complex than using a single IP address - and for
any applications which use a single IP address now, there would have
to be significant application and protocol changes to handle either
of the two options above.

Yes. But like I said, hosts have multiple addresses and/or don't know their own address because they're behind NAT today, so there is no easy way out.

There is plenty of IPv4 address space for the foreseeable future, it's
just not distributed evenly. Redistribution efforts haven't had
spectacular results in the past, and I don't expect them to be successful
enough in the future to continue to meet current demands after we run
out of free IPv4 address space.

Past redistribution attempts, of which I know little,

"Please return unused addresses or at least sign the ARIN contract".

were not
motivated by the increasing financial impetus to gain precious IPv4
space

I believe that freeing up considerable amounts of address space is more expensive than the price the big address users are willing to pay, so if a market is allowed it will only work for small blocks, where there isn't much of an issue in the first place.

so I expect there will be considerable business and
technical innovation in making the most of the increasingly
fragmented and populated IPv4 space,

If the big users can't string together lots of small blocks when they can no longer get big blocks, the fragmentation won't increase beyond what's already happening now.

The most obvious technical solution is a core-edge separation scheme
which can slice and dice address space down to single IP addresses
in a scalable manner: LISP with PTRs or Ivip.

No, the most obvious technical solution is RFC 2460. But apparently obviousness isn't much of a consideration.

I'm fine with having host-level mobility in a map-and-encap scheme as long as there is a hierarchy; I don't want to see 3.7 billion /32s at the top of the addressing hierarchy.

I don't have a clear idea about how many DFZ routes constitute an
unworkable problem.

We cleared the 250k hurdle without too many problems, so I guess we'll be OK until around 500k, if the growth doesn't happen too fast.

But big ISPs carry lots of internal prefixes as well, so they may run into some limit at any time and start filtering longer prefixes, like Sprint did 10 years ago.

Probably there is no clear point where things
get unacceptably bad

Because BGP is a real-time system: when you do X and it breaks things, you quickly undo X and it will probably work again.

--
to unsubscribe send a message to rrg-request@psg.com with the
word 'unsubscribe' in a single line as the message text body.
archive: <http://psg.com/lists/rrg/> & ftp://psg.com/pub/lists/rrg