
Re: [RRG] NAT-PT and other approaches to IPv6 adoption - Dual Stack Lite



Robin,

On Aug 1, 2008, at 1:48 AM, Robin Whittle wrote:
I don't believe RRG is assuming IPv6-only.  I believe RRG is
assuming dual-stack with (at least) a more scalable IPv6 routing
infrastructure that may also be applicable to IPv4.
Dual-stack doesn't help the IPv4 routing scaling problem at all.

True. It doesn't hurt it either. The protocol being used is a bit orthogonal to whether the routing system scales.

Every end user still needs an IPv4 address.

Well, every NAT would still need an IPv4 address. But this is also a bit orthogonal to whether the routing system scales.

What matters in routing scalability is how addresses are aggregated. You can make IPv4 or IPv6 scale or not depending on how the addresses are allocated and announced to the routing system.
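To make the aggregation point concrete, here is a minimal Python sketch (addresses are made up for illustration): contiguous blocks cut from one provider's aggregate can be summarized into a single announcement, while the same amount of space handed out as unrelated provider-independent blocks cannot be, so each block costs a routing-table slot.

```python
import ipaddress

# Hypothetical example blocks. Four contiguous /24s assigned out of one
# provider's aggregate collapse into a single /22 announcement:
pa = [ipaddress.ip_network(f"10.8.{i}.0/24") for i in range(4)]
print(list(ipaddress.collapse_addresses(pa)))
# -> [IPv4Network('10.8.0.0/22')]  (one routing-table entry)

# The same amount of space assigned as unrelated, provider-independent
# blocks cannot be summarized, so every block stays a separate route:
pi = [ipaddress.ip_network(p) for p in
      ("10.8.0.0/24", "172.20.5.0/24", "192.168.9.0/24", "10.200.1.0/24")]
print(list(ipaddress.collapse_addresses(pi)))  # still four entries
```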

Only when large numbers of end-users are on IPv6-only services -
including perhaps a service which uses special NAT arrangements to
share one IPv4 address with multiple end-users - will IPv6 be able
to help reduce the IPv4 scaling problem.

Not necessarily. You seem to be assuming IPv6 will be allocated in ways that promote aggregation and/or address recipients won't announce more specifics of the blocks they received from their allocators. If (as is the current trend at all RIRs) IPv6 addresses are assigned as "provider independent", we'll have recreated the non-scalable IPv4 swamp. Yay us.

Currently, IPv6 is additive to the size of the routing system; that is, IPv6 routes do not displace IPv4 routes. As such, it exacerbates the routing scalability problem.

[Alain's "dual stack lite"]
This is clearly unsuitable for many customers who make extensive use
of Bittorrent etc.  I assume that P2P applications only work
properly when they can use uPnP IGD to get a public port so they can
accept incoming communications from other such programs.

I haven't been following the P2P stuff particularly closely but my understanding is that there are a common set of conventions that allow P2P applications to bypass NATs. This would be what I would expect: applications will evolve to meet the constraints of the environment they must operate in to be successful. If the preponderance of networks an application will be used in sit behind NAT, then the application will have to cope with NAT or no one will use it.
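As a hedged sketch of one such convention: P2P software commonly classifies the NAT in front of each peer (STUN-style probing does this in practice) and then decides whether UDP hole punching can give a direct path or whether traffic must fall back to a relay. The cone/symmetric taxonomy and the decision rule below are the usual simplification, not a complete treatment.

```python
CONE = "cone"            # same public mapping reused for every destination
SYMMETRIC = "symmetric"  # fresh mapping per destination; hard to predict

def can_hole_punch(nat_a: str, nat_b: str) -> bool:
    """Simplified rule: direct UDP usually works unless both peers sit
    behind symmetric NATs, in which case neither side can guess the
    other's mapping and a relay is needed."""
    return not (nat_a == SYMMETRIC and nat_b == SYMMETRIC)

print(can_hole_punch(CONE, SYMMETRIC))       # one predictable side suffices
print(can_hole_punch(SYMMETRIC, SYMMETRIC))  # relay required
```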

And gee, from Comcast's perspective wouldn't it be just horrible for Comcast (you know, the folks who just got slapped by the FCC because they were trying to stop P2P from sucking up all their upstream bandwidth) if P2P stuff didn't work anymore?

I don't see how such a service could be marketable, since it does
not suit a significant proportion of end-users.  The marketing would
have to be very careful so as not to promise too much.

Not promise too much? This is a type of marketing I'm not familiar with. (:-))

I believe there is a lot of scope for better utilization of IPv4
space, so I think we are a long way from the state of affairs where
Comcast and their competitors can't scrape together enough space to
offer the service that people are used to getting.

Given someone from Comcast is proposing "dual stack lite", I suspect you are underestimating the situation Comcast believes they're facing. For example, while it is very clear that IPv4 address utilization efficiency can be greatly increased, the question is how you incent folks to be more efficient. This gets into religion and questions of address allocation policy that I suspect we don't want to get into here. If you're interested in this sort of thing, I'd recommend ppml@arin.net (and/or seeing a therapist).

My conclusion is that this sort of service is going to be difficult
or impossible to sell as long as customers can choose between it and
a competing service with a unique IPv4 address.

As you know, the free pool of IPv4 address space is projected to run out in 2011 (last I looked). My guess is Comcast (et al.) will change their default allocation to their new customers. The vast majority of customers most likely won't notice. Those that do will likely be given the option of purchasing "SuperDuperPowerBooster(tm)" (or some other marketing term) for an additional (say) $9.95 a month which will grant them a dynamically allocated IPv4 address that can be used for NAT.

The IPv4 Internet has a routing scaling problem which has been
recognised for years.

Since IPv6 currently uses the same routing technology as IPv4, the _Internet_ has a routing scalability problem. The symptoms of that problem are most apparent in IPv4 because that is where the vast majority of prefixes come from, however the benefit of fixing IPv4 routing is merely enabling further penetration of (potentially multi-layer) NAT. And if we're doing that, then what's the point of deploying IPv6?

One attractive arrangement might be for businesses to get a single
IP address, or just a few, and multihome them via DSL and some other
mechanism for backup - such as HFC cable or WiMax.  Then they run
their mailserver, NAT boxes etc. on that one or a few IP addresses.

The only way they will be able to do this scalably, or affordably,
is with a little portion of address space - sliced and diced by
map-encap.

If businesses are sitting behind one or two addresses (e.g., a NAT and a public service machine), then the addresses can be 'provider aggregatable'. Multi-homing of the kind you describe can be done with "Stupid DNS Tricks" without burdening the routing system. You now have an InterNAT that scales to O(2^32) end points.
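A minimal sketch of the client half of such a DNS trick (names and addresses hypothetical): the business publishes one A record per upstream (DSL, cable, ...), and end hosts simply try each published address until one answers. Failover happens entirely in the end systems, with no BGP announcement involved.

```python
import socket

def connect_any(addresses, port, timeout=3.0):
    """Try each provider-assigned address published for the same name,
    one per upstream; the first one that answers wins. No routing-table
    entry is consumed by this kind of multihoming."""
    for addr in addresses:
        try:
            return socket.create_connection((addr, port), timeout=timeout)
        except OSError:
            continue  # that upstream is down; try the next A record
    raise OSError("all upstreams unreachable")
```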

One possible scenario to address this: as IPv4 continues to
get sliced and diced, ISPs will be forced to start deploying more
and more draconian filters in order to keep their routers from
falling over/converging in reasonable time-frames.

Do you mean refusing to accept some subset of the routes they
currently accept?

Yes.

I think this is an unlikely scenario.

We've been here before, circa 1996. Some of us still have the t-shirts. We have an empirical proof that ISPs will do what they feel necessary to protect their own infrastructure, including filtering long prefixes. Some argue that such prefix length filters are unstable, that commercial pressures result in a tendency to remove the prefix length filters. My view is that there is a tension between operators and sales folk. As routers get upgraded, sales folk win battles and prefix length filters are relaxed. As routers fall over, operators win battles and the prefix length filters are put back in.
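For the uninitiated, a prefix length filter of the kind the circa-1996 episode produced is trivially simple. A sketch in Python (the /24 cutoff is the hypothetical policy here, though it matches common practice):

```python
import ipaddress

MAX_PREFIXLEN = 24  # hypothetical operator policy: drop anything longer

def accept(route: str) -> bool:
    """Return True if an announced prefix passes the length filter."""
    return ipaddress.ip_network(route).prefixlen <= MAX_PREFIXLEN

announcements = ["10.8.0.0/22", "10.8.0.0/24", "10.8.0.0/25", "10.8.0.128/26"]
print([r for r in announcements if accept(r)])
# the /25 and /26 more-specifics never make it into the table
```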

There is no way ISPs are going to cut off connectivity to any prefix
which their customers actually want to access.

How many routes do you think a typical end user customer of an ISP like Comcast or NTT needs to see? I figure (with no actual data) that the typical end user accesses a minute fraction of the 250K routes. Yes, a few customers will be unable to reach their pr0n and will whine. If enough customers whine about the same prefix, the ISP will undoubtedly allow the route. If only a few people whine, the ISP can then say to those few customers "pay an additional $10/month and we'll let that route through". This would lead to a natural incentive for folks to figure out ways to not pay the 'long prefix surcharge', including potentially deploying IPv6.

I am sure this scenario would never occur.

:-)

http://markmail.org/message/i6ckia4vhsy5duql

Regards,
-drc


--
to unsubscribe send a message to rrg-request@psg.com with the
word 'unsubscribe' in a single line as the message text body.
archive: <http://psg.com/lists/rrg/> & ftp://psg.com/pub/lists/rrg