
equicast



Just a thought - IPv6 already allows for several kinds of multicast. Could an "equicast" (equitable load-balancing anycast - exactly one of the listeners responds to each pending message in turn, when available) be a simple solution to load balancing? An analogy would be the one long queue in the airport served by many agents. (Obviously I'm just waving my hands here; there would be some knotty issues, like who maintains the queue of packets, how the servers coordinate saying "next?", and what to do when no server is available after a timeout. Definitely sounds like an application rather than an IP solution.)

I don't want to trigger a debate on this, it is a good candidate for halfbakery.com, so just take it as food for thought rather than a chance to shoot holes in Swiss cheese. But if anyone is aware of serious work on this kind of approach, I'd be curious.
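To make the hand-waving slightly more concrete, here is a toy, single-process Python sketch of the "one queue, many agents" dispatch idea. The class, method names and timeout handling are all made up for illustration; it says nothing about how any of this would actually be signalled at the IP layer.

import collections
import time

class EquicastDispatcher:
    """Toy model of the 'one queue, many agents' idea: each message is
    handed to exactly one listener, in arrival order, as listeners say
    they are available. Single-process and not thread-safe."""

    def __init__(self, timeout=5.0):
        self.pending = collections.deque()   # shared queue of (message, arrival time)
        self.ready = collections.deque()     # listeners that have said "next?"
        self.timeout = timeout               # give up if nobody claims a message

    def publish(self, message):
        """A sender drops a message into the shared queue."""
        self.pending.append((message, time.monotonic()))
        self._drain()

    def next_please(self, listener):
        """A listener (a callable) signals it is free for the next message."""
        self.ready.append(listener)
        self._drain()

    def _drain(self):
        # Pair each pending message with exactly one ready listener, in turn.
        while self.pending and self.ready:
            message, _arrived = self.pending.popleft()
            self.ready.popleft()(message)
        # Drop messages nobody has claimed within the timeout.
        now = time.monotonic()
        while self.pending and now - self.pending[0][1] > self.timeout:
            self.pending.popleft()            # or report failure to the sender

Servers would call next_please() with a callback each time they are free; senders just publish(). The knotty issues above (who owns the queue, how "next?" is coordinated between real machines) are exactly the parts this toy glosses over.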

Ed J.

Mark Smith wrote:
On Mon, 16 Apr 2007 15:55:16 -0700
james woodyatt <jhw@apple.com> wrote:

On Apr 16, 2007, at 13:46, Mark Smith wrote:
On Mon, 16 Apr 2007 11:37:25 -0700, james woodyatt <jhw@apple.com> wrote:
Yes.  I have to do this to make application layer gateways (ALGs) for
IPv6 fully transparent.
They won't be "fully transparent". The devices on the public side of
the NAT will be able to detect them, or conversely, will break because
those devices or applications assume (quite reasonably, because that's
the (IPv6) Internet architecture) that there is a one-to-one mapping
between a network layer device and an address.
p1. The mapping between nodes and addresses is already not one-to-one.


Which is really one of the fundamental reasons why IPv6 exists. Mapping
between nodes and addresses isn't one-to-one anymore because of NAT.

The network layers of the Internet (and of DECnet, AppleTalk, IPX, CLNS,
etc.) were all designed with the basic property that every node attached
to the same network has a unique, individual address.

The key property that a unique, individual address gives a device that
is a "member" of a network layer is the equal ability to send packets
to, and receive packets from, any of the other members of the network.
The true nature of network layers is peer-to-peer. Whether the target
node decides to process or ignore a received packet is up to that
individual, uniquely addressed target node.

It is only the application architecture that defines whether a node is
performing a "client" or "server" role. Even then, that isn't a network
layer node classification - if I run a web server and a web client on
the same node, as separate processes, you can't point to that box and
say "it's a server" or "it's a client".

The most common form of NAT, port translation, takes away the peer
property of network layer devices, and that is why time and resources
have had to be spent developing NAT workaround mechanisms such as STUN
for applications whose best-suited architecture is peer-to-peer. It
would be better if the time and resources spent developing and
implementing mechanisms such as STUN could have gone into adding
functionality to the application, or debugging it further, rather than
into working around a limitation in the network layer that wasn't
originally there or intended to be there.

Most applications, up until the predominance of NAT, also made the
assumption that a network layer address uniquely identified a member of
the network layer. 1-to-1 NAT breaks that, and so it breaks those
applications. Again, time has to be spent implementing workarounds in
applications for problems that are caused by a limitation in the network
layer. And again, that time and those resources could be better spent
on other things.

IOW, the IPv4+NAT network layer is broken, but people are being forced
to work around those faults in the applications.
Fixing the problem properly means fixing it where it is occurring. IPv6,
by restoring network layer addressing uniqueness, fixes the IPv4+NAT
problem where it's occurring - at the network layer.

p2. I don't see the NAT required for IPv6 ALGs as breaking end-to-end addressability.


If the source or destination addresses are changed within the network,
rather than being preserved fully end-to-end (or network edge to network
edge - where the traffic sinks and sources exist), and that is what NAT
does, then NAT is breaking end-to-end addressability.

Whether network policy permits application reachability is independent of whether applications are addressable. I think NAT only needs to be used to redirect application flows between middleboxes, not between application endpoints in separate addressing realms.


This sounds a bit to me as though the network layer is the wrong layer
in which to try to solve this problem. The middleboxes I'm guessing
you're talking about are things like application load balancers. It
would seem to me that the better way to solve high-end application
performance and availability is to build things such as CPU/memory
sharing into the end node operating system. Technologies such as
mainframes, or commodity hardware with an OS that supports a "single
system image" across multiple machines, would be a better fit for this
sort of problem.

Having had to deal with the problems that "transparent" ALG devices
cause at the network layer, I don't want to see them again. e.g.
traceroute doesn't show the path the traffic is actually taking, they
create performance bottlenecks, you have to traffic engineer certain
traffic to always pass a certain point in the network so that the ALG
gets a look at it, [...].
If stateful packet filters are widely deployed in residential IPv6 gateways and turned on in the factory default mode, I can only see one way to make some of those problems avoidable: by defining something as a bump-in-the-stack that uses a yet-to-be-defined ICMP subprotocol for signaling application listeners to the packet filters in the path, and rolling that out everywhere in the Internet instead.
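Purely to give a flavour of what such a yet-to-be-defined subprotocol might carry, here is a hypothetical "listener advertisement" sketched in Python; every type code and field in it is invented for illustration, not anything that has been specified anywhere.

import struct

# Hypothetical "listener advertisement": an end node tells on-path packet
# filters that it is listening on a given transport protocol and port.
# The ICMPv6 type/code values below are invented placeholders, not IANA
# assignments, and the body layout is equally made up.
HYPOTHETICAL_ICMP6_TYPE = 200   # placeholder in the experimental range
HYPOTHETICAL_ICMP6_CODE = 0

def build_listener_advertisement(next_header, port, lifetime_s):
    """Pack the body that would follow the ICMPv6 type/code/checksum header.

    next_header -- transport protocol number (6 = TCP, 17 = UDP)
    port        -- port the application is listening on
    lifetime_s  -- how long the pinhole should stay open, in seconds
    """
    # invented layout: next-header (1), reserved (1), port (2), lifetime (4)
    return struct.pack("!BBHI", next_header, 0, port, lifetime_s)

def parse_listener_advertisement(body):
    next_header, _reserved, port, lifetime_s = struct.unpack("!BBHI", body)
    return next_header, port, lifetime_s

# e.g. a node advertising a TCP listener on port 5060 for ten minutes:
#   build_listener_advertisement(6, 5060, 600)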


I think a better way to solve this problem is to have firewalls on each
node, rather than within the network. You don't need to worry about
opening pin-holes in an upstream device if the only thing that "doesn't
have the pin-hole" is the node itself.

Have a read of Steve Bellovin's paper, "Distributed Firewalls", which
details the idea of end-node firewalling. It's the place I first came
across the idea.

http://www.cs.columbia.edu/~smb/papers/distfw.html

Since I read that paper a number of years ago, an interesting thing has
happened - all major end-node OSes now come out of the box with
firewalls installed and enabled.

I think a good way to think about end-node firewalling is to use the
truism, "if you want something done properly, you need to do it
yourself". Nodes that are attached to networks can't trust that there
is going to be an upstream device that will protect them; the only way
they can get protection is to do it themselves. Once they're doing it
themselves, there isn't any need to develop any PMP protocols, because
there doesn't need to be an upstream protection device. Having per-end-node
firewalls also makes all the end-nodes more resilient - if a peer on
the subnet is compromised, exploited or attacked, that doesn't in any
way weaken the other nodes' protections. The same certainly can't be
said about upstream, network-located firewalls.
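For what it's worth, the state a per-node stateful filter needs to keep is small. Here is a minimal Python sketch of the decision logic, assuming a simple tuple-based state table; the class and method names are illustrative, not any particular OS firewall:

# Outbound traffic creates state; inbound packets are only accepted if they
# match that state or a service the node has deliberately exposed.

class HostFirewall:
    def __init__(self, open_services=None):
        self.open_services = set(open_services or [])   # (proto, local port) pairs
        self.state = set()                               # flows this node initiated

    def outbound(self, proto, local_port, remote_addr, remote_port):
        """Record state for a flow the node itself initiated."""
        self.state.add((proto, local_port, remote_addr, remote_port))

    def inbound_allowed(self, proto, remote_addr, remote_port, local_port):
        """Accept replies to our own flows, or traffic to exposed services."""
        if (proto, local_port, remote_addr, remote_port) in self.state:
            return True
        return (proto, local_port) in self.open_services

# fw = HostFirewall(open_services={("tcp", 22)})
# fw.outbound("tcp", 51515, "2001:db8::1", 80)
# fw.inbound_allowed("tcp", "2001:db8::1", 80, 51515)    # True: reply to our flow
# fw.inbound_allowed("tcp", "2001:db8::2", 12345, 80)    # False: unsolicited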

Personally, I think this alternative makes better sense over the very long term, but my experience is that the very long term always comes at a lower priority than the end of the current financial quarter.

[...] they create problems with remote websites and
applications that very reasonably assume a one-to-one mapping between
an IP address and a user.
Yeah, but there are security considerations there. RFC 3041 is an effort to address some of those, and not in a way that will make it *easier* for remote websites to continue assuming a one-to-one mapping between IP addresses and users. This was always a dumb assumption, and it will only get dumber with IPv6, no matter what we do to resolve the problems I'm trying to highlight here.
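For context, RFC 3041 temporary addresses boil down to periodically generating randomized interface identifiers instead of deriving them from the MAC address. A heavily simplified Python sketch (the RFC actually derives successive identifiers from an MD5-based history value rather than fresh randomness, and the prefix below is just the documentation prefix):

import ipaddress
import os

def temporary_address(prefix):
    """Form an address from a /64 prefix and a randomized interface identifier."""
    iid = bytearray(os.urandom(8))
    iid[0] &= 0xFD    # clear the universal/local bit: locally generated identifier
    return ipaddress.IPv6Address(
        int(prefix.network_address) | int.from_bytes(iid, "big"))

# temporary_address(ipaddress.IPv6Network("2001:db8:1:2::/64"))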


I completely disagree. As I mentioned before, every network layer
protocol ever designed has had the per-node uniqueness property.
Uniqueness of addressing is a requirement for unicast communication to
take place with any chance of success.

Can you describe some of those interactions? As far as I'm aware, RTSP, like other application protocols above the transport layer, just uses IP
as a dumb packet transport between specified IP addresses.
Sure. Both the RTSP client and server write IP addresses and UDP port numbers for the RTP and RTCP flows into the RTSP headers. The stateful packet filter that blocks incoming flows will need to inspect the SETUP method and search for the "Transport:" headers, parse them for the source, destination, client_port and server_port attributes, and open/close pinholes for them as needed.
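A rough Python sketch of just the header-parsing part of that follows; the attribute names come from the RTSP Transport header syntax, everything else is invented, and a real ALG also has to track the RTSP session and tear the pinholes down again:

import re

# Example Transport header from an RTSP SETUP exchange:
#   Transport: RTP/AVP;unicast;client_port=4588-4589;server_port=6256-6257
_PORT_RANGE = re.compile(r"(client_port|server_port)=(\d+)(?:-(\d+))?")
_ADDRESS = re.compile(r"(source|destination)=([^;]+)")

def transport_pinholes(transport_value):
    """Return the addresses and ports a filter/ALG would open pinholes for.

    This only parses the header value; a real ALG also tracks the RTSP
    session, rewrites addresses when NAT is involved, and closes the
    pinholes again on TEARDOWN or timeout.
    """
    ports = []
    for name, low, high in _PORT_RANGE.findall(transport_value):
        ports.append((name, int(low)))                              # RTP port
        ports.append((name, int(high) if high else int(low) + 1))   # RTCP port
    addresses = dict(_ADDRESS.findall(transport_value))
    return addresses, ports

# transport_pinholes("RTP/AVP;unicast;client_port=4588-4589;server_port=6256-6257")
# -> ({}, [('client_port', 4588), ('client_port', 4589),
#          ('server_port', 6256), ('server_port', 6257)])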

There are other applications, I'm sure, that have similar problems. As noted, ISAKMP/IKE is one of them, unless ESP encapsulation in UDP is used in conjunction with some kind of ICE-like UDP probing scheme to keep the outbound UDP pinhole open while incoming UDP/ESP-encapsulated flows might be received.
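The keepalive half of that is simple enough to sketch; something along these lines, where the port numbers and interval are arbitrary examples rather than anything an implementation is required to use:

import socket
import threading

def keep_pinhole_open(local_port, peer_addr, peer_port, interval_s=20.0):
    """Periodically send a one-byte UDP datagram so a stateful filter (or
    NAT) keeps its mapping for this flow alive, letting inbound
    UDP-encapsulated ESP continue to arrive. Values are examples only."""
    sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    sock.bind(("::", local_port))

    def tick():
        sock.sendto(b"\xff", (peer_addr, peer_port))    # tiny keepalive payload
        threading.Timer(interval_s, tick).start()       # reschedule

    tick()
    return sock

# e.g. keep_pinhole_open(4500, "2001:db8::1", 4500)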

I'm pretty sure that BitTorrent is another such application. Also, I'm given to understand that some VoIP applications may be broken by the stateful packet filtering the IETF recommends unless there is an ALG present for the application.


All these complications disappear once the end-node running the
application is the same node that is taking ownership of its own
protection.
I figure while I'm doing that, I might as well write a general
purpose IPv6 NAT.  I wish I didn't have to do this, but the security
considerations are pushing me into it.
Can you detail those security considerations? I'd find it hard to
believe that NAT is the only security technique (and personally I don't
really consider it to be one anyway) that solves your problem. What
unique property does NAT provide that alternatives (such as stateful
firewalling with public addressing) don't?
I only *NEED* the NAT to redirect flows into transparent ALGs to support the stateful packet filter. (I'd say it's the market that seems unconvinced-- despite the laudable efforts of the authors of the NAP draft-- that IPv6 is sufficient for their needs without general purpose NAT being available, but that's not really my concern.)

I'm considering the task of writing a general purpose IPv6 NAT because:

1) I now have to maintain a full suite of ALGs for both IPv4 and IPv6;

2) I've got a collection of IPv4 ALGs already that depend on NAT to work;

3) I will still have to support IPv4/NAT for the foreseeable future.

Therefore... the easiest way forward for me is the shortest path: extending my general purpose IPv4 NAT to support IPv6. It saddens me to have to do it, but there it is.


The question I ask is: do you have to solve your problems when using
IPv6 in exactly the same way you've had to solve them when using IPv4?
Yes, IPv4 and IPv6 are similar enough that a fair number of IPv4
solutions can be directly moved to IPv6 by just replacing the
addressing. But IPv6 is also different enough that inherent constraints
that IPv4 created, which then limited the possible solutions, may not
exist. As a couple of examples, unique local unicast addressing
wouldn't be possible to implement in IPv4, and neither would HIP.
Regards,
Mark.