
Re: [e2e] End-to-end is a design guideline, not a rigid rule



I'll agree with points made in each of the emails in this thread.

From my perspective, "end to end" includes both the "end-to-end-across-a-single-communication" and the "end-to-end-in-a-disruption-tolerant-manner" models that Dave C mentioned. End to end in email means that when I send a message to the various recipients of end2end-interest, I expect the service in each of the hundreds of resulting cases to be essentially the same: the content of the message will not be changed en route, the envelope of the SMTP message will be updated at each application-layer hop to facilitate problem diagnosis, and delivery will be timely within the service limits of the application, or a response will be sent to me saying that it could not be accomplished. Supporting that, the MUA-MTA, MTA-MTA, and MTA-MUA hops will similarly be handled with minimal effort. One would expect the interaction of MUAs and MTAs across a network of ISPs to be indistinguishable from one in which they all happened to be colocated on a common LAN, apart from the rate and timing side effects of the engineering of the network.

One place where I depart from a common view of the end to end argument is that there are times when it makes sense to actively enquire of the network and expect the network to make a response that characterizes itself. A completely "stupid" network, such as a 3/4" diameter yellow coaxial cable, would not be able to respond, and as I understand Isenberg, that is the way all networks should behave. All intelligence should be in the end system and only in the end system. But (Dave R, tell me if I am wrong) Saltzer/Reed doesn't seem to suggest that. The point of the original end-to-end argument was not that intelligence should reside only in the end station or only in the application; it was that a lower layer should not do something that also had to be done at a higher layer without a good justification. An example, often repeated, is that LAPB go-back-N retransmission is redundant in the presence of TCP or application retransmission, and that it measurably resulted in packet duplication around bit errors. That said, 802.11 also has retransmission, and if it didn't, behavior on wireless LANs would be a lot worse than it is. Hence, we retransmit in TCP in the general case, but 802.11 presents a case where link layer retransmission is still justified. This understanding of the end-to-end principle would seem to suggest that interactions with the network that inform the intelligent edge and enable it to make better decisions are within the principle's scope. I view both the integrated services and the differentiated services architectures in that light - one doesn't want the network to subvert the intent of the intelligent edge, but interactions that enable it to better achieve its intent are good.

And then, what is subversion? It is pretty common to put in what amounts to a network honeypot, in which one of the addresses in a prefix is routed down a tunnel to a collector. In the event that anyone sends something to the address, the collector picks it up, and management remediation actions follow. Is this "subversion of routing"? I would argue that it is "routing", but is not "subversion". Ditto the case where a system comes under attack and network ops staff reroutes the address through the same kind of tunnel. That certainly subverts the attack, and makes the targeted system unavailable for a period of time until the attack can be interdicted. But for any legitimate use of the targeted system, it's hard to describe as subversion; it's part of the process of restoration.
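The honeypot case above is ordinary routing mechanics: within an advertised prefix, one host address is simply given a more specific route down a tunnel to a collector, and longest-prefix match does the rest. A minimal sketch (prefixes and interface names are invented for illustration):

```python
# Sketch of honeypot routing: a /32 host route inside a normal /24 prefix
# sends traffic for one address to a collector tunnel. This is plain
# longest-prefix-match forwarding -- "routing", not "subversion".
import ipaddress

routes = [
    (ipaddress.ip_network("192.0.2.0/24"), "eth0"),            # normal prefix
    (ipaddress.ip_network("192.0.2.66/32"), "tun-collector"),  # honeypot host route
]

def next_hop(dst: str) -> str:
    """Return the outgoing interface for dst using longest-prefix match."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, ifc) for net, ifc in routes if addr in net]
    # the most specific (longest) matching prefix wins
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("192.0.2.10"))   # eth0
print(next_hop("192.0.2.66"))   # tun-collector
```

Rerouting an address under attack through the same kind of tunnel is the identical mechanism, just installed by the operations staff instead of in advance.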

As to NATs and such - to my way of thinking, a NAT is two things in one. It is a stateful firewall (it maps active authorized address/port pairs, creates such a mapping if the traffic originates from "inside", and blocks communication from the "outside" if the mapping doesn't exist), which, if one thinks having skin on the human body is good for its health, one has to consider a reasonable prophylactic protection. To the extent that applications and protocols above the network layer know something about network-layer addresses, NATs also create difficulties in deploying such applications. In that sense, a NAT is a man-in-the-middle attack, something that makes life difficult for the application. I'm all for good firewalls; the end to end model doesn't speak highly of things that break application behavior, however.
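The stateful-firewall half of that description can be sketched in a few lines: outbound flows create address/port mappings, and inbound packets are admitted only when a mapping already exists. This is an illustrative toy, not any particular NAT implementation; the class name, port pool, and addresses are invented.

```python
# Toy sketch of a NAT acting as a stateful firewall: "inside" traffic
# creates state; unsolicited "outside" traffic finds no mapping and is blocked.
import itertools

class Nat:
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self._ports = itertools.count(40000)  # hypothetical external port pool
        self.out_map = {}  # (src_ip, src_port, dst_ip, dst_port) -> external port
        self.in_map = {}   # external port -> (inside_ip, inside_port)

    def outbound(self, src_ip, src_port, dst_ip, dst_port):
        """Translate an 'inside' packet, creating a mapping on first use."""
        key = (src_ip, src_port, dst_ip, dst_port)
        if key not in self.out_map:
            ext = next(self._ports)
            self.out_map[key] = ext
            self.in_map[ext] = (src_ip, src_port)
        return (self.public_ip, self.out_map[key])

    def inbound(self, dst_port):
        """Admit an 'outside' packet only if a mapping already exists."""
        return self.in_map.get(dst_port)  # None means blocked

nat = Nat("203.0.113.1")
ext_ip, ext_port = nat.outbound("10.0.0.5", 5555, "198.51.100.9", 80)
print(nat.inbound(ext_port))      # mapped: ('10.0.0.5', 5555)
print(nat.inbound(ext_port + 1))  # unsolicited: None (blocked)
```

The man-in-the-middle half is what this sketch leaves out: any application that carries the inside address or port above the network layer sees values the NAT has silently rewritten.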

So, coming back to Dave C's point about our current network architecture not doing very well with hidden boundaries, I would say "you are correct; it doesn't". I don't think that is a failure of the end-to-end principle, however. It may be a failure of our ability to apply it correctly. If all applications were message-based, like email is, one could imagine a firewall acting something like an MTA - terminating the conversation in one domain and then repeating it in another, in a manner entirely consistent with the end-to-end principle as applied to email in its two forms of end-to-end-ness. If all applications were able to be proxied, like SIP, or the various users of SOCKS are, the proxy could literally be the trusted-and-known third party that made the transition happen. If IP were very slightly different, with the AS number carried in the header and listed in the DNS and the routing protocols, and with addresses understood as local to the identified AS, we could assign an AS to every region behind a NAT, and the whole thing would work quite nicely end to end.
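The AS-qualified addressing idea in that last sentence can be reduced to a toy lookup: if endpoint identity were the pair (AS, local address), two NATted regions reusing the same private address would no longer collide. AS numbers, addresses, and host names below are invented purely for illustration.

```python
# Toy illustration: qualifying an address with its AS makes otherwise
# identical "local" addresses globally distinguishable.
endpoints = {
    ("AS64500", "10.0.0.5"): "host-in-region-A",
    ("AS64501", "10.0.0.5"): "host-in-region-B",  # same local address, no clash
}

print(endpoints[("AS64500", "10.0.0.5")])  # host-in-region-A
print(endpoints[("AS64501", "10.0.0.5")])  # host-in-region-B
```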

The problem is not that the architecture and available tools don't handle the concept of separation of domains; it is that our current common implementation of separation of domains involves a man-in-the-middle attack on a subset of the relevant applications and protocols. As Dave R points out, the man-in-the-middle attacks that we build in make the network harder to manage and harder to maintain, and make the applications harder to improve. Fixing the architecture, in my opinion, will involve removing the things that subvert the intent of the end system, which is to say, changing them in the direction of Saltzer/Reed's version of the end-to-end principle.



From: Dave Crocker <dhc2@dcrocker.net>
Date: December 1, 2005 7:04:04 AM PST
To: end2end-interest@postel.org
Subject: [e2e] End-to-end is a design guideline, not a rigid rule
Reply-To: dcrocker@bbiw.net

Folks,

A posting on Farber's IP list finally prompted me to write some thoughts that have been wandering around in the back of my mind. I'm interested in reactions you might have:


"Andrew W. Donoho" wrote:
> The debate about NAT obscures the real issue - that there are legitimate reasons to assert policies for net access at organizational boundaries. Yes, we want the internet architecture to be end to end.


This struck me as a particularly useful summary statement about some core architectural issues at hand: Internet technical discussions tend to lack good architectural constructs for describing operations, administration and management (OA&M) boundaries, and we lack robustness in the "end to end" construct.

The issue of OA&M boundaries has long been present in the Internet. Note the distinction between routing within an Autonomous System and routing between ASs. To carry this a bit further, note that the original Internet had a single core (backbone) service, run by BBN. The creation of NSFNet finally broke this simplistic public routing model and required development of a routing protocol that supported multiple backbones.

As another example, the email DNS MX record, which one finds over the open Internet, is also generally viewed as marking this boundary, and the MTA it names is often called a Boundary MTA. However, the Internet Mail architecture does not make the construct explicit. For a year or so, I have been searching for a term that marks independent, cohesive operational environments, but haven't found one that the community likes. Some folks have suggested a derivation of an old X.400 term: Administrative Management Domain (ADMD).

More generally, I think that this issue of boundaries between islands of cohesive policy -- defining differences in the trust within an island, versus between islands -- is a key point of enhancement to the Internet architecture work that we must focus on. I have found “Tussle in Cyberspace: Defining Tomorrow’s Internet” (Clark, D., Wroclawski, J., Sollins, K., and R. Braden, ACM SIGCOMM, 2002) a particularly cogent starting point for this issue.

On the question of the "end to end" construct, I believe we suffer from viewing it simplistically. What I think our community has missed is that it is a design guideline, not a rigid rule. In fact, with a layered architecture, the construct varies according to the layer. At the IP level, this is demonstrated in two ways. One is the next IP hop, which might go through many nodes in a layer-2 network, and the other is the source/destination IP addresses, which might go through multiple IP nodes.

The TCP/IP split is the primary example of end-to-end, but it is deceptive. TCP is end-to-end, but only at the TCP layer. The applications that use TCP represent points beyond the supposed end-to-end framework.

My own education on this point came from doing EDI over Email. Of course I always viewed email's author-to-recipient path as "end to end", but along came EDI, which did additional routing at the recipient site. To the EDI world, the entire email service was merely one hop.

This proved enlightening because the point has come up repeatedly: For email, user-level re-routing and forwarding are common, but outside the scope of the generally recognized architecture. I've been working on a document that is trying to fully describe the current Internet Mail architecture:

  <http://bbiw.net/specifications/draft-crocker-email-arch-04.html>

However it is not clear whether it will reach rough consensus.

My own view is that the email concept of end to end has two versions. One is between the posting location and the SMTP RCPT-TO (envelope) address, and the other is between the author and the (final) recipient. Failure to deal with this explicitly in the architecture is proving problematic for email enhancements that deal with transit responsibility, such as SPF or DKIM.
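The two scopes can be made concrete with Python's standard email library: the header addresses the author writes belong to the author-to-recipient scope, while the envelope recipient used in RCPT TO at each posting hop can name a different mailbox entirely (as when a list expands a message). The addresses here are illustrative, not from the draft.

```python
# Sketch of the two email "end to end" scopes: header (author-to-recipient)
# versus SMTP envelope (posting-location-to-RCPT-TO). Addresses are made up.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "author@example.org"   # author scope: what the author wrote
msg["To"] = "list@example.net"       # header recipient the author sees
msg.set_content("hello")

# Envelope scope: after list expansion, each hop's RCPT TO names an actual
# subscriber mailbox, which never appears in the header the author wrote.
envelope_rcpt = "subscriber@example.com"

print(msg["To"])        # list@example.net
print(envelope_rcpt)    # subscriber@example.com
```

SPF and DKIM illustrate the split: SPF checks the envelope scope (MAIL FROM), while DKIM signs the header/body that belongs to the author-to-recipient scope.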

In other words, the Internet technology has never been a pure "end to end" model. Rather, end to end is a way of distinguishing between components that compose an infrastructure versus components that use the infrastructure -- at a particular layer. "End to end" is a way of characterizing a preference to keep the infrastructure as simple as possible.

This does not mean that we are prohibited from putting anything into the infrastructure or changing the boundaries of the infrastructure, merely that we prefer to keep it unchanged. In this light, NATs (and firewalls) are merely a clear demonstration of market demand for facilities that layer the end-to-end model with respect to some operational policies, permitting the addition of a trust boundary between intra-network operations and inter-network operations.

We should not be surprised by this additional requirement nor should we resist it. The primary Internet lesson is about scaling, and this appears to be a rather straightforward example of scaling among very large numbers of independent and diverse operational groups. Growth like this always comes with vast cultural diversity. That means that the basis for trust among the independent groups is more fragile. It needs much more careful definition and enforcement than was required in the kinder and gentler days of a smaller Internet.


d/
--

Dave Crocker
Brandenburg InternetWorking
<http://bbiw.net>


From: Joe Touch <touch@ISI.EDU>
Date: December 1, 2005 11:38:42 AM PST
To: dcrocker@bbiw.net
Cc: end2end-interest@postel.org
Subject: Re: [e2e] End-to-end is a design guideline, not a rigid rule

The "ends" and "hops" in E2E are relative, at least they always have been to me. All the E2E argument says, in that context, is that you can't compose HBH services to end up with the equivalent E2E.

It never said not to do HBH (e.g., for performance). It never said where the ends definitively were for all layers, IMO.

Joe


From: "David P. Reed" <dpreed@reed.com>
Date: December 1, 2005 11:40:08 AM PST
To: end2end-interest@postel.org
Subject: Re: [e2e] End-to-end is a design guideline, not a rigid rule

[oops, Dave C. pointed out that I replied only to him, instead of only to e2ei, and encouraged me to send it to the whole list]

The end-to-end argument was indeed a design guideline not a rigid rule as proposed. On the other hand, as you point out, Dave, its value as a guideline is making a system scalable and evolvable. And there's a corollary: building function into the network has costs as well as benefits. Too often we ignore those costs, because they are less visible than the benefits.

However, I disagree with your example. The problem is that topology doesn't map to authority. Yes, there are organizational boundaries, and organizations have an interest in communications between peers. However, those organizational boundaries do NOT correlate closely with physical network boundaries. The premature binding of organizational boundaries to physical topological connect points is why NATs and so forth so often miss the mark on solving the true "end-to-end" problems we have.

So, I agree with you on your major point, but I disagree that email is a good example of how to either apply or ignore the end-to-end argument.

One merely has to examine the move to having hotel ISPs spoof SMTP connections based on their organizational "interest" in blocking spam (and their lawyers assert that the law *requires* them to do this). That man-in-the-middle solution actually prevents better solutions (such as crypto-authentication that prevents man-in-the-middle attacks) to the actual end-to-end requirements that users want.


