
Simple Internet and the End-to-End Principle



Iljitsch;

Thank you for demonstrating how a simple "optimization" harms
the Internet without really improving anything.

> >> The source address is something I'd really like to check.
> 
> > You don't.
> 
> > If payload passes authentication check, the payload is considered
> > to be reliable regardless of the source address.
> 
> Re the spam discussion on the IETF list: I'd rather know I'm dealing 
> with a "good" source rather than check the content of everything I 
> receive.

How can you check the source without checking the content?

Note that the ICV of AH is computed over both the IP header and the
content.
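
To illustrate, here is a minimal sketch of what the AH ICV covers,
assuming HMAC-SHA1-96 (RFC 2404) as the integrity algorithm; the
caller is assumed to have already zeroed the mutable IP header fields
as RFC 2402 requires:

    import hmac, hashlib

    def ah_icv(zeroed_ip_header, ah_header, payload, key):
        # Per RFC 2402, mutable IP header fields (TTL, checksum, ...)
        # must already be zeroed in zeroed_ip_header. The source
        # address is immutable, so it is covered -- but only together
        # with the content, never by itself.
        covered = zeroed_ip_header + ah_header + payload
        # HMAC-SHA1-96 (RFC 2404): HMAC output truncated to 96 bits.
        return hmac.new(key, covered, hashlib.sha1).digest()[:12]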

> >> When I get around to it, I plan to write a draft about how ISPs can do
> >> proxy IPsec AH processing for their customers to eliminate denial of
> >> service traffic.
> 
> > It is an utter violation of the end to end principle and assured
> > not to scale, which, for example, means poor performance.
> 
> Writeups of e2e that I've seen explicitly state that something that 
> should be done end-to-end MAY also be done in the network as an 
> optimization. As long as it's an optimization and not shifting of 
> responsibility it's ok.

A good source on the end-to-end principle is RFC 1958, which is,
thanks to Brian, very carefully worded. It does not state "as long
as it's an optimization and not shifting of responsibility it's ok".

The RFC states:

   To quote from [Saltzer], "The function in question can completely and
   correctly be implemented only with the knowledge and help of the
   application standing at the endpoints of the communication system.
   Therefore, providing that questioned function as a feature of the
   communication system itself is not possible. (Sometimes an incomplete
   version of the function provided by the communication system may be
   useful as a performance enhancement.)"

So, the in-network version is assured to be "incomplete", and, of
course, hosts are still expected to have the full functionality.

For example, TCP takes care of lost packets, though Ethernet may
perform local retransmission of packets lost in collisions.
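
A minimal sketch of the point, assuming a stop-and-wait sender over
UDP (the function and its parameters are made up for illustration):
the endpoint keeps the retransmission state, and whatever the link
layer retransmits underneath is only an optimization.

    import socket

    def reliable_send(sock, addr, data, timeout=0.5, retries=5):
        # End-to-end reliability: the sender holds the unacked
        # datagram and retransmits on timeout. Link-layer
        # retransmission below may reduce how often this fires,
        # but correctness lives here, at the endpoint.
        sock.settimeout(timeout)
        for _ in range(retries):
            sock.sendto(data, addr)
            try:
                ack, _ = sock.recvfrom(16)
                if ack == b"ACK":
                    return True
            except socket.timeout:
                continue  # lost despite any local retransmission
        return False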

> I don't see why this shouldn't scale: 10 times more traffic means 10 
> times more crypto chips. O(n) scaling isn't that bad.

The RFC continues:

   This principle has important consequences if we require applications
   to survive partial network failures. An end-to-end protocol design
   should not rely on the maintenance of state (i.e. information about
   the state of the end-to-end communication) inside the network. Such
   state should be maintained only in the endpoints, in such a way that
   the state can only be destroyed when the endpoint itself breaks
   (known as fate-sharing). An immediate consequence of this is that
   datagrams are better than classical virtual circuits.

and

   The network's
   job is to transmit datagrams as efficiently and flexibly as possible.
   Everything else should be done at the fringes.

As you should know, the common line rate today is 10G, because
people want routers to operate at the rate at which hardware can
barely perform the simplest job of forwarding packets.

> Good luck DoSing a box that can do crypto at line rate.  :-)

See above. :-)

Note that a performance bottleneck is the retrieval of the SA
(security association), not the crypto itself.
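
A rough sketch of why (the table layout here is a made-up
illustration; per RFC 2402 an inbound SA is identified by the SPI
together with the destination address and protocol): at 10G,
minimum-size packets arrive roughly every 67ns, so the proxy must
complete one memory lookup per packet in that budget before any
crypto even starts.

    # (spi, dst) -> key; a hypothetical in-memory SA table
    sa_table = {}

    def lookup_sa(spi, dst):
        # One lookup per received packet. At 10G with ~15M
        # minimum-size packets per second, this memory access, not
        # the HMAC, is the part that is hard to do at line rate.
        return sa_table.get((spi, dst))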

Now, the RFC also states:

   To perform its services, the network maintains some state
   information: routes, QoS guarantees that it makes, session
   information where that is used in header compression, compression
   histories for data compression, and the like. This state must be
   self-healing; adaptive procedures or protocols must exist to derive
   and maintain that state, and change it when the topology or activity
   of the network changes.

and what, do you think, will happen if your AH uses sequence
numbers and your site is multihomed?
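
To see the problem, here is a minimal sketch of the AH anti-replay
check (RFC 2402, default 64-packet window; the class is a made-up
illustration). If inbound traffic is split across two ISPs, each
proxy sees only part of the sequence-number space, so the window
state diverges between the proxies and legitimate packets get
dropped as replays.

    WINDOW = 64

    class ReplayState:
        def __init__(self):
            self.highest = 0     # highest sequence number seen
            self.seen = set()    # numbers accepted within the window

        def check(self, seq):
            if seq > self.highest:
                self.highest = seq
                self.seen.add(seq)
                return True
            if seq <= self.highest - WINDOW or seq in self.seen:
                return False     # outside the window or a replay
            self.seen.add(seq)
            return True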

> > In this case, a proxy box of an ISP is an easy victim of the DoS
> > attack.
> 
> Obviously DoSers can raise the stakes by trying to DoS (some part of) 
> the ISP rather than the customer, but I'm not worried about that. ISPs 
> typically have bandwidth in the order of gigabits, while customers may 
> only have a few megabits. Flooding a 10 Mbps pipe is trivial, and done 
> so often stopping it is a problem. Flooding a few 1 Gbps pipes is hard 
> and should show up on the radar screens of even the most uncooperative 
> source networks so stopping this is much, much easier.

These days, it costs about $50 a month to get 100Mbps access, while
an ISP backbone is only as fast as 10G.

> >> The master keys are communicated to the
> >> ISP (for instance by inserting them in a BGP attribute)
> 
> > in plain text?
> 
> Why not? Obviously the BGP attribute wouldn't be transitive, so the 
> master keys would stay within the ISP network.

Are you seriously saying that you can distribute plaintext keys to
the multiple ISPs you are connected to and still feel safe?

> >> This requires some serious hardware at the ISP side
> 
> > You are saying you must provide high performance hardware to get
> > severely limited rate, even though there is no attackers, which
> > is a lot worse than DoS.
> 
> ISPs would need enough of these boxes to easily handle the maximum 
> expected DoS traffic. The system would only have to be enabled for a 
> certain customer when the customer is under attack.

Wrong.

Another problem with your approach is that it is not a local
optimization.

The ISP or hosts at the other end of the communication are involved,
and they cannot be controlled by your ISP.

On the other hand, a host naturally controls the behaviour of its
peer, which is why the end-to-end principle is great.

> But even if the 
> system is enabled all the time there might be situations where the 
> protection is worth the extra cost. Obviously this would be an extra 
> value added service on top of regular IP transit, rather than a 
> standard part of IP.

Again, it is not a local service.

Another problem of non-local optimization is the reduced MTU, which
causes 1280B packets from your host to be dropped somewhere in the
Internet.
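
A rough worked example (header sizes per RFC 2460 and RFC 2402,
assuming HMAC-SHA1-96 and a tunnel-mode proxy):

    IPV6_MIN_MTU = 1280  # every IPv6 link must carry this (RFC 2460)
    AH_OVERHEAD  = 24    # AH header with a 96-bit ICV
    OUTER_IPV6   = 40    # outer IPv6 header added by a tunnel

    # A full-size 1280-byte packet from the host grows to 1344 bytes
    # at the proxy, so it can be dropped on any minimum-MTU link:
    print(IPV6_MIN_MTU + OUTER_IPV6 + AH_OVERHEAD)  # 1344 > 1280

    # Equivalently, the MTU usable by the host shrinks below 1280:
    print(IPV6_MIN_MTU - OUTER_IPV6 - AH_OVERHEAD)  # 1216 < 1280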

And there should be a lot more.

							Masataka Ohta