
shim header in every shimmed packet



We had a very productive interim meeting in Amsterdam this weekend. But maybe it was even a bit _too_ productive... I have some other stuff to say but let me get the main thing out of the way in a message of its own.

If you read Erik's proto-00 draft you'll find a mechanism that uses
the flow label and the next header / protocol field to indicate that  
the shim is active, demux the context (we seem to have settled on  
this word for "shim session") and find where in the protocol chain  
the virtual shim header is located. Although I believe that this use  
of the flow label isn't inherently incompatible with the way it's  
currently defined, this mechanism does have the potential to get  
rather complex when all corner cases are considered. So it makes  
sense to look for something simpler.

And of course, it doesn't get any simpler than simply putting an
explicit shim6 header with the context tag in every shimmed packet.  
This is what's on the table now.

I think a shim header is the right choice in certain cases. For
instance, when there is a protocol chain, having the shim header
somewhere in there not only serves to make demultiplexing possible,
but it also explicitly indicates the order of the processing, so
there can be no confusion. Or when a host uses several ULIDs with the
same set of locators.

However, I think it's a bad idea to mandate a shim header in all
cases. When a receiver knows it can rewrite locators into the right
ULIDs without a shim header being present, it should be possible to
suppress the header. One way this can happen is when a host has
multiple ULIDs but the locator sets for these ULIDs don't overlap.
For example, if ULID A1 has B1 as a locator and ULID B2 has A2 as a
locator, there is never any ambiguity about whether an incoming
packet should be rewritten or not. If there are no issues with header
ordering (as in the very common case where the IPv6 header is
immediately followed by a TCP or UDP header) and the source locator +
destination ULID identify the context, the source address can also be
rewritten.
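
To make the condition concrete, here's a toy model of that receiver-side decision. All names are illustrative, not from any draft: when no locator belongs to more than one ULID, the destination address of an incoming packet identifies the context by itself, so the receiver can rewrite it without seeing a shim header.

```python
# Toy sketch (hypothetical names): decide whether incoming addresses can be
# rewritten to ULIDs without a shim header present.

def build_locator_map(contexts):
    """contexts: iterable of (ulid, locator_set) pairs.
    Returns a locator -> ulid map, or None when two ULIDs share a
    locator, in which case a shim header is needed to disambiguate."""
    mapping = {}
    for ulid, locators in contexts:
        for loc in locators:
            if loc in mapping and mapping[loc] != ulid:
                return None  # overlapping locator sets: ambiguous
            mapping[loc] = ulid
    return mapping

def rewrite_dst(dst_locator, mapping):
    """Rewrite an incoming destination locator to its ULID; addresses
    that aren't known locators pass through unchanged."""
    return mapping.get(dst_locator, dst_locator)

# The example from the text: ULID A1 uses B1 as a locator, ULID B2 uses A2.
mapping = build_locator_map([("A1", {"A1", "B1"}), ("B2", {"B2", "A2"})])
print(rewrite_dst("B1", mapping))  # -> A1: unambiguous, no shim header needed
```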

So in these cases it makes sense to suppress the shim header. Now
obviously it may be tricky to determine exactly when the receiver
will or won't need the shim header, but the good part is that we
don't have to figure that out now. What I'm arguing for is a
capability that allows a receiver to let the sender know it doesn't
need the shim header for a certain context (when there are no
extension headers). So the only thing we need to do is specify a
capability bit in the initial shim exchange that signals this, and
require that implementations suppress the shim header when the bit
indicates this is desired.
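
As a hedged sketch of what I'm proposing (the flag name and the Context type are made up for illustration): the receiver sets a bit in the initial shim exchange, and the sender then omits the shim header whenever addresses are being rewritten and the packet carries no extension headers.

```python
# Illustrative sender-side rule; names are invented, not from the spec.
from dataclasses import dataclass

@dataclass
class Context:
    tag: int
    peer_can_demux_without_header: bool  # the proposed capability bit

def needs_shim_header(ctx, rewriting_addresses, has_extension_headers):
    if not rewriting_addresses:
        return False  # ULIDs on the wire: nothing to demultiplex
    if has_extension_headers:
        return True   # header ordering must stay explicit
    return not ctx.peer_can_demux_without_header

ctx = Context(tag=0x2A, peer_can_demux_without_header=True)
print(needs_shim_header(ctx, rewriting_addresses=True,
                        has_extension_headers=False))  # False: suppress it
```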

In the interim meeting, it was argued that something like this should
be an experimental extension to the shim. I believe that's not good
enough: it needs to be a mandatory part of the base spec in order to
be useful. This is why: if it's an extension, it means additional
work to implement. With the complexity of an IP stack and the
upgrade cycles of the most widely used operating systems being the
way they are, it takes a VERY long time to implement new
capabilities. And since both ends need to implement this, the chances
of being able to use such an extension within a reasonable timeframe
are very low. As such, implementers will very likely put this at the
bottom of their list. Also, in the meantime firewall makers will get
used to the shim header being there, and things may break if it
suddenly goes away. So I'm 95% sure that if shim header suppression
isn't in the core spec, it will never fly.

The question is of course whether saving 8 bytes on shimmed packets
is worth the trouble relative to the added complexity of having
packets with or without the header. It seems that to a large degree,
this is a matter of personal taste. We're essentially increasing
overhead on IPv6 by at least 1% (7.7% to 8.7% for two 1500 byte TCP
packets + an ACK). But we're seeing more protocols that use small
packets these days, where the increase in overhead is much higher.
For instance, VoIP with 20 ms samples compressed to 32 kbps with 20
bytes of RTP + UDP overhead means 80 bytes of data with 60 bytes
(75%) of overhead. Adding 8 bytes increases the overhead by 10
percentage points to 85% and makes the packet 5.7% larger.
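
The VoIP arithmetic works out as follows, assuming a 40-byte IPv6 header, 8-byte UDP header, 12-byte RTP header and an 8-byte shim6 header:

```python
# Sanity check on the VoIP overhead numbers (header sizes as assumed above).
IPV6, UDP, RTP, SHIM = 40, 8, 12, 8

payload = 32000 // 8 * 20 // 1000           # 32 kbps for 20 ms -> 80 bytes
overhead = IPV6 + UDP + RTP                 # 60 bytes without the shim

print(overhead / payload * 100)             # 75.0: overhead without shim
print((overhead + SHIM) / payload * 100)    # 85.0: overhead with shim header
growth = SHIM / (payload + overhead) * 100  # packet grows from 140 to 148 B
print(round(growth, 1))                     # 5.7: percent larger on the wire
```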

This essentially means we're taking between 1% and 5.7% of people's
bandwidth away (when the shim is in effect). For anything other than
bandwidth, something like this would be inconceivable. I think it's
the wrong thing to do for bandwidth too, as long as we can avoid it.

Practically, it's already difficult to sell people on increasing the
overhead for every IP packet by 20 bytes when moving to IPv6. Another  
8 bytes isn't going to help here if we want people to use shim  
multihoming rather than existing BGP style multihoming. The argument  
is that the overhead would only be present when there is a failure,  
but I don't believe it will play out that way in practice: people  
will want to use the shim even when there is no outage for traffic  
engineering purposes.

Considering all of this, I think adding a bit and a single
insert-the-shim-header / don't-insert-the-shim-header decision into
the core spec and therefore (presumably) all implementations isn't
too much to ask.

Iljitsch