
Re: Design decisions made at the interim SHIM6 WG meeting



On 27-Oct-2005, at 15:29, marcelo bagnulo braun wrote:

I don't think "decisions" is the right word here... These are more like suggestions by the chairs.
Sorry, but I disagree here.
The points below were accepted by most of the people in the room (I certainly accepted most of them).
It's important that the WG participants who weren't at the interim meeting determine the merit of all of these points themselves, so the WG can come to rough consensus rather than just say "well, most people who were in Amsterdam agreed, so it must be the right decision".
The real point is what happens when the address pair selected by either the application or RFC 3484 doesn't work. How do we handle this case?
The idea (and I think this is what this decision point is about) is that we need an update to RFC 3484 to deal with this.
What RFC 3484 does is two things:

1. select source and destination addresses
2. tell application programmers to cycle through all destination addresses
Any modifications to these won't help if an application, with help  
from RFC 3484 or its successor, selects an address pair that isn't  
reachable and then DOESN'T cycle through all source/destination  
combinations.
Most IPv4 applications don't cycle through all destination addresses, and a significant number of IPv6 applications don't either. I don't see applications cycling through all source/destination pairs, because that's very hard to do right.
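To illustrate why: below is a rough sketch of what "cycle through all source/destination combinations" would look like for an application on a typical sockets API. The function name and structure are mine, purely for illustration.

    /* Sketch only: try every (source, destination) address combination
     * until one connects.  Real code would also have to filter address
     * scopes, order the pairs sensibly and bound the time per attempt. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <ifaddrs.h>
    #include <netdb.h>
    #include <unistd.h>

    int connect_any_pair(const char *host, const char *port)
    {
        struct addrinfo hints = { 0 }, *dsts, *dst;
        struct ifaddrs *srcs, *src;
        int fd = -1;

        hints.ai_family = AF_INET6;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, port, &hints, &dsts) != 0)
            return -1;
        if (getifaddrs(&srcs) != 0) {
            freeaddrinfo(dsts);
            return -1;
        }
        for (dst = dsts; dst != NULL && fd < 0; dst = dst->ai_next) {
            for (src = srcs; src != NULL && fd < 0; src = src->ifa_next) {
                if (src->ifa_addr == NULL ||
                    src->ifa_addr->sa_family != AF_INET6)
                    continue;
                fd = socket(AF_INET6, SOCK_STREAM, 0);
                if (fd < 0)
                    continue;
                /* Pin this source address, then try this destination. */
                if (bind(fd, src->ifa_addr, sizeof(struct sockaddr_in6)) != 0 ||
                    connect(fd, dst->ai_addr, dst->ai_addrlen) != 0) {
                    close(fd);
                    fd = -1;
                }
            }
        }
        freeifaddrs(srcs);
        freeaddrinfo(dsts);
        return fd;    /* -1 if no combination worked */
    }

Note that every failed connect() here can block for a full timeout, and the number of attempts grows with the product of the two address lists, which is exactly why almost nobody writes this.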
So we have three choices:

1. adopt the "second shim" to do this for the application
2. make it possible to repair the situation where there is no reachability between the ULIDs from the start.
3. update RFC 3484 and wait for all applications to be updated

I believe 3 won't work in practice, and even if it did, it would be a huge duplication of effort, as all applications would have to implement the same functionality. Remember that we chose to make the shim such that unmodified applications can benefit from it.
I have no problem with a shim header for demultiplexing in cases where demultiplexing would otherwise be very hard or impossible. For instance, in the case of several extension headers, an explicit shim header makes it possible to indicate which headers see modified addresses and which headers see unmodified addresses unambiguously.
However, I think it's a very bad idea to have a shim header in EVERY packet with rewritten addresses, because there are cases where the shim context can be determined unambiguously from information that's already in the packet, so an extra header is unnecessary.
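As a minimal sketch of what I have in mind (my own illustration, not text from any draft): if the locator pair in the IPv6 header maps to exactly one established context, the receiver can find that context, and therefore the ULIDs, without any shim header in the data packet.

    /* Illustration only: demultiplex a received packet by its locator
     * pair.  A real stack would use a hash table, not a linear scan. */
    #include <netinet/in.h>
    #include <string.h>
    #include <stddef.h>

    struct shim6_ctx {
        struct in6_addr ulid_local, ulid_peer;   /* what the ULPs see     */
        struct in6_addr loc_local, loc_peer;     /* currently on the wire */
    };

    static struct shim6_ctx *demux_by_locators(struct shim6_ctx *tab, size_t n,
                                               const struct in6_addr *dst,
                                               const struct in6_addr *src)
    {
        struct shim6_ctx *hit = NULL;

        for (size_t i = 0; i < n; i++) {
            if (memcmp(&tab[i].loc_local, dst, sizeof *dst) == 0 &&
                memcmp(&tab[i].loc_peer, src, sizeof *src) == 0) {
                if (hit != NULL)
                    return NULL;   /* ambiguous: only here is the explicit
                                      shim header really needed */
                hit = &tab[i];
            }
        }
        return hit;
    }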
[...]

This is the point of debate: whether this extension should be mandatory in the base spec or not.
My opinion is that this would indeed be quite complex.
As I've explained before: this can be exceedingly simple. We just need a signalling message, or a field in an existing signalling message, so that a host can tell its correspondent that it doesn't want to see the shim header for packets with rewritten addresses within this context. If the sender simply honors this option, then we're done in the base spec.
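Something like the following hypothetical option would do (layout, names and values are mine, just to show how little is needed); it would ride in an existing shim6 signalling message for the context, with the fields in network byte order on the wire:

    #include <stdint.h>

    #define SHIM6_F_NO_PAYLOAD_EXT_HDR 0x01   /* hypothetical flag value */

    struct shim6_ctx_prefs_opt {   /* hypothetical option, not in any draft */
        uint16_t type;             /* option type, to be assigned           */
        uint16_t length;           /* length of the contents in bytes       */
        uint8_t  flags;            /* e.g. SHIM6_F_NO_PAYLOAD_EXT_HDR       */
        uint8_t  reserved[3];
    };

The receiver sets the flag, the sender honours it for data packets in that context, and that's all the base spec has to say.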
Deciding when a host can safely set this option is more complex, of course, but THAT part can be an experimental extension. (I'll write a draft shortly.) However, it's essential that the capability to suppress the shim header is in there from the start; otherwise, implementing the logic to enable this capability will be so hard to get off the ground (both ends would then need to be updated rather than just the receiver) that probably nobody is going to bother.
I mean, I like the idea, but I am afraid that when we try to define the details, the result will be complex. If not, I would certainly support this.
How about this: I'll write the draft about how this would work in  
practice (i.e., when a host can demultiplex without the shim header)  
and after that we make the final decision?
That would be after Vancouver, though.

4. Use a 32-bit context field with no checksum, and 15 reserved bits and a 1-bit flag to indicate control / payload. Note potential DoS risks.
If we know there are risks today, maybe it makes more sense to make the tag variable size (to be negotiated between the peers) rather than fixed?
What about making it a fixed 47 bits from the start?
Then there is no room for extensions, and some implementations may be more comfortable with 32-bit math.
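To show what I mean by that last point, here is a rough sketch of the two layouts. The byte offsets are my own assumption (an 8-byte payload extension header with the flag and tag in its last six bytes); they are not taken from the draft.

    #include <stdint.h>

    /* Layout from the interim meeting: 1-bit control/payload flag,
     * 15 reserved bits, then a 32-bit context tag. */
    static uint32_t ctx_tag_32(const uint8_t hdr[8])
    {
        return (uint32_t)hdr[4] << 24 | (uint32_t)hdr[5] << 16 |
               (uint32_t)hdr[6] << 8  | (uint32_t)hdr[7];
    }

    /* "Fixed 47 bits from the start": the flag stays, but the reserved
     * bits are folded into the tag, so extracting and comparing it
     * needs 64-bit arithmetic (or a two-word compare). */
    static uint64_t ctx_tag_47(const uint8_t hdr[8])
    {
        uint64_t tag = 0;

        for (int i = 2; i < 8; i++)       /* the last six bytes */
            tag = tag << 8 | hdr[i];
        return tag & 0x7fffffffffffULL;   /* keep the low 47 bits */
    }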
     * Adopt HIP parameter format for options; HIP parameter format
       defines length in bytes but guarantees 64-bit alignment.
I don't want this alignment. It wastes space, it's an implementation headache and it buys us nothing.
I think that this is a small price to pay to allow future convergence with HIP, which is another protocol somehow related to shim6.
No, I don't see how this makes sense. It just makes the decision  
making much more complex as the needs of two different mechanisms  
must be aligned first.
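As I understand the HIP parameter format, the length field counts only the contents, but each parameter is then padded so that the next one starts on an 8-byte boundary; roughly:

    #include <stddef.h>
    #include <stdint.h>

    /* Total on-the-wire size of a HIP-style parameter: 4 bytes of
     * type + length, the contents, then padding up to a multiple of 8. */
    static size_t hip_param_total_len(uint16_t contents_len)
    {
        return ((size_t)4 + contents_len + 7) & ~(size_t)7;
    }

So a 5-byte option, for example, occupies 16 bytes on the wire; that is the space I'd rather not waste, and the rounding is one more thing for every implementation to get right.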
Forking is a bad idea: it increases the complexity of the shim manifold, while pretty much the same functionality can also be provided in a different way.
I guess it depends on how you handle it... I mean, for me, I see it just as multiple contexts with the same ULID pair, but with different context tags. In this view, it doesn't seem very complex, it is just multiple contexts... Am I missing something?
With forking, a context is no longer uniquely identified by a ULID pair. This means the context id must be carried along everywhere. One place where this is inconvenient is between the application, which specifies certain ToS requirements, and the shim. The transport protocol that sits in the middle must now somehow "color" all packets with the right context information so the shim knows what to do with the packet. IIRC there are also signalling complexities.
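To make the difference concrete, here is an illustrative look at the lookup key for an outgoing packet (field names are mine, not from any draft):

    #include <netinet/in.h>
    #include <stdint.h>

    /* Without forking, the shim can find the context from the ULID pair
     * that the upper layers already use as source/destination addresses. */
    struct shim6_key {
        struct in6_addr ulid_local;
        struct in6_addr ulid_peer;
    };

    /* With forking there can be several contexts per ULID pair, so every
     * packet handed down by the transport layer must also carry something
     * like a fork instance identifier (the "colouring") for the shim to
     * pick the right one. */
    struct shim6_key_forked {
        struct in6_addr ulid_local;
        struct in6_addr ulid_peer;
        uint32_t        fork_instance;
    };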
18. Use a value statically specified in the initial protocol specification of
    (10) seconds.
This means that senders must transmit something (data, keepalive) every 10 seconds, right? So the receiver needs to wait a bit LONGER than 10 seconds to time out.
As I recall it, 10 seconds was the time for 3 packets, i.e., a node is expected to send packets every 3 seconds, so if 3 packets are missing, then we have detected a problem.
I think every 3 seconds is excessive. So in my draft I just wait 12 - 15 seconds at the receiver (the 10 seconds that the sender uses plus a margin for RTT and jitter), and then initiate the full reachability exploration. This will first try a packet with the currently used address pair, but after a very short timeout it will start to explore alternate address pairs.
This means the full exploration may be triggered when only two packets are dropped, which is a bit aggressive, but on the other hand it means only an FBD keepalive every 10 seconds rather than every 3 seconds, so I think this makes sense.
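In code, the receive side of that looks roughly like this (the constants are the numbers from this mail, not from any spec; the 4-second margin is just one value inside the 12 - 15 second range):

    #include <stdbool.h>
    #include <time.h>

    #define FBD_SEND_TIMEOUT 10   /* sender: max quiet time, seconds     */
    #define FBD_RECV_MARGIN   4   /* receiver: allowance for RTT, jitter */

    /* The peer sends something (data or an FBD keepalive) at least every
     * FBD_SEND_TIMEOUT seconds; only after that plus the margin does the
     * receiver start the full reachability exploration. */
    static bool start_exploration(time_t now, time_t last_heard_from_peer)
    {
        return (now - last_heard_from_peer) >
               (time_t)(FBD_SEND_TIMEOUT + FBD_RECV_MARGIN);
    }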
Note that the tradeoff is:
< 240 seconds: we MUST repair the problem before TCP times out
> 180 seconds: wait for BGP (90 - 180 second timeout) to fix the problem
< 90 & > 40 seconds: don't wait for BGP, but do wait for OSPF (40 second timeout) to fix the problem
< 40 seconds: don't wait for OSPF
Are you suggesting that we change this to a higher value? I think that this was one proposal, and I think that it can be easily changed...
With one FBD packet per 10 seconds I don't have a problem. A quick scan of RFC 1889 seems to indicate that RTP stays below 10 seconds for its return traffic in typical cases, so that shouldn't be an issue.