Re: TSV-DIR Review of draft-ietf-shim6-protocol-09.txt
On 23 Nov 2007, at 15:05, Bernard Aboba wrote:
> My concern is about wireless media which can experience large
> variations in signal/noise ratio, in the process generating transient
> "link down" indications. This could cause those connections to
> migrate to other media/interfaces.
Wouldn't that be something that should be fixed in the driver for that
interface? Declaring a link to be down has significant implications on
many systems, so this shouldn't be done at the drop of a hat for links
where that determination isn't easily made. Having drivers declare
links down too soon and then having the next layer ignore the
indication is not a good solution, especially because there are also
link layers that can determine their up/down status much more
accurately.
> If the host has implemented the strong host model, then when the
> transient "link down" is resolved, the connection won't resume using
> the prior outbound interface. This could lead to applications
> experiencing sub-optimal conditions long-term based on a transient
> event.
Hm, I must say that I don't know off the top of my head if shim6 will
automatically rehome to the primary address pair after some time. I'll
have to reread the specifications. Or does anyone else remember this?
> There are a few approaches that come to mind:
> a. Continue to make decisions based on timers, perhaps using the
> "link down" indication as a hint to lower the timer values (e.g.
> requiring only two retransmissions instead of three)
I'm not a fan of timer-based decision making if it can be avoided
because it's extra work and you pretty much always wait too long or
not long enough.
> b. Suggest the weak host model to be used along with SHIM6, so that
> if the "link down" proves to be transient, the connection will
> migrate back to its former outgoing interface.
That would be good, yes.
> It would be possible to adjust the keepalive interval based on RTT
> estimates, though.
If the information is available, this might be the best approach. But
is it worth the trouble? The timeout will remain the same (10 seconds,
unless something else is established during the shim context setup),
so the only difference is that with a 10 ms RTT you could choose to
send a keepalive after 9980 ms, while with a 1500 ms RTT you would
send it after 6000 ms. Implementers will probably just use 3 seconds,
so that 3 keepalives are seen before a timeout, or 4 seconds, so that
2 are seen before a timeout.
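
To make the arithmetic concrete, here is a minimal sketch of how an
implementation might derive a keepalive interval from an RTT estimate
and the 10-second timeout discussed above. None of this comes from the
drafts; the function name, the two-RTT margin and the one-second floor
are assumptions for illustration only.

    SEND_TIMEOUT_MS = 10000  # default unless negotiated at context setup

    def keepalive_interval_ms(rtt_ms, margin_rtts=2):
        # Send the keepalive early enough that it can still cross the
        # path before the peer's timeout fires; the two-RTT margin is
        # an arbitrary illustrative choice, not a value from the spec.
        interval = SEND_TIMEOUT_MS - margin_rtts * rtt_ms
        # Clamp: never wait longer than the timeout itself, and keep a
        # floor so a huge RTT estimate doesn't cause a keepalive flood.
        return max(1000, min(interval, SEND_TIMEOUT_MS))

    # keepalive_interval_ms(10) -> 9980 ms, as in the example above; a
    # fixed 3000 or 4000 ms interval gives 3 or 2 keepalives per
    # timeout period.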