Re: CUD and FBD
marcelo bagnulo braun wrote:
> In CUD, each node waits a given time Tu for ULP feedback. If no
> feedback is received during Tu seconds, then it probes reachability by
> sending probes. This means that it will send a probe and wait To
> seconds for the answer to come back. It will probably send several
> probes, e.g. m probes. The node will wait w seconds between probes.
> So the time required in FBD would be: n*t seconds
> And the time required in CUD would be Tu + To + (m-1)w
Presumably To=w; but to avoid causing congestion, CUD would also need
to do binary exponential backoff. (NUD doesn't do this, since it assumes
that the local link isn't likely to suffer congestive collapse from the
NUD probes alone, but for the end-to-end path I don't think this
assumption holds.)
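
To make the arithmetic concrete, here's a small sketch of the two
latency formulas, with and without the backoff argued for above
(Python; the constants are mine and purely illustrative):

    # Sketch of the detection-latency arithmetic above; all constants
    # are illustrative, not values from any draft.

    def fbd_detection_time(n, t):
        # FBD declares failure after n keepalive periods of t seconds.
        return n * t

    def cud_detection_time(Tu, To, m, w, backoff=False):
        # CUD waits Tu for ULP feedback, then sends m probes.  With a
        # fixed inter-probe wait this is Tu + To + (m-1)*w; with binary
        # exponential backoff the wait doubles after each probe.
        if not backoff:
            return Tu + To + (m - 1) * w
        return Tu + To + sum(w * 2**i for i in range(m - 1))

    print(fbd_detection_time(n=3, t=5))                            # 15
    print(cud_detection_time(Tu=5, To=3, m=3, w=3))                # 14
    print(cud_detection_time(Tu=5, To=3, m=3, w=3, backoff=True))  # 17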
This raises the question of whether the packets generated by FBD need
to be subject to congestion control.
If we have a simple model of the network where the paths are symmetric
and the traffic volume is symmetric, then one can definitely argue that
FBD does not need this, because it sends at most one packet for every
received packet. (FBD will send a keepalive only if a packet has been
received in the last t seconds and the ULPs haven't sent anything back.)
But the limit on FBD (of sending at most one packet in response to one
received packet) means that it exhibits the same behavior as TCP SYNs
and DNS queries; in those cases the protocols do not apply any
congestion control to the "responder" but do assume that the
"requestor" handles congestion avoidance (by retransmitting using
binary exponential backoff).
So my take is that FBD doesn't need binary exponential backoff, which
would be a great benefit.
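
Here's a minimal sketch of that "responder" property (my framing, not
text from any draft): since FBD never sends more than one keepalive per
packet received, it is clocked by the peer's traffic and cannot amplify
congestion on its own:

    class FbdResponder:
        # Sketch of the "responder" property argued above: at most one
        # keepalive per received packet, and none if the ULP replied.
        def __init__(self):
            self.rx = 0   # packets received from the peer
            self.tx = 0   # keepalives sent back

        def on_receive(self):
            self.rx += 1

        def maybe_send_keepalive(self, ulp_sent_recently):
            # Invariant: tx <= rx, so FBD can never outpace the
            # "requestor", which is where backoff belongs.
            if not ulp_sent_recently and self.tx < self.rx:
                self.tx += 1
                return True
            return False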
> Now I think that for the comparison, and if the expected resiliency is
> to be similar, we could assume that n = m (the number of probes is the
> same).
> Also, Tu will be smaller than or similar to t, since Tu is the timeout
> of the application, and t is the default timeout of the shim.
In NUD, Tu isn't related to the application/ULP timeout; instead it is
a protocol constant (picked so that, with high probability, if there
will be positive ULP advice it will arrive in less than Tu seconds).
Basically you want it to be at least Rtt + ack delay (to handle
protocols like TCP that implement delayed acks).
Another thing to look at is whether the timers can be dynamic, or
should be kept static as in NUD. Since shim6 is end-to-end I think it
makes sense to make them dynamic.
In CUD, when the probes are not suppressed, the probe+response exchange
can be used by the host to determine the round-trip time, hence it can
be the basis for the "w" timeout.
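
For the CUD case, here's a sketch of how that could work, borrowing
TCP's smoothed-RTT estimator (RFC 6298 style); the alpha/beta constants
are TCP's, not anything shim6 specifies, and the class name is mine:

    class RttEstimator:
        # RFC 6298-style smoothed RTT, fed by CUD probe/response pairs.
        ALPHA, BETA = 1 / 8, 1 / 4

        def __init__(self):
            self.srtt = None
            self.rttvar = None

        def sample(self, rtt):
            if self.srtt is None:
                self.srtt, self.rttvar = rtt, rtt / 2
            else:
                self.rttvar = ((1 - self.BETA) * self.rttvar
                               + self.BETA * abs(self.srtt - rtt))
                self.srtt = ((1 - self.ALPHA) * self.srtt
                             + self.ALPHA * rtt)

        def probe_wait(self, min_w=1.0):
            # Candidate "w": srtt + 4*rttvar (TCP's RTO formula), floored.
            if self.srtt is None:
                return min_w
            return max(min_w, self.srtt + 4 * self.rttvar)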
In FBD it is not clear to me what techniques can be used to make "t" be
a function of the Rtt. Any ideas?
> - Overhead
> In CUD, each time there is an idle period, 2 packets are generated, one
> in each direction.
I don't think that is necessary (assuming ULP advice).
A sends a packet to B, and B sends a packet back shortly thereafter.
Then things go idle.
This means that the reachability state will move from REACHABLE to
UNKNOWN (RFC 2461 uses a slightly different description).
If A then goes to send a packet, we can use the DELAY state idea from
RFC 2461 to avoid sending an immediate probe. As long as B responds and
the ULP on A passes down positive advice, any probe from A to B can be
suppressed. Whether B ends up probing A is more difficult to tell; if B
receives packets from A (it would need multiple packets) that indicate
to the ULP on B that A has received its ack, then the ULP on B can
provide positive advice.
But if there is no positive advice from the ULP, the overhead would be
4 packets when the flow restarts (a probe from each direction, with
each of them being acknowledged). This could be minimized to 3, but one
can't do any better and still verify that both directions are working.
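
Here is a sketch of that state logic, borrowing the RFC 2461 state
names; this is my transcription of the reasoning above, not a state
machine from any draft:

    from enum import Enum, auto

    class Reach(Enum):
        REACHABLE = auto()
        UNKNOWN = auto()    # roughly RFC 2461's STALE
        DELAY = auto()
        PROBE = auto()

    class CudState:
        # Probe suppression via a DELAY state, per the discussion above.
        def __init__(self):
            self.state = Reach.UNKNOWN

        def on_idle_timeout(self):
            self.state = Reach.UNKNOWN    # REACHABLE -> UNKNOWN when idle

        def on_ulp_send(self):
            if self.state is Reach.UNKNOWN:
                self.state = Reach.DELAY  # hold off: advice may arrive

        def on_positive_advice(self):
            self.state = Reach.REACHABLE  # ULP advice suppresses the probe

        def on_delay_timeout(self):
            if self.state is Reach.DELAY:
                self.state = Reach.PROBE  # no advice came: probe after all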
> In FBD, each time there is an idle period, i packet is generated
"i" or "1"??
I think this can actually be zero extra packets, depending on details of
the time period FBD uses to "measure" received and transmitted packets.
A sends a packet to B, and B responds in time period 0. No need to send
extra packets.
The communication is silent in time period 1,2,3, and 4.
In time period 5, A sends a packet to B, and B responds. Both ends have
sent and received in time period 5, hence no need to send a keepalive.
Only if the packet arrives at B at the very end of time period 5 so that
the ULP response is sent by B in time period 6, would FBD cause an extra
packet. Presumably one can avoid that by having FBD, after an idle
period, start the new time period when the first packet is received.
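
A sketch of that per-period accounting, including the "start the new
time period at the first packet after an idle stretch" refinement
(again illustrative, not draft text):

    import time

    class FbdPeriods:
        # Per-period sent/received accounting for FBD.  A period only
        # starts at the first packet after an idle stretch, so a reply
        # arriving "late" doesn't straddle a period boundary.
        def __init__(self, period):
            self.period = period     # FBD's "t", in seconds
            self.start = None        # no period running while idle
            self.sent = False
            self.received = False

        def _maybe_start(self):
            if self.start is None:
                self.start = time.monotonic()
                self.sent = self.received = False

        def on_send(self):
            self._maybe_start()
            self.sent = True

        def on_receive(self):
            self._maybe_start()
            self.received = True

        def keepalive_owed(self):
            # At the end of a period a keepalive is owed only if we
            # received but never sent back; either way we go idle again.
            if self.start is None:
                return False
            if time.monotonic() - self.start < self.period:
                return False
            owed = self.received and not self.sent
            self.start = None
            return owed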
> FBD imposes half of the overhead of CUD.
> [Note: As currently defined in the draft, if there is no ULP feedback,
> CUD will periodically generate probes, which would greatly increase the
> overhead imposed by CUD when the ULP does not provide feedback (e.g.
> UDP), resulting in an overhead much greater than double that of FBD.
> However, I think that not only ULP feedback can be used as an
> indicator of communication progress, but also the reception of packets
> could be taken as an indication of progress. In this case, the
> overhead of CUD would be double that of FBD.]
FWIW I don't think reception of packets can be an indication in CUD that
the bidirectional path is working.
If A is transmitting packets to B, and B is acking, but the acks don't
make it to A, then B will continue to receive (ULP-retransmitted)
packets from A for a few minutes, even though the path from B to A is
broken.
Erik