
Re: DISCUSS: draft-ietf-radext-fixes



Lars Eggert wrote:
...
>>   RADIUS clients do not have to perform duplicate detection.  When a
>> client sends a request, it processes the first response that has a valid
>> Response Authenticator as defined in Section 3 of [RFC2865].  Any
>> duplicate responses MUST be silently discarded.
> 
> If clients need not perform duplicate detection, how can they discard
> duplicates? We seem to have a misunderstanding here.

  As network designers, we know that the packets are duplicates.  The
client implementations, however, simply track requests & responses.  If
they have a response to a request, they stop tracking the request.  Any
packet that they receive after that is either a duplicate and can be
dropped, or is an attacker trying to spoof the server, and can be dropped.

  Perhaps the text should be "Any later responses MUST be silently
discarded", which avoids the "duplicate" issue.
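
  To illustrate the tracking approach described above (a minimal sketch,
not text from the draft; the secret value, packet fields, and function
names here are my own assumptions), a client can key outstanding
requests by (source port, Identifier), validate the Response
Authenticator as in Section 3 of RFC 2865, and silently drop anything
that arrives after the entry has been removed:

```python
import hashlib

SECRET = b"shared-secret"  # hypothetical shared secret

# Outstanding requests, keyed by (source_port, identifier).  The stored
# value is the Request Authenticator, which is needed to validate the
# Response Authenticator per RFC 2865 Section 3.
outstanding = {}

def send_request(src_port, identifier, request_auth):
    """Record a request so its response can be matched and validated."""
    outstanding[(src_port, identifier)] = request_auth

def valid_response_auth(code, identifier, length, attrs, resp_auth, request_auth):
    # ResponseAuth = MD5(Code + ID + Length + RequestAuth + Attributes + Secret)
    expected = hashlib.md5(
        bytes([code, identifier]) + length.to_bytes(2, "big")
        + request_auth + attrs + SECRET
    ).digest()
    return expected == resp_auth

def on_response(src_port, code, identifier, length, attrs, resp_auth):
    key = (src_port, identifier)
    request_auth = outstanding.get(key)
    if request_auth is None:
        return None   # no longer tracked: duplicate or spoof, discard
    if not valid_response_auth(code, identifier, length, attrs,
                               resp_auth, request_auth):
        return None   # bad Response Authenticator: discard, keep waiting
    del outstanding[key]   # stop tracking; any later responses are dropped
    return (code, attrs)
```

Note that the client never does duplicate detection as such: once the
entry is deleted, a duplicate and a spoofed response look identical and
both fall into the first discard case.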
...
> Works for me, but let me point out that because the Identifier field is
> only 8 bits long, this may prevent the client from issuing new requests
> if 256 requests are waiting to be retransmitted. I'm not sure how likely
> this is, but it might occur for example after a longer network disruption.

  Yes.  This issue is covered in Section 2.5 of RFC 2865:

   A NAS MAY use the same ID across all servers, or MAY keep track of
   IDs separately for each server, it is up to the implementer.  If a
   NAS needs more than 256 IDs for outstanding requests, it MAY use
   additional source ports to send requests from, and keep track of IDs
   for each source port.  This allows up to 16 million or so outstanding
   requests at one time to a single server.

  All major implementations will open multiple source ports if the 8-bit
Identifier space has been exhausted.
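
  A sketch of that strategy (my own illustration, not any particular
implementation; the class and method names are assumptions) allocates
Identifiers from a per-port pool and opens a fresh ephemeral source
port once the current ports are exhausted:

```python
import socket

MAX_IDS = 256  # 8-bit Identifier field (RFC 2865)

class IdAllocator:
    """Per-server Identifier pool; opens an additional source port
    when all 256 IDs on the existing ports are outstanding, as
    permitted by RFC 2865 Section 2.5."""

    def __init__(self):
        self.pools = []  # list of (socket, set of free Identifiers)

    def _new_port(self):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("0.0.0.0", 0))  # ephemeral source port
        pool = (s, set(range(MAX_IDS)))
        self.pools.append(pool)
        return pool

    def allocate(self):
        """Return (socket, identifier) for a new request."""
        for sock, free in self.pools:
            if free:
                return sock, free.pop()
        sock, free = self._new_port()
        return sock, free.pop()

    def release(self, sock, identifier):
        """Return an Identifier to its port's pool once the
        request is answered or abandoned."""
        for s, free in self.pools:
            if s is sock:
                free.add(identifier)
                return
```

With 64k usable ports times 256 IDs each, this is where the "16 million
or so" figure in RFC 2865 comes from.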


> Choosing 2 minutes as the cache lifetime means that you are guaranteed
> that all duplicated requests will be answered with a cached copy of the
> original reply, whereas any shorter cache lifetime means that late
> duplicate requests will be re-processed as if they were new. That seems
> to go against the "(the server) MUST resend its original Response
> without reprocessing the request" statement in the document.

  That's all true, except that after 30 seconds, the client will give
up.  The issue for the server is more about inter-packet spacing than
the MSL.  The MSL really matters only to the NAS, which sees the entire
network within the packet round-trip time.  The NAS either accepts the
first valid response within that 30-second window, OR gives up on the
request.  Any later transmissions of cached responses by the server are
ignored by the NAS.

  Unless the network drastically changes inter-packet spacing, the
server will see packets spread over approximately 30 seconds.  So if the
server responds to the first request, it must cache the reply for long
enough to ensure that it can send a cached response to all re-transmits
from the NAS.  So 30 seconds plus a little bit is sufficient for server
cache timeout.
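
  A sketch of that server-side cache (my own illustration; the TTL
value, key layout, and function names are assumptions, and the real
cache key would include the Request Authenticator so that a reused
Identifier with a new request is not mistaken for a retransmit):

```python
import time

CACHE_TTL = 35.0  # seconds: 30-second retransmit window plus slack (assumed)

# Duplicate-detection cache keyed by (client address, source port,
# Identifier, Request Authenticator); value is (reply, insertion time).
cache = {}

def process(request):
    """Hypothetical stand-in for real request processing."""
    return b"Access-Accept:" + request

def handle_request(key, request, now=None):
    now = time.monotonic() if now is None else now
    entry = cache.get(key)
    if entry is not None and now - entry[1] < CACHE_TTL:
        return entry[0]          # retransmit: resend the original reply
    reply = process(request)     # new (or expired) request: reprocess
    cache[key] = (reply, now)
    return reply
```

Expired entries would also need periodic eviction in practice; the
point here is only that any retransmit inside the NAS's 30-second
window hits the cache, and nothing the NAS still cares about falls
outside it.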

  If retransmits from the NAS have an inter-packet spread of more than
30 seconds, then either the network is sitting on the packets for 30+
seconds, OR the packets are being routed via drastically different
routes.  I'm not sure those cases are anything we can solve by extending
the server cache time.

  Note I'm not completely opposed to suggesting a maximum cache time of
2 minutes.  Most NASes will re-use Identifiers, which means that the
real-world cache times are usually on the order of seconds.  So server
implementations aren't drastically affected by the 2 minutes versus 30
seconds.

  Alan DeKok.

--
to unsubscribe send a message to radiusext-request@ops.ietf.org with
the word 'unsubscribe' in a single line as the message text body.
archive: <http://psg.com/lists/radiusext/>