Re: The cost of crypto in end-host multi-homing (was Re: The state of IPv6 multihoming development)
On Mon, 28 Oct 2002, Pekka Nikander wrote:
[...]
> 4. As a result, the 10.1.1.0/24 network is flooded with unwanted traffic.
> The fix is simple, but needed: Either
> a) during the initial negotiation the hosts check the reachability
> of the secondary addresses, and make sure, through some simple
> and cheap crypto, that it is the same host answering at all of
> the given addresses, or
This isn't the preferred option, as the secondary address may be down
at the time of connection/association establishment. If there is no way
around it, we could adopt a solution where only the addresses that were
up at the start of the negotiation may be used later, but that isn't
the best way to handle it.
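If we did end up with that restriction, the bookkeeping on the sender
side is trivial: freeze the peer's usable locator set to whatever was
verified during the negotiation and refuse to rehome outside it.
Something along these lines (a rough Python sketch, all names invented
for illustration, not any particular protocol's API):

class PeerLocators:
    """Track which of the peer's addresses were verified at setup."""

    def __init__(self, verified_at_setup):
        # Only addresses that answered the reachability/crypto check
        # during the initial negotiation are eligible for rehoming.
        self.usable = set(verified_at_setup)
        self.current = next(iter(self.usable))

    def rehome(self, new_address):
        """Switch to a secondary address, but only if it was verified."""
        if new_address not in self.usable:
            raise ValueError("address was not verified at setup")
        self.current = new_address

# Example: only these two addresses answered at setup, so a later
# attempt to rehome to anything else is refused.
peer = PeerLocators(["192.0.2.1", "2001:db8::1"])
peer.rehome("2001:db8::1")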
> b) once the primary address becomes unreachable, the hosts check,
> using some simple and cheap crypto, that it is the same host
> answering at the secondary address, *before* sending any larger
> amounts of data to that secondary address.
Why do all this checking? Just assume the host is interested in the
traffic until it tells you otherwise. For TCP apps, you're pretty much
guaranteed to be in slow start anyway, so there wouldn't be much
flooding. However, some way to throttle back streaming applications
until we know the other end is happy would be good. It would be
appropriate to redo the path characteristics discovery at this point
for any streaming media applications, as these characteristics are
likely different now. (Note that with current multihoming, a minute or
more of unreachability when rehoming isn't out of the ordinary.)
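The throttling itself doesn't have to be fancy: on rehoming, drop back
to the initial window, restart the path probing, and keep streaming
output capped until the other end has confirmed it wants the traffic at
the new address. A rough sketch of that in Python, with an invented
per-association object standing in for real transport state:

from types import SimpleNamespace

INITIAL_CWND_SEGMENTS = 2   # conservative initial window
IPV6_MIN_MTU = 1280         # safe value to restart path MTU discovery from

def on_rehome(conn):
    """Treat the new path as unknown when traffic moves to a
    secondary address: fall back to slow start and redo path
    discovery before letting streaming applications ramp up again."""
    conn.cwnd = INITIAL_CWND_SEGMENTS * conn.mss   # back to slow start
    conn.path_mtu = IPV6_MIN_MTU                   # re-probe upwards from here
    conn.srtt = None                               # forget the old RTT estimate
    # Keep streaming output throttled until the peer at the new
    # address has confirmed it actually wants the traffic.
    conn.streaming_rate_cap = conn.mss

# Minimal stand-in for whatever per-association state a real stack keeps.
conn = SimpleNamespace(mss=1220, cwnd=64 * 1220, path_mtu=1500,
                       srtt=0.08, streaming_rate_cap=None)
on_rehome(conn)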
If the host at the new address doesn't want the traffic, there should be
a way for it to make the sender stop without relying on transport
mechanisms such as TCP RSTs. It would be good if the host at the new
address received the IP address from which all of this was initiated,
so an attacker can be traced easily.
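Concretely, this could be a small "stop sending" notification that
carries the address the association was originally set up from, so the
receiver of the unwanted traffic has something to take to its ISP.
Purely as an illustration (invented message format, not any existing
protocol):

import socket
import struct

STOP_SENDING = 0x01  # invented message type

def build_stop_message(initiator_ip, context_id):
    """Build an 'I don't want this traffic' notification.

    Carries the IPv6 address the association was originally set up
    from, so the receiver of the unwanted traffic can trace the party
    that listed it as a secondary address.
    """
    addr = socket.inet_pton(socket.AF_INET6, initiator_ip)
    # type (1 byte) | reserved (1 byte) | context id (2 bytes) | address (16 bytes)
    return struct.pack("!BBH16s", STOP_SENDING, 0, context_id, addr)

msg = build_stop_message("2001:db8::1", context_id=42)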
> The essence: You *have* to check that it is the *same* host that
> is answering in the secondary address *before* you send any larger
> amounts of data to that address.
Why? If this address was presented as a valid secondary address
(assuming this is done in a way that is reasonably secure) AND the host
at the new address accepts the traffic, what's the problem?