
Re: API, was: Question re HIP dependency & some architectural considerations



Stepping out of the "things to think about" mode...

The only reasonable approach at the API level is to provide a new API for any multihomed functionality that would be exposed to the host. You can't simply provide a "compatibility API" if you change the identifier, because the identifier has a semantic meaning to applications today that would be misinterpreted if we overloaded it. As a case in point, think about a web log parser that attempts to do a reverse lookup on a HIP identifier. Not so hot, right?
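
To make that concrete, here is a minimal sketch of what such a parser typically does (the function is invented for illustration; the calls are the standard sockets API):

    /* Sketch: what a log parser does with a value it believes is an
     * IPv6 address.  If the "address" is actually a HIP identifier,
     * the PTR lookup is meaningless: it fails, or worse, reports
     * whatever stale data happens to resolve. */
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <netdb.h>
    #include <stdio.h>
    #include <string.h>

    static void log_hostname(const char *addr_text)
    {
        struct sockaddr_in6 sa;
        char host[NI_MAXHOST];

        memset(&sa, 0, sizeof sa);
        sa.sin6_family = AF_INET6;
        inet_pton(AF_INET6, addr_text, &sa.sin6_addr);

        /* Reverse lookup: only meaningful for routable addresses
         * that have PTR records; a HIP identifier has none. */
        if (getnameinfo((struct sockaddr *)&sa, sizeof sa,
                        host, sizeof host, NULL, 0, NI_NAMEREQD) == 0)
            printf("%s -> %s\n", addr_text, host);
        else
            printf("%s -> (no PTR record)\n", addr_text);
    }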

Regards,

Eliot


Iljitsch van Beijnum wrote:

On 26-jul-04, at 12:38, Pekka Nikander wrote:

b) Secondly, it is starting to look like we really should start to
think about the API issues more seriously.


Yes. Hence the call for apps feedback earlier.

Looking at the multi6 goals and recent discussion, it looks like
any acceptable solution MUST support the unmodified IPv6 API, using
routable IPv6 addresses as API-level identifiers.  Only that way
can we fully support all existing applications.


Unfortunately, this way we also perpetuate the brokenness of the current API philosophy.

What I'd like to see is a dual track approach: we do what we can to support multihoming for existing IPv6 applications, and we also create new, less limited ways to do the same and more.

By also allowing identifiers that aren't routable IPv6 addresses, our stuff is much more powerful and future-proof. This means we need some more type fields and have to make some more things variable length, but that's not very hard. However, security is harder for non-routable identifiers. I don't think we necessarily want to solve that now, but if we can create hooks for solving it in user space then this is quite easy to add later on. (The same way that IPsec needs kernel support for some of its stuff, but the Racoon IKE daemon is basically a user-level process.)
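
To make the "type fields and variable length" point concrete, here is a sketch of one possible shape for such an identifier at the API boundary. All names are invented for illustration; nothing here comes from any draft:

    /* Illustrative only: a typed, variable-length endpoint
     * identifier.  All names invented. */
    #include <stdint.h>

    enum eid_type {
        EID_IPV6_ROUTABLE = 1,  /* plain routable IPv6 address */
        EID_HIP_HIT       = 2,  /* HIP host identity tag       */
        EID_OPAQUE        = 3,  /* some future, non-routable   */
                                /* identifier form             */
    };

    struct endpoint_id {
        uint16_t type;          /* enum eid_type               */
        uint16_t len;           /* length of id[] in bytes     */
        uint8_t  id[];          /* variable-length identifier  */
    };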

To me, it remains an interesting question how to support API evolution.
One architectural possibility would be to use HIP as an example:
 - convert AIDs into EIDs at the API level


I forget what the A and E stand for...

This use of internal
EIDs that are not IP addresses in the protocol layer seems to
offer an evolution path for the API.


Another level of indirection?

I don't think that's the solution. The problem is that applications see values that they think are IP addresses. The first order of business is removing the assumption that these are actual IP addresses. This can't be done by adding stuff lower in the stack.
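
Here's a sketch of the assumption in question, as it appears in application code everywhere (the function is invented, the pattern is not):

    #include <netinet/in.h>
    #include <string.h>

    /* The application treats the 128-bit value as a stable,
     * routable identifier.  Nothing in the type system says
     * whether it is a routable address or something like a HIP
     * identifier, so no layer below the API can correct the
     * assumption; the API contract itself has to change. */
    int same_peer(const struct in6_addr *seen_before,
                  const struct in6_addr *seen_now)
    {
        /* Bitwise equality: fine for identifiers, wrong for
         * locators that a multihoming layer may rewrite. */
        return memcmp(seen_before, seen_now,
                      sizeof *seen_before) == 0;
    }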

c) Thirdly, but perhaps less importantly, it looks like we should
also take an architectural stance on the performance implications.
Some people seem to be of the opinion that requiring an
authenticated D-H exchange each and every time one wants to use
multi-homing is unacceptable.


Please define "use multihoming". If this means once when ISPs are connected or once a day or something like that, that's very different from having to do it for every TCP session.
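
For illustration, here is a sketch of the "once per peer" reading: the authenticated D-H result is cached per peer identity, and every later TCP session to that peer reuses it. All names are invented:

    /* Illustrative context cache: D-H runs once per peer, and
     * subsequent sessions to that peer reuse the result. */
    #include <stdint.h>
    #include <string.h>

    #define MAX_CONTEXTS 1024

    struct mh_context {
        uint8_t peer_eid[16];    /* peer endpoint identifier    */
        uint8_t shared_key[32];  /* result of authenticated D-H */
        int     valid;
    };

    static struct mh_context cache[MAX_CONTEXTS];

    struct mh_context *context_for(const uint8_t peer_eid[16])
    {
        for (int i = 0; i < MAX_CONTEXTS; i++)
            if (cache[i].valid &&
                memcmp(cache[i].peer_eid, peer_eid, 16) == 0)
                return &cache[i];  /* hit: no new D-H needed    */
        return NULL;               /* miss: run D-H, then store */
    }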

But is it really so?  If your application
needs the multi-homing benefits, doesn't that usually mean that it
expects to continue communication over some time span?


The users want to control multihoming; having the application specifically ask for it is probably unworkable.

If so, does the less-than-a-second delay caused by an authenticated D-H exchange really matter?


Maybe yes, maybe no. I'm also worried about the CPU usage, BTW.

At this juncture, I'll observe that we have T/TCP (TCP for transactions), which pretty much eliminates the delay caused by the TCP three-way handshake, but T/TCP is often badly implemented and, as far as I can tell, extremely rarely used. The same goes for HTTP session keepalive/reuse.
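
For reference, this is roughly what the T/TCP client side looked like on the BSD stacks that implemented RFC 1644 (from memory, so treat the details as a sketch): sendto() on an unconnected TCP socket does an implied connect, and MSG_EOF marks the end of data, letting SYN, data and FIN share one segment:

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <unistd.h>

    /* T/TCP-style transaction client: implied connect + data +
     * FIN in a single call, eliminating the separate handshake
     * round trip.  MSG_EOF was a BSD extension; most stacks
     * never implemented it. */
    int ttcp_request(const struct sockaddr_in6 *dst,
                     const char *req, size_t len)
    {
        int s = socket(AF_INET6, SOCK_STREAM, 0);
        if (s < 0)
            return -1;
        if (sendto(s, req, len, MSG_EOF,
                   (const struct sockaddr *)dst, sizeof *dst) < 0) {
            close(s);
            return -1;
        }
        return s;  /* caller reads the reply, then closes */
    }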

Related to this, it does look like we could develop a protocol
where the hosts initially have a very lightweight piggybacked
exchange, with reasonable initial security and the possibility to
"upgrade" the context security to the authenticated D-H level.
However, such a protocol is inherently more complex than one where
one always performs an authenticated D-H initially.


I'm not worried about this kind of "complexity". Having a bunch of simple things is "complex" but workable. It's having a single thing that is hard to understand, or a bunch of things that interact in complex ways, that causes problems.
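
To show what I mean, here's a sketch (all names invented) of the extra state an "upgrade later" design drags around compared to an always-authenticated one:

    /* Illustrative only: the security levels an "upgrade later"
     * protocol must track.  All names invented. */
    enum ctx_security {
        CTX_NONE,         /* no shared state yet               */
        CTX_LIGHTWEIGHT,  /* piggybacked exchange done: cheap, */
                          /* but weaker initial assurance      */
        CTX_DH_AUTH,      /* upgraded: authenticated D-H done  */
    };

    /* Every code path touching the context now has to decide
     * what each level is allowed to do, and handle an upgrade
     * racing against a rehoming event.  An always-D-H design
     * has exactly one authenticated state. */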

Hence, to me the question is whether the performance benefit from
the delayed state setup is really worth the added complexity, taking
the nature of multi-homed applications into consideration.


How many different implementations of this kind of stuff are there going to be? 10? 25? 100? And how much time is the human race going to lose by having to wait for 50 ms for EVERY new session on EVERY computer for the next 20 years?
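
To put invented but plausible numbers on it:

    10^8 computers x 100 new sessions/day x 50 ms/session
        = 5 x 10^8 seconds of waiting per day,
        or roughly 16 aggregate person-years, every single day.

(All figures are made up for illustration; the point is only that a per-session cost multiplies enormously.)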

It seems to be in vogue to publish half-baked products and then clean up the mess later, but nobody ever got fired for getting it right the first time...