
Re: API, was: Question re HIP dependency & some architectural considerations



Actually I think our terminology is tripping us up here.

"Applications"? The network stack is rarely called by
applications. It's called by network adaptation code
built into "lower middleware" such as the Java run time
system, whose job is to hide all this stuff from actual
applications. That's an opportunity - because this lower
middleware can be a super-shim that can do all kinds of
smart things.
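
To make that concrete: such lower middleware could expose
something like the sketch below. The name shim_connect and
all details are mine, purely illustrative, not a proposed
API; the point is that resolution, locator selection and
fallback are hidden from the application entirely.

  /* Hypothetical "super-shim" call: the application gives a name
   * and a service; the shim picks and manages the locators. */
  #include <sys/types.h>
  #include <sys/socket.h>
  #include <netdb.h>
  #include <unistd.h>

  int shim_connect(const char *host, const char *service)
  {
      struct addrinfo hints = { 0 }, *res, *ai;
      int fd = -1;

      hints.ai_family = AF_UNSPEC;      /* let the shim choose v4/v6 */
      hints.ai_socktype = SOCK_STREAM;

      if (getaddrinfo(host, service, &hints, &res) != 0)
          return -1;

      /* Try each locator in turn; the application never learns
       * which one worked, or that there was more than one. */
      for (ai = res; ai != NULL; ai = ai->ai_next) {
          fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
          if (fd < 0)
              continue;
          if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
              break;
          close(fd);
          fd = -1;
      }
      freeaddrinfo(res);
      return fd;   /* -1 if every locator failed */
  }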

"API" has the same confusing implication. We should be
thinking about a more sophisticated network programming
model, indeed, but calling it an API is likely to
restrict our thinking too much.

There is work going on in ICSC (IT-API, http://www.opengroup.org/icsc/)
and DAT (uDAPL, http://www.datcollaborative.org/uDAPLv11.pdf)
which shows that sockets are not sacred.

However, like it or not, multi6 *does* have to deal with
the real world of ULPs that call the current socket API.
That may only be stage 1, but it has to work.
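
Concretely, "stage 1" means that perfectly ordinary code
like the following has to keep working unchanged. This is
standard sockets code; the literal address is just an
example value.

  /* A typical existing ULP: it handles a raw, routable IPv6
   * address itself and binds the connection's fate to it. */
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int legacy_connect(void)
  {
      struct sockaddr_in6 peer;
      int fd = socket(AF_INET6, SOCK_STREAM, 0);
      if (fd < 0)
          return -1;

      memset(&peer, 0, sizeof peer);
      peer.sin6_family = AF_INET6;
      peer.sin6_port = htons(80);
      /* The address doubles as the identifier, fixed for the
       * connection's lifetime: if this locator dies, so does
       * the session. */
      inet_pton(AF_INET6, "2001:db8::1", &peer.sin6_addr);

      if (connect(fd, (struct sockaddr *)&peer, sizeof peer) < 0) {
          close(fd);
          return -1;
      }
      return fd;
  }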

   Brian

Iljitsch van Beijnum wrote:
On 26-jul-04, at 12:38, Pekka Nikander wrote:

b) Secondly, it is starting to look like we really should
think about the API issues more seriously.


Yes. Hence the call for apps feedback earlier.

Looking at the multi6 goals and recent discussion, it looks like
any acceptable solution MUST support the unmodified IPv6 API, using
routable IPv6 addresses as API-level identifiers.  Only that way
can we fully support all existing applications.


Unfortunately, this way we also perpetuate the brokenness of the current API philosophy.

What I'd like to see is a dual track approach: we do what we can to support multihoming for existing IPv6 applications, and we also create new, less limited ways to do the same and more.

By also allowing identifiers that aren't routable IPv6 addresses, our stuff becomes much more powerful and future-proof. This means we need some more type fields and have to make some more things variable length, but that's not very hard. However, security is harder for non-routable identifiers. I don't think we necessarily want to solve that now, but if we can create hooks for solving it in user space, then it's quite easy to add later on. (The same way that IPsec needs kernel support for some of its stuff, but the Racoon IKE daemon is basically a user-level process.)
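
To sketch what I mean by more type fields and variable length (the field names and type values below are made up for illustration, not from any draft):

  /* A typed, variable-length identifier instead of a bare
   * 128-bit address.  Illustrative only. */
  #include <stdint.h>

  #define ID_TYPE_IPV6    1   /* payload is a routable IPv6 address */
  #define ID_TYPE_HIT     2   /* e.g. a HIP-style key hash */
  #define ID_TYPE_OPAQUE  3   /* verified by user space, not the kernel */

  struct endpoint_id {
      uint8_t type;      /* which namespace the identifier lives in */
      uint8_t len;       /* payload length in bytes (16 for IPv6) */
      uint8_t id[32];    /* payload; only the first 'len' bytes used */
  };

A routable IPv6 address then becomes just one identifier type among several, and the non-routable types can be handed to a user-level daemon for verification, Racoon-style.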

To me, it remains an interesting question how to support API evolution.
One architectural possibility would be to use HIP as an example:
 - convert AIDs into EIDs at the API level


I forget what the A and E stand for...

This use of internal
EIDs that are not IP addresses in the protocol layer seems to
offer an evolution path for the API.


Another level of indirection?

I don't think that's the solution. The problem is that applications see values that they think are IP addresses. The first order of business is removing the assumption that these are actual IP addresses. This can't be done by adding stuff lower in the stack.
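
For example, code in this style is everywhere; it treats whatever getpeername() returns as the peer's real, stable address, and often uses it as the peer's identity:

  /* Typical application code that assumes address == identity.
   * Hand it a value that is not a routable address, or one that
   * can change mid-session, and its assumptions break. */
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <stdio.h>
  #include <sys/socket.h>

  void log_peer(int fd)
  {
      struct sockaddr_in6 peer;
      socklen_t len = sizeof peer;
      char buf[INET6_ADDRSTRLEN];

      if (getpeername(fd, (struct sockaddr *)&peer, &len) == 0) {
          inet_ntop(AF_INET6, &peer.sin6_addr, buf, sizeof buf);
          printf("peer identity: %s\n", buf);  /* address as identity */
      }
  }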

c) Thirdly, but perhaps less importantly, it looks like we should
also take an architectural stance on the performance implications.
Some people seem to be of the opinion that requiring an
authenticated D-H exchange each and every time one wants to use
multi-homing is unacceptable.


Please define "use multihoming". If this means once when ISPs are connected or once a day or something like that, that's very different from having to do it for every TCP session.

But is it really so?  If your application
needs the multi-homing benefits, doesn't that usually mean that it
expects to continue communication over some time span?


The users want to control multihoming; having the application specifically ask for it is probably unworkable.

If so, does the sub-second delay caused by an authenticated D-H exchange really matter?


Maybe yes, maybe no. I'm also worried about the CPU usage, BTW.

At this juncture, I'll observe that we have T/TCP (TCP for Transactions), which pretty much eliminates the delay caused by the TCP three-way handshake. But T/TCP is often badly implemented and, as far as I can tell, extremely rarely used. The same goes for HTTP session keepalive/reuse.
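
In case anyone has forgotten what T/TCP buys you: in the 4.4BSD flavor (this is from memory, so treat it as a sketch), the client collapses connect, request and FIN into a single call, so no separate handshake round trip:

  /* T/TCP-style client transaction: sendto() on an unconnected
   * TCP socket with MSG_EOF does the implicit connect, sends the
   * request and the FIN in one shot.  BSD-specific; MSG_EOF does
   * not exist on all systems. */
  #include <sys/socket.h>
  #include <netinet/in.h>

  ssize_t ttcp_request(int fd, const void *req, size_t len,
                       const struct sockaddr_in6 *peer)
  {
      return sendto(fd, req, len, MSG_EOF,
                    (const struct sockaddr *)peer, sizeof *peer);
  }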

Related to this, it does look like we could develop a protocol
where the hosts initially perform a very lightweight piggybacked
exchange, with reasonable initial security and the possibility to
"upgrade" the context security to the authenticated D-H level.
However, such a protocol is inherently more complex than one
where one always performs an authenticated D-H exchange initially.


I'm not worried about this kind of "complexity". Having a bunch of simple things is "complex" but workable. It's having a single thing that is hard to understand, or a bunch of things that interact in complex ways, that causes problems.
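
For what it's worth, the upgradeable context can itself be a bunch of simple things. A rough sketch (the state names are mine, not from any draft):

  /* Two-level context: start with the cheap piggybacked exchange,
   * upgrade to an authenticated D-H context only when the session
   * turns out to be worth it.  Illustrative, not a spec. */
  #include <stdbool.h>

  enum ctx_state {
      CTX_NONE,         /* no multihoming context yet */
      CTX_LIGHTWEIGHT,  /* piggybacked exchange: weak but cheap */
      CTX_DH_VERIFIED   /* authenticated D-H exchange completed */
  };

  struct mh_context {
      enum ctx_state state;
  };

  /* Cheap setup, piggybacked on the first packets. */
  void ctx_init_lightweight(struct mh_context *c)
  {
      c->state = CTX_LIGHTWEIGHT;
  }

  /* Upgrade later, e.g. once the session has lived long enough
   * that a rehoming event is actually plausible. */
  bool ctx_upgrade(struct mh_context *c, bool dh_succeeded)
  {
      if (c->state == CTX_LIGHTWEIGHT && dh_succeeded)
          c->state = CTX_DH_VERIFIED;
      return c->state == CTX_DH_VERIFIED;
  }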

Hence, to me the question is whether the performance benefit from
the delayed state setup is really worth the added complexity, taking
the nature of multi-homed applications into consideration.


How many different implementations of this kind of stuff are there going to be? 10? 25? 100? And how much time is the human race going to lose by having to wait for 50 ms for EVERY new session on EVERY computer for the next 20 years?

It seems to be in vogue to publish half-baked products and then clean up the mess later, but nobody ever got fired for getting it right the first time...