
Popping up a level (was Re: [RRG] On identifiers, was: Re: Does every host need a FQDN)



On Aug 13, 2008, at 4:20 PM, Tony Li wrote:

> This seems a bit backwards. Shouldn't the socket API in BSD have been
> the correct abstraction that matched the semantics that we wanted our
> various network namespaces to have?

I have long believed that the socket API was (and is) broken in that it forces applications to know about addresses, and even the semantics of addresses. The fact that the socket API was _slightly_ changed for IPv6, just enough to require modifications to every network-aware program, while still keeping this brokenness, is (in retrospect) stunning, particularly when it would have been 'easy' to define a 32-bit "network handle" (like a file handle) drawn from 240/4 that would serve as an index into kernel-managed addresses/names/PCBs/etc. A hack, yes, but at least one that would not have required modifying every network-aware program.
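To make that concrete, here is a minimal sketch of the hack. net_handle() below is invented purely for illustration (no such call exists): the idea is that the kernel resolves the name however it likes, keeps the resulting state to itself, and hands back a 32-bit value from 240/4. Because the handle is exactly the width of an IPv4 address, it drops into an unmodified sockaddr_in, which is the whole point:

#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Hypothetical kernel interface, invented for illustration: resolves
 * "name" (to IPv4, IPv6, whatever comes next), stashes the result in
 * kernel-managed state, and returns an index encoded as a 32-bit
 * value in 240.0.0.0/4. */
extern uint32_t net_handle(const char *name);

int connect_by_name(const char *name, uint16_t port)
{
    struct sockaddr_in sin;
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0)
        return -1;

    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(port);
    /* The handle fits exactly where an IPv4 address goes, so neither
     * the application nor the socket API needs to change. */
    sin.sin_addr.s_addr = net_handle(name);

    return connect(s, (struct sockaddr *)&sin, sizeof(sin));
}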

But we've got what we've got (or so I assume).

> Shouldn't we, as a research group, be looking past that to what a new
> socket API might also look like?

I suppose we could.
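For concreteness, one possible shape such an API might take, with applications never seeing an address at all. Every name below (net_open() and friends) is invented for illustration, not something anyone has proposed or agreed on:

#include <sys/types.h>   /* ssize_t */
#include <stddef.h>      /* size_t */

/* An opaque connection, analogous to a FILE *: the application deals
 * only in names and services.  Address families, address selection,
 * and multihoming/re-homing all live below this line. */
typedef struct net_conn net_conn_t;

net_conn_t *net_open(const char *name, const char *service);
ssize_t     net_send(net_conn_t *c, const void *buf, size_t len);
ssize_t     net_recv(net_conn_t *c, void *buf, size_t len);
int         net_close(net_conn_t *c);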

How many operating systems, languages, environments, etc. should we consider changing?

While I am being a bit facetious, this raises some higher-level, fundamental points that I'd like some clarity on:

Back at the Amsterdam workshop (2 years ago now), I had made the assumption that, in the interests of stopping the bleeding quickly, we had to assume minimal changes to the deployed infrastructure. With this assumption, the "jack up" (as Noel puts it) 16+16 model with a new network element (now called ITR/ETR) inserted at the "site" edge made the most sense to me, since it required absolutely no change to existing network elements (from applications to hosts to core routers to even routing protocols). The hard part would be figuring out how to do the mapping between the "inside 16" and the "outside 16", which I saw as mostly an engineering problem with a variety of equally viable solutions, each with its own tradeoffs.
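As a toy illustration of that mapping step (all names and structures below are invented; this is not any deployed ITR/ETR design): the ITR takes the destination's "inside" identifier, looks up the corresponding "outside" locator, and tunnels the packet toward it.

#include <stdint.h>
#include <stddef.h>

/* Toy map-and-encap table an ITR might consult; all names here are
 * invented for illustration. */
struct map_entry {
    uint32_t eid_prefix;   /* "inside" identifier prefix */
    uint32_t eid_mask;
    uint32_t rloc;         /* "outside" locator: the ETR's address */
};

/* Returns the locator to tunnel toward, or 0 if no mapping is known
 * (a real system would then query the mapping service, however that
 * ends up being built). */
uint32_t itr_lookup(const struct map_entry *map, size_t n,
                    uint32_t dst_eid)
{
    for (size_t i = 0; i < n; i++)
        if ((dst_eid & map[i].eid_mask) == map[i].eid_prefix)
            return map[i].rloc;
    return 0;
}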

However, various people have argued that my base assumption wasn't valid, that we had the time to "do it right" (although definitions of "right" and "wrong" have, to my knowledge, never been given) and revise any or all aspects of the existing infrastructure, that is, essentially write off IPv6 as a learning experience in what not to do and define IPv7. Other folks have argued for pretty much every position on the spectrum in between.

Fundamentally, it would seem to me that there are some missing ground rules here. Is anything fixed? Should APIs be considered inviolate? Host network stacks? Routers, core or edge? Existing infrastructure models? Business models? I gather the official answer to all of these is "no", but is this actually realistic?

Also, what sort of timeframe are we considering? While I tend to agree that the routing scalability sky is falling, it clearly hasn't fallen yet and I've heard wildly varying estimates of when we'll reach the event horizon. Should we assume the output of this group needs to be deployed in 2 years? 5? 10? 50?

Thanks,
-drc

