Re: Re: Re: Draft of updated WG charter
Jay,
> I view *sites* as multi-homed, not hosts. Therefore, I view the problem of
> how to make multi-homing work at the site (usually AS) boundary with other
> entities (ASes), not how to make the individual hosts do it.
Certainly you are right about the _placement_ of additional connectivity.
However, this does not automatically mean that the mechanism to integrate _use_
of the alternative connections must be implemented at the site gateway. Of
course, we need to know the type of multiaddressing use that is desired, to be
able to figure out the tradeoffs for where to place the mechanism.
In the simplest, failover-only use of multiple access links, the additional
links would go unused until a failure. Any sort of load balancing requires
either blind round-robin or quite a bit of knowledge about the traffic flows.
Neither of those choices strikes me as a very good idea.
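To make the contrast concrete, here is a hypothetical sketch (not from any of
the proposals under discussion) of the two strategies, assuming a site gateway
that must pick one of several upstream links per packet. The link names and
function names are illustrative assumptions.

```python
import itertools
import hashlib

# Illustrative upstream links at a multihomed site gateway.
LINKS = ["isp-a", "isp-b"]

# Blind round-robin: needs no flow knowledge, but packets belonging to
# one transport flow get sprayed across links, reordering the stream.
_rr = itertools.cycle(LINKS)

def pick_round_robin(_packet):
    return next(_rr)

# Flow-aware: keeps each 5-tuple on a single link, but the gateway must
# parse headers and compute per-flow state for every packet it forwards.
def pick_by_flow(src, dst, sport, dport, proto="tcp"):
    key = f"{src}|{dst}|{sport}|{dport}|{proto}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return LINKS[digest % len(LINKS)]
```

The point of the sketch is simply that the "smarter" choice pushes flow
classification into the infrastructure, while the "dumb" one damages flows.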
On the other hand, how will an endpoint know that it has multiple locators, if
the attachments are at site gateways? There needs to be a way to propagate the
information back to the endpoints.
As tempting as it is to make the paradigm choice immediately, and based on one
or another core principle, the existence of quite a few interestingly different,
detailed proposals suggests that the choice is not at all obvious. This is
clearly something that has significant trade-offs in utility, design complexity
and operations costs.
> The routing
> infrastructure should deal with most of the burden of getting packets to
> destinations as efficiently as possible.
I completely agree. It should deliver packets. It does this quite well. The
effort to get it to that point of performance has been considerable. For all
that, the capabilities of the basic delivery service are pretty limited. (QoS
comes to mind as the major example of an enhancement that certainly cannot be
done elsewhere in the architecture, and still has a long way to go.)
So I believe this means that focusing on the delivery of individual packets is
what the infrastructure should do. Integrating alternatives across a stream of
packets should be done in endpoints or, perhaps, site gateways, so that the rest
of the net infrastructure does not have to incur the cost of that integration.
It also happens that this permits treating mobility and multihoming together,
with a single mechanism.
> The hosts shouldn't be expected to
> do that job, because they are ill-equipped to do so. They should focus on
> layer 4 & up.
Oh. So I guess we should not have hosts distinguish between hosts on the local
net and remote destinations? I suspect there are a few other functions
we need to remove from the current Host IP layer. Certainly IPsec needs to be
deprecated.
In other words, perhaps this topic is a tad more complicated?
> > And what is the summary description of the intelligence currently
> > required for v4 multihoming?
>
> In the common case where the site (not the host) is multi-homed, the host
> needs to know very little.
OK. Now I understand. And there is no arguing that it sounds very appealing.
There is one small problem. You are talking about taking a model that has been
worked on for, what, 10 years? Yet it has gained virtually no adoption. And we
should impose that model on the future architecture?
(Sorry. I know that's harsh, but this topic needs to take note of realities.)
> I realize that cranking the dial up from 32 to 128 raises serious concerns
> about the ability of the routing infrastructure to handle
> non-connectivity-dependent or non-provider-based addressing. However the
> discussions I've seen about IPv6 address usage lead me to believe that
> there
> won't be anywhere near 2^128 routed prefixes.
The IETF has done an impressively bad job of predicting the future. (To be
fair, so has pretty much everyone else.)
For example, note that during the CIDR work, efforts to predict when we would
reach a serious crunch on IP address allocation were confidently offered as
being roughly around the year 2020, or maybe 2015. Those same efforts also
completely dismissed all concerns about the effect of multihoming on the
routing tables, because multihoming was not particularly popular at the time.
Tony chaired the CIDR Deployment wg and might remember these discussions.
So a different view of your prediction is that you want to rely on a theoretical
prediction to ensure the safety of what is probably the most critical, most
fragile and most problematic architectural component of the Internet.
I'd rather not do that.
> > It depends on the nature and degree of the routing work done by the end
> > host, and how much administration is required for it.
>
> Based on what I've seen of how poorly hosts are defaulted & administered,
> any increase in complexity & responsibility placed there is a bad thing.
As correct as your assessment of the history is, it leads to the kind of
thinking that says a salesman's (or customer service representative's) job
would be far easier if it weren't for those pesky customers. Besides, the
history of the infrastructure has had its own downsides.
> > And requiring the infrastructure to change before a function is useful
> > tends to require 3 (4? 5?) orders of magnitude more delay.
>
> Really? It seems that I could upgrade my O(10) routers before any
> significant fraction of the O(10k) hosts on my campus could get upgraded.
Ahh. This probably gets to the nugget of the difference in perspective.
You said "my". However a change to the infrastructure requires coordinated
effort by lots of independent "my"s. Adoption of features that have multi-hop
dependencies across independent administrations has proved to have a rather high
latency before reaching a critical mass of utility.
> > Hmmm. Simple end system components, complex network components.
> >
> > Isn't that the phone system approach, and rather pointedly divergent
> > from the historical Internet approach?
>
> I think it depends on where you divide the responsibility. My division is
> that the complexity of layer 3 (IPv4 & IPv6) ought to be in the network, &
> the complexity of layer 4 & up ought to be in the hosts.
Too late. Host Layer 3 already has some pretty interesting complexity.
However, it happens that my personal view tends toward agreeing with you,
somewhat. The difference is that I believe we are creating the need for a layer
between 3 and 4, which I'm tending to call Endpoint IP, distinct from Relay IP.
This is a generalization of the shim/wedge model that some multiaddressing
proposals are pursuing.
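To make the shim idea concrete, here is a hypothetical sketch of what such a
layer does, under my own assumptions rather than any specific proposal: the
class name, method names, and locator strings are all illustrative.

```python
# Sketch of an "Endpoint IP" shim: layer 4 binds to a stable endpoint
# identifier, while the shim maps it onto whichever "Relay IP" locator
# currently works. Multihoming and mobility then use one mechanism.

class EndpointShim:
    def __init__(self, endpoint_id, locators):
        self.endpoint_id = endpoint_id   # stable identity seen by transport
        self.locators = list(locators)   # routable addresses; may change
        self.active = 0                  # index of the locator in use

    def current_locator(self):
        return self.locators[self.active]

    def fail_over(self):
        # Multihoming case: switch to the next locator when the active
        # one stops delivering packets. Transport never notices.
        self.active = (self.active + 1) % len(self.locators)
        return self.current_locator()

    def rehome(self, new_locators):
        # Mobility case: the entire locator set changes, but the
        # endpoint identifier does not -- the same mechanism serves both.
        self.locators = list(new_locators)
        self.active = 0
```

The design point is that integration across a packet stream lives here, at the
endpoint, so the relay infrastructure keeps forwarding individual packets.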
d/
--
Dave Crocker <dcrocker-at-brandenburg-dot-com>
Brandenburg InternetWorking <http://brandenburg.com>