RE: Multihoming by IP Layer Address Rewriting (MILAR)



On Fri, 7 Sep 2001, Michel Py wrote:

> >> I want to exclude the routers that run defaultless
> >> by choice, rather than out of necessity. You want
> >> to include those.

> That is where I don't follow you. In defining the semantics
> of what is the DFZ, why would you exclude the routers that
> run defaultless by choice? I thought that the definition
> of the DFZ should match WHAT the routers actually do and not WHY.

The "tier 1" networks need to run defaultless, even if this becomes very
expensive. People who do it by choice can always stop if it becomes too
expensive for them. So if it's only the latter who complain, I don't see
why we should listen.

> >> but only 32 MB on one of the VIP cards. Those cards need
> >> a copy of the forwarding table and 32 MB
> >> leaves too little room for error.

> Selling the customer Cisco 12416s would solve the problem ;-)
> Then you can scrounge the 7500 for your CCIE lab :-)

Ah, should have thought of that...

> >> I think that's not the best solution, since this way a
> >> continuous stream of reachability information flows
> >> around the globe, whether this information is of use
> >> at a certain point at a certain moment or not.

> I am unsure of what you are referring to here. If you are
> talking about the MHTP routing table, the way I understand
> BGP is that routes are not exchanged if they do not change,
> so there is no reachability information floating around the
> globe unless a change occurs.

Of course. But with 100k+ multihomers that means many changes per minute
(even if we assume just a few flaps a month on average), and if
multihoming becomes easier the number will grow rapidly.
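
Just to put rough numbers on that (the figures below are purely
illustrative assumptions, not measurements):

    # Back-of-the-envelope estimate of the global rate of reachability
    # changes. Assumed numbers: 100,000 multihomed sites, each flapping
    # about 3 times per month on average.
    sites = 100000
    flaps_per_site_per_month = 3
    minutes_per_month = 30 * 24 * 60   # about 43,200

    changes_per_minute = float(sites * flaps_per_site_per_month) / minutes_per_month
    print("about %.1f reachability changes per minute" % changes_per_minute)
    # -> roughly 7 changes per minute that every participating system
    #    would have to process, before multihoming gets any easier.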

I think any solution where many systems around the world must be kept up
to date on the status of the connections of all multihomers worldwide
won't scale: both the amount of information and the number of changes
increase with the number of multihomers, so processing requirements grow
as O(x^2), or maybe O(x log x) with a good implementation. What we want
is growth of no worse than O(x), and preferably O(log x).
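
To make the scaling argument a bit more concrete, here is a rough
sketch (all parameters are made up for illustration; this is not a
model of any particular proposal):

    # "Push" model: every one of the x participating systems is kept up
    # to date on the reachability changes of all x multihomers, so the
    # total work grows roughly as x * x.
    def push_model_work(x, flaps_per_site=3):
        updates_generated = x * flaps_per_site
        return x * updates_generated               # ~ O(x^2)

    # "Pull" model: a system only checks the addresses of the peers it
    # actually talks to, at the moment it talks to them, so the total
    # work grows roughly linearly with x.
    def pull_model_work(x, connections_per_site=50, probes_per_connection=2):
        return x * connections_per_site * probes_per_connection   # ~ O(x)

    for x in (1000, 10000, 100000):
        print(x, push_model_work(x), pull_model_work(x))

The absolute numbers mean nothing; the point is how much faster the
push figure grows than the pull figure as x goes up.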

And it's not necessary: as soon as you start communicating, it's not too
hard to find out which addresses are valid and which aren't. Finding out
all the potentially usable addresses in advance is what's harder.

> >> Also, using MH addresses that are reachable without
> >> translation makes sense. That way, there is instant
> >> compatibility with non-MH-capable hosts
> >> and it saves some processing and address space.

> That way the implementation can be gradual, and there
> are tools (control of proxying rate) that could help
> enforce the migration, if desired or required.

So you agree that using separate MH addresses that aren't reachable
through existing SH means is not the best solution?

> >> Disagree: the DNS system has its own redundancy
> >> features, which are even better than multihoming.

> We are actually saying the same thing: The design of
> DNS redundancy is better than multihoming. How do we
> get people to actually use it?

Why should we? If they misconfigure their DNS, they have to live with the
consequences. If we can make everything work when both sides of the
session have their stuff in working order and there is a possible path,
that should be enough. User-friendliness is good for implementations, but
not so much for protocols, IMO.