
Re: CELP (was RE:)



marcelo,

mb> A site becomes multihomed because it wants to improve its fault tolerance.
mb> This means that if the site were single homed, some parts of the Internet
mb> would be unreachable for it, so it wants to overcome this problem by
mb> multihoming.


In fact, this reason had not occurred to me!  I have always assumed that
the reliability reason for multihoming was to continue operation when the
site's own interface goes down.  That's quite different from a concern
about a network partition elsewhere.

(Geographically distributed multihoming for large organizations is a
different matter.)


In any event, let me again stress that I am pursuing this kind of issue
in order to distinguish near-term, narrow concerns from longer-term,
general concerns. The hope is that this distinction will allow initial
solutions to be simpler.

From my own sense of the net and from the responses so far, I believe
that a host can maintain an address pool based solely on a simple list
of the destination's addresses. That is, for the near term, the
combinatorial complexities of considering local/remote address pairs can
be deferred. Of course, the implications (that is, the limitations) of
this simplifying assumption need to be stated.
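To make that simplification concrete, here is a rough Python sketch of the
kind of pool I have in mind: one flat, preference-ordered list of the
correspondent's addresses, with failover simply walking down the list.  The
names (DestinationPool, fail_over) and structure are purely illustrative and
not drawn from any spec; note what is deliberately absent, namely any notion
of local/remote address pairs.

  from dataclasses import dataclass
  from typing import List

  @dataclass
  class DestinationPool:
      """Pool for one correspondent: just its addresses, in preference order."""
      addresses: List[str]
      current: int = 0

      def active(self) -> str:
          """Address currently in use for this correspondent."""
          return self.addresses[self.current]

      def fail_over(self) -> str:
          """Mark the active address unreachable; move to the next candidate."""
          self.current = (self.current + 1) % len(self.addresses)
          return self.active()

  if __name__ == "__main__":
      # Hypothetical multihomed peer with two provider-assigned addresses.
      peer = DestinationPool(addresses=["2001:db8:a::1", "2001:db8:b::1"])
      print("using", peer.active())
      print("after failure, using", peer.fail_over())

The limitation, of course, is exactly the one noted above: because the pool
is keyed only by the destination, it cannot express cases where reachability
depends on which local address is paired with which remote address.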

d/
--
 Dave Crocker <dcrocker-at-brandenburg-dot-com>
 Brandenburg InternetWorking <www.brandenburg.com>
 Sunnyvale, CA  USA <tel:+1.408.246.8253>