
RE: CELP (was RE:)



> A site becomes multihomed because it wants to improve its fault tolerance.
> This means that if the site were single-homed, some parts of the Internet
> would be unreachable for it, so it wants to overcome this problem by
> multihoming.
> 
> So, I don't think I agree with the assumption that the Internet is mostly
> reachable, because if this were so, why would sites multihome in the first
> place?
 
I agree with you. From a network perspective, the Internet is mostly reachable
(although with the associated route churn, convergence delays, etc.). But from
a user's perspective, failures are common even if you multihome. We should
work to cope with failures rather than try to avoid them 100% (which is not
possible at all). In the current Internet, performance was never the
bottleneck (one reason for the lack of incentive to deploy QoS); reliability
is the real issue. Site multihoming is one of many tools to increase fault
tolerance.
 
Some numbers about Internet path failures (ack to Hari Balakrishnan):
 
1995 - routing churn at the rate of 3.3%                       - Vern Paxson     (SIGCOMM)
1997 - 10% of routes available less than 95% of the time       - Labovitz et al. (INFOCOM)
1997 - less than 35% of routes available 99.99% of the time    - Labovitz et al. (SIGCOMM)
2000 - 40% of path outages take 30+ min to repair              - Labovitz et al. (SIGCOMM)
2001 - 5% of faults last more than 2 hrs 45 min                - Chandra et al.  (IEEE ToN)
2001 - 7.7% of "overlay" path-hours experience a 30-min outage - Andersen et al. (SOSP)
2002 and later - http://bgp.lcs.mit.edu/
 
My posting quota for multi6 is over...