Re: Anycast root metrics and analysis



    Date:        Wed, 1 Nov 2000 09:58:14 -0800 (PST)
    From:        hardie@equinix.com
    Message-ID:  <200011011758.JAA03848@nemo.corp.equinix.com>

  | It's not a DNS requirement that the quickest server be used, it's
  | an optimization that helps the applications which depend on the DNS.

Yes, so we desire it to be "fast enough" - we don't require that there
not be something even faster that we're missing out on.

  | I actually believe the point is to distribute the data to multiple
  | places in the network topology in order to improve performance for
  | those who have been under served.

Yes, it should have that effect.

  | If the current plan resulted in worse performance to
  | end sites, I believe it to have failed.

Absolutely.   But there is a difference between "worse performance
than now" and "worse performance than could theoretically be obtained".
The former is what we need to guard against, not the latter.

  | I also believe that there
  | might be other consequences, such as concentration of load on a small
  | number of the extant roots.

That would not be good - but is it likely that it would be worse
than it now is?  In fact, is it even really possible (unless someone
deliberately set out to achieve that)?

  | How, then, can we test that the system it is providing can be optimal?

I don't think we need to - all that's needed (long term) is that it
be better than it is now.   Then, by placing root servers at more
locations, it should be possible to improve things even further -
measurements from various sites can help determine whether each new
location does; if not, the new server that made things worse should
be (re-)moved.   I don't think we should consider anything to have
failed just because it could be even better still.
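The measurement described above - query each root server from several
vantage points, compare round-trip times, and flag any placement that
makes things worse - could be sketched roughly as below.  This is an
illustrative sketch only, not part of any actual measurement plan; it
builds a minimal DNS query by hand and times it over UDP, and the
server addresses would be whatever the person running the measurement
chooses to probe.

```python
import socket
import struct
import time

def build_query(name, qtype=2, qid=0x1234):
    """Build a minimal DNS query packet (qtype 2 = NS, class IN)."""
    # Header: ID, flags (none set), QDCOUNT=1, AN/NS/AR counts = 0
    header = struct.pack(">HHHHHH", qid, 0x0000, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".") if label
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)

def time_query(server, packet, timeout=2.0):
    """Send the packet to server:53 over UDP; return the round-trip
    time in seconds, or None if the query times out."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        start = time.monotonic()
        s.sendto(packet, (server, 53))
        try:
            s.recvfrom(4096)
        except socket.timeout:
            return None
        return time.monotonic() - start
```

Run from each vantage point against each candidate server (e.g.
`time_query("198.41.0.4", build_query("."))`), repeat to smooth out
jitter, and compare the distributions before and after a new server
is introduced - if times got worse, that placement should be
reconsidered.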

kre