Re: Distribution CPG Protocol
Phil,
Your discussion on an "auction protocol based QoS model" vs. a "policy
based best effort model" clarified many things.
I find that those models differ only in whether a real-time protocol for
peering setup and reconfiguration is used. But in both cases a
distribution peering protocol, a redirection peering protocol, and an
accounting peering protocol are needed; they do their job after peering
relations have been established, whether on-line or off-line.
This 4th protocol, which Oliver Spatscheck pointed out yesterday, I
would call a dynamic content/application deployment protocol, by
analogy with L3 dynamic overlay network deployment protocols such as
the Xbone (www.isi.edu/xbone).
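To make the separation concrete, here is a minimal sketch (in Python,
with hypothetical names; this is not a proposed wire format) of the
four protocol roles once a peering relation exists:

    from enum import Enum

    class PeeringProtocol(Enum):
        # The first three run after a peering relation has been
        # established, whether on-line or off-line:
        DISTRIBUTION = "moves content between peered CDNs"
        REDIRECTION = "steers client requests toward a peer's surrogates"
        ACCOUNTING = "reports delivered hits back to the content owner"
        # The 4th protocol proposed here, analogous to L3 overlay
        # deployment protocols such as the Xbone:
        DYNAMIC_DEPLOYMENT = "matches CP requirements with surrogates"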
Though it brings many challenges, as has already been noted, I see a
couple of reasons to advance in this direction as well.
Without a dynamic deployment protocol, i.e. under the policy-based
best-effort model, CDNs will end up posting their surrogate/aggregate
CDN capabilities on a web site. Content Providers (or CDNs representing
a CP) will check up to 10 such sites and sign an off-line contract with
1, 2, or 3 of them. If the CP finds in its accounting data that its
clients' service wasn't as expected, it will need to argue with those
CDNs and/or check the CDNs' web sites again for a different CDN
provider.
It will work; however, it will prevent:
a) optimal service (least cost, best performance), unless we peer with
every CDN that matches our requirements (let's say 7 out of 10, which
is realistic); the optimal-service decision is then passed to, and made
solely at, the redirection system;
and b) fine-grained per-surrogate selection (who is interested in that?
ISPs: they would be able to offer their caches as surrogates to content
providers). It is not realistic for a CP to find every possible
surrogate that satisfies its service requirements by browsing ISPs' web
sites. A dynamic deployment protocol that rendezvouses CP service
requirements with surrogate capabilities is needed; a rough sketch of
that rendezvous follows.
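A minimal sketch, in Python, of what that rendezvous step could look
like. The record types, field names, and matching rule are hypothetical
placeholders, not a proposed format; it assumes only that footprints
are advertised as IP prefixes, as discussed below:

    from dataclasses import dataclass
    from ipaddress import ip_network

    @dataclass
    class SurrogateAd:          # what an ISP or CDN would advertise
        owner: str
        footprint: list         # IP prefixes the surrogate serves well
        content_types: set      # e.g. {"http", "streaming"}
        metric: float           # generic cost/performance metric

    @dataclass
    class CPRequirements:       # what the content provider asks for
        client_prefixes: list
        content_type: str
        max_metric: float

    def rendezvous(req, ads):
        """Return the advertised surrogates that satisfy the CP's
        requirements, best (lowest metric) first."""
        matches = [
            ad for ad in ads
            if req.content_type in ad.content_types
            and ad.metric <= req.max_metric
            and any(ip_network(c).subnet_of(ip_network(f))
                    for c in req.client_prefixes
                    for f in ad.footprint)
        ]
        return sorted(matches, key=lambda ad: ad.metric)

The point is only that the matching can be mechanical once requirements
and capabilities are expressed in comparable terms.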
Anyone interested in following this path?
-Oscar
Phil Rzewski wrote:
>
> At 10:53 AM 1/9/01 -0500, Brad Cain wrote:
> > > Is there consensus that the distribution protocol operates in three phases?
> > >
> > > 1. CDN advertises its capabilities
> > > 2. The Content Provider requests the use of CDN services by identifying
> > > content to be distributed.
> > > 3. The CDN confirms (or denies) the request for service
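(To make the three phases concrete, here is a minimal sketch of the
message exchange; the Python class and field names are hypothetical,
not a proposed wire format:)

    from dataclasses import dataclass

    @dataclass
    class CapabilityAd:            # phase 1: CDN -> Content Provider
        footprint: list            # e.g. ["192.0.2.0/24", ...]
        content_types: set

    @dataclass
    class DistributionRequest:     # phase 2: Content Provider -> CDN
        content_urls: list         # identifies the content to distribute

    @dataclass
    class ServiceResponse:         # phase 3: CDN -> Content Provider
        accepted: bool             # confirm or deny -- the stage Brad
                                   # pushes back on below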
> >
> >I'll push back on stage #3... It looks like "inline qos" to me... what
> >I mean by this is that I don't think we should be negotiating any type
> >of QoS in the protocol... I would say you just advertise to your
> >neighbors
> >and make sure the off-line SLAs cover any QoS...
> >
> >[...]
> > > 1a. footprint (expressed as a set of IP address prefixes)
> > > 1b. cost
> > > 1c. content type supported
> > > 1d. performance
> > > 1e. generic metric
> > > 1f. load measure
> > > 1g. cache capacity (I'm generalizing from "max file size")
> >
> >For simplicity's sake, I vote only for: 1a, 1c, and 1e
> >
> >I would strongly recommend that we not try to design a QoS routing
> >protocol... inter-provider QoS is difficult (if not nearly
> >impossible) to achieve dynamically (i.e. without off-line SLAs
> >and strict measurement)
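(Under Brad's vote, the phase-1 advertisement would shrink to something
like the following sketch; again the names are hypothetical:)

    from dataclasses import dataclass

    @dataclass
    class MinimalCapabilityAd:
        footprint: list          # 1a: set of IP address prefixes
        content_types: set       # 1c: content types supported
        metric: float            # 1e: one generic, policy-comparable metric
        # Deliberately absent: cost, performance, load, and cache
        # capacity -- left to off-line SLAs rather than the protocol.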
>
> I'd like to second Brad Cain's statement to keep QoS out of the protocol.
> However, based on the ensuing discussions, I think there's some need to
> define what we mean when we refer to QoS.
>
> Like many others, I pull a lot of my experience from the routing world. A
> specific example that seems very obvious to me is multi-homing with BGP.
> This has been around for years, and last I knew, it was still entirely
> policy-based. This is to say that, if I have my own AS and buy connectivity
> from multiple providers, the best I can really hope for is to rely on
> things like AS-Path, or to manually override with some self-set metrics, to
> guide traffic flows in and out of my network. I may wish to exchange some
> IP packets with a remote site, and either of my layer 3 providers could
> give me that connectivity. It might seem great to have a protocol that
> determines, in (near-)real time, which provider could do the best job, and
> use that one. Indeed, a couple of premium bandwidth providers have made a
> business out of boasting this kind of functionality through some
> proprietary secret sauces. But last I knew, none of this was creeping into
> open protocols and subsequent implementations. I'd like to speculate it's
> because the problem is extremely difficult to solve (people in this
> discussion have already cited many possible issues with implementing this
> in the content layer, such as differing ideas of what "cost" means).
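(For contrast, a simplified sketch of that purely policy-based
selection: only a manually-set preference and the AS-path length
decide, with no live performance input. The Route record and the
values are hypothetical:)

    from dataclasses import dataclass

    @dataclass
    class Route:
        provider: str
        as_path: list            # e.g. ["65001", "65020"]
        local_pref: int = 100    # the manually-set, self-chosen metric

    def best_path(routes):
        """Pick a route purely by policy: highest local-pref wins,
        then shortest AS-path. No live performance input at all."""
        return max(routes, key=lambda r: (r.local_pref, -len(r.as_path)))

    # Prefer provider A by policy, regardless of which provider is
    # actually performing better right now:
    routes = [Route("A", ["65001", "65020"], local_pref=200),
              Route("B", ["65002", "65020"])]
    assert best_path(routes).provider == "A"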
>
> So what's the opposite of "QoS"? I'd like to say it's "policy-based
> best-effort". Drawing again from the bandwidth universe, let's say I buy my
> connectivity from multiple providers and set up BGP policies to route my
> traffic. This means I first determined out-of-band what I bought from my
> providers. For example, I sign a contract for a DS3 of Internet
> connectivity with one of them, and so they have agreed to allow me to send
> up to 45M of Internet traffic on that pipe, and they'll get it through
> their backbone to necessary peering points, etc. I can use the BGP tricks
> to use that DS3 worth of capacity as I see fit. It's possible that the
> provider will fail to live up to the terms (due to congestion, outages,
> etc.), and it's my responsibility to react to that by doing some
> performance testing of my own and arguing the matter (once again,
> out-of-band) with my provider. I may choose to adjust my policies to
> account for this, but
> it's all out-of-band negotiations resulting in manually-set policies.
>
> Bringing this back into the content layer, a lot of the "QoS-ish" proposals
> I've seen imply in-band communication about what will and won't be done.
> For example, communicating information about cost, load, and performance
> is definitely QoS. All of the proposed "phase 3" is QoS. What's
> essentially being proposed with this type of QoS model is an AUCTION. While
> an auction model might be a nice-to-have, is it really the requirement
> coming from the Content Provider community? Or does the Content Provider
> community just need a policy-based method so they can multi-home between
> CDNs, much as has been done at the bandwidth layer for years? Also, the QoS
> approach seems to require that surrogates be much more than caches (and
> indeed, should probably be origin-style servers). After all, the caches out
> there all have garbage collection systems and are making a best-effort
> attempt to hold on to the "hot set" of content. If you need a system
> that can state exactly how much free space it has, how long it can promise
> to hold onto an object regardless of popularity, and other similar things,
> you're really raising the bar for what it means to be a CDN surrogate
> (which may be fine, but I'm just pointing it out since a lot of the
> existing CDN deployments are based on caches).
>
> What would a "policy-based best-effort" system look like for Content
> Distribution Internetworking? The agreements would be made off-line between
> the Content Provider and the CDNs. I would then propose Brad's selections
> for Phase 1 and Phase 2 components, and the lack of need for Phase 3. In
> fact, you could probably even get away without "content type": I would
> assume that if the Content Provider is going to be sending checks to this
> CDN, they have some idea of the CDN's capabilities. Once the agreement is
> in place, "best-effort" is the law of the land: The CP then distributes its
> objects to the DCPGs of its CDNs, and the CDNs distribute internally as they
> see fit. The CP routes requests to the RRCPGs based on their chosen policy,
> and the CDNs route the requests to the surrogates that actually have the
> content. Those CDNs owe the CP accounting data in return to prove from
> where they delivered hits. It's then the CP's job to review the accounting
> data and do real-time performance measurements to determine if the CDN
> provider meets their definition of "good". If not, it's the CP's
> responsibility (once again, out-of-band) to take this up with the CDN provider.
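(A sketch of the CP-side loop just described; the types, numbers, and
threshold are hypothetical stand-ins for the accounting review and the
manually-set policy:)

    from dataclasses import dataclass

    @dataclass
    class CDN:
        name: str
        delivered_ok: float      # fraction of "good" hits, computed from
                                 # the CDN's accounting data plus the CP's
                                 # own real-time performance measurements

    @dataclass
    class Policy:
        weights: dict            # manually set, like BGP local-pref

        def choose(self, cdns):
            # Pure policy: the highest-weighted CDN gets the request;
            # there is no in-band auction or phase-3 negotiation.
            return max(cdns, key=lambda c: self.weights[c.name])

    def review(policy, cdns, threshold=0.95):
        """The out-of-band step: when accounting shows a CDN is not
        'good', the CP adjusts its own policy (and argues off-line)."""
        for cdn in cdns:
            if cdn.delivered_ok < threshold:
                policy.weights[cdn.name] = 0     # stop preferring it

    cdns = [CDN("cdn-a", 0.99), CDN("cdn-b", 0.80)]
    policy = Policy({"cdn-a": 100, "cdn-b": 200})
    review(policy, cdns)                         # cdn-b fails the review
    assert policy.choose(cdns).name == "cdn-a"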
>
> The advantage of this "best-effort" approach is that it preserves the goal
> stated at the BOF of keeping each CDN as a black box. For example, if a
> premium CDN has a dynamic method inside itself to distribute "hot" content
> into more places, this would be reflected in improved performance and
> proven in accounting data. Some other CDN might use multicast for
> distribution and decide to leverage that to just replicate content in
> every surrogate. People can do whatever they want and be held accountable
> by the people who pay them. Most importantly, it keeps the cross-network
> protocols simple.
>
> --
> Phil Rzewski - Senior Architect - Inktomi Corporation
> 650-653-2487 (office) - 650-303-3790 (cell) - 650-653-1848 (fax)