
Re: publish WCIP version 01




(apologies for the cross-post)

Michael,

Are you planning to publish your technique as Informational, or is
MagniFire willing to waive its IP to integrate the technology into a
Standards-track document?

Cheers,



On Sun, Mar 04, 2001 at 04:07:21PM +0200, Michael (Micha) Shafir wrote:
> Dan, Fred,
> 
> You are spot-on when you point out the difficulty with the term "dynamic
> data" or "dynamic content".  Current caching technologies are limited in
> their capacity to deal with short-TTL objects.  Cache servers that struggle
> to cope with 3,000 static objects per second will be unable to cope with
> rapidly expiring content (usually marked NO-CACHE).  The seek times needed
> to stay up-to-date would never be fast enough and would quickly overwhelm
> the cache appliance's CPU.
> 
> 'MagniFire' is taking a new approach to this problem, freeing up the CPU
> and dealing with short TTLs.  We are now performance-testing equipment
> capable of locating and serving up to 10 million objects a second, which
> isn't a trivial task:  we have to parse and reinterpret the structure and
> its location in the URI of every single requested object, and do so 10
> million times a second, each time it is requested, an action that would
> consume all available cycles of a normal CPU.  This is inconceivable with
> current caching technology.  The ability to seek in a nanosecond time
> frame makes "dynamic content" appear "static", solving many of the issues
> of revalidation, content freshness, and consistency.  All existing
> solutions for dealing with "dynamic content" are currently implemented on
> the server side, bypassing ordinary network requests.
> 
> At 'MagniFire' we think that the ability to cache popular and rapidly
> changing data is starting to look reasonable.
> 
> Micha,
> 
> __________________________________________________
> Michael [Micha] Shafir,
> CTO, Founder
> MagniFire Networks.
> Tel: 972 3 6483120
> Fax: 972 3 6483121
> Mob: 972 54 657900
> Email: Micha@magnifire.net
> Web: www.magnifire.com
> _________________________________________________
> 
> 
> 
> 
> -----Original Message-----
> From: owner-ietf-openproxy@mail.imc.org
> [mailto:owner-ietf-openproxy@mail.imc.org]On Behalf Of Fred Douglis
> Sent: Friday, March 02, 2001 6:14 PM
> To: Dan Li
> Cc: cdn@ops.ietf.org; ietf-openproxy@imc.org; webi@equinix.com
> Subject: Re: publish WCIP version 01
> 
> 
> Dan,
> 
> Sorry I am getting these comments in after the I-D submission, and no
> doubt too late to affect a new submission before today's deadline, but
> I hope they're helpful moving forward.  Also, I'm not sure whether this
> should just go to webi or include the other lists, so I'm erring on the
> side of inclusion.
> 
> Overall comments:
> 
> I have a problem with the term "dynamic data".  To me, dynamic data is
> something that changes all the time, such as a stock quote, and is
> inherently uncachable.  "Frequently-changing" data is more
> appropriate, as long as the rate of access dominates the rate of
> change -- the observation from the DOCP work among others.
> 
> So, I would search and destroy references to "caching dynamic data"
> and make it clear that you mean "frequently-changing" or maybe
> "semi-dynamic" data.  Alternatively, one might redefine "dynamic data"
> to be absolutely clear about this distinction, something I've tried to
> do below, both when the term is introduced and in the Defs section.
> 
> You changed reliable multicast to IP multicast and claimed that
> reliable delivery wasn't necessary because of the volume IDs, but in
> 3.3 it still says message delivery MUST be reliable.  Should that be
> changed?
> 
> Related work seems incomplete.  I know not everything need be included,
> but for example, it's incestuous to include [4] and [5] but not earlier
> work on using volumes for invalidation -- my own incestuous suggestion
> there is:
> 
> @InProceedings{cohen98,
>   author =       "Edith Cohen and Balachander Krishnamurthy and Jennifer
>                  Rexford",
>   title =        "Improving End-to-End Performance of the {W}eb Using
>                  Server Volumes and Proxy Filters",
>   booktitle =    "Proceedings of the ACM SIGCOMM conference",
>   year =         "1998",
>   month =        sep,
>   pages =        "241--253",
>   note =         "\url{http://www.research.att.com/~bala/
>                  papers/sigcomm98.ps.gz}",
> }
> 
> I also think the draft uses the first person too much -- lots of
> "we's" in there, or "let's lay out", or ...
> 
> The detailed comments below are for the version that was just
> submitted as a formal I-D.  I had annotated a recent version from a
> few days ago, but saw the new versions just as I was going to send in
> my comments.  Sections that are completely new got only cursory
> examination.
> 
> You'll need to merge these changes back into the original if you agree
> with them...
> 
> Fred
> ----
> 
>     Abstract
> 
>        Cache consistency  is a major impediment to scalable content
>        delivery. This document describes the Web Cache Invalidation
>        Protocol (WCIP). WCIP uses invalidations and updates to keep
>        changing objects up to date in web caches, and thus enables proxy
>        caching and content distribution of large amounts of frequently
>        changing web objects, where periodical revalidating objects one by
>        one is unacceptable in terms of performance and cache consistency.
> 
> Cache consistency of frequently changing objects is a major impediment
> to scalable content delivery, because periodically revalidating
> objects one-by-one is unacceptable in terms of performance and/or
> consistency.  This document describes the Web Cache Invalidation
> Protocol (WCIP), which uses invalidations and updates to keep changing
> objects up-to-date in web caches.  It thus enables proxy caching and
> content distribution of large amounts of frequently changing web
> objects.
> 
>     Table of Content
> 
> Table of Contents
> 
>     1. Introduction
> 
>        In web proxy caching, a document is downloaded once from the web
>        server to the caching proxy, which then serves the document to end-
>        users repeatedly out of the cache. This offsets the load on the web
>        server, improves the response time to the users, and reduces the
>        bandwidth consumption. When the document seldom changes, everything
>        works out wonderfully. However, the hard part is when the document
>        is popular but also frequently changing, i.e., the so-called
>        "dynamic content".
> 
> In web proxy caching, a document is downloaded once from a web
> server to a caching proxy, which then serves the document to end-
> users repeatedly out of the cache. This offsets the load on the web
> server, improves the response time to the users, and reduces
> bandwidth consumption. When the document seldom changes, everything
> works out wonderfully.
> 
> However, the hard part is when the document is popular but also
> frequently changing.  Truly "dynamic content," which is potentially
> different on each access, is inherently uncachable unless a window of
> inconsistency is allowable: for instance, one might cache a stock
> quote and serve a stale value for seconds or minutes, even though the
> raw data changes in real-time.  In this document, we define
> "dynamic content" to refer to content that changes frequently,
> rather than content that can potentially change constantly.
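
To make the "window of inconsistency" point concrete, here is a rough Python
sketch of a cache that tolerates bounded staleness; the class name, the fetch
callback and the 30-second default are all illustrative, not from the draft:

    import time

    class StalenessWindowCache:
        """Serve a possibly stale copy as long as it is younger than max_staleness."""

        def __init__(self, fetch, max_staleness=30.0):
            self.fetch = fetch              # callable: key -> fresh value from the origin
            self.max_staleness = max_staleness
            self.entries = {}               # key -> (value, fetched_at)

        def get(self, key):
            now = time.time()
            hit = self.entries.get(key)
            if hit is not None and now - hit[1] < self.max_staleness:
                return hit[0]               # possibly stale, but within the window
            value = self.fetch(key)         # otherwise go back to the origin
            self.entries[key] = (value, now)
            return value
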
> 
>        Dynamic content is quickly becoming a significant percentage of the
> 
> strike "the" at end
> 
>        Web traffic, e.g., news and stock quotes, shopping catalog and
>        prices, product inventory and orders, etc. Because the content is
>        changing, the caching proxy has to frequently poll the web server
>        for a fresh copy and still tends to return stale data to end-users.
>        Specifically, a proxy using "adaptive ttl" is unable to ensure
> 
> TTL
> 
>        strong cache consistency, and yet "poll every time" is costly. So
> 
> cite Gwertzmann & Seltzer here?
> 
>        the content provider usually sets a very short expiration time or
> 
> a content provider
> 
>        marks frequently changing documents as non-cacheable all together.
>        This defeats the benefit of caching, even though those objects may
>        be cached, should the proxy know when the document becomes obsolete
>        [1]. Moreover, if the proxy can be informed of the change to the
>        underlying data that a web object is generated from, the proxy can
>        re-generate the web object on its own, making it possible to
>        distribute dynamically computed content.
> 
>        ...
>        To provide freshness guarantees to objects in the object volume, the
>        caching proxy subscribes to the invalidation channel and obtains an
>        up-to-date view of the object volume -- a process referred to as
>        "volume synchronization". After the initial volume synchronization,
>        to stay synchronized, the invalidation channel operates in either
>        the server-driven mode or the client-driven mode (or a mix of both).
> 
> s/the/a/g
> 
> 	...
>        The two modes are merely the two extremes of a continuum,
>        characterized by how soon the server proactively sends
>        updates/heartbeats and how soon the proxy revalidates the volume.
>        The sooner, the quicker objects are invalidated and thus the better
>        consistency but also the more load on the server and the proxy.
>        Regardless of the mode, same messages are exchanged between the
>        invalidation server and the caching proxies, whose format is defined
>        by "ObjectVolume" XML DTD. Each round of message exchange, whether
>        initiated by the server or the client, is a process of "volume
>        synchronization" and results in an up-to-date view of the object
>        volume. Based on the up-to-date view, the proxy can provide
>        freshness guarantees to all the objects in the volume.
> 
> The two modes are merely the two extremes of a continuum,
> characterized by how soon the server proactively sends
> updates/heartbeats and how soon the proxy revalidates the volume.  The
> sooner the revalidation, the quicker the objects are invalidated; this
> results in better consistency but also more load on the server and
> proxy.  Regardless of the mode, the same messages are exchanged
> between the invalidation server and the caching proxies, whose format
> is defined by an "ObjectVolume" XML DTD [forward ref]. Each round of
> message exchange, whether initiated by the server or the client, is a
> process of "volume synchronization" and results in an up-to-date view
> of the object volume. Based on the up-to-date view, the proxy can
> provide freshness guarantees to all the objects in the volume.
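
A rough Python sketch of the proxy side of this continuum, assuming
hypothetical channel/volume objects: the proxy applies whatever the server
pushes, and falls back to a client-driven pull when nothing has arrived within
its revalidation interval. Not from the draft, just to fix ideas:

    import time

    def proxy_sync_loop(channel, volume, revalidation_interval=60.0):
        # Apply anything the server pushes (updates or heartbeats); if nothing
        # has arrived within the revalidation interval, pull a synchronization.
        last_sync = time.time()
        while True:
            msg = channel.receive(timeout=1.0)                # hypothetical API
            if msg is not None:                               # server-driven push
                volume.apply(msg)
                last_sync = time.time()
            elif time.time() - last_sync >= revalidation_interval:
                reply = channel.request_sync(volume.version)  # client-driven pull
                volume.apply(reply)
                last_sync = time.time()
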
> 
> 
>     2. Terminology
> 
>        The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
>        "SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in
>        this document are to be interpreted as described in RFC-2119 [2].
>        Since WCIP makes some extensions to HTTP, please refer to RFC-2616
>        [3] for HTTP related terminology. Following are WCIP related terms.
> 
> WCIP-related
> 
> Dynamic Content
> 
> Web resources that change "frequently," where the definition of
> "frequent" depends on the access rate and desired consistency
> guarantees.  [Or something to this effect...]
> 
> 	     ...
>        Invalidation Server
> 
> 	    An application program that provides WCIP services to caching
>        proxies. It maintains the master copy of the object volume and
>        disseminates the volume and changes to volume to caching proxies.
>        (The invalidation server logically differs from the origin server
>        because a cache may fill a request from a CDN content server or a
>        replica origin server. The cache may not be able to tell these
>        various sources from the origin server. Besides, the WCIP service
> 
> Strike "besides"
> 
>        may not reside on each or any of them. "invalidation server"
> 
> capitalize "invalidation"
> 
>        uniquely identifies the source of the WCIP service.)
> 
>        ...
>        Revalidation Interval
> 
> 	    A property of the client-driven mode. The invalidation client
>        initiates volume synchronization with the invalidation server, when
>        the "last synchronization time" was "revalidation interval" ago. The
>        interval SHOULD be smaller than the freshness guarantees of all the
>        objects in the object volume, to avoid unnecessary cache misses.
> 
> smaller, or no greater than?
> 
> 
>        Invalidation Latency
> 
> 	    The time between an object is updated at the origin server to
>        the time the old copy is treated as stale at all the participating
>        proxies. The goal of a freshness guarantee of X seconds is to
>        guarantee that the invalidation latency is within X seconds at all
>        times.
> 
>        Content Delivery Network (CDN)
> 
> 	    A self-organizing network of geographically distributed content
>        delivery nodes (reverse proxies) for contracted content providers,
>        capable of directing requests to the best delivery node for global
>        load balancing and best client response time.
> 
> 
>        ...
>     3.1 Freshness Guarantee
> 
>        WCIP provides reliable invalidations and consistency guarantees so
>        that content providers could make their dynamic content cacheable.
> 
> could -> can
> 
>        It's important that WCIP guarantees that, in the worst case, a proxy
>        subscribed to an invalidation channel will not service stale content
>        X seconds after the content is updated at the origin server,
>        regardless of network partition or server failure. The content
>        provider can specify the value of X, e.g., to 5 minutes.
> 
>        In the normal case, this is not hard. Using WCIP, the proxy will not
>        deliver any stale object as soon as an invalidation arrives from the
>        server. The invalidation latency only depends on network propagation
>        and queuing delay, which are typically within a second. In other
>        cases, however, when the network or the invalidation server is down,
>        invalidations cannot reach the proxy timely. To ensure an upper
> 
> in a timely fashion
> 
>        bound on the invalidation latency, the proxy MUST invalidate content
>        automatically if it hasn't being able to synchronize the object
> 
> been able
> 
>        volume for a certain period of time, assuming the server or network
>        may be down and the volume may have changed.
> 
>        Therefore, to control the freshness, the content provider specifies
>        a "freshness guarantee" for each object in the volume, while the
>        caching proxy keeps track of the "last synchronization time". Then,
>        upon serving a client HTTP request, the proxy MAY use the cached
>        object only if the time elapsed since the last synchronization time
>        is less than the object's freshness guarantee. Otherwise, the cached
>        object is marked as stale and MUST NOT be served from the cache
>        without HTTP revalidation. The proxy is RECOMMENDED not to remove
>        the object right away as HTTP revalidation could turn out to be "Not
>        Modified".
> 
> the object right away as HTTP revalidation could result in an
> indication that the object is "Not Modified".
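
The serving rule above is simple enough to sketch; something like the
following, where the cached-object fields are illustrative rather than the
draft's data model:

    import time

    def may_serve_from_cache(cached, last_sync_time, now=None):
        # The cached copy may be served only while the time since the last
        # volume synchronization is below the object's freshness guarantee;
        # otherwise it is stale and must be revalidated before serving.
        now = time.time() if now is None else now
        if now - last_sync_time < cached.freshness_guarantee:
            return True
        cached.stale = True   # keep it: revalidation may answer "Not Modified"
        return False
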
> 
> 	   ...
>        The invalidation server picks the heartbeat interval while the
>        invalidation client picks the revalidation interval. Both of them
>        SHOULD be smaller than any of the freshness guarantees of the
> no larger?
>        objects in the volume, to avoid unnecessary cache misses. Moreover,
>        the invalidation server SHOULD send invalidations "reasonably" soon
>        after it learns of an object change, but it MAY delay the
>        synchronization until some time before the subsequent heartbeat.
>        Such a strategy allows the server to batch multiple changes into one
>        update, without inducing unnecessary cache misses.
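
The batching strategy could look roughly like this (Python sketch; the channel
and pending-change structures are hypothetical, as is the five-second margin):

    import time

    def heartbeat_loop(channel, pending, heartbeat_interval=60.0, batch_margin=5.0):
        # Hold invalidations until shortly before the next heartbeat is due,
        # then flush them as one batched update (or send an empty heartbeat).
        next_heartbeat = time.time() + heartbeat_interval
        while True:
            time.sleep(1.0)
            now = time.time()
            if pending and now >= next_heartbeat - batch_margin:
                channel.send_update(list(pending))      # one batched update
                pending.clear()
                next_heartbeat = now + heartbeat_interval
            elif now >= next_heartbeat:
                channel.send_heartbeat()                # nothing changed
                next_heartbeat = now + heartbeat_interval
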
> 
>        ...
> 
>        Object volume also serves as the unit of consistency. If the caching
>        proxy obtains the up-to-date view of the volume, it follows that the
>        caching proxy has the up-to-date view of every object in the volume
>        (in terms of Last-Modified time and Etag). This allows one "volume
>        synchronization" exchange to (in)validate all the objects in the
>        volume, greatly improving efficiency compared to per-object HTTP
>        validation. Moreover, when a web site updates its content, often it
>        would like to preserve a consistent view of the site. I.e., it would
>        like the end-users to see either entirely the new content or
>        entirely the old content, not a bit of the new at some web pages and
>        a bit of the old at other pages. By grouping these correlated web
>        pages into one object volume, we can atomically invalidate the
> "one can"
>        entire volume and thus preserve the coherent view.
> 
> 
>        An object volume is described in XML (see section 5 for DTD). In
> 
> for the DTD
> 
>        essence, it is a collection of object meta-data and can be retrieved
>        incrementally based on its version.
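
One plausible (non-normative) way to support incremental retrieval is a
per-version journal of deltas, e.g.:

    class VolumeJournal:
        # Per-version deltas, so a client at version A can fetch just the
        # changes needed to reach the current version B.

        def __init__(self):
            self.version = 0
            self.deltas = {}              # version -> list of changed entries

        def record(self, changed_entries):
            self.version += 1
            self.deltas[self.version] = list(changed_entries)

        def changes_since(self, client_version):
            if client_version == self.version:
                return []                           # nothing to report
            if client_version + 1 not in self.deltas:
                return None                         # journal truncated: send the full volume
            merged = []
            for v in range(client_version + 1, self.version + 1):
                merged.extend(self.deltas[v])
            return merged
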
> 
>        ...
>     3.3 Channel Abstraction
> 
>        Invalidation channel may be implemented in many different ways,
> 
> An invalidation
> 
>         ...
>        (2)  Subscription: provide an interface for channel subscription
> 	    based on the channel URI as well as notifying the upper layer
> 	    whenever the subscription terminates unexpectedly. Once
> 	    subscribed, the caching proxy can start send to and receive
> 	    from the channel.
> 
> start to send
> 
>        (4)  Reliability: message delivery MUST be reliable, full duplex,
> 	    and in sequence (wrt. the sender). Moreover, delivery SHOULD be
> 	    real-time in that the average latency should be comparable to
> 	    the network round-trip time from the sender to the
> 	    receiver.
> 
> Still true given the multicast change?
> 
>       ...
>        (6)  Scalability: help to ensure that the invalidation server
> 	    doesn't become overwhelmed by excessive load, by providing
> 	    either IP multicast (see later this section) or channel relay
> 
> later in
> 
> 	    (see section 4.1).
> 	    ...
>     4. Deployment Issues
> 
>     4.1 Channel Relay
> 
>        An invalidation channel may have tens of thousands of invalidation
>        clients. Channel relay points can improve the scalability of an
>        unicast-based channel. Instead of subscribing directly to the origin
>        invalidation server, some invalidation clients are redirected to a
>        channel relay point. A channel relay point can perform one-to-many
>        channel relay and many-to-one connection aggregation.
> 
>        (1)  Channel Relay
> 
>        The channel relay point may have multiple clients subscribed to the
>        same invalidation channel. It in turn only subscribes once to the
>        original invalidation server. By multiplicatively relaying channel
> 
> multiplicatively?
> 
> Why not "hierarchically"?
> 
> 
>        messages, it reduces the load on the invalidation server and helps
>        scale the invalidation channel end-to-end.
> 
> helps to scale
> 
> 
> 
> 			    Invalidation Server
> 				    |
> 				    | conn0
> 				    |
> 				    |
> 			    Channel Relay Point
> 				/    |   \
> 			       /     |    \
> 		       conn1  / conn2|     \ conn3
> 			     /       |      \
> 			    /        |       \
> 			Client1  Client2  Client3
> 
>        A "dumb" relay point copies all messages from connection "conn0" to
>        "conn1", "conn2" and "conn3", and vise versa. A "smart" relay point
> 
> vice-versa
> 
>        also construct the up-to-date view of the volume as well as the
> 
> constructs
> 
>        journal of changes to the volume, based on the messages it receives
>        from the invalidation server. Then, it can respond to client-driven
>        volume synchronization requests, instead of forwarding the requests
>        all the way to the invalidation server.
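
A toy sketch of the two flavors of relay point, with a hypothetical
connection/message API, might look like this (the "journal" is the version
journal sketched earlier):

    class DumbRelayPoint:
        # Copy every upstream message to all subscribed clients, and forward
        # every client message upstream.

        def __init__(self, upstream):
            self.upstream = upstream
            self.clients = []

        def on_upstream_message(self, msg):
            for client in self.clients:
                client.send(msg)

        def on_client_message(self, client, msg):
            self.upstream.send(msg)

    class SmartRelayPoint(DumbRelayPoint):
        # Additionally keep a local view of the volume so client-driven
        # synchronization requests can be answered without going upstream.

        def __init__(self, upstream, journal):
            super().__init__(upstream)
            self.journal = journal        # e.g. a version journal of the volume

        def on_upstream_message(self, msg):
            self.journal.record(msg.changed_entries)
            super().on_upstream_message(msg)

        def on_client_message(self, client, msg):
            if msg.is_sync_request:
                client.send(self.journal.changes_since(msg.version))
            else:
                self.upstream.send(msg)
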
> 
>        ...
>     4.2 Detect Changes
> 
>        Detecting changes is the job of the origin server and/or
>        invalidation server. Web content may change because of updates from
>        the content owner or updates from the content viewer. E.g., the
>        content owner CNN.com updates its front page every 15 minutes, while
>        Ebay updates its content whenever its customers post new auction
>        items or bids. Therefore, changes may be detected in 4 ways.
> 
>        (1)  When the script runs that generates content and updates the web
> 	    source file (e.g., a news article is updated with the latest
> 	    financial information), the script notifies the invalidation
> 	    server which then sends out invalidations or delta-encoded
> 	    updates to all participating caches.
> 
> cite delta-encoding
> 
> 
>        (2)  When a piece of data in the database is modified via the
> 	    database interface (e.g., the addition to the inventory of
> 
> an addition
> 
> 	    books), a database trigger notifies the invalidation server the
> of the
> 	    event.
>        (3)  When a HTTP request comes in (e.g., a POST request to add a new
> 	    auction item), the origin server or its surrogate (reverse
> 	    proxy) notifies the invalidation server the event.
> 
> of the
> 
>        (4)  The last but simplest way is for the invalidation server to
> 	    poll the origin server periodically to find out if the object
> 	    has changed. Given that there is only one invalidation server
> 	    polling, the poll frequency can be very high, e.g., once every
> 
> polling frequency
> 
> 	    minute, offering decent cache consistency as well.
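
Way (4) reduces to a simple polling loop; a Python sketch using HTTP
validators, where "notify" stands in for whatever pushes the invalidation:

    import time
    import urllib.request

    def poll_origin(url, notify, interval=60.0):
        # Compare the origin's validators on each poll and report a change
        # whenever they differ.
        last = (None, None)
        while True:
            req = urllib.request.Request(url, method="HEAD")
            with urllib.request.urlopen(req) as resp:
                current = (resp.headers.get("ETag"),
                           resp.headers.get("Last-Modified"))
            if current != last:
                notify(url, *current)
                last = current
            time.sleep(interval)
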
> 
>        There are softwares providing change detection of web content. In
> 
> There is software [cite]
> 
>        some cases, an event described above may invalidate multiple URLs.
>        If the participating caching proxies are able to interpret such
>        events, the invalidation message may carry the description of the
>        event, instead of the list of invalidated URLs. This may be future
>        work.
> 
> This paragraph made no sense to me.  I think what you mean is that
> WCIP can be used to tell systems like AIDE (my own), URL-minder,
> etc. about changes, and I fully agree -- and probably suggested this
> in the first place.  But the second sentence about invalidation is a
> non-sequitur, and I don't understand the next sentence.  How about:
> 
> There is software providing user-level notification of changes to web
> content [cite].  WCIP could potentially be used to permit agents to
> subscribe to change notification, not for the purpose of cache
> invalidation, but to notify users.  Integrating such functionality may
> be future work.
> 
> 
>     4.3 Discover Channels
> 
>     ...
>        Example:
> 
> 	    Invalidated-By: wcip://www.cdn.com:777/allpolitics?proto=http
> 
> This used to be "cnn.com" and got changed to "cdn.com" yet later
> references say cnn, and "allpolitics" seems specific to CNN.  Are you
> sure about this change?
> 
> 
>     4.4 Join Channels
> 
>     ...
>        being cached in the mean time. This guideline can be applied to
> 
> meantime
> 
> 
> 	...
>        (2)  ObjectVolume update: the invalidation server replies with the
> 	    journal of changes to the volume since version "A" up until the
> 	    latest version "B", if the journal of changes since version "A"
> 	    is still available. The server SHOULD aggregate multiple
> 	    updates to the same object; it only needs to report the last
> 	    one. If the journal of changes is not available, it replies
> 	    with the full copy of latest ObjectVolume. If "A" is equal to
> 
> the latest
> 
> 	    "B", the server simply echoes back the synchronization request.
> 
>        (3)  ObjectVolume processing: the caching proxy examines each object
> 	    entry in the update, records its freshness guarantee, compares
> 
> and compares
> 
> 	    the cached object (if any) with the entry. If the cached
> 	    object's Etag is not equal to that in the entry and the cached
> 	    object's Last-Modified time is earlier than that in the entry,
> 	    the proxy marks the cached object as stale. If the entry URI is
> 	    a directory path instead of filename, all cached objects with
> 	    that directory prefix are marked as stale.
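
Step (3) could be sketched as follows; the cache and entry structures here are
illustrative, not the draft's format:

    def apply_volume_update(cache, entries):
        # 'cache' maps URI -> cached object; 'entries' are the update's
        # object entries.
        for entry in entries:
            if entry.uri.endswith("/"):                 # directory entry
                for uri, cached in cache.items():
                    if uri.startswith(entry.uri):
                        cached.stale = True
                continue
            cached = cache.get(entry.uri)
            if cached is None:
                continue
            cached.freshness_guarantee = entry.freshness_guarantee
            if (cached.etag != entry.etag
                    and cached.last_modified < entry.last_modified):
                cached.stale = True
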
> 
> 	    ...
> 
>        However, if the volume indeed has changed, the invalidation server
>        MUST send back an ObjectVolume description with a base equal to or
>        smaller than 7. Here is an example:
> 
> I'm not that into IETF lingo, but I thought that a specific example
> such as this wouldn't justify MUST rather than simply "must".
> Thoughts?
> 
> 
> 	..
>        (1)  ObjectVolume update: if there're changes to the object volume,
> 
> too colloquial -- there are
> 
>     ...
>        (3)  Update "last synchronization time": in this case, there is no
> 	    synchronization request, just the server's update. To account
> 	    for possible clock skews, the proxy MUST convert the "date" in
> 
> skew
> 
> 	...
>     5.4 Serving Content
> 
>        When a HTTP request comes in with URI, the proxy searches its
> 
> a URI
> 
>        ObjectVolume data structure for a matching entry. If an ObjectVolume
>        entry is a directory path instead of filename, the entry is
> 
> a filename
> 
> 
>        applicable to the URI if the URI has that directory path as prefix.
>        If multiple such directory entries match, the entry with the longest
>        match is used.
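
The lookup described here is an exact match falling back to a longest-prefix
match over directory entries; a quick sketch (data structures illustrative):

    def matching_entry(volume, request_uri):
        # 'volume' maps entry URIs to entries; exact match wins, otherwise
        # the longest matching directory-path prefix.
        if request_uri in volume:
            return volume[request_uri]
        best_uri, best_entry = None, None
        for uri, entry in volume.items():
            if uri.endswith("/") and request_uri.startswith(uri):
                if best_uri is None or len(uri) > len(best_uri):
                    best_uri, best_entry = uri, entry
        return best_entry
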
> 
>        ...
>     6. Protocol State Machine
> 
> [Not reviewed]
>      ...
>        (2)  Public-key-based strong security with mandatory verification,
> 	    i.e., the invalidation client obtains the public key of the
> 	    channel during channel subscription (e.g., using HTTPS & SSL).
> 
> Why not just say SSL; isn't HTTPS redundant?
> 
> 	    The invalidation server signs or encrypts the channel messages
> 	    with the channel's private key. The invalidation client MUST
> 	    verify the signature and discard the message if the signature
> 	    doesn't match.
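
A sketch of the client-side check, assuming an RSA channel key and Python's
'cryptography' package (the draft does not pin down a particular algorithm):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding

    def verify_channel_message(channel_public_key, message, signature):
        # Verify against the public key obtained at subscription time; a
        # message that fails verification must be discarded by the caller.
        try:
            channel_public_key.verify(signature, message,
                                      padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False
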
> 	    ...
>     8. References
> 
>        9  Mogul, J.C.; Douglis, F.; Feldmann, A.; Krishnamurthy, B.,
> 	  "Potential benefits of delta encoding and data compression for
> 	  HTTP", ACM SIGCOMM 97 Conference.
>        10  Mogul, J.C.; Douglis, F.; Feldmann, A.; Krishnamurthy, B.,
> 	  "Potential benefits of delta encoding and data compression for
> 	  HTTP", ACM SIGCOMM 97 Conference.
> 
> Notice anything odd here?
> 
>        ...
>     Full Copyright Statement
> 
>        "Copyright (C) The Internet Society (date). All Rights Reserved.
>        This document and translations of it may be copied and furnished to
>        others, and derivative works that comment on or otherwise explain it
>        or assist in its implmentation may be prepared, copied, published
>        and distributed, in whole or in part, without restriction of any
>        kind, provided that the above copyright notice and this paragraph
>        are included on all such copies and derivative works. However, this
>        document itself may not be modified in any way, such as by removing
>        the copyright notice or references to the Internet Society or other
>        Internet organizations, except as needed for the purpose of
>        developing Internet standards in which case the procedures for
>        copyrights defined in the Internet Standards process must be
>        followed, or as required to translate it into
> 
> Truncated?

-- 
Mark Nottingham, Research Scientist
Akamai Technologies (San Mateo, CA USA)