Monday, February 27, 2012

[Guest post]: Transparent Caching: A Means Rather than an End

By Larry Peterson*, Chief Scientist, Verivue

Transparent caching is often viewed as something distinct from an Operator CDN: it is used to cache over-the-top (OTT) content from content providers and aggregators with which the network operator has no explicit delivery arrangement. A better model, however, is to view transparent caching as one use (application) of a CDN, no different from other uses (e.g., multi-screen video delivery, multi-tenant CDN for B2B customers, CDN-assisted VoD). In each case, the application leverages a core caching service plus one or more auxiliary mechanisms. In the case of transparent caching, the delta is an alternative content acquisition mechanism, one that transparently intercepts requests rather than explicitly redirecting requests for pre-registered content. Put another way, transparent request interception is a means rather than an end.
 

A general-purpose transparent request interceptor does three things. First, it interacts with the surrounding network infrastructure to divert candidate requests to the interceptor. This can be accomplished through proper configuration of a standard protocol (e.g., DNS, BGP, PBR, WCCP), at the operator's discretion. Second, for diverted requests for cacheable content, the interception service redirects the end-user to the caching service, which in turn acquires the content from the origin server if it is not currently cached. Third, for diverted requests for non-cacheable content, the interception service either proxies the corresponding flow to the origin server or re-configures the network to forward the flow rather than divert it. The better the interceptor is at diverting only requests for cacheable content at step one, the fewer "false positives" it must proxy at step three.
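To make these steps concrete, here is a minimal Python sketch of the interceptor's triage logic, picking up after the network has already diverted a request (step one). The cacheability policy, the hostnames, and the 302-redirect convention are all assumptions for illustration, not a description of any particular product.

```python
from urllib.parse import urlparse

# Illustrative policy: treat a few media/static suffixes as cacheable.
# Real interceptors use much richer policy (headers, signatures, popularity).
CACHEABLE_EXTENSIONS = {".mp4", ".ts", ".jpg", ".js", ".css"}

def is_cacheable(url: str, headers: dict) -> bool:
    """Crude cacheability test used to split step two from step three."""
    if "no-store" in headers.get("Cache-Control", ""):
        return False
    path = urlparse(url).path
    return any(path.endswith(ext) for ext in CACHEABLE_EXTENSIONS)

def handle_diverted_request(url: str, headers: dict) -> str:
    if is_cacheable(url, headers):
        # Step two: redirect the end-user to the caching service,
        # which fetches from the origin on a cache miss.
        return f"302 -> http://cache.operator.example/fetch?src={url}"
    # Step three: false positive; proxy the flow to the origin
    # (or re-program the network to forward rather than divert it).
    return f"PROXY -> {url}"

if __name__ == "__main__":
    print(handle_diverted_request("http://video.example/clip.mp4", {}))
    print(handle_diverted_request("http://bank.example/account",
                                  {"Cache-Control": "no-store"}))
```

In practice the cacheability test is where interceptors differ most, since it determines the false-positive rate that step three must absorb.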
 
Note that the only difference between transparent caching and the other example CDN applications is that for the latter, step one involves the content provider explicitly diverting user requests to the request router, either by using a CNAME or by making the request router an authoritative DNS server for some region of its URI name space. This, in turn, means there are no false positives, so step three is not required. As for step two, both transparent caching and all the other CDN applications leverage exactly the same request routing and caching services.
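For contrast, here is a toy sketch of the explicit path: the content provider publishes a CNAME pointing its delivery hostname at the operator's request router, so every request that arrives there is for registered content by construction. The hostnames below are invented for the example.

```python
# Simplified stand-in for the provider's DNS zone: the delivery
# hostname is CNAMEd to the operator's request router.
PROVIDER_CNAMES = {
    "media.provider.example": "rr.operator-cdn.example",
}

def resolve(hostname: str) -> str:
    """Follow the provider's CNAME, as a recursive resolver would."""
    return PROVIDER_CNAMES.get(hostname, hostname)

# Every lookup for the delivery hostname lands on the request router,
# so there are no false positives and no step three.
assert resolve("media.provider.example") == "rr.operator-cdn.example"
```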
 
In other words, we can think of these various CDN applications as being constructed from building-block services:
Transparent Caching = Caching + Interceptor + Request Router + Analytics
Multi-Screen Video = Caching + Request Router + Analytics
Multi-Tenant Operator CDN = Caching + Request Router + Analytics
CDN-Assisted VoD = Asset Manager + Streamer + Caching + Request Router + Analytics
where each component is a general-purpose, stand-alone service. The power of this building-block approach is that the next application that comes along can either be constructed from a different configuration of existing services or be built with only an incremental addition to the current service catalogue. This both reduces the time-to-market for new applications and increases the operator's ability to leverage (and possibly re-purpose) its existing investment.
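One way to picture this composition is the short sketch below, where each application is just a named bundle of shared services. The service names mirror the list above; the class structure and the catalogue check are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Application:
    name: str
    services: set  # the building blocks this application is composed from

# The operator's current service catalogue.
CATALOGUE = {"Caching", "Interceptor", "RequestRouter", "Analytics",
             "AssetManager", "Streamer"}

APPS = [
    Application("Transparent Caching",
                {"Caching", "Interceptor", "RequestRouter", "Analytics"}),
    Application("Multi-Screen Video",
                {"Caching", "RequestRouter", "Analytics"}),
    Application("Multi-Tenant Operator CDN",
                {"Caching", "RequestRouter", "Analytics"}),
    Application("CDN-Assisted VoD",
                {"AssetManager", "Streamer", "Caching", "RequestRouter", "Analytics"}),
]

# A new application is cheap to launch if its services already exist.
for app in APPS:
    missing = app.services - CATALOGUE
    print(app.name, "->", "deployable" if not missing else f"needs {missing}")
```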
 
Of course, this is easier said than done. One key enabler is to run on a virtualized platform, which simplifies the process of provisioning services on the underlying hardware infrastructure. This is the central insight of cloud computing, applied to the network edge instead of the data center. A second key enabler is an extensible management framework that unifies how operators control and monitor the available services; having to deal with one-off OSS/BSS processes is an unacceptable burden. The third key enabler is truly general-purpose building-block services that work across a wide range of usage scenarios. In contrast, purpose-built mechanisms generally result in stovepipe solutions that cannot be reused.
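As a rough sketch of the second enabler, the snippet below assumes every building-block service implements one uniform provision/monitor contract, so a single management loop (rather than one-off OSS/BSS integrations) covers the whole catalogue. The interface and class names are hypothetical.

```python
from abc import ABC, abstractmethod

class ManagedService(ABC):
    """Uniform control/monitoring contract for every building-block service."""
    @abstractmethod
    def provision(self, nodes: int) -> None: ...
    @abstractmethod
    def status(self) -> dict: ...

class CachingService(ManagedService):
    def __init__(self) -> None:
        self.nodes = 0
    def provision(self, nodes: int) -> None:
        self.nodes = nodes  # stand-in for spinning up VMs at edge sites
    def status(self) -> dict:
        return {"service": "caching", "nodes": self.nodes}

class AnalyticsService(ManagedService):
    def __init__(self) -> None:
        self.nodes = 0
    def provision(self, nodes: int) -> None:
        self.nodes = nodes
    def status(self) -> dict:
        return {"service": "analytics", "nodes": self.nodes}

# One management loop handles every service in the catalogue.
catalogue = [CachingService(), AnalyticsService()]
for svc in catalogue:
    svc.provision(nodes=4)
    print(svc.status())
```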


_________

*As Chief Scientist, Larry Peterson provides technical leadership and expertise for research and development projects. He is also the Robert E. Kahn Professor of Computer Science at Princeton University, where he served as Chairman of the Computer Science Department from 2003 to 2009. He also serves as Director of the PlanetLab Consortium, a collection of academic, industrial, and government institutions cooperating to design and evaluate next-generation network services and architectures.


Larry has served as Editor-in-Chief of ACM Transactions on Computer Systems, has been on the Editorial Boards of IEEE/ACM Transactions on Networking and the IEEE Journal on Selected Areas in Communications, and is the co-author of the best-selling networking textbook Computer Networks: A Systems Approach.

He is a member of the National Academy of Engineering, a Fellow of the ACM and the IEEE, and the 2010 recipient of the IEEE Koji Kobayashi Computers and Communications Award. He received his Ph.D. degree from Purdue University in 1985.

1 comment:

  1. There appears to have been 'difficulty connecting the dots' between CDNs, wireless network operators, cloud NaaS and CaaS, and content-as-a-service providers because of the loose or non-existent connection between OTT content services and the revenue models of mobile/ICT operators.

    This should be overcome by the pressing demand for broadband, driven significantly by video content, since caching helps deliver higher capacity and a better level of service more cost-effectively. However, this has been the understanding for several years.

    It appears that the wireless and ubiquitous network standards and supply ecosystem have evolved to the point that transparent caching should be poised for much broader use. However, I find it difficult to find benchmarks for how this will occur: who will deploy it, as part of what service offering, and how active a role wireless operators will play. I'm not sure I'm asking (all) the right questions. Can you shed some light?

    Robert Syputa
    Maravedis
