Monday, July 25, 2011

[Guest Post]: Can Data Optimization Find its Way to Backbone Networks?

By Dr. Yair Shapira*, VP Marketing & Business Development, DiViNetworks

Bandwidth optimization, trading bandwidth for storage or processing power, has long been debated, and has proven beneficial in many scenarios where links are expensive. With the continuously growing hunger for bandwidth, the scalability of tier-1 backbones is becoming questionable. At over 40% YoY traffic growth, data optimization is slowly but surely finding its way from sporadic expensive links to mainstream backbones.
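A quick back-of-envelope calculation shows what that rate implies; here is a minimal Python sketch (the 40% figure is taken from the paragraph above, the rest is plain arithmetic):

    import math

    # Back-of-envelope: how fast does 40% year-over-year growth compound?
    growth_rate = 0.40  # YoY traffic growth figure quoted above

    doubling_time_years = math.log(2) / math.log(1 + growth_rate)
    five_year_multiplier = (1 + growth_rate) ** 5

    print(f"Traffic doubles roughly every {doubling_time_years:.1f} years")
    print(f"Five-year growth factor: {five_year_multiplier:.1f}x")

At this pace traffic doubles roughly every two years, so an operator must keep doubling capacity, or keep squeezing more out of the capacity it already has.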

Apart from the financial question of proving that optimization is indeed less expensive than merely expanding bandwidth, introducing optimization into a network has its own challenges. After all, backbone and access networks are primarily designed to transfer data, not to modify data or to serve content, as many optimization technologies propose.

This article briefly explores the main factors to take into consideration when seeking optimization solutions for backbone networks.

Supportability in times of Internet revolutions

Some optimization solutions propose distributing their equipment across network nodes, achieving optimization by practically mimicking the original content server within the network, including its content, application and business logic. Think of dozens or even hundreds of nodes, distributed across critical network junctions all over the territory, actively manipulating protocols and content. Can such a system be stable? Will it keep up with the ever-changing Internet?

Unlike service-core systems, backbone-based optimization systems should refrain from being application-, content- and protocol-aware. Otherwise, continuous hand-holding will be required, and ongoing changes and maintenance of the optimization systems will be inevitable.

Sustainable performance over time

Even the most enthusiastic optimization supporters admit that optimization factors tend to erode over time. Most techniques, such as video and P2P caching, are far too sensitive to various Internet phenomena. The quickly evolving nature of Internet traffic introduces growing uncertainty in performance factors over time.

When calculating ROI for optimization solutions, make sure that the technology is truly future-proof and does not evaporate with every change in the Internet. A lasting optimization method must not depend on application, content or format, and should not be based on fragile trends.

Seamless integration with the traffic flow

One thing network operators strive to avoid is modifying their data flow for the sake of optimization. Nor are operators keen on limiting future changes. Yet most optimization systems are ALGs (Application Level Gateways). As such, they tamper with layers of communication that are not supposed to be touched within the network. Although often referred to as "transparent proxies", their mere existence as ALGs limits flexibility in traffic planning: asymmetric routing, link load balancing, tunneling and so on.
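To see the scale of the problem, consider a toy simulation (a deliberately simplified sketch in Python, assuming two equal-cost paths and independent per-direction routing; the figures are purely illustrative):

    import random

    # Toy model: a stateful ALG must see BOTH directions of a TCP flow
    # to terminate it. With asymmetric routing, each direction may
    # independently take one of two parallel paths; an ALG deployed on
    # one path then sees the full conversation for only some flows.

    random.seed(1)
    NUM_FLOWS = 10_000
    NUM_PATHS = 2   # two parallel routes between the same endpoints
    ALG_PATH = 0    # the path on which the ALG sits

    complete = sum(
        1
        for _ in range(NUM_FLOWS)
        if random.randrange(NUM_PATHS) == ALG_PATH    # client -> server
        and random.randrange(NUM_PATHS) == ALG_PATH   # server -> client
    )

    print(f"Flows fully visible to the ALG: {complete / NUM_FLOWS:.0%}")  # ~25%

In this toy setup only about a quarter of the flows traverse the ALG in both directions; the remainder either bypass it or break.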

Operators should thus strive to adopt optimization solutions that operate at the network level, rather than at the application level.

Maintenance of core-network functionality

With growing competition and the ongoing decline in ARPU, operators are investing heavily in smart functionality in the core network: traffic management, smart ad insertion, advanced charging, service selection, video optimization, protocol acceleration and more.

When certain optimization technologies, especially caching, are applied down the network path, these functions are lost or compromised. A cascade of mechanisms and interfaces has to be constructed to compensate for the traffic that never actually passes through the core. The result: heavy investments and revenue-generating techniques are nullified.

Bandwidth optimization mechanisms must therefore be designed to maintain the core-network functions: not by applying compensation mechanisms, which introduce complexity and require endless updates, but simply by leaving the traffic flowing through the core as is.

Co-existence with content providers

We are witnessing growing tension and clashes between network operators and content providers. The operators claim that content providers monetize the operators' assets, whereas the content providers claim that the operators hijack control over their content. The operators, trying to minimize the load caused by OTT (over-the-top) traffic, seek optimization techniques, to the extent of serving content locally using caching and telco CDNs.

Yet, by serving or manipulating content locally, the network operators interfere with the content providers' business models: managing speeds, inserting ads, limiting session times and applying other business logic. Legal copyright aspects, and nonconformity with standards and directives, are also brought up in this tug of war.

Optimization must therefore not jeopardize the already-fragile co-existence between network operators and content providers. Selected optimization methods must provide solid optimization factors for the operator on the one hand, while maintaining the content provider's control over the content on the other.

And again, not by building a house of cards of interfaces to the different content providers and compensation mechanisms: content validity checks, fake metering, speed sampling. Network optimization is not an antivirus; it should not require updating with every new web site, video format or business logic on the Internet.

IP traffic coverage

One of the main considerations for operators when choosing an optimization solution is how much of the operator's traffic will eventually be addressed by it. Various techniques can demonstrate excellent savings on the traffic they handle, yet that traffic is only a small portion of the total. All ALGs operate on specific portions of the IP traffic, and therefore apply to merely a part of the bandwidth.

Content providers, striving to thwart caching and video compression, develop mechanisms that make life tough for OTT optimization solutions. Thus much of the Internet's content is not handled by many optimization solutions at all. A recent study by a tier-1 provider showed that although caching can demonstrate an over-30% hit rate in theory, on actual traffic it delivers merely 4% savings due to technical and legal constraints.
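The gap between the theoretical and the actual figures is easy to reconcile with simple arithmetic; a minimal sketch (the 30% and 4% figures are quoted from the study above, while the intermediate "addressable share" is an assumed value chosen purely for illustration):

    # Reconciling the quoted figures: effective savings are the hit rate
    # applied only to the slice of traffic the cache may legally and
    # technically serve.

    theoretical_hit_rate = 0.30  # hit rate on traffic the cache can handle
    addressable_share = 0.13     # assumed share of total traffic that remains
                                 # cacheable after technical and legal exclusions

    effective_savings = theoretical_hit_rate * addressable_share
    print(f"Effective savings on total traffic: {effective_savings:.1%}")  # ~3.9%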

IP backbone networks handle the overall IP traffic and refrain from fragmenting traffic according to its content or application. An optimization solution to be deployed within the backbone network must likewise provide a safety net for 100% of the IP traffic.

Scaling up

Links of 10, 40 and 100 Gbps are already a reality. Network nodes oversee an ever-increasing flow of traffic. Optimization systems deployed within the backbone network nodes will need to crunch similar throughputs.

Alas, most optimization mechanisms cannot scale to such bandwidths. Many of those that can require excessive computational resources and endless storage. Implementing such solutions makes no operational sense.

Optimization solutions in the Zettabyte era must scale on par with other networking equipment. A 10 Gbps line should require no more than a 1RU device. 40 and 100 Gbps, down the road, must be handled in a compact solution, or even plugged into existing networking equipment.
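A rough packet-budget sketch illustrates what line rate demands of an in-path device (the average packet size here is an assumption):

    # Per-packet processing budget for an in-path optimization device.

    LINK_BPS = 10e9          # 10 Gbps link
    AVG_PACKET_BYTES = 800   # assumed average packet size on the wire

    packets_per_sec = LINK_BPS / (AVG_PACKET_BYTES * 8)
    ns_per_packet = 1e9 / packets_per_sec

    print(f"~{packets_per_sec / 1e6:.2f} M packets/s")              # ~1.56 Mpps
    print(f"~{ns_per_packet:.0f} ns budget per packet")             # ~640 ns
    print(f"At 100 Gbps: ~{ns_per_packet / 10:.0f} ns per packet")  # ~64 ns

A few hundred nanoseconds per packet leaves little room for heavyweight per-flow processing, which is why compact, line-rate designs matter.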

There is a limit to brute-force scaling, to merely throwing more ports and fiber at the problem, and we are rapidly reaching it. Data optimization, already a common reality on expensive long-haul lines, is becoming a must-have in tier-1 backbone networks. Smarter ways to move data around will soon proliferate.

Yet these serious barriers must be removed before data optimization can be introduced into backbone networks. What works for point-to-point few-Gbps links will simply not work for multi-Tbps networks with hundreds of nodes.

*Dr. Yair Shapira serves as DiViNetworks' senior Vice President of Marketing & Business Development. DiViNetworks is a well-known provider of bandwidth optimization solutions, with dozens of commercially deployed systems within major backbone networks.

Dr. Shapira joined DiViNetworks in 2009, after serving as VP Marketing at Jungo (acquired by NDS). Prior to Jungo, Dr. Shapira served as VP Business Development and CTO at Flash Networks, a leading provider of mobile optimization systems. He also sat on the Board of Directors of Koor Technologies, an early-stage VC, and provided strategic and technological consulting services to various companies and VCs.

Dr. Shapira earned his B.A. in Mathematics and Physics from the Hebrew University, and his Ph.D. in Applied Math from the Technion.
