Showing posts with label performance testing.

Tuesday, October 6, 2015

Sandvine NFV Reaches 1.1 Tbps Throughput w/10 RU of Servers


Sandvine announced that "NFV [is] Now a Reality for Operators of Any Size

.. the Sandvine Policy Traffic Switch (PTS) Virtual Series has established a new Network Functions Virtualization (NFV) performance benchmark by achieving 1.1 Tbps of throughput performance.

.. The test conducted by Sandvine utilized a Dell™ PowerEdge™ M1000e Blade Enclosure, 14 Dell™ PowerEdge™ M630 Blade Servers with Intel® Xeon® E5 v3 processors, Intel® Ethernet Converged Network Adapters X540 and X710, the Data Plane Development Kit (DPDK) providing exceptional packet processing performance, and the Intel® Open Network Platform Reference Architecture (Intel® ONP)". 



Related posts:
  • Sandvine States 155 Gbps NFV Performance (Beats Procera by 5 Gbps) - here
  • [Infonetics]: Gradual Transition to NFV-Based DPI; Allot and Sandvine Lead the Market  - here.


".. the 1.1 Tbps benchmark was achieved using a traffic mix that uses various packet sizes and flow-types which is representative of real-world network conditions. Additionally, each Dell PowerEdge M630 Blade Server utilized only one of its dual sockets, with the Intel® Xeon processor utilizing approximately 60% of its processing capability. This extra compute room ensures that it is possible for operators to implement the same advanced traffic measurements and intelligent broadband use cases that they have previously done on purpose built hardware".



See "Sandvine Virtual Series Achieves 1.1 Tbps of NFV Performance" - here.
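As a back-of-envelope check (my own arithmetic, using only the figures quoted above), 1.1 Tbps across 14 single-socket blades at roughly 60% CPU utilization works out as follows:

```python
# Back-of-envelope math for the Sandvine benchmark figures (my arithmetic,
# not from the press release): 1.1 Tbps over 14 single-socket blades.
TOTAL_TBPS = 1.1
BLADES = 14
CPU_UTILIZATION = 0.60  # reported ~60% of one socket per blade

per_blade_gbps = TOTAL_TBPS * 1000 / BLADES
# Naive linear extrapolation of what a fully loaded socket might sustain;
# real packet-processing scaling is rarely linear, so treat as an upper bound.
headroom_gbps = per_blade_gbps / CPU_UTILIZATION

print(f"Per blade: {per_blade_gbps:.1f} Gbps")
print(f"Linear extrapolation at 100% CPU: {headroom_gbps:.1f} Gbps")
```

That is roughly 78.6 Gbps per blade as tested, which is consistent with the claim that substantial compute headroom remains for additional traffic-management use cases.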

Tuesday, November 25, 2014

T-Mobile US: Speed Tests Look Too Good?


It turns out that T-Mobile spent some effort on service management facilities to detect when a customer accesses speed test sites, so that speed limits (for customers exceeding their data plan cap) could be temporarily removed, and inaccurate performance information would be shown!

The Federal Communications Commission announced that T-Mobile US has ".. agreed to take steps to ensure that customers who run mobile speed tests on the carrier’s network will receive accurate information about the speed of their broadband Internet connection, even when they are subject to speed reductions pursuant to their data plans".

[Related post - "T-Mobile to Shape Misusage Over LTE" - here]

T-Mobile offers several data plans that feature a designated allotment of high-speed data. After a customer uses the monthly high-speed data allotment, that customer will receive data at a reduced speed limited to either 128 kbps or 64 kbps, depending on the customer’s data plan, for the remainder of the monthly billing cycle [here].

These speed reductions are specified in T-Mobile’s agreements with customers, and T-Mobile customers do not receive overage charges for exceeding their data caps. In June, T-Mobile began exempting the use of certain speed test applications, which allow consumers to measure the speed of their Internet connection, from customers’ monthly high-speed data allotments.

OOKLA's SpeedTest Report - mobile carriers, US, November 2014 data

Currently, customers who have their speeds reduced after exceeding their monthly high-speed data cap cannot easily understand the results of exempted speed tests. When these customers run speed tests that T-Mobile has exempted from data caps, they receive information about T-Mobile’s full network speed, and not the actual reduced speed available to these customers at that time. 

The FCC was concerned that this could cause confusion for consumers and prevent them from obtaining information relevant to their use of T-Mobile services. The FCC and T-Mobile have agreed that T-Mobile will begin implementing the agreement immediately and will fully implement it within 60 days.

See "T-Mobile To Improve Disclosures for Consumers Using Mobile Speed Tests" - here.

Friday, August 30, 2013

Ixia Scales-up Testing with New Load Module

 
Xcellon-Multis 100/40/10GE module
Ixia announced a "new family of products designed to enable enterprises and service providers to handle more traffic and increasingly sophisticated services. The Xcellon™-Multis solution delivers enterprises and service providers the flexibility to test complex scenarios involving high-capacity 10GE, 40GE or 100GE networking products at higher speeds and with a higher return on investment .. Ixia customers can now do more with less by using the Xcellon-Multis family, which scales up to hundreds of 40GE and 100GE ports".

"Ixia's Xcellon-Multis load module family comprises the industry's highest density 40G and 100G higher speed Ethernet (HSE) test equipment, providing more flexible test coverage and 4x100GE, 12x40GE, or dual-rate 40GE/100GE, all in a single-slot load module".
 
See "Ixia Innovation Enables Customers to Deliver Sophisticated Network Services With Less Investment" - here.

Friday, August 16, 2013

Ascom LTE Testing Selected by Swisscom (CHF1.2M) & 2 US MNOs ($10M)


Ascom Network Testing announced that " .. As part of its nation-wide 4G/LTE rollout program, Swisscom has selected Ascom to provide an upgrade of its existing benchmarking and network quality of service monitoring with a total value of more than CHF 1.2 million. The contract is part of a network overhaul initiated by Swisscom to handle future growth in mobile data transmission"

" .. Ascom’s TEMS Portfolio is used in most of the European countries to do first validation of 4G/LTE network performance and was recently also selected by two leading US operators for 4G/LTE benchmarking and service performance monitoring projects worth more than USD 10 million".

See "Swisscom selects Ascom Network Testing for 4G/LTE-Network Benchmarking" - here.

Wednesday, March 20, 2013

[Survey]: 79% of Top European Retail Sites do not use CDN


A vast majority of the top European retail sites do not use a CDN, according to a new survey by Radware and Level 3. 

The survey finds that " .. 3 out of 4 of Europe’s top 400 retail websites take more than 3 seconds to load, failing to meet online shoppers’ performance demands. Numerous user experience studies have found that most online shoppers will abandon a page after waiting 3 seconds for it to load".

The survey’s key findings include: 
  • The median load time for first-time visitors was 7.04 seconds.
  • 1 out of 4 sites took more than 10 seconds to load.
  • 79% of sites did not use a content delivery network (CDN)
  • 78 out of 400 sites do not use text compression


See "Radware and Level 3 Announce Key Findings on Page Speed of Europe's Top 400 Retail Websites" - here.

Thursday, January 10, 2013

PCRF Performance: Openet vs. Comverse vs. Others


Both Openet and Comverse provided some information on their PCRF product performance. Both vendors show the same performance at the system (chassis) level - 200,000 TPS (assuming similar test definitions, conditions, and methods were used).

Some other vendors provided their numbers earlier - Tango Telecom, BroadHop (now Cisco) and Bridgewater Systems (now Amdocs) - each claiming to have the best results.

See below a short comparison of all tests published so far. Test scenarios, conditions, and hardware configuration/cost are not the same in all tests; even the definitions of TPS (or PDP context setups, for some vendors) probably differ.

See also "Heavy Reading - '[performance boosted] policy management can play a vital role [in] a telco’s overall product and service strategy'" - here.

Vendor               | Date Published     | Tester   | System Performance (TPS) | Subscribers/System | Blade Performance (TPS)
---------------------|--------------------|----------|--------------------------|--------------------|------------------------
Openet               | Jan 9, 2013        | Internal | 200,000                  | -                  | 25,000
Comverse             | Jan 9, 2013        | EANTC    | 200,000                  | 31.5M              | 15,000
Tango Telecom        | February 6, 2012   | Internal | 192,000                  | -                  | 12,000
Cisco (BroadHop)     | November 17, 2010  | EANTC    | 28,000                   | 20M                | -
Amdocs (Bridgewater) | September 15, 2010 | EANTC    | -                        | -                  | -

Back to the recent announcements:

Openet announced that "Technical innovation by Openet engineers related to protocol handling, core processing, and memory utilization have achieved up to 100 percent performance improvement:
  • More than double PCRF performance. Openet engineering advances achieved more than 200,000 TPS within a single policy system in lab testing. This measurement includes the in-memory database interaction and represents more than double performance versus any other policy product on the market.
     
  • More than 25,000 TPS per blade for rating decisions. Openet’s product ecosystem includes real-time charging integrated with policy, essential to help operators create, manage and support modern business models.

    See "Openet Technical Breakthroughs Exceed Performance Standards" - here.  
Comverse provides test results "Conducted by the European Advanced Networking Test Center (EANTC) .. Test results indicate that the DMM Policy Manager is distinguished by having the highest independently proven transactions per second (TPS) figures in the industry per single chassis: 
  • Verified 31.5 million simultaneously active subscribers in a single DMM Policy Manager chassis
     
  • Measured more than 200,000 transactions per second in a single DMM Policy Manager chassis in all scenarios, including advanced LTE use case
     
  • Support of 15,000 TPS per blade with linear scalability
     
         See "Independent Testing Confirms Comverse Mobile Internet Performance Leadership" - here.




    Sunday, October 21, 2012

    Study: LTE isn't the Performance Savior for Mobile Commerce Sites


    A recent study by Strangeloop Networks reveals that "a typical ecommerce site takes 11+ seconds to load, one-third of site owners don’t have a mobile-specific site, and LTE isn’t the performance saviour it’s been touted as"

    "This past summer Strangeloop Networks .. measured the performance of top ecommerce sites – both m.sites and full sites – as these sites load on newer and older Android and iOS phones and tablets .. over both 3G and LTE networks".

    Main findings are:
    • Desktop vs smartphone performance - The median page (full site, not m.site) took 11+ seconds to load for both the Galaxy S and iPhone 4 over 3G
    • LTE was 27% faster than 3G - the average load time for pages on 3G was 11.7 seconds, compared to 8.5 seconds for LTE 
    • One in three don’t have a mobile-specific site
    • 32% of owners don’t let visitors go to the full site

    See "Your mobile site is slower than you think" - here; the report is available here (registration required).

    Thursday, September 6, 2012

    The FCC Adds Mobile Broadband to its Performance Testing

     
    It is no longer just cable and DSL carriers that will be tested for broadband performance (and compared to advertised speeds?) by the FCC.
     
    The US regulator announced that " .. The National Broadband Plan (NBP), developed by the FCC, made recommendations to improve the availability of information for consumers about their broadband service. The FCC has undertaken a series of projects as part of its Consumer Empowerment Agenda to realize this charge, including launching a  broadband speed test app and, most significantly, undertaking a comprehensive effort—in partnership with industry, the public research community, and other stakeholders—to provide the first detailed and accurate measurements of fixed broadband service performance in the United States".

    See "FCC: Cablevision has 'improved remarkably in a flight to quality'" - here.
     
    "The FCC now proposes a program to develop information on mobile broadband service performance in the United States utilizing the collaborative model underlying the success of its fixed broadband program. As the Measuring Broadband America program has proven, the broadband performance data produced by the statistically sound methodology of the program allows comparisons and analyses that are valuable to consumers and spur competition among service providers"

    Welcome to the Mobile Broadband age. 
     
    See "FCC TO LAUNCH MOBILE BROADBAND SERVICES TESTING AND MEASUREMENT PROGRAM" - here.

    Thursday, August 23, 2012

    NI Announcements: Neustar Launches Web Performance Management Solution


    Neustar announced the release of "Neustar® Web Performance Management, a new solution that brings together Neustar Website Monitoring and Neustar Website Load Testing for the first time in a single, comprehensive platform. With this offering, companies that rely on web performance to drive revenue now have a simple, easy-to-use platform to consistently monitor site performance and quickly mitigate performance-related issues, ultimately protecting their customers’ online experience".

    "The benefits of Neustar Web Performance Management are: Automated Issue Escalation and Resolution .. Real-user or Synthetic Monitoring .. Full-service or On-demand Load Testing .. External Views and Rich Reporting"



    See "Neustar launches New Web Performance Management Solution to Protect Customer Experience and Online Revenues" - here.

    Friday, August 3, 2012

    BroadForward Provides Diameter Performance Test Results


    BroadForward (here) joins the recent trend of publishing policy management performance data, following similar announcements by Openet, Tango, BroadHop, Amdocs (during Bridgewater days) and Tekelec.
     
    The vendor of interfacing software announced that it has ".. completed benchmark tests showing the BFX Broadband Policy Gateway capable of handling over 100,000 Diameter messages per second on a single 1U server".

    Raymond van der Laan (pictured), Head of Engineering of BroadForward, said: “Interface gateways such as BFX handle multiple connections on different interfaces in the operator’s network. With high performance and low latency being crucial attributes, only very lean and efficient middleware layers are therefore found suitable for modern broadband systems and networks .. We have set a new Diameter performance benchmark. At a CPU utilization of 80%, BFX is capable of handling over 100,000 Diameter messages per second, on a single 1U server (see chart below)".



    See "BroadForward sets new Diameter performance benchmark for the BFX Broadband Policy Gateway" - here.

    Wednesday, July 4, 2012

    More on Openet's Recent Performance Reports


    Openet issued a press release a few days ago announcing the ".. benchmark test results that prove Openet delivers the industry’s best performance for Policy and Charging Control and related functions. These results highlight why more operators choose Openet to enable market agility and to successfully deliver complex services at the lowest possible TCO".

    "Results establish Openet as dominant in performance across all Policy and Charging Control (PCC) functions .. Record setting policy – With up to 11,000 TPS per Policy blade, Openet sets industry performance records and enables operators to quickly manage network resources with real-time, personalized policies based on service, subscriber, or context".

    See "Independent Tests Prove Openet Performance Leadership" - here. Similar announcements were made before by Tango, BroadHop, Amdocs (during Bridgewater days) and Tekelec.


    See also "Modeling Policy Server Scale and ROI" - here

    I spoke with Shane O'Flynn (pictured), Global VP Engineering, to learn more on the test details:
    • The test was performed by IBM (here), in its Hursley (UK), Raleigh and Poughkeepsie (US) locations, based on the network architecture used by Openet's larger customers - "a large US tier1 operator" (could be %#&# ?) and a "Multi-Country European operator".
       
    • The hardware used for the test was IBM BladeCenter HS22 (NEBS compliant) and HS23 platforms (see diagram), with the following spec:

    ID | Specification                                                                          | Purpose
    1  | ITE with 1 x 2.7GHz E5 2680, 24GB memory (6 x 4GB), BE3 LOM, 1 x Ninja 8Gb QLogic HBA  | Hosts the PCRF and solidDB session store
    2  | ITE with 1 x 2.7GHz E5 2680, 24GB memory (6 x 4GB), BE3 LOM, 1 x Ninja 8Gb QLogic HBA  | Hosts the database; requires a connection to the SAN
    3  | ITE with 2 x 2.7GHz E5 2680, 48GB memory (12 x 4GB), BE3 LOM, 1 x Ninja 8Gb QLogic HBA | Hosts the PCEF load simulator
    • Shane explained that the TPS results reported by Openet, unlike some of its competitors, reflect end-to-end measurements, including all processing elements - application, data base and web servers. The chart below shows CPU consumption as the TPS increases. "We can run the CPU to full saturation but our general rule of thumb is a paired config both running at 40% with the capacity to failover to 80% with room for a spike so that it matches a real life environment", says Shane.

    • The test was performed with an Oracle database as well as IBM's in-memory database, solidDB.
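Shane's 40%/80% sizing rule can be expressed as simple capacity arithmetic. The sketch below is my own illustration of that rule of thumb, not an Openet tool; `paired_capacity` is a hypothetical helper.

```python
def paired_capacity(per_node_max_tps: float, normal_util: float = 0.40,
                    failover_util: float = 0.80) -> float:
    """Max TPS to offer an active-active pair sized so that, in normal
    operation, each node runs at normal_util and, after a single-node
    failure, the survivor stays at or below failover_util."""
    normal_tps = 2 * per_node_max_tps * normal_util    # both nodes healthy
    failover_tps = per_node_max_tps * failover_util    # one node carries all
    return min(normal_tps, failover_tps)

# With the 40%/80% rule the two constraints coincide: a pair of blades
# each capable of 11,000 TPS can safely be offered 8,800 TPS.
print(paired_capacity(11_000))  # 8800.0
```

The remaining 20% above the failover ceiling is what Shane describes as "room for a spike".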

    Saturday, April 21, 2012

    Spirent Acquires MU Dynamics for $40M

     
    As I had some posts on Spirent (see "Spirent Announces 10GE Mobility, Bandwidth Management and Network Intelligence Testing Module" - here) and Mu Dynamics (see "Policy/DPI Testing Announcements: MU Dynamics Tests Carriers' Application Aware Networks" - here), I thought their merger may be interesting:

    Spirent announced that it has ".. entered into a definitive agreement to acquire privately held Mu Dynamics, Inc. (“Mu”), based in Sunnyvale, California, for a cash consideration of $40.0 million. Mu is a security testing pioneer, offering innovative solutions that enable faster, higher quality deployments of Cloud applications and applications-aware networks".

    Dave Kresse (pictured), CEO of Mu Dynamics said: “Mu has built a powerful software solution set for our customers, enabling them to test security and application-aware networks .. We’re excited that this transaction will accelerate the global deployment of our solutions, and enable their integration into Spirent’s leading performance test solutions”.

    "Spirent expects to consolidate $9.0 to $10.0 million in revenue post-acquisition in 2012, with a positive return on sales. For the first full year post-acquisition in 2013, revenues are expected to be in the range of $17.0 to $19.0 million operating at Spirent’s average return on sales, resulting in a positive enhancement to earnings and achieving an attractive return on investment, in line with Spirent’s objectives".

    See "Spirent communications plc signs definitive agreement to acquire Mu Dynamics, inc." - here.

    Wednesday, April 18, 2012

    AT&T Vs. Verizon LTE Performance - It's not only about Speed


     
    Yet another "reality check" for LTE (see also - "LTE and QoE" - here). Bill Moore, president of RootMetrics, compares AT&T and Verizon Wireless LTE network performance in a GigaOm article.

    "As our tests show, just because a carrier advertises a market as LTE-enabled doesn’t mean that you will always be on its LTE service. Moreover, the consumer experience of a carrier’s network is impacted by data failure rates. If you’re in the middle of uploading a file or downloading a movie and lose your data connection, those LTE speeds are meaningless. Looking solely at LTE misses the forest for the trees. It doesn’t give you a true picture of a network’s real-world performance".

    "Our head-to-head comparison of these two networks measures performance over the first quarter of 2012 and across multiple markets, throwing Verizon’s more mature LTE network together with AT&T’s nascent one to see what performance each offers consumers. We’ve stripped out all non-LTE test results to give you unvarnished, no-additives-included LTE speed".




    "LTE is only one piece of a much more complicated puzzle of how consumers actually experience their data networks. It’s the hot topic, but it shouldn’t be the only topic"

    See "Solving the LTE Puzzle: Comparing LTE Performance" - here.

    Monday, February 20, 2012

    [Guest post]: PCRF Test Methodology

    By Don Wuerfel*, Director of Engineering, Developing Solutions

    Introduction

    As the wireless industry moves toward a unified IP network that will carry both voice and data traffic, the Policy and Charging Rule Function (PCRF) will take on an increasingly important role in managing a service provider's network resources. The PCRF will be used to authorize a subscriber's bandwidth allocation based on multiple factors including the subscriber’s past usage, the level of service a subscriber has purchased, and the amount of resources currently available in the network.

    With the planned adoption of Voice over LTE (VoLTE), the PCRF's role in the service provider's network will become increasingly vital. It is therefore essential that the PCRF be fully tested to establish its behavior, performance, and capacity prior to deployment in a live network. Although we'll emphasize the importance of PCRF testing in this white paper, it's equally important to test all Network Elements (NE) in the network; the testing principles and philosophies that we'll discuss can therefore be applied to almost any NE.

    Two equally important yet uniquely different test methodologies are frequently employed for validating a NE prior to deployment in a live network. We'll refer to the first methodology as NE Testing in a Test Network and we'll refer to the second methodology as NE Testing in Isolation. Each methodology provides unique and valuable insight into the performance and behavior of a NE that is being evaluated and tested.

    NE Testing in a Test Network 

    This first method (also referred to as end-to-end testing) usually consists of building a test network in a lab using all the same nodes that are deployed or planned for deployment in the service provider's live network. For obvious reasons the test network would generally contain a scaled down set of the hardware that the live network would contain. The goal of the test network would be to have an environment which would support doing all the same types of activities that would occur in the live network, but on a much smaller scale. When NEs in the core network are being tested using a test network, the test will usually consist of simulated User Equipment (UE) and Radio Access Network (RAN) communicating with simulated network servers. The following figure shows a graphical representation of this type of test network setup.



    Testing in this manner is an absolutely essential part of any NE qualification plan and it provides some of the following key benefits:
    • Ensures interoperability of equipment in the network. 
    • Provides an environment to evaluate the user's experience. 

    NE Testing in Isolation

    Another widely used technique for testing an NE is to test the NE in isolation. Testing in isolation means that the NE is isolated from other NEs and tested solely with test equipment in a highly controlled environment. This test equipment is commonly referred to as an emulator or load generator because it emulates other NEs and it typically produces load on the NE under test. The following image shows an example of how the PCRF may be tested in isolation mode.


    As you'll notice, it is possible for the test equipment to be attached to more than one application interface (i.e. Gx, Rx, or S6) on the NE under test. In addition to allowing each of the application interfaces to be tested, this also allows the test equipment to verify that actions generated on one application interface of the NE will result in the NE generating an appropriate action on another application interface with acceptable timing.

    In this white paper we'll further explore some of the benefits that can be gained from testing the PCRF or other NEs in isolation. Our intention is not to suggest that this is a superior method of testing, but rather to highlight the benefits that can be realized from adding isolation testing to a qualification plan.

    Benefits of Testing an NE in Isolation

    • Cost 

    PCRF performance is increasing rapidly, toward capacities of tens of millions of subscribers and transaction rates in excess of 100,000 per second. Building and maintaining a test network that can fully load a PCRF can be cost prohibitive. It would take numerous other NEs in addition to a significant amount of test equipment to build a test network that would be capable of fully loading the PCRF. When using an emulator to test the PCRF in isolation, the PCRF can be fully loaded with much less equipment, and in many cases a single emulation server would be adequate, which results in less capital being spent.

    • Direct Control
       
    Using an emulator to test an NE like the PCRF allows the test operator to create a precise test that can produce the specific scenario that's desired. There's no need to modify the configuration of multiple NEs and UE simulators to get the desired results.

    An emulator will also allow tests to be reproduced quickly and with a high degree of accuracy. An operator can create a library of previously executed tests with the knowledge of how those tests previously performed. This allows the operator to verify that future software loads have not degraded quality or performance.

    • Negative Testing

    By using an emulator you'll be able to create scenarios that you would rarely encounter in a real-world network. However, when these errors do occur, an NE should handle them gracefully. An emulator will provide the ability to create invalid messages or message sequences. For example, an emulator can generate invalid AVP values, insert or remove AVPs in violation of the protocol's specification, respond with errors, delay responses, and produce numerous other protocol violations.

    • Capacity and Performance

    Determining the maximum transaction rate and subscriber capacity of a PCRF is vital to ensure that it will hold up to the demands of a live network. An emulator will allow the operator to set up tests that produce maximum capacity and load on the NE. Going even further, the operator will be able to exceed the capacity of the NE and verify that it can gracefully recover from overload conditions.

    • Easier to Determine Point of Failure

    When using an end-to-end test network, the root cause of a failure can be difficult to isolate. This is partially because every network element does its best to survive network problems and absorb unsustainable transaction bursts. A network element's ability to survive error conditions or overload scenarios is a highly desirable characteristic in the live network, but when trying to validate an individual NE, you don't want other network elements to compensate for or conceal its weaknesses. Each network element will typically have its own statistics collection capability, which may or may not be consistent with other equipment in the network. An emulator will give you detailed statistics at each application interface. You will also be able to monitor statistics across application interfaces and correlate those statistics to a particular test activity.
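As a sketch of the cross-interface correlation described above (the data shapes and names are illustrative, not any vendor's export format), synchronized per-interval counters from two interfaces can simply be joined on their time bucket:

```python
def correlate(gx_counts, rx_counts):
    """Join per-interval message counts from two application interfaces
    (labelled Gx and Rx here) on a shared time bucket, so activity on one
    interface can be lined up against activity on the other."""
    buckets = sorted(set(gx_counts) | set(rx_counts))
    return {t: (gx_counts.get(t, 0), rx_counts.get(t, 0)) for t in buckets}

# Interval (seconds) -> message count, as an emulator might export them.
gx = {0: 950, 1: 1010, 2: 300}   # e.g. Gx CCRs per second
rx = {0: 120, 1: 130, 2: 480}    # e.g. Rx AARs per second
for t, (g, r) in correlate(gx, rx).items():
    print(t, g, r)
```

The point is simply that both sides share one clock: without synchronized intervals, a spike on Gx cannot be attributed to its Rx trigger.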

    What to Look for in a Policy Control Validation Solution?

    When evaluating a test solution for a PCRF or other NE, make sure it has some, and preferably all, of the following features. They will provide you with the ability to push the NE to its limits and beyond. Pushing your NE to the limit will help you understand exactly what your NE is capable of and minimize unpleasant surprises once it's deployed in the live network.

    • Ability to reproduce events 

    Using an emulator will allow the operator to quickly reproduce a network event or sequence of events. Attempting to recreate a scenario in a test network using real NEs could take countless hours of reconfiguration. Some NEs are not designed for quickly changing configurations and therefore it could take many man hours of work to change a configuration across all the NEs involved in a test scenario.

    • Results verification

    Additionally, the emulator should be able to verify that an action on one application interface will result in an appropriate action on another application interface. For example, you'll want to ensure that requests on the Rx interface will result in the appropriate rules being pushed to the PCEF across the Gx interface.

    • Ability to push a PCRF to its maximum capacity and performance 

    As mentioned previously, today's PCRF devices are exponentially increasing their capacity toward handling tens of millions of simultaneously active subscribers while processing tens of thousands of transactions per second. A good network emulator should be able to achieve those capacities with a single piece of hardware rather than using multiple servers. Using a single piece of hardware also allows the emulator to host the entire subscriber pool rather than dividing subscribers into groups that must be separately configured and emulated across multiple nodes.

    • Ability to coordinate events across application interfaces

    To adequately test a PCRF, you need more control over the testing than just generating messages at some predetermined rate into each application interface. It is a requirement to synchronize messages on multiple application interfaces. For example when testing a PCRF, the emulated CSCF needs to wait until the emulated PCEF receives a successful IP-CAN Initialization answer before it generates its AAR message.
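A minimal sketch of that synchronization, using asyncio events (the node names and message labels are illustrative; no particular emulator's API is implied): the emulated CSCF holds its AAR until the emulated PCEF reports a successful IP-CAN session.

```python
import asyncio

async def pcef(session_ready: asyncio.Event, log: list):
    # Emulated PCEF: establish the IP-CAN session (CCR-I / CCA-I exchange),
    # then signal that Rx-side activity may begin.
    log.append("Gx CCR-I sent")
    await asyncio.sleep(0)          # stand-in for the CCA-I round trip
    log.append("Gx CCA-I success")
    session_ready.set()

async def cscf(session_ready: asyncio.Event, log: list):
    # Emulated CSCF: do not send AAR until the Gx session exists.
    await session_ready.wait()
    log.append("Rx AAR sent")

async def main():
    log, ready = [], asyncio.Event()
    await asyncio.gather(pcef(ready, log), cscf(ready, log))
    return log

print(asyncio.run(main()))
```

Running this always yields the CCR-I, then the successful CCA-I, then the AAR, which is exactly the ordering constraint the test equipment must enforce.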

    • Ability to capture and store test detailed results 

    A fundamental requirement of any test equipment is its ability to store results for off-line processing and analysis. Maintaining historical information from previous tests will allow you to establish a baseline for how well the NE performs. This can be useful for ensuring that future releases of the NE do not significantly degrade its performance or capacity.

    The statistics that are captured by the emulator should have sufficient detail to allow correlation of information. Those statistics should also be synchronized so that statistics from one application interface can be correlated with statistics on another application interface for a given interval of time. Historical information should also be maintained by the emulator to help build a complete picture of how the NE under test performed over the entire time that the test was active.

    • Ability to create both fixed and realistic interaction

    Many network issues are discovered by randomizing the behavior of the tests. Being able to set up tests that don't always generate the same sequence of events is an important component of any test plan. Look for an emulator that can create complex load profiles to accurately reproduce live network scenarios.
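One generic way to avoid fixed sequences (a sketch of the idea, not a feature of any particular emulator) is to draw exponentially distributed inter-arrival times, i.e. a Poisson process, which keeps the configured average rate while varying the instantaneous spacing:

```python
import random

def poisson_arrivals(rate_tps: float, duration_s: float, seed: int = 42):
    """Yield event timestamps with exponentially distributed gaps,
    i.e. a Poisson arrival process at `rate_tps` events per second."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        t += rng.expovariate(rate_tps)
        if t >= duration_s:
            return
        yield t

# At 1,000 TPS over 10 s we expect roughly 10,000 events, but the
# per-interval counts vary, which is exactly the point: the NE sees
# realistic bursts rather than a metronomic message rate.
events = list(poisson_arrivals(1_000, 10.0))
print(len(events))
```

Seeding the generator keeps runs reproducible, so a burst profile that exposed a defect can be replayed exactly.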

    • Ability to inject errors

    An emulator should be capable of injecting a variety of error scenarios that would be seen in the live network. Using an emulator will allow you to inject error messages and sequences into the NE and thereby verify that it can detect and recover from the error scenarios.

    • Turnkey Yet Flexible

    Look for an emulator which has knowledge about the applications that you're interested in testing. The emulator should be aware of the message contents and message sequences that are used in each of the applications you plan to use. This will allow you to get a quick start in your testing, but at times you'll still need to get involved in functional testing, therefore an emulator should also have the flexibility to customize virtually any aspect of the application protocol. This should include the ability to perform negative testing.

    Also look for the ability to share subscriber data across application interfaces. Since you'll most likely be emulating multiple network elements that are simultaneously attached to the PCRF, those network elements should all be working with the same pool of subscribers. This can save countless hours of configuration.

    • Expansion Capability 

    Find an emulator that will continue to meet your needs as your requirements change. An emulator that minimizes its dependency on custom hardware will be better positioned to keep up with Moore's Law over the long term.

    Other Considerations

    Although we will not spend any time discussing another methodology, it is worth noting that it is also possible to create a hybrid test environment in which a test network using real NEs is supplemented with emulators that emulate additional NEs. By using a combination of real and emulated NEs, the operator can create a baseline level of load while also using real network equipment for certain aspects of the test.

    Summary

    A key component of building a fast and reliable wireless network is extensive testing of the NEs that make up the network. Multiple testing methodologies exist, and using several of them will provide the best possible coverage. We've focused on the value of testing NEs in isolation, but it's vital to include multiple methodologies in any comprehensive test plan.

    Selecting full-featured test equipment will enhance test coverage and assure the service provider that it is getting the needed performance, capacity, features, and stability in the equipment it deploys in its network.

    See White Paper - here.



    _________


    *Don Wuerfel has over 20 years of experience in the Data and Telecommunications industry.  He has held engineering and management positions at McData, DSC Communications, and Spirent Communications, and is Director of Engineering at Developing Solutions, Inc.

    Friday, February 17, 2012

    Monday's Guest Post: How to Test PCRF before Deployment

     
    A new guest post will be published on Monday. In his article, "PCRF Test Methodology", my 13th guest, Don Wuerfel, will take a closer look at the benefits of using multiple test methodologies to qualify a Policy and Charging Rule Function (PCRF) or other Network Element (NE) prior to deployment. The article focuses on the value of isolation testing of the PCRF and how it can create scenarios that are otherwise difficult to create in an end-to-end testing environment.

    "Using an emulation server to test a NE is the best method of characterizing the performance and reliability of a single node. By directly connecting the Emulation Server to the System Under Test (SUT) you gain considerable cost benefits for large scale performance testing, the precise control to introduce many different scenarios, and simplified root cause analysis" says Don. 

    Stay tuned.

    If you would like to propose a guest post, please send me a proposed subject, an abstract and the author details.

    Thursday, February 9, 2012

    Tango Claims the Lead in Policy Server Performance

      
    Tango Telecom joins the policy server vendors publishing their performance numbers (see BroadHop and Bridgewater/Amdocs). I am not sure if one can compare product performance based on these statements, but transparency and visibility, as we know, are important qualities.

    Tango announced ".. the completion of a record breaking Policy performance benchmark on the latest generation of COTS blade servers using Xeon 6 core CPUs .. The Tango iAX™ Data Policy Server achieved sustained performance of over 12,000 PDP contexts established per second per blade, at 70% CPU utilisation. The platform also demonstrated unprecedented scalability, to over 192K contexts/second from a single blade server chassis".

    Kieran Kelly, CTO of Tango Telecom, commented: “.. The next wave of policy solutions, as well as offering sophisticated features such as load aware policy and dynamic pricing, must be able to scale way beyond what is currently possible. To put this exceptional benchmark in context, just a single fully equipped blade server, with the Tango iAX™ Data Policy Server, could establish data sessions for over 11M data subscribers in a single minute”.
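    The announced figures are internally consistent, as a quick back-of-the-envelope check shows (the blade count is inferred from the numbers, not stated in the press release):

```python
per_blade = 12_000          # PDP contexts/s per blade (from the announcement)
chassis = 192_000           # contexts/s per chassis (from the announcement)

blades = chassis // per_blade
print(blades)               # 16 blades in a fully equipped chassis
print(chassis * 60)         # 11,520,000 sessions/minute, i.e. "over 11M"
```

    So the "over 11M data subscribers in a single minute" claim follows directly from 192K contexts/second sustained for 60 seconds.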

    See "Tango shrinks the world" - here.

    Friday, October 28, 2011

    Spirent Announces 10GE Mobility, Bandwidth Management and Network Intelligence Testing Module

    Spirent announced "..  TestCenter HyperMetrics mX [datasheet - here] modules enable carriers and their network equipment suppliers to navigate the complexity of converged network elements, ensuring that they perform at scale with realism .. Spirent TestCenter HyperMetrics mX modules greatly simplify and accelerate high-scale mobility, core network, mobile backhaul, routing, access and application testing .. Spirent HyperMetrics mX ensures:
    • Mobility with high scalability of mobile data sessions, delivering high-performance multi-play applications with seamless mobility between 2G/3G and LTE networks, all critical for an uninterrupted user quality of experience (QoE)
       
    • Bandwidth Management with emulation of millions of subscribers under real-world network conditions, including an unprecedented number of applications being delivered concurrently, such as QoE-aware live IP video streaming along with voice and web-based applications
       
    • Network Intelligence with the delivery of the mobile multiplay experience by combining high performance stateful traffic, high-scale routing, access and mobile control plane on a single module
    See "Spirent Ensures Performance and Scalability of 4G/LTE Networks" - here. See also "DPI Testing: Sandvine Chooses Spirent" - here.

    Tuesday, October 18, 2011

    Modeling Policy Server Scale and ROI

         
    A month ago I covered the deployment of policy management by Bharti-Airtel (the largest MNO in India, with over 170M subscribers). Bharti's IT director said then (here) that "..The challenge that we have is that our scale is huge".

    While most networks are significantly smaller, policy management scaling is an issue vendors and operators need to handle. In addition to the growth in data consumption we also see new use-cases and M2M applications (here, here) that will add load to policy servers.

    Scaling has been addressed by policy server vendors as a competitive advantage for some time now (BroadHop, Bridgewater). Diameter routing/load balancing technology and products (here) were added in recent months to help policy management scale.

    However, something was missing - what are the expected performance requirements?

    A new white paper by Graham Finnie (pictured), Chief Analyst, Heavy Reading (commissioned by BroadHop), builds a model to help set performance goals: "In the past two years, policy management caught fire as network operators sought better ways to manage the way bandwidth is allocated and congestion is handled. Now, many are looking to move on from these early deployments, and seeking to put policy at the heart of their traffic management and service development strategies. But as policy deployments scale up, it raises major new issues for operators. Can policy servers cope as new use cases are added? What will it cost? And can the new business case really stack up?"

    Conclusions:
    • A 3GPP network of 10 million mobile subscribers will scale, based on our assumptions, from handling about 2,200 TPS in the Base Case to almost 23,000 TPS in the Mature Case, which is assumed to be after three years.
       
    • We modeled the TPS impact of beginning the transition to an all-LTE network. This also assumes a network with 10 million subscribers, in which the customers are gradually transitioning to LTE, and that during this transition, wireline customers are also transferred to the same policy environment. In this case, policy scales from nearly 25,000 TPS at the end of Year 1 to more than 75,000 TPS at the end of Year 3.
       
    • The cost of policy is only a small proportion (around 2 percent) of the overall cost of a new LTE build over a five-year build period in the next-generation policy case – and given the high strategic value of policy, this suggests to us that the return on investment (ROI) is likely to be positive in the short to medium term.
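    A quick calculation puts the quoted model figures in perspective; the derived per-subscriber rates and growth factors below are my own arithmetic, not Heavy Reading's:

```python
# Figures quoted from the Heavy Reading model above.
subs = 10_000_000
base_tps, mature_tps = 2_200, 23_000       # 3GPP case, base -> after three years
lte_y1, lte_y3 = 25_000, 75_000            # LTE transition case, Year 1 -> Year 3

growth = mature_tps / base_tps             # ~10.5x over three years
per_1000_subs = mature_tps / (subs / 1000) # ~2.3 TPS per 1,000 subscribers
print(round(growth, 1), per_1000_subs, lte_y3 / lte_y1)
```

    In other words, the 3GPP case implies roughly a tenfold TPS increase in three years, and the LTE transition case a further tripling from Year 1 to Year 3 – which is why the scaling question matters.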
    See press release "BroadHop and Heavy Reading Study Finds Policy Servers Must Scale Up Massively as Mobile Operators Move into the LTE Era" - here ; White paper - here.