Wednesday, July 4, 2012

More on Openet's Recent Performance Reports

Openet issued a press release a few days ago announcing the ".. benchmark test results that prove Openet delivers the industry’s best performance for Policy and Charging Control and related functions. These results highlight why more operators choose Openet to enable market agility and to successfully deliver complex services at the lowest possible TCO".

"Results establish Openet as dominant in performance across all Policy and Charging Control (PCC) functions .. Record setting policy – With up to 11,000 TPS per Policy blade, Openet sets industry performance records and enables operators to quickly manage network resources with real-time, personalized policies based on service, subscriber, or context".

See "Independent Tests Prove Openet Performance Leadership" - here. Similar announcements were previously made by Tango, BroadHop, Amdocs (in its Bridgewater days) and Tekelec.

See also "Modeling Policy Server Scale and ROI" - here

I spoke with Shane O'Flynn (pictured), Global VP Engineering, to learn more about the test details:
  • The test was performed by IBM (here), in its Hursley (UK), Raleigh and Poughkeepsie (US) locations, based on the network architecture used by Openet's larger customers ("a large US tier1 operator" - could be %#&# ? - and a "Multi-Country European operator").
  • The hardware used for the test was IBM BladeCenter HS22 (NEBS compliant) and HS23 platforms (see diagram), with the following spec:
      - 1 ITE with 1 x 2.7GHz E5 2680, 24GB memory (6 x 4GB), BE3 LOM, 1 x Ninja 8Gb QLogic HBA - hosts the PCRF and the solidDB session store
      - 1 ITE with 1 x 2.7GHz E5 2680, 24GB memory (6 x 4GB), BE3 LOM, 1 x Ninja 8Gb QLogic HBA - hosts the database; this server requires a connection to the SAN
      - 1 ITE with 2 x 2.7GHz E5 2680, 48GB memory (12 x 4GB), BE3 LOM, 1 x Ninja 8Gb QLogic HBA - hosts the PCEF load

  • Shane explained that the TPS results reported by Openet, unlike some of its competitors, reflect end-to-end measurements, including all processing elements - application, database and web servers. The chart below shows CPU consumption as the TPS increases. "We can run the CPU to full saturation but our general rule of thumb is a paired config both running at 40% with the capacity to failover to 80% with room for a spike so that it matches a real life environment", says Shane.

  • The test was performed with an Oracle database as well as with IBM's in-memory database, solidDB.
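The "40% normal / 80% failover" rule of thumb quoted above translates directly into a sizing calculation. The sketch below is my own illustration, not from Openet's report; the 11,000 TPS figure comes from the benchmark, while the target load and the function name are assumptions for the example.

```python
import math

PER_BLADE_TPS = 11_000   # peak TPS per policy blade, from the benchmark
NORMAL_UTIL = 0.40       # each blade of a pair runs at 40% in normal operation
                         # (on failover, the surviving blade absorbs 80%)

def blade_pairs_needed(target_tps: float) -> int:
    """Pairs needed so each blade stays at or below 40% utilization normally."""
    usable_tps_per_pair = 2 * PER_BLADE_TPS * NORMAL_UTIL
    return math.ceil(target_tps / usable_tps_per_pair)

# Example: a network generating 50,000 policy TPS
print(blade_pairs_needed(50_000))  # 6 pairs; each pair carries up to 8,800 TPS
```

The point of the rule is that a pair is sized for the failure case, not the normal case: each blade is deliberately left with enough idle capacity to absorb its partner's entire load plus a traffic spike.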

1 comment:

  1. Interesting figures, but they seem to be based on a best-case scenario rather than real-world performance. There is no lookup on an external system for quota balances, as there often is in reality. There is no mention of any simulator injecting latency into the transactions, or of complex logic in the decision process. I would be more interested to see figures with external lookups and complex logic. My gut feeling is that they would end up with 5K TPS per blade in a more realistic test scenario.
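The commenter's 5K estimate can be made plausible with Little's law (throughput = concurrency / latency). The model below is my own back-of-the-envelope sketch, not anything from the benchmark; the concurrency and latency numbers are assumptions chosen so the no-lookup case matches the reported 11,000 TPS.

```python
CONCURRENCY = 11   # assumed in-flight transactions a blade sustains
LOCAL_MS = 1.0     # assumed local processing time per transaction
# Baseline check: 11 / 0.001 s = 11,000 TPS, the reported figure.

def tps(lookup_ms: float) -> float:
    """Per-blade TPS when each transaction also waits on an external lookup."""
    return CONCURRENCY / ((LOCAL_MS + lookup_ms) / 1000.0)

print(round(tps(0.0)))  # 11000 TPS with no external lookup
print(round(tps(1.0)))  # 5500 TPS if the quota lookup adds 1 ms of latency
```

Under these assumptions, a single added millisecond of external-lookup latency halves per-blade throughput, which lands close to the commenter's gut-feeling figure.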