Interop New York City 2012!


While attending Interop New York a couple of weeks ago, I caught myself reflecting on Interop experiences throughout my career. Since my first Interop in San Jose, CA in October 1991 (I was only twelve at the time, really!), tons of things have changed, but many remain the same. For one, Interop was and still is a really big deal. As a huge repository for vendor-neutral technology forums, educational sessions and wide-ranging product displays, Interop has helped us experience scads of new and cool “stuff” right there on the show-room floor. In my early days as a Systems Engineer, my company’s senior SEs had been so emphatic that we attend Interop that we would pay for the trip ourselves if the company wouldn’t sponsor us!

This year at Interop New York, IXIA|Anue had a presentation, Big Data, Big Visibility, Big Control, by Larry Hart, Chip Webb and Todd Kaloudis of Opnet. Check it out here, or download the whitepaper on Big Data. Contrast this with Interop ’91, where one of the coolest exhibitions was the SNMP Managed Toaster with its very own toaster MIB. This wasn’t just the original toaster from the year before, which controlled when to cook the bread; 1991’s toaster MIB had been extended to show off SNMP Get-Next requests, so a little Lego crane could put the bread into the toaster, all via network management software from the show-room floor.

Twenty-one years ago we had monitored toasters.  Today, we use Network Monitoring Switches like the Net Tool Optimizer to direct traffic to multiple monitoring devices doing Application Performance Monitoring, SIEM attack detection, network diagnostics, Web Customer Experience monitoring and more.  We manage localized tool farms in huge data centers or distributed multi-interface devices like LTE MMEs and probes.  We have an incredibly cool Enterprise MIB of our own that won’t toast whole wheat but will tell you how many PCI non-compliant protocol packets just floated in off your core network TAP, and let you redirect that traffic to additional tools while changing the filter based on your conditions with our flexible API.  Get-Next on that table!  Quite the Interop evolution – from toasters to network clouds.
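For readers who never saw the toaster demo, Get-Next is what lets a manager walk a MIB table one entry at a time: each request returns the next OID in lexicographic order. A minimal sketch of that walk in plain Python (the toaster-style OIDs below are made up for illustration, not a real MIB):

```python
# Toy illustration (not a real SNMP stack): how Get-Next walks a MIB
# table by returning the next OID in lexicographic order.

def oid_key(oid):
    """Convert a dotted OID string to a tuple for lexicographic comparison."""
    return tuple(int(x) for x in oid.split("."))

def get_next(mib, oid):
    """Return the (oid, value) pair that follows `oid` in the MIB, or None."""
    for candidate in sorted(mib, key=oid_key):
        if oid_key(candidate) > oid_key(oid):
            return candidate, mib[candidate]
    return None

# Hypothetical toaster-style table: two columns, two rows.
mib = {
    "1.3.6.1.4.1.99.1.1": "white",   # breadType.1
    "1.3.6.1.4.1.99.1.2": "rye",     # breadType.2
    "1.3.6.1.4.1.99.2.1": "light",   # doneness.1
    "1.3.6.1.4.1.99.2.2": "dark",    # doneness.2
}

# Walking from the table root visits every instance in order.
oid = "1.3.6.1.4.1.99"
while (nxt := get_next(mib, oid)) is not None:
    oid, value = nxt
    print(oid, value)
```

A real agent does the same thing over UDP with BER-encoded PDUs, but the ordering logic is the heart of the trick that drove the Lego crane.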

I also had the chance to talk to people from, well, everywhere; literally. Maybe it was the Big Apple, or maybe companies understand that Interop is the place to go. Here, management and engineers alike can listen and learn from the best in the industry and view products and trends in network technologies. The show is loaded with users from all walks of life, and Interop provides the valuable chance to see and speak with others doing the things they need to do too.

In ’91, Interop seemed like a lot of Silicon Valley types; today, it’s gone global, and anyone who’s anyone has spread their wings with an international presence. Today, global is local. Whether it’s WebEx meetings about moving Big Data around the cloud or sitting on a marshmallow stool in our cool IXIA|Anue booth helping someone from Azerbaijan design a monitoring infrastructure to meet their needs, Interop lets people like me share experiences with others as if we were next-door neighbors.

Check out the Ixia booth at Interop NYC, as photographed with the nifty CamWow application.  The booth featured the IXIA|Anue Net Tool Optimizer, and we announced our Advanced Feature Module 16 (AFM16) at the show.

Interop was great and NYC was friendly to us – we spoke with a lot of network engineers and data center managers, some of whom were already familiar with the network monitoring switch.  Those who were not familiar were very interested, noting the power it puts at their fingertips.

What Keeps the Network Monitoring Team Awake at Night, and How Ixia Anue’s AFM16 Can Help


The hype surrounding the iPhone 5 release demonstrates one thing – the seemingly unlimited demand from users to do more with their mobile devices is nowhere near satiated. With more and more use of cloud networks, application delivery, and networks-over-the-Internet, the promise and problems of Big Data paradigms will become obvious and well known to all operators – challenges that include capture, storage, search, sharing, analysis, and visualization.

Big Data is large amounts of information that exceed the capacity of a typical network because of the size and speed of the data traveling over it. Big Data is different from traditional IT in many ways, but it still requires monitoring. Key focus areas when managing Big Data are application behavior, network performance and security. The measure of network monitoring lies in its ability to optimize the traffic sent to monitoring tools, improving their effectiveness and performance.

Network monitoring is critical to network visibility in the enterprise data centers and the IP-based segments of telecommunication service provider networks. The importance of monitoring is reflected in the large investments large enterprises and service providers make in monitoring tools and the staff to manage them. These network monitoring teams face several challenges, including the following:

  1. Demand for higher data rates outpaces the ability of monitoring tools to keep up
  2. Complying with strict privacy regulations
  3. Increased scrutiny of network performance
  4. Containing the cost of network monitoring


1. Demand for Higher Data Rates Outpaces Gains in Monitoring Tool Performance

With networks growing in speed and little budget to upgrade tools, network engineers are looking for ways to get better performance from their existing monitoring tools. Some of the issues that drain tool performance include:

  1. More than 50% of the packets arriving at a monitoring tool could be duplicates.
  2. Some tools only need the packet header for analysis, in which case most of the data that arrives at the tool is useless.

The challenge is to remove these performance-robbing packets and bytes from the traffic before they reach the monitoring tool.
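A toy sketch of both ideas, deduplication and header trimming, in Python. The 64-byte header length and the hash-based duplicate check are illustrative assumptions, not a vendor implementation:

```python
# Sketch (assumed logic): drop duplicate packets and trim payloads
# before traffic reaches a monitoring tool.
import hashlib

HEADER_LEN = 64  # keep only the first 64 bytes for header-only tools

def preprocess(packets, seen=None, trim=True):
    """Yield packets with duplicates removed and payloads optionally trimmed."""
    seen = set() if seen is None else seen
    for pkt in packets:
        digest = hashlib.sha256(pkt).digest()
        if digest in seen:
            continue            # duplicate: already forwarded once
        seen.add(digest)
        yield pkt[:HEADER_LEN] if trim else pkt

packets = [b"A" * 128, b"A" * 128, b"B" * 128]   # second packet is a duplicate
out = list(preprocess(packets))
print(len(out), len(out[0]))    # 2 packets survive, each trimmed to 64 bytes
```

In hardware this is done at line rate, but the payoff is the same: the tool sees half the packets and a fraction of the bytes.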

2. Complying with Strict Privacy Regulations

Businesses that handle sensitive user information are obligated (by SOX, HIPAA, PCI, etc.) to keep such data secure. Some tools provide the ability to trim sensitive data from packets before they are analyzed or stored. However, this comes at the expense of precious computing and tool bandwidth resources. The challenge is to offload the privacy function from tools so they can focus all resources on the analysis and storage for which they were intended.

3. Increased Scrutiny of Network Performance

Network performance is under great scrutiny in some network applications, such as high-frequency trading. For such applications, there are purpose-built monitoring tools that can time each packet as it traverses the network. These latency-sensitive tools depend on timing the packets as close to the source as possible. However, such proximity implies exclusive access to the network, which is not practical. The challenge is to deliver both the packet and its timing data to the latency-sensitive tools without compromising access for other network monitoring tools.
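The downstream arithmetic is simple once the packet and its capture timestamp travel together: the tool subtracts the timestamps taken at two taps. A hedged sketch, with made-up nanosecond timestamps:

```python
# Sketch of the idea: timestamp each packet at the tap, then deliver
# packet + timestamp together so latency tools downstream can compute
# path delay without needing exclusive access to the network.

def latency_ns(ts_ingress, ts_egress):
    """Path latency between two capture points, in nanoseconds."""
    return ts_egress - ts_ingress

# (packet_id, tap_A_timestamp_ns, tap_B_timestamp_ns) - values invented
records = [
    ("pkt1", 1_000, 13_500),
    ("pkt2", 2_000, 15_200),
]
for pid, t_a, t_b in records:
    print(pid, latency_ns(t_a, t_b), "ns")
```

The hard part, of course, is not the subtraction but capturing the timestamps accurately at the source, which is why the timing must happen at the tap rather than at the tool.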

4. Containing the Cost of Network Monitoring

Network engineers are under pressure to do more with less, including in network monitoring. Successful network monitoring teams not only find ways to save costs, they also find smart ways to leverage their current investment.

There is a rich selection of tools on the market to monitor IP networks. However, in certain parts of IP-based service provider networks, most of these tools are rendered useless because the IP traffic is encapsulated using MPLS, GTP, or VNTag. The challenge is to find a way to expose the tunneled IP traffic so that widely available tools, including tools the organization already owns, can be deployed.
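As one concrete example of de-encapsulation, an MPLS label stack can be stripped by popping 4-byte labels until the bottom-of-stack (S) bit is set. A minimal illustrative sketch (not the AFM16’s actual logic):

```python
# Sketch: strip an MPLS label stack so ordinary IP monitoring tools
# can see the inner packet. Each MPLS label is 4 bytes; the S
# ("bottom of stack") bit is the low-order bit of the third byte.

def strip_mpls(frame: bytes) -> bytes:
    offset = 0
    while True:
        label = frame[offset:offset + 4]
        bottom_of_stack = label[2] & 0x01
        offset += 4
        if bottom_of_stack:
            return frame[offset:]   # inner packet starts here

# Two-label stack: outer label (S=0), inner label (S=1), then payload.
outer = bytes([0x00, 0x01, 0x00, 0xFF])   # S bit clear
inner = bytes([0x00, 0x01, 0x01, 0xFF])   # S bit set
print(strip_mpls(outer + inner + b"IP-PAYLOAD"))
```

GTP and VNTag removal follow the same pattern with different header layouts; the point is that once the tunnel header is gone, any off-the-shelf IP tool can analyze the traffic.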

Ixia Anue has just introduced its Anue Advanced Feature Module 16, which provides advanced packet processing technologies to eliminate redundant network traffic, secure sensitive data and enhance monitoring tool effectiveness. It is the industry’s first high-density, high-capacity advanced packet processing solution designed specifically to enhance network and application performance at large enterprises and data centers.

To learn more about Ixia’s Anue AFM16 and the Anue Net Tool Optimizer, stop by booth #531 at Interop New York, currently taking place at the Javits Convention Center. At the event, Ixia’s Larry Hart and Chip Webb will lead an industry conversation titled “Big Data, Big Visibility, Big Control” on Oct. 3 at 2 p.m. ET.

For more information on the AFM16 module, see the press release.

Interop 2012, Anue – Life in the Fast Lane

I’ve been to a lot of Interops, but Interop 2012 Las Vegas was by far the most exhilarating. I joined Anue over a month ago after a 12-year stint leading product development teams at Dell. Dell was definitely life in the fast lane. Now I’m living life in the faster lane! Just think, one month in for me and we announce Ixia will be acquiring Anue. As a result, our booth was a hotbed for the inquisitive.

The visionary CEO and management team at Ixia are making an amazing move with this acquisition, one that will extend Ixia from the pre-deployment arena to a player in production network optimization solutions for data centers, cloud providers, telecoms and service providers. While enterprises have often used Ixia in their lab environments, this extends their reach into the production-network side of the enterprise in one bold move.

From Anue’s standpoint, the acquisition is going to quickly make us a strong contender in international markets. Our technology is easy enough to understand: we deliver the right data to the right tools at the right time. Yet not enough people have heard about this market. Well, they will now, as Ixia’s reach, both in the US and internationally, is extensive and will benefit Anue.

On top of that, Anue’s CTO, Chip Webb, had a session on the Big Data track Monday. It was interesting; the reality is that you can’t do Big Data without a high-performance, secure network. Anue provides the visibility that enables Big Data to work well. And, yes, Big Data really is a different world. Big Data actually makes Anue’s Net Tool Optimizer technology even more important. Not only are security and performance monitoring important with Big Data; the movement of applications, trend analysis and business intelligence also necessitates application behavior monitoring. One thing that struck me about Interop 2012 overall is a refreshed enthusiasm for IT technologies that deliver business value. To do that you have to be living life in the faster lane, and it seems to me Anue fit right in.

Anue Big Data Presentation at Interop

Interop Las Vegas: Enterprise Cloud Summit - Big Data


Our Anue Systems CTO, Chip Webb, will be interviewed by Jeremy Edberg, Lead Cloud Reliability Engineer from Netflix, at the Enterprise Cloud Summit – Big Data at Interop next week.

Big Data is a technology that is emerging fast due to its extreme business value, and people are really excited about its potential. Big Data really is different. I like this description of Big Data from O’Reilly Radar: “Big data is data that exceeds the processing capacity of conventional database systems. The data is too big, moves too fast, or doesn’t fit the strictures of your database architectures. To gain value from this data, you must choose an alternative way to process it.”

Enterprise Cloud Summit - Big Data at Interop Las Vegas

Workloads are bigger, application architecture is different, and applications can be really sophisticated, such as genetic research (protein folding), wind tunnel simulation (one of my personal favorites), stock trading, and web log analysis (think tens of millions of people visiting websites for targeted advertising). Organizations have been collecting logs for a long time, but now they have a framework to analyze this massive amount of data and make money from it.

What Is Apache Hadoop?

Whether it is your own implementation of Hadoop on your own infrastructure, or you are using tools like Karmasphere to give analysis capabilities to non-technical users, there is a lot of application work (MapReduce, Pig, Hive and others) and network work (how your clusters talk, and troubleshooting), and you want to make sure your data is secure, since it is usually sensitive.
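For a feel of what MapReduce does, here is a minimal word-count sketch of the pattern in plain Python, no Hadoop cluster required. Hadoop distributes the same two phases across many machines:

```python
# Minimal sketch of the MapReduce pattern Hadoop popularized:
# a map phase emits (key, value) pairs, a reduce phase aggregates
# all values sharing a key.
from collections import defaultdict

def map_phase(lines):
    """Emit (word, 1) for every word in every input line."""
    for line in lines:
        for word in line.split():
            yield word, 1

def reduce_phase(pairs):
    """Sum the counts for each distinct word."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

logs = ["user visited site", "user clicked ad"]
print(reduce_phase(map_phase(logs)))
```

The web-log analysis mentioned above is essentially this, scaled to billions of lines, which is why the network connecting the cluster matters so much.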

Three distinct monitoring areas emerge with Big Data: application behavior, network and security.  The tough part is that different groups will be doing monitoring using different tools.  Without a network monitoring switch between the data center production network and the monitoring and security tools, a lack of network data access points will force compromises between the different groups, resulting in suboptimal monitoring, which can lead to outages and incidents. Don’t compromise: learn more in the Anue Systems Net Tool Optimizer 5288 Product Overview.

The network monitoring switch aggregates and filters data from across the network so that any number of monitoring tools can get exactly the data they need: no more, no less.  Instead of worrying about limited network access points (SPANs for port mirroring, and TAPs) and painstakingly prioritizing the monitoring requirements of different groups and tools, you don’t need to compromise. The different groups and tools responsible for monitoring Big Data (application behavior, network and security) can all get what they need, without sacrifice, using a network monitoring switch.
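A toy model of that aggregate-and-filter behavior (the tool names and filters below are invented for illustration, not NTO configuration):

```python
# Toy model (assumed behavior): aggregate packets from many access
# points and fan each one out to every tool whose filter matches.

def route(packets, tool_filters):
    """tool_filters maps tool_name -> predicate(packet) -> bool."""
    delivered = {tool: [] for tool in tool_filters}
    for pkt in packets:
        for tool, match in tool_filters.items():
            if match(pkt):
                delivered[tool].append(pkt)
    return delivered

packets = [
    {"proto": "http", "port": 80},
    {"proto": "dns", "port": 53},
    {"proto": "http", "port": 8080},
]
tools = {
    "apm":  lambda p: p["proto"] == "http",   # application performance tool
    "siem": lambda p: True,                   # security tool sees everything
}
out = route(packets, tools)
print(len(out["apm"]), len(out["siem"]))      # 2 3
```

Because every matching tool receives its own copy, no group has to give up its view of the traffic to accommodate another, which is the "without sacrifice" point above.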

Please consider attending Enterprise Cloud Summit – Big Data.  Chip’s time on the agenda is 2:45pm on May 7, in Mandalay Bay, Lagoon D. Stop by the Anue Systems booth #527.

Standards and How they Prevent “Dropped” Mobile Calls


I interviewed Chip Webb, the Anue CTO, for this blog. He just came back from WSTS in Colorado, which is the Workshop on Synchronization and Timing in Telecommunications.


Chip is an expert who participates in ITU SG15 / Q13 – which is a shorthand way of saying the International Telecommunication Union’s Study Group 15, Question 13. The ITU is an organization based on public-private partnership that is part of the United Nations and has three main sectors: ITU-T for telecom, ITU-R for radio communication and ITU-D for developing countries. Study group 15 within ITU-T deals with telco optical transport and access technologies, and Q13 is a sub-committee, or “Question,” that focuses on timing and synchronization of telco networks.


ITU membership represents a cross-section of the global International Communications Technology (ICT) sector – from the world’s largest manufacturers and carriers to small, innovative players who work with new and emerging technologies to leading R&D institutions and academia. ITU membership includes 193 countries and over 700 private-sector entities and academic institutions. ITU is headquartered in Geneva, Switzerland, and has 12 regional and area offices around the world.

More than 300 experts from around the world attend SG15 plenary meetings in Geneva, which are held every nine months. Q13 usually holds two smaller, “interim,” meetings between the plenaries, at which a smaller, more focused group of approximately 30 experts meet to discuss contributions and shape future standards for timing and synchronization.

Read more about the ITU here

Chip explains the need for synchronization like this: Power is essential to telecom networks; without power, the systems won’t work. It is easy to understand the need for power because everyone uses electricity and understands what happens when it fails. In the same way, proper synchronization is essential to modern telecom networks. But it takes a little more explanation because most people don’t encounter the need for microsecond-level accuracy in their day-to-day life. Even Olympic athletes aren’t measured that precisely.

Chip explains the speed of wireless networks: “Let’s think about the speed of light. Most people know that the speed of light is really, really fast and that nothing can travel faster than light. In fact, light and radio waves travel about one foot in a billionth of a second. In metric, that’s about one meter in three billionths of a second. So the next time you see a one-foot ruler, you should think ‘that’s a nanosecond’ – or one billionth of a second. Now, think about the progress of technology over the last few decades. Today’s microprocessors operate at unimaginably high speeds – measured in billions of clock cycles per second. Both the speed of light and the speed of microprocessors are things in our lives that are so fast it is hard to understand them in everyday terms. It is hard to imagine things at such extreme scales. Wireless networks also operate at speeds so high that they are hard to comprehend.”
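Chip’s one-foot-per-nanosecond rule of thumb checks out numerically:

```python
# How far light travels in one nanosecond.
C = 299_792_458            # speed of light in vacuum, m/s
NS = 1e-9                  # one nanosecond, in seconds
FEET_PER_METER = 3.28084

distance_m = C * NS                            # meters per nanosecond
print(round(distance_m, 3), "m")               # about 0.3 m
print(round(distance_m * FEET_PER_METER, 3), "ft")  # just under 1 ft
```

So a one-foot ruler really is, to within about 2%, a nanosecond of light travel.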

Chip gave me a simple real-world analogy to help explain why synchronization is so important for wireless networks. Imagine you are flying to a distant country. Chances are good that to get there, you will need to make a connection at a “hub” airport. If one of your flights is delayed, you might miss a connection at the “hub.” That would be a serious disruption to your trip. Similarly, as you travel down a highway using your mobile phone, it communicates wirelessly with one base station after another. The handoff process from one base station to the next is analogous to making a flight connection at a “hub” airport. If the timing of one base station is incorrect, your call might be dropped. While flight delays are measured in minutes or hours, wireless network synchronization is measured in tiny fractions of a second – nanoseconds or microseconds.

So without proper synchronization, communication equipment may operate on the wrong frequency or at the wrong time. This can lead to poor-quality calls or even complete loss of service. Data hungry devices, like Apple’s iPhone and the new iPad, are driving the need for wireless data services. Wireless service providers are upgrading their networks to support this growing demand with new wireless protocols such as Long Term Evolution (LTE). The synchronization requirements for LTE networks are, in some cases, even more stringent than the older systems they replace.

Along with the upgrade to LTE, wireless service providers are also upgrading their backhaul networks with newer Ethernet and TCP/IP technology and replacing their existing synchronization networks. These new backhaul networks, which connect the base stations to the central switching equipment, use new synchronization protocols and techniques, such as Synchronous Ethernet (SyncE) and Precision Time Protocol (PTP).
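Underneath PTP sits a simple calculation: from four timestamps exchanged between master and slave, the protocol estimates the slave’s clock offset and the one-way path delay, assuming a symmetric path. A hedged sketch of that core formula:

```python
# Sketch of the core PTP (IEEE 1588) delay request-response math.
# Assumes a symmetric network path (equal delay in each direction).

def ptp_offset_delay(t1, t2, t3, t4):
    """t1: master sends Sync; t2: slave receives it;
       t3: slave sends Delay_Req; t4: master receives it.
       Returns (slave clock offset, one-way path delay)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Example (ns): slave clock 50 ns ahead, path delay 100 ns each way.
t1, t2 = 0, 150          # 100 ns flight + 50 ns slave offset
t3, t4 = 300, 350        # 100 ns flight - 50 ns offset
print(ptp_offset_delay(t1, t2, t3, t4))   # (50.0, 100.0)
```

Path asymmetry breaks the symmetric-delay assumption, which is one reason metrics and test methods for these synchronization networks, the Q13 work Chip mentions below, matter so much.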

At Q13, Chip has been working with other experts to develop metrics and test methods for these new synchronization networks. Stay tuned for Chip’s upcoming blog series, which will introduce this ongoing work, and discuss “Lucky Packets”, which don’t have anything to do with gambling in Las Vegas or Monte Carlo!  View Chip’s presentation on Slideshare.