High Speed Ethernet (40G Networks) and You


We hear from customers that the transition to 40G/100G networks is driven purely by cost savings and ease of management.  It seems no one really wants to run eight 10G connections (especially fiber) in an EtherChannel when you can simply run two 40G connections – the idea is to get away from 8-port link aggregation groups and down to a more manageable number.  In addition, 40G networks have reduced physical space requirements, less cable to run, fewer transceivers to buy and maintain, and lower power draw in the data center versus their predecessors.

But moving from 1G or 10G to 40G introduces all kinds of interesting nuances people haven’t really thought through yet.

40G is more than just a bigger pipe when it comes to monitoring network data.  The luxury of dumping a whole bunch of network data on security and performance monitoring tools and letting them sort it out goes away.  There’s just too much data.

In this brave new world, network data must be delivered to tools in a way that suits their specific needs in order to get the best results.  Strike that: just to get the tools to work at all, never mind perform well, you have to meet their specific data dietary requirements.

Security and performance monitoring tools built for 40G networks really don’t exist yet – they simply haven’t been built.  This makes life pretty tricky if you are implementing 40G, or plan to do so in the next couple of years.

Another issue is that security and monitoring tools are often implemented in appliances that simply don’t have the processing power to handle 40G bandwidth.  So even if the software is modified to handle 40G, the box isn’t ready.  In addition, monitoring tools are typically processor and disk bound, which raises an interesting point: processing capacity follows Moore’s law, doubling roughly every 24 months, while bandwidth consumption has been doubling every 12 to 18 months, according to the Ethernet Alliance.
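To see how quickly that gap compounds, here’s a back-of-the-envelope calculation using the doubling periods above (the starting point is arbitrary; this is just illustrative arithmetic, not a forecast):

```python
# Back-of-the-envelope: processing capacity doubles every 24 months,
# bandwidth consumption every 12-18 months (figures from the text).

def growth(years, doubling_months):
    """Growth factor after `years`, doubling every `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

for years in (2, 4, 6):
    cpu = growth(years, 24)  # processing capacity
    bw = growth(years, 18)   # bandwidth, at the conservative 18-month rate
    print(f"{years} yrs: processing x{cpu:.1f}, bandwidth x{bw:.1f}, "
          f"gap x{bw / cpu:.2f}")
```

Even at the conservative 18-month rate, bandwidth pulls twice as far ahead of processing capacity in six years – so the appliances keep falling further behind.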

What you need to do is deliver just the data that your network tools need.  The first step is being able to filter out the data the tools do not need.

In Anue terms, this includes ingress filtering, which filters at the input of the network monitoring switch (Anue’s NTO) and discards packets that are not of interest to any monitoring tool.

Then you need center-stage filters, which Anue calls Dynamic Filtering.  Dynamic Filtering addresses problems that occur when some packets meet the filter criteria of multiple tools and must be sorted out properly for each tool to do its job.

Finally, you need egress filtering, which filters at the output of the network monitoring switch.  For example, the egress filter could drop HTTP traffic for a particular tool and have no impact on other tool ports.  Filtering is going to solve a lot of your problems, but not all.
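The three stages above can be sketched in a few lines of Python. This is purely conceptual – the packet dicts, predicate functions, and tool names are hypothetical stand-ins, not the NTO’s actual configuration interface:

```python
# Conceptual sketch of the three filter stages: ingress, dynamic, egress.
# Packets are plain dicts; filter rules are predicate functions.

def ingress_filter(packets, tool_filters):
    """Drop packets that no monitoring tool is interested in."""
    return [p for p in packets
            if any(match(p) for match in tool_filters.values())]

def dynamic_filter(packets, tool_filters):
    """Deliver each packet to every tool whose criteria it meets,
    even when several tools' criteria overlap."""
    feeds = {tool: [] for tool in tool_filters}
    for p in packets:
        for tool, match in tool_filters.items():
            if match(p):
                feeds[tool].append(p)
    return feeds

def egress_filter(feed, drop):
    """Per-tool-port drop rule applied at the output, with no impact
    on any other tool port."""
    return [p for p in feed if not drop(p)]

# Hypothetical setup: an IDS wants all TCP, a web monitor wants port 80.
tool_filters = {
    "ids": lambda p: p["proto"] == "tcp",
    "web": lambda p: p["dport"] == 80,
}
packets = [
    {"proto": "tcp", "dport": 80},   # matches both tools
    {"proto": "tcp", "dport": 443},  # matches only the IDS
    {"proto": "udp", "dport": 53},   # matches neither: dropped at ingress
]
kept = ingress_filter(packets, tool_filters)
feeds = dynamic_filter(kept, tool_filters)
# Egress example from the text: drop HTTP for one particular tool.
feeds["ids"] = egress_filter(feeds["ids"], lambda p: p["dport"] == 80)
```

Note how the overlapping packet (TCP on port 80) is copied to both tools by the dynamic stage, and then the egress rule trims it from just the IDS feed.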

You’ll also need to do some load balancing to spread the analysis work across your tools, since they are not capable of “drinking from the 40G firehose.”  In Anue vernacular, load balancing is not the traditional splitting of network traffic into equal loads across multiple tool ports.  That approach can break session-level analysis: with VoIP monitoring, for example, you need to analyze packets collectively by session.

Anue’s approach to load balancing is achieved by using layer 2, 3 and 4 packet header information to identify and deliver related traffic to the same physical tool port, maintaining the integrity of the sessions.

Here’s how load balancing might work.  Say I have four 10G Computer Associates Web Monitors that I need to use to monitor a 40G network.  I set up a load-balancing port group of the four 10G web monitors as shown below.

[Figure: Intelligent Load Balancing with Anue Systems NTO – the network monitoring switch]

Now I’m set up to monitor 40G, using my 10G tools.  And, instead of simplistically dividing the network traffic across the four web monitors, I can set up criteria, such as dividing up IP addresses across the monitors, or setting VLAN address ranges for each tool, as required to keep the session integrity as discussed above.
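One common way to keep related traffic together is to hash header fields so that every packet of a flow lands on the same tool port. The sketch below illustrates that idea for the four-web-monitor setup; the port names and hash choice are my own illustration – the real NTO is configured through its management interface, and its hashing scheme isn’t detailed here:

```python
# Session-consistent load balancing: hash L3/L4 header fields so all
# packets of one flow map to the same tool port. Illustrative only.
import zlib

TOOL_PORTS = ["monitor-1", "monitor-2", "monitor-3", "monitor-4"]  # four 10G tools

def tool_port_for(src_ip, dst_ip, src_port, dst_port, proto):
    """Deterministically map a flow's 5-tuple to one tool port."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}/{proto}".encode()
    return TOOL_PORTS[zlib.crc32(key) % len(TOOL_PORTS)]

# Every packet of the same VoIP session hashes to the same port,
# so the monitor on that port sees the whole session.
p1 = tool_port_for("10.0.0.5", "10.0.9.9", 16384, 16384, "udp")
p2 = tool_port_for("10.0.0.5", "10.0.9.9", 16384, 16384, "udp")
```

The trade-off versus round-robin splitting is that loads may be uneven, but session integrity is preserved – which is exactly the point made above.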


About Kate Brew

Kate Brew manages product marketing activities for the Anue Systems Net Tool Optimizer™ (NTO) product line. She is an industry expert in security, virtualization and green data centers. Prior to Anue Systems, Kate held security product management positions with Tivoli/IBM, Citrix, GE-Interlogix and e-Security. In addition, Kate participated as a Computer Security Institute panelist and presented at industry events including the Enterprise Management Summit, Managing Enterprise Networks, ServerTech, Citrix Summit, Citrix Synergy and Planet Tivoli.
Kate has a Bachelor of Science degree in industrial and systems engineering from Georgia Tech, and is a member of the Capital of Texas ISSA chapter. She recently attained VMware’s VTSP 5 certification.


