Integrating Ixia Anue Net Tool Optimizer with Splunk Big Data


Unstructured data accounts for as much as 80% of most companies’ network traffic and stored information. This “Big Data” traditionally took too long or cost too much to process and analyze. However, emerging Big Data initiatives stand to transform vast untapped resources from a costly storage challenge into vital business intelligence used in marketing, product development, stock trading, genetic research, and more.

Big Data promises huge gains in productivity and competitive advantage by distributing the massive workload of preparing data for analysis among large numbers of servers. To make it all work, IT departments need greater visibility into networks and applications in order to prioritize, filter, and synthesize information.

Predictably, monitoring the performance of the network, applications, and security becomes more challenging as Big Data projects scale.

Introducing Splunk with Ixia Anue NTO

Big Data is being captured and analyzed as never before by a new generation of software vendors such as Splunk. Splunk gathers and collates massive amounts of data from disparate sources and provides the ability to search that data for information of interest. Millions upon millions of data points are available for analysis.

Such searches sift through the massive amounts of Big Data. The results are invaluable for isolating network troubles. Capturing the actual source packets is also helpful for root cause analysis. However, due to the enormous number of data points gathered it can be impractical, cumbersome, and expensive to correlate and store source packets relating to every single data point.

The ability to capture packets for targeted search results is ideal, and Ixia Anue Network Visibility Solutions has developed an integration for Splunk that enables such intelligent packet capture.

Using the integration, Splunk can signal the Anue Net Tool Optimizer (NTO) as to when, where, and what it should forward to a packet recorder tool. The Splunk user identifies search strings of interest, and when they occur, Splunk automatically passes the desired search string argument (e.g., the IP address of a rogue host) to the Anue NTO, which then dynamically filters and forwards only the relevant network packets to the selected packet recorder/analyzer.

With this capability, an audit trail of the needed packets is now available (without using up unnecessary storage resources). In addition, the integration maintains a log file of the targeted captures, making it easier to find the relevant packets at a later date.
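To make the flow concrete, a Splunk alert can run a script that passes the triggering search result (such as the rogue host's IP) to the packet broker. The sketch below is illustrative only: the endpoint path, payload fields, and port name are hypothetical, not the actual NTO API.

```python
# Sketch of a Splunk alert script that pushes a dynamic filter to an
# NTO-style packet broker. The endpoint path, payload fields, and port
# names are hypothetical -- consult the real NTO API documentation.
import json
import sys
import urllib.request

def forward_filter(nto_host, rogue_ip, capture_port):
    """Build a request asking the broker to steer matching traffic to a recorder."""
    payload = {
        "criteria": {"ipv4_src_or_dst": rogue_ip},  # filter on the flagged host
        "dest_port": capture_port,                   # port feeding the recorder
    }
    return urllib.request.Request(
        f"https://{nto_host}/api/filters",           # hypothetical endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    # Splunk would pass the triggering search argument (e.g., rogue IP) in argv
    rogue_ip = sys.argv[1] if len(sys.argv) > 1 else "203.0.113.7"
    req = forward_filter("nto.example.net", rogue_ip, "P10")
    print(req.get_full_url())
```

In a real deployment the script would also authenticate and send the request; here it only constructs it, to show the shape of the hand-off.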


A new document, Technical Note: Integration of Anue with Splunk Overview, is available for interested parties. This document offers insight into using the integrated products for targeted searches.

Additional Resources:

Technical Note: Integration of Anue with Splunk Overview

Big Data Solutions

Splunk Solutions

Interop New York City 2012!


While attending Interop New York a couple weeks ago, I caught myself reflecting on Interop experiences throughout my career. Since my first Interop in San Jose, CA in October 1991 (I was only twelve at the time – really!), tons of things have changed, but many remain the same. For one, Interop was and still is a really big deal. As a huge repository of vendor-neutral technology forums, educational sessions, and wide-ranging product displays, Interop has helped us experience scads of new and cool “stuff” right there on the show room floor. In my newbie beginnings as a Systems Engineer, my company’s senior SEs had been so emphatic that we attend Interop that we would pay for the trip ourselves if the company wouldn’t sponsor us!

This year at Interop New York, IXIA|Anue had a presentation, Big Data, Big Visibility, Big Control, by Larry Hart, Chip Webb, and Todd Kaloudis of Opnet. Check it out here, or download the whitepaper on Big Data. Contrast this with Interop ’91, where one of the coolest exhibitions was the SNMP Managed Toaster with its very own toaster MIB. This wasn’t just the original toaster from the year before that controlled when to cook the bread; 1991’s toaster MIB had been extended to show off SNMP Get-Next requests, so a little Lego crane could put the bread into the toaster, all via network management software from the showroom floor.

Twenty-one years ago we had monitored toasters. Today, we use Network Monitoring Switches like the Net Tool Optimizer to direct traffic to multiple monitoring devices doing Application Performance Monitoring, SIEM attack detection, network diagnostics, Web Customer Experience monitoring, and more. We manage localized tool farms in huge data centers or distributed multi-interface devices like LTE MMEs and probes. We have an incredibly cool Enterprise MIB of our own that won’t toast whole wheat but will tell you how many PCI non-compliant protocol packets just floated in off your core network TAP, and will let you redirect that traffic to additional tools while changing the filter based on your conditions with our flexible API. Get-Next on that table! Quite the Interop evolution – from toasters to network clouds.

I also had the chance to talk to people from, well, everywhere – literally. Maybe it was because it was the Big Apple, or maybe companies understand that Interop is the place to go. Here, management and engineers alike can listen and learn from the best in the industry and view products and trends in network technologies. The show is loaded with users from all walks of life, and Interop provides the valuable chance to see and speak with others doing the same things they need to do.

In ’91, Interop seemed like a lot of Silicon Valley types; today, it’s gone global, and anyone who’s anyone has spread their wings with an international presence. Today, global is local. Whether it’s WebEx meetings about moving Big Data around the cloud or sitting on a marshmallow stool in our cool IXIA|Anue booth helping someone from Azerbaijan design a monitoring infrastructure to meet their needs, Interop lets people like me share experiences with others as if we were next-door neighbors.

Check out the Ixia booth at Interop NYC, as photographed with the nifty CamWow application.  The booth featured the IXIA|Anue Net Tool Optimizer, and we announced our Advanced Feature Module 16 (AFM16) at the show.

Interop was great and NYC was friendly to us – we spoke with a lot of network engineers and data center managers, some of whom were already familiar with the network monitoring switch.  Those who were not familiar were very interested, noting the power it puts at their fingertips.

To SPAN or to TAP – That is the question!


Ixia Network Visibility Solutions welcomes a guest blogger today, Tim O’Neill from LoveMyTool.

Network engineers and managers need to think about today’s compliance requirements and the limitations of conventional data access methods. This article is focused on TAPs versus port mirroring/SPAN technology.

SPAN is not all bad, but one must be aware of its limitations. As managed switches are an integral part of the infrastructure, one must be careful not to establish a failure point. Understanding what can be monitored is important for success. SPAN ports are often overused, leading to dropped frames, because LAN switches are designed to groom data (change timing, add delay), extract bad frames, and ignore all layer 1 and 2 information. Furthermore, typical implementations of SPAN ports cannot handle full-duplex (FDX) monitoring, and analysis of VLANs can also be problematic.

Moreover, when dealing with data security compliance, the fact that SPAN ports limit views and insecurely transport monitored traffic through the production network could prove unacceptable in a court of law.

When used within its limits and properly focused, SPAN is a valuable resource to managers and monitoring systems. However, for 100% guaranteed views of network traffic, passive network TAPs are a necessity for meeting many of today’s access requirements as we approach larger deployments of 10 Gigabit and up. It’s in this realm that SPAN access limitations become more of an issue.

SPANs vs. TAPs

Until the early 1990s, using a TAP or test access port from a switch patch panel was the only way to monitor a communications link. Most links were WAN, so an adaptor like the V.35 adaptor from Network General, or an access balun for a LAN, was the only way to access a network. In fact, most LAN analyzers had to join the network to really monitor it.

As switches and routers developed, along came a technology we call a SPAN port or mirroring port, and with this, monitoring was off and running. SPAN stands for Switched Port Analyzer, and it was a great way to effortlessly and non-intrusively acquire data for analysis. By definition, a SPAN port usually indicates the ability to copy traffic from any or all data ports to a single unused port, while usually disallowing bidirectional traffic on that port to protect against backflow of traffic into the network.

Analyzers and monitors no longer had to be connected to the network. Engineers could use the SPAN (mirror) port and direct packets from their switch or router to the test device for analysis.
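As an example of how simple the configuration is, a basic local SPAN session on a Cisco Catalyst switch takes only two commands (the interface numbers below are placeholders):

```
! Mirror both directions of Gi0/1 to Gi0/24, where the analyzer is attached
monitor session 1 source interface GigabitEthernet0/1 both
monitor session 1 destination interface GigabitEthernet0/24
```

That ease of setup is a big part of SPAN's appeal – and, as the rest of this article argues, part of why its limitations are so often overlooked.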

Is a SPAN port a passive technology?  No!

Some call a SPAN port a passive data access solution – but passive means “having no effect” and spanning (mirroring) does have measurable effect on the data. Let’s look at the facts.

  • Spanning or mirroring changes the timing of the frame interaction (what you see is not what you get).
  • The spanning algorithm is not designed to be the primary focus or the main function of the device – switching or routing is – so spanning is not the first priority, and if replicating a frame becomes an issue, the hardware will temporarily drop the SPAN process.
  • If the speed of the SPAN port becomes overloaded, frames are dropped.
  • Proper spanning requires that a network engineer configure the switches properly and this takes away from the more important tasks required by network engineers. Many times configurations can become a political issue (constantly creating contention between the IT team, the security team and the compliance team).
  • The SPAN port drops all packets that are corrupt or below the minimum size, so not all frames are passed on. All of these events can occur with no notification sent to the user, so there is no guarantee that one will get all the data required for proper analysis.


In summary, the fact that SPAN ports are not a truly passive data access technology, or even entirely non-intrusive, can be a problem for data security compliance monitoring or lawful intercept. Since there is no guarantee of absolute fidelity, it is possible or even likely that evidence gathered by this monitoring process will be challenged in a court of law.

Are SPAN ports a scalable technology?  No!

When we had only 10Mbps links and a robust switch (like those from Cisco), one could almost guarantee seeing every packet going through the switch. With a 10Mbps link loaded at around 50% to 60% of maximum bandwidth, the switch backplane could easily replicate every frame. Even at 100Mbps one could be somewhat successful at acquiring all the frames for analysis and monitoring, and if a frame or two here and there were lost, it was no big problem.

This has all changed with Gigabit and 10 Gigabit technologies, starting with the fact that maximum bandwidth is now twice the base bandwidth – so a Full Duplex (FDX) Gigabit link is now 2 Gigabits of data and a 10 Gigabit FDX link is now 20 Gigabits of potential data.

No switch or router can handle replicating/mirroring all this data while also handling its primary job of switching and routing. It is difficult if not impossible to pass all frames (good and bad ones), including FDX traffic, at full rate, in real time, at non-blocking speeds.
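The oversubscription arithmetic is easy to check: mirroring both directions of a full-duplex link into a single SPAN port of the same speed is, at best, a 2:1 oversubscription.

```python
def span_oversubscription(link_gbps, span_port_gbps):
    """Worst-case ratio of mirrored full-duplex traffic to SPAN port capacity."""
    fdx_gbps = 2 * link_gbps            # both directions must be copied
    return fdx_gbps / span_port_gbps

# A full-duplex 1G link mirrored to a 1G SPAN port is already 2:1 oversubscribed;
# at 10G the ratio is the same, but the excess is now a full 10 Gbps of traffic.
print(span_oversubscription(1, 1))      # -> 2.0
print(span_oversubscription(10, 10))    # -> 2.0
```

The ratio never improves with speed; only the absolute volume of dropped frames grows.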

Adding to this FDX need, we must also consider the VLAN complexity and finding the origin of a problem once the frames have been analyzed and a problem detected.

From Cisco’s own white paper, “On SPAN Port Usability and Using the SPAN Port for LAN Analysis,” the company warns “the switch treats SPAN data with a lower priority than regular port-to-port data.” In other words, if any resource under load must choose between passing normal traffic and SPAN data, the SPAN loses and the mirrored frames are arbitrarily discarded. This rule applies to preserving network traffic in any situation. For instance, when transporting remote SPAN (RSPAN) traffic through an Inter Switch Link (ISL), which shares the ISL bandwidth with regular network traffic, the network traffic takes priority. If there is not enough capacity for the remote SPAN traffic, the switch drops it. Knowing that the SPAN port arbitrarily drops traffic under specific load conditions, what strategy should users adopt so as not to miss frames? According to Cisco, “the best strategy is to make decisions based on the traffic levels of the configuration and when in doubt to use the SPAN port only for relatively low-throughput situations.”

Hubs? How about them?

Hubs can be used for 10/100 access but they have several issues that one needs to consider. Hubs are really half duplex devices and only allow one side of the traffic to be seen at a time. This effectively reduces the access to 50% of the data.

The half-duplex issue often leads to collisions when both sides of the network try to talk at the same time. Collision loss is not reported in any way, and the analyzer or monitor does not see the data. The bigger problem is that if a hub goes down or fails, the link it is on is lost. As such, hubs no longer fit as an acceptable, reliable access technology; they do not support Gigabit or above access and should not be considered.

Today’s “REAL” Data Access Requirements

To add more complexity and challenges to SPAN port as a data access technology, consider the following:

  • We have entered a much higher utilization environment with many times more frames in the network.
  • We have moved from 10Mbps to 10Gbps Full Duplex – today many have even higher rates of 40 and 100Gbps.
  • We have entered into the era of data security, legal compliance and lawful intercept, which require that we monitor all of the data and not just “sample” the data – with the exception of certain very focused monitoring technologies (e.g., application performance monitoring).

These demands will continue to grow, as we have become a very digitally focused society. With the advent of VoIP and digital video we now have revenue-generating data that is connection-oriented and sensitive to bandwidth, loss and delay. The older methods need reviewing and the aforementioned added complexity requires that we change some of the old habits to allow for “real” 100% Full Duplex real-time access to the critical data.

In summary, being able to provide “real” access is not only important for data compliance audits and lawful intercept events; it is the law. Keeping our bosses out of jail has become very high priority these days; but I guess it depends on how much you like your boss.

When is SPAN port methodology “OK”?

Many monitoring products can and do successfully use SPAN as an access technology. These are effective for low-bandwidth application layer events like conversation analysis, application flows and connection information, and for access to reports from call managers, etc., where time based or frame flow analysis is not needed.

These monitoring requirements utilize a small amount of bandwidth and grooming does not affect the quality of the reports and statistics. The reason for their success is that they keep within the parameters and capability of the SPAN port and do not need every frame for successful reporting and analysis. In other words, a SPAN port is a very usable technology if used correctly and, for the most part, the companies that use mirroring or SPAN are using it in well-managed and tested methodologies.


Spanning (mirroring) technology is still viable for some limited situations, but as one migrates to FDX Gigabit and 10 Gigabit networks, and with the demands of seeing all frames for data security, compliance, and lawful intercept, one must use “real” access TAP technology to fulfill the demands of today’s complex analysis and monitoring technologies. With today’s large bandwidths, the TAP should feed an advanced and proactive filtering technology for the clearest of views!

If the technology demands are not reason enough, consider that network engineers can then focus their infrastructure equipment on switching and routing, rather than spending their valuable resources and time setting up SPAN ports or rerouting data access.

In summary, the advantages of TAPs compared to SPAN/mirror ports are:

  • TAPs do not alter the time relationships of frames – spacing and response times are especially important with real-time protocols like VoIP and Triple Play analysis, including FDX analysis.
  • TAPs do not introduce any additional jitter or distortion nor do they groom the flow, which is very important in all real-time flows like VoIP/video analysis.
  • TAPs pass VLAN tags, which SPAN ports do not normally pass – a SPAN limitation that can lead to falsely detected issues and difficulty in finding VLAN problems.
  • TAPs do not groom data nor filter out physical layer errored packets.
  • Short or large frames are not filtered/dropped.
  • Bad CRC frames are not filtered.
  • TAPs do not drop packets regardless of the bandwidth.
  • TAPs are not addressable network devices and therefore cannot be hacked.
  • TAPs have no setups or command line issues so getting all the data is assured and saves users time.
  • TAPs are completely passive and do not cause any distortion even on FDX and full bandwidth networks.
  • TAPs do not care if the traffic is IPv4 or IPv6; they pass all traffic through.


So should you use a TAP to gain access to your network frames? Now you know the differences, and it is up to you to decide based on your goals!

The four main types of TAPs, as provided by Garland Technologies, are:

Breakout TAPs are the simplest type of TAP. In their most basic form they have four ports. The network traffic travelling in one direction comes in port A and is sent back out port B unimpeded. Traffic coming from the other direction arrives in port B and is sent back out port A, also unimpeded. The network segment does not “see” the TAP. At the same time the TAP sends a copy of all the traffic to monitoring ports C & D of the TAP. Traffic travelling from A to B in the network is sent to one monitoring port and the traffic from B to A is sent out the other, both going to the attached tool.
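The port mapping described above can be captured in a toy model (port names follow the description, not any particular vendor's labeling):

```python
def breakout_tap_egress(ingress_port):
    """Toy model of a four-port breakout TAP's port mapping.

    Returns (network_egress, monitor_copy): A-to-B traffic passes through
    unimpeded and is copied to monitor port C; B-to-A traffic passes through
    and is copied to monitor port D.
    """
    mapping = {"A": ("B", "C"), "B": ("A", "D")}
    if ingress_port not in mapping:
        raise ValueError("a breakout TAP has only two network ports")
    return mapping[ingress_port]

print(breakout_tap_egress("A"))  # -> ('B', 'C')
print(breakout_tap_egress("B"))  # -> ('A', 'D')
```

The key property is that the network path (A↔B) is independent of the monitor copies, which is why the segment does not “see” the TAP.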

IMPORTANT: Make sure the TAP incorporates a failsafe feature. This will ensure that if the TAP were to lose power or fail, the network will not be brought down as a result.

Aggregating TAPs provide the ability to take network traffic from multiple network segments and aggregate, or link bond, all of the data to one monitoring port. This is important because you can now use just one monitoring tool to see all of your network traffic. With the addition of filtering capability in the TAP, you can further enhance your tools’ efficiency by sending only the data each tool needs to see.

Regeneration TAPs facilitate taking traffic from a single network segment and sending it to multiple ports. This allows you to take traffic from just one point in the network and send it to multiple tools, so different teams in your company – security, compliance, or network troubleshooting – can see all the data at the same time for their own requirements. This eliminates team contention over available network monitoring points.

Bypass TAPs allow you to place network devices that need to be installed inline – IPS/IDS, data leakage prevention (DLP), firewalls, content filtering, and other security devices – into the network while removing the risk of introducing a point of failure. With a bypass TAP, failure of the inline device, reboots, upgrades, or even removal and replacement of the device can be handled without taking down the network. In applications requiring inline tools, bypass TAPs save time and money and reduce network downtime.

In the next part I will review and compare VACLs, RSPAN and Cloud TAPs.

Want more on TAPs, SPAN ports, even comparative tests and Sharkfest classes? Visit

To read an excellent paper on Full Duplex TAP basics go to:

Here’s a little bit about Tim:

Tim O’Neill - The “Oldcommguy™”
Technology Website -
Committee Chairman for Cyber Law Enforcement training and Cyber Terrorism
For Georgia State Senator John Albers
Please honor and support our Troops, Law Enforcement and First Responders!
All Gave Some – Some Gave All – All deserve our Respect and Support!

What Keeps the Network Monitoring Team Awake at Night, and How Ixia Anue’s AFM16 Can Help


The hype surrounding the iPhone 5 release demonstrates one thing – the seemingly unlimited demand from users to do more with their mobile devices is nowhere near satiated. With more and more use of cloud networks, application delivery, and networks-over-the-Internet, the promise and problems of Big Data paradigms will become obvious and well known to all operators – challenges that include capture, storage, search, sharing, analysis, and visualization.

Big Data is large amounts of information that exceed the demands of a typical network because of the size and speed of the data traveling over the network. Big Data is different from traditional IT in many ways, but it still requires monitoring. Key focus areas when managing Big Data are application behavior, network performance, and security. The measure of network monitoring lies in its ability to optimize the network traffic sent to monitoring tools, improving the effectiveness and performance of those tools.

Network monitoring is critical to network visibility in the enterprise data centers and the IP-based segments of telecommunication service provider networks. The importance of monitoring is reflected in the large investments large enterprises and service providers make in monitoring tools and the staff to manage them. These network monitoring teams face several challenges, including the following:

  1. Demand for higher data rates outpaces the ability of monitoring tools to keep up
  2. Complying with strict privacy regulations
  3. Increased scrutiny of network performance
  4. Containing the cost of network monitoring


1. Demand for Higher Data Rates Outpaces Gains in Monitoring Tool Performance

With networks growing in speed and little budget to upgrade tools, network engineers are looking for ways to get better performance from their existing monitoring tools. Some of the issues that drain tool performance include:

More than 50% of the packets arriving at a monitoring tool could be duplicates. Also, some tools need only the packet header for analysis, in which case most of the data that arrives at the tool is useless. The challenge is to remove the performance-robbing packets from the traffic before they reach the monitoring tool.
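Deduplication of this kind can be sketched in software (packet brokers do the equivalent at line rate in hardware; the hash choice and windowing scheme below are illustrative, and real implementations also mask fields that legitimately differ between copies, such as TTL, before hashing):

```python
import hashlib
from collections import OrderedDict

def dedupe(packets, window=1024):
    """Drop duplicate packets whose digest was seen within a sliding window."""
    seen = OrderedDict()              # digest -> None, in arrival order
    unique = []
    for pkt in packets:
        digest = hashlib.sha256(pkt).digest()
        if digest in seen:
            continue                  # duplicate within the window: drop it
        unique.append(pkt)
        seen[digest] = None
        if len(seen) > window:
            seen.popitem(last=False)  # evict the oldest digest
    return unique

# Two SPAN sources feeding copies of the same flow produce many duplicates:
stream = [b"pkt1", b"pkt1", b"pkt2", b"pkt1", b"pkt3", b"pkt2"]
print(dedupe(stream))  # -> [b'pkt1', b'pkt2', b'pkt3']
```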

2. Complying with Strict Privacy Regulations

Businesses that handle sensitive user information are obligated (by SOX, HIPAA, PCI, etc.) to keep such data secure. Some tools provide the ability to trim sensitive data from packets before they are analyzed or stored. However, this comes at the expense of precious computing and tool bandwidth resources. The challenge is to offload the privacy function from tools so they can focus all resources on the analysis and storage for which they were intended.
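The trimming idea (often called packet slicing) is simple: keep the first N bytes of headers and discard the payload before the packet reaches the tool. A minimal sketch, assuming a 54-byte cutoff that covers typical Ethernet/IP/TCP headers (the actual cutoff is configurable and deployment-specific):

```python
def trim_packet(packet, header_len=54):
    """Keep only the first header_len bytes of a packet (packet slicing).

    Offloading this from the monitoring tool both hides sensitive payload
    data and shrinks the bandwidth the tool must absorb.
    """
    return packet[:header_len]

frame = b"\x00" * 54 + b"sensitive-payload-bytes"   # headers + payload
sliced = trim_packet(frame)                          # payload removed
print(len(sliced))  # -> 54
```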

3. Increased Scrutiny of Network Performance

Network performance is under great scrutiny in some network applications, such as high-frequency trading. For such applications, there are purpose-built monitoring tools that can time each packet as it traverses the network. These latency-sensitive tools depend on timing the packets as close to the source as possible. However, such proximity implies exclusive access to the network, which is not practical. The challenge is to deliver both the packet and its timing data to the latency-sensitive tools without compromising access for other network monitoring tools.

4. Containing the Cost of Network Monitoring

Network engineers are under pressure to do more with less, including in network monitoring. Successful network monitoring teams not only find ways to save costs, they also find smart ways to leverage their current investment.

There is a rich selection of tools on the market to monitor IP networks. However, in certain parts of the IP-based service provider networks, most of these tools are rendered useless, because the IP traffic is encapsulated using MPLS, GTP, and VNTag. The challenge is to find a way to expose the tunneled IP traffic so widely available tools, including tools the organization already owns, can be deployed.

Ixia Anue has just introduced its Anue Advanced Feature Module 16, which provides advanced packet processing technologies to eliminate redundant network traffic, secure sensitive data and enhance monitoring tool effectiveness. It is the industry’s first high-density, high-capacity advanced packet processing solution designed specifically to enhance network and application performance at large enterprises and data centers.

To learn more about Ixia’s Anue AFM16 and the Anue Net Tool Optimizer, stop by booth #531 at Interop New York, currently taking place at the Javits Convention Center. At the event, Ixia’s Larry Hart and Chip Webb will lead an industry conversation titled “Big Data, Big Visibility, Big Control” on Oct. 3 at 2 p.m. ET.

For more information on the AFM16 module, see the press release

Automatic MTTR


Forrester Highlights Anue NTO Network Monitoring Switch as a Core Technology for Every Data Center

Recently, a potential customer in the financial industry shared the challenges they face when trying to quickly diagnose and fix problems within their data center network. I listened intently as they explained how determining root cause could drag on for weeks and in some cases months. Fortunately, we were able to help them by implementing our Automated Response Technology. You can see a good example of how it works by watching this brief video.


In the video you will see how, together, the LogMatrix NerveCenter and the Anue Net Tool Optimizer™ (NTO), along with your existing network monitoring tools and management systems, can significantly improve network reliability and reduce network Mean Time to Repair (MTTR). This is accomplished by intelligently collecting network traffic and automatically routing it to the right monitoring tool when network problems or anomalies occur. When NerveCenter detects a problem, it alerts the Anue NTO to direct the affected network traffic to a particular monitoring tool. When the anomaly no longer exists, the data capture or monitoring can be stopped automatically as well.
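The trigger/teardown loop can be sketched as a pair of event handlers. Everything here is illustrative – the function names, alarm IDs, and rule fields are hypothetical, not the NerveCenter or NTO APIs:

```python
# Toy model of automated response: on an alarm, steer the affected traffic
# to a tool; when the alarm clears, remove the steering rule so the tool
# isn't flooded needlessly. All names are illustrative, not vendor APIs.

active_rules = {}

def on_alarm(alarm_id, affected_subnet, tool_port):
    """NerveCenter-style trigger: start forwarding matching traffic."""
    rule = {"match_subnet": affected_subnet, "forward_to": tool_port}
    active_rules[alarm_id] = rule
    return rule

def on_clear(alarm_id):
    """Alarm cleared: stop the targeted capture."""
    return active_rules.pop(alarm_id, None)

r = on_alarm("cpu-spike-42", "10.1.2.0/24", "P07")
print(r["forward_to"])   # -> P07
on_clear("cpu-spike-42")
print(active_rules)      # -> {}
```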

Businesses and Organizations Rely on Networks

Optimize Your Network Visibility: the Anue Net Tool Optimizer™ helps improve network visibility and protect your investments in network monitoring tools.

Network traffic and application traffic problems can be difficult and time-consuming to troubleshoot, especially if the problem is intermittent. Unresolved network issues result in unsatisfied users, customers, and management. Businesses rely on networks performing. When networks don’t work, or don’t work optimally, not only are users frustrated, but operations and sales can be negatively impacted as well.


An automated approach to network monitoring can reduce the time it takes to identify and resolve network and application issues. Intelligent data gathering is critical when troubleshooting network problems. Watching the video was a real eye-opener for the customer because previously they had no idea it could be so easy to obtain the right data to solve their network problems. I’m just happy we were able to help.

Interop 2012, Anue – Life in the Fast Lane

I’ve been to a lot of Interops, but Interop 2012 Las Vegas was by far the most exhilarating. I joined Anue over a month ago after a 12-year stint leading product development teams at Dell. Dell was definitely life in the fast lane. Now I’m living life in the faster lane! Just think: one month in for me, and we announce that Ixia will be acquiring Anue. As a result, our booth was a hotbed for the inquisitive.

The visionary CEO and management team at Ixia are making an amazing move with this acquisition, which will take Ixia from the pre-deployment arena to a player in production network optimization solutions for data centers, cloud providers, telecoms, and service providers. While enterprises have often used Ixia in their lab environments, this extends their reach into the production network side of the enterprise in one bold move.

From Anue’s standpoint, the acquisition is going to quickly make us a strong contender in international markets. Our technology is easy enough to understand – we deliver the right data to the right tools at the right time. Yet, not enough have heard about this market. Well, they will now, as Ixia’s reach, both in the US and in other international markets, is extensive and will benefit Anue.

Anue Systems Interop Booth 2012

On top of that, Anue’s CTO, Chip Webb, had a session on the Big Data track Monday. It was interesting – the reality is that you can’t do Big Data without a high-performance, secure network. Anue provides the visibility that enables Big Data to work well. And, yes, Big Data really is a different world. Big Data actually makes Anue’s Net Tool Optimizer technology even more important. Not only are security and performance monitoring important with Big Data, but the movement of applications, trend analysis, and business intelligence necessitates application behavior monitoring. One thing that struck me about Interop 2012 overall is a refreshed enthusiasm for IT technologies that deliver business value. To do that you have to be living life in the faster lane – seems to me Anue fit right in.

Anue Big Data Presentation at Interop

Interop Las Vegas: Enterprise Cloud Summit - Big Data


Our Anue Systems CTO, Chip Webb, will be interviewed by Jeremy Edberg, Lead Cloud Reliability Engineer from Netflix, at the Enterprise Cloud Summit – Big Data at Interop next week.

Big Data is a technology that is emerging fast, due to its extreme business value. People are really excited about its potential. Big Data really is different. I like this description of Big Data from O’Reilly Radar: “Big data is data that exceeds the processing capacity of conventional database systems. The data is too big, moves too fast, or doesn’t fit the strictures of your database architectures. To gain value from this data, you must choose an alternative way to process it.”

Enterprise Cloud Summit - Big Data at Interop Las Vegas

Workloads are bigger, application architecture is different, and applications can be really sophisticated, such as genetic research (protein folding), wind tunnel simulation (one of my personal favorites), stock trading, and web log analysis (think tens of millions of people visiting websites for targeted advertising). Organizations have been collecting logs for a long time, but now they have a framework to analyze this mass of data and make money from it.

What Is Apache Hadoop?

Whether it is your own implementation of Hadoop on your own infrastructure, or you are using tools like Karmasphere to give analysis capabilities to non-technical users, there is a lot of application work (MapReduce, Pig, Hive, and others) and network work (how your clusters talk, and troubleshooting), and you want to make sure that your data is secure, since it is usually sensitive data.

Three distinct monitoring areas emerge with Big Data: application behavior, network, and security. The tough part is that different groups will be doing monitoring using different tools. Without a network monitoring switch between the data center production network and the monitoring and security tools, a lack of network data access points will force compromises between the different groups, resulting in suboptimal monitoring, which can lead to outages and incidents.

Don’t compromise – learn more in the Anue Systems Net Tool Optimizer 5288 Product Overview.

The network monitoring switch aggregates and filters data from across the network so that any number of monitoring tools can get exactly the data they need – no more, no less. Instead of worrying about limited network access points (SPANs for port mirroring, and TAPs) and painstakingly prioritizing the monitoring requirements of different groups and tools, you don’t need to compromise. The different groups and tools responsible for monitoring Big Data – application behavior, network, and security – can all get what they need WITHOUT SACRIFICE, using a network monitoring switch.

Please consider attending Enterprise Cloud Summit – Big Data. Chip’s time on the agenda is 2:45 p.m., May 7, in Mandalay Bay, Lagoon D. Stop by the Anue Systems booth #527.