Cloud security: Don’t cut your own hair

 

Several of Anue’s fastest-growing customers are cloud providers. As the transition from traditional in-house computing to cloud computing takes place, one common objection raised is information security. The really interesting thing about that objection is that experience is proving the opposite to be true.

In fact, Symantec’s 2011 State of Cloud Survey found that organizations are conflicted about security, rating it as both a top goal and a top concern with respect to moving to the cloud. Eighty-seven percent of respondents are confident that moving to the cloud will not hurt, and may even improve, their security status. At the same time, the cloud was identified as a top concern for potential risks, including malware, hacker-based theft and loss of confidential data.

The Cloud Security Alliance’s 2010 publication, Top Threats to Cloud Computing v1.0, identifies the following potential threats:
- Abuse and Nefarious Use of Cloud Computing
- Insecure Application Programming Interfaces
- Malicious Insiders
- Shared Technology Vulnerabilities
- Data Loss/Leakage
- Account, Service & Traffic Hijacking

My theory is this: while cloud computing DOES introduce some additional business considerations, on balance it greatly improves IT security for organizations, especially non-technical SMBs.

Cloud providers take security incredibly seriously. If you don’t believe me, try to get a tour of a cloud provider data center. You will be lucky if they even tell you the city the data center is in. If you do get a tour, it might require vetting, including a check of your criminal record and background.

Managed cloud providers also hire some serious security talent. Unlike an SMB, where the IT guy may have the luxury to think about security between urgent, business-driven tasks, cloud providers typically have teams of full-time security professionals. Many of these folks come from organizations with three letter acronyms that are extremely security-conscious, and they aren’t shy about investing money in security. Their day job is to proactively monitor network behavior, and they have many tools (such as Anue’s NTO) at their fingertips to do so.

When a cloud server is attacked, cloud providers have the necessary technology to isolate and contain the situation, proactively protecting their customers. Any network behavior that deviates from normal patterns is identified and sets off alarms, often provoking automated responses and setting off further investigation by the security team.


Cloud providers are diligent about maintaining secure infrastructure, whereas an SMB is unlikely to be able to keep on top of maintenance. Most incidents involve known vulnerabilities that the IT group just hasn’t had a chance to patch. As Casper Manes says in his (excellent) post on patch management on The Hacker News, “I’ve spent most of the past decade in information security, with a pretty big focus on incident response. It never ceases to amaze me how many security incidents (pronounced hacks) customers suffer as a result of unpatched systems.” The SMB is in a particularly bad position here, with limited resources and too much work for network administrators in the first place. Cloud providers, on the other hand, have the skilled personnel, automation and tools they need to do a great job on patch management.

It’s like cutting your own hair – a job usually best left to professionals.

Standards and How They Prevent “Dropped” Mobile Calls

 

I interviewed Chip Webb, Anue’s CTO, for this blog. He had just returned from WSTS, the Workshop on Synchronization and Timing in Telecommunications, held in Colorado.

 

Chip is an expert who participates in ITU SG15 / Q13 – shorthand for the International Telecommunication Union’s Study Group 15, Question 13. The ITU is a public-private partnership organization within the United Nations and has three main sectors: ITU-T for telecom, ITU-R for radio communication and ITU-D for developing countries. Study Group 15 within ITU-T deals with telco optical transport and access technologies, and Q13 is a sub-committee, or “Question,” that focuses on timing and synchronization of telco networks.

 

ITU membership represents a cross-section of the global information and communications technology (ICT) sector – from the world’s largest manufacturers and carriers to small, innovative players working with new and emerging technologies, to leading R&D institutions and academia. ITU membership includes 193 countries and over 700 private-sector entities and academic institutions. The ITU is headquartered in Geneva, Switzerland, and has 12 regional and area offices around the world.

More than 300 experts from around the world attend SG15 plenary meetings in Geneva, which are held every nine months. Q13 usually holds two smaller, “interim,” meetings between the plenaries, at which a smaller, more focused group of approximately 30 experts meet to discuss contributions and shape future standards for timing and synchronization.

Read more about the ITU here

Chip explains the need for synchronization like this: power is essential to telecom networks – without power, the systems won’t work. It is easy to understand the need for power because everyone uses electricity and understands what happens when it fails. In the same way, proper synchronization is essential to modern telecom networks, but it takes a little more explanation because most people don’t encounter the need for microsecond-level accuracy in their day-to-day lives. Even Olympic athletes aren’t timed that precisely.

Chip explains the speed of wireless networks: “Let’s think about the speed of light. Most people know that the speed of light is really, really fast and that nothing can travel faster than light. In fact, light and radio waves travel about one foot in a billionth of a second. In metric, that’s about one meter in three billionths of a second. So the next time you see a one-foot ruler, you should think ‘that’s a nanosecond’ – or one billionth of a second. Now, think about the progress of technology over the last few decades. Today’s microprocessors operate at unimaginably high speeds – measured in billions of clock cycles per second. Both the speed of light and the speed of microprocessors are things in our lives that are so fast it is hard to understand them in everyday terms. It is hard to imagine things at such extreme scales. Wireless networks also operate at speeds so high that they are hard to comprehend.”
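To put those scales in perspective, here is a quick back-of-the-envelope calculation (plain arithmetic, nothing Anue-specific) of how far light travels in one nanosecond and how many cycles a 3 GHz processor completes in that same time:

```python
# Back-of-the-envelope: how far does light travel in one nanosecond?
SPEED_OF_LIGHT_M_PER_S = 299_792_458   # metres per second (exact, by definition)
NANOSECOND = 1e-9                       # seconds

distance_m = SPEED_OF_LIGHT_M_PER_S * NANOSECOND
distance_ft = distance_m / 0.3048       # metres -> feet

print(f"Light travels {distance_m:.3f} m ({distance_ft:.3f} ft) in 1 ns")
# ~0.300 m, i.e. roughly one foot per nanosecond, or one metre in ~3.3 ns

# A 3 GHz processor completes about 3 clock cycles in that same nanosecond
cycles_per_ns = 3e9 * NANOSECOND
print(f"A 3 GHz CPU completes {cycles_per_ns:.0f} cycles per nanosecond")
```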

Chip gave me a simple real-world analogy to help explain why synchronization is so important for wireless networks. Imagine you are flying to a distant country. Chances are good that to get there, you will need to make a connection at a “hub” airport. If one of your flights is delayed, you might miss a connection at the “hub.” That would be a serious disruption to your trip. Similarly, as you travel down a highway using your mobile phone, it communicates wirelessly with one base station after another. The handoff process from one base station to the next is analogous to making a flight connection at a “hub” airport. If the timing of one base station is incorrect, your call might be dropped. While flight delays are measured in minutes or hours, wireless network synchronization is measured in tiny fractions of a second – nanoseconds or microseconds.

So without proper synchronization, communication equipment may operate on the wrong frequency or at the wrong time. This can lead to poor-quality calls or even complete loss of service. Data hungry devices, like Apple’s iPhone and the new iPad, are driving the need for wireless data services. Wireless service providers are upgrading their networks to support this growing demand with new wireless protocols such as Long Term Evolution (LTE). The synchronization requirements for LTE networks are, in some cases, even more stringent than the older systems they replace.

Along with the upgrade to LTE, wireless service providers are also upgrading their backhaul networks to newer Ethernet and TCP/IP technology and replacing their existing synchronization networks. These new backhaul networks, which connect the base stations to the central switching equipment, use new synchronization protocols and techniques such as Synchronous Ethernet (SyncE) and Precision Time Protocol (PTP).
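To give a flavor of how PTP estimates clock offset across a packet network, here is a highly simplified sketch in Python. The timestamps are hypothetical, and the calculation assumes a symmetric path delay, which real deployments cannot take for granted.

```python
# Simplified PTP (IEEE 1588) offset/delay estimate from one Sync / Delay_Req exchange.
# t1: master sends Sync          t2: slave receives Sync
# t3: slave sends Delay_Req      t4: master receives Delay_Req
# Assumes a symmetric path delay; real networks must deal with asymmetry and queuing.

def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # estimated one-way path delay
    return offset, delay

# Hypothetical timestamps in seconds (slave clock running 1.5 microseconds fast,
# with a 50 microsecond path delay)
offset, delay = ptp_offset_and_delay(t1=100.000000000,
                                     t2=100.000051500,
                                     t3=100.000200000,
                                     t4=100.000248500)
print(f"offset ~ {offset*1e6:.2f} microseconds, delay ~ {delay*1e6:.2f} microseconds")
```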

At Q13, Chip has been working with other experts to develop metrics and test methods for these new synchronization networks. Stay tuned for Chip’s upcoming blog series, which will introduce this ongoing work, and discuss “Lucky Packets”, which don’t have anything to do with gambling in Las Vegas or Monte Carlo!  View Chip’s presentation on Slideshare.

How a Cosmetics Industry Retailer uses a Network Monitoring Switch

For this blog I interviewed Pat Malone, Anue’s VP of US Sales.


When most people think about a cosmetics company, they usually don’t think about the worldwide distribution facilities or the mobile transaction handling required to support them. They probably don’t think about the data network behind the manufacturing and product delivery engine. As Pat points out, like telcos, government agencies and financial enterprises, a large part of the company’s value is its data: knowledge of its customers’ preferences, its partners, and the available inventory in distribution centers around the world.

With high transaction volumes and relatively low individual item costs, customer satisfaction and partner support are paramount.

Anue 5288 Net Tool Optimizer™ (NTO)

Anue recently installed some of our high-end network monitoring switches at a cosmetic and personal care company based in the US. The challenge was to economically gain visibility into the network and increase the performance of their applications. The company relies on an extended global sales force and needs to track millions of transactions – which determine the compensation of their sales associates and the brand loyalty of their clients.

When a trouble ticket is received from a customer or employee, the company needs to be able to go back in time and analyze what happened. For that reason, they use a Computer Associates (CA) GigaStor, which provides long-term storage and enables historical transaction analysis. The GigaStor has a capacity of up to 576 TB of data. The combination of the Anue network monitoring switch and the GigaStor makes the CA products more valuable: our client now has a complete view of all traffic, not just one or two monitoring points.

Pat points out that our client is a large organization with a complex network that includes optical TAPs on approximately 50 network links (a mixture of both 1G and 10G, internal and external) and they needed full-time monitoring visibility. Their challenge was to figure out how to aggregate the data from all of the TAPs and provide it to their GigaStor. They needed to be able to filter out packets that were irrelevant (such as backup traffic) to avoid saving unnecessary data in the GigaStor, which would shrink their back-in-time analysis window.
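To see why filtering out irrelevant traffic matters, here is a rough, purely illustrative calculation of how long a 576 TB capture store lasts at different sustained capture rates. The traffic rates below are hypothetical, not the customer’s actual numbers.

```python
# Rough retention-window estimate for a 576 TB capture appliance.
# Traffic rates below are hypothetical and purely illustrative.

CAPACITY_TB = 576
CAPACITY_BITS = CAPACITY_TB * 1e12 * 8          # decimal terabytes -> bits

def retention_days(sustained_gbps):
    seconds = CAPACITY_BITS / (sustained_gbps * 1e9)
    return seconds / 86_400

# Capturing everything, including (say) 2 Gbps of backup traffic:
print(f"{retention_days(6.0):.1f} days at 6 Gbps sustained")   # ~8.9 days
# After filtering out the irrelevant backup traffic:
print(f"{retention_days(4.0):.1f} days at 4 Gbps sustained")   # ~13.3 days
```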

Anue Systems was able to address their needs with our new high-density network monitoring switch – the 5288 NTO. With this technology, we allowed them to aggregate traffic flows from all 50 TAPs and deliver them to their GigaStor, as well as to other tools, such as intrusion detection, that support monitoring and security. An added bonus of the project has been better insight into customer experience, which they measure with Tealeaf, their Customer Experience Management solution.

According to Pat, Asia is their fastest-growing customer market. Due to the success in the US, they are now deploying network monitoring switches in their Far East data center.

LTE and Mobile Carriers

For this blog, I interviewed Kevin Przybocki, one of Anue Systems’ founders. We talked about Long Term Evolution (LTE).


If you watch even a modest amount of TV or Hulu, it’s likely you’ve heard about 4G. Mobile carriers talk about 4G networks quite a bit in TV ads because they are 10 times faster than their predecessor, 3G networks. A type of 4G wireless network known as LTE is used by mobile carriers to deliver the next step in user experience. You can read up on the technical details here. Although there is some quibbling about the 4G requirements, that will be resolved with a future release called LTE Advanced.


In any case, according to Kevin, the average mobile customer does not care about the technical details – they care about Quality of Experience (QoE) and service level. It is a benefit that LTE is 10 times faster, but we don’t want service quality problems such as dropped calls or videos that won’t stream – no matter how many G’s are associated with the service.

So Kevin believes this is where companies like Anue come into play. What we provide is a network monitoring switch – an innovation in network management and monitoring – that allows performance monitoring tools to get exactly the network data they need for analysis. With the network monitoring switch, mobile carriers can aggregate traffic from scarce network access ports, then filter and deliver the data required for analysis to large numbers of performance and security monitoring tools.

The network monitoring switch’s “magic sauce,” according to Kevin, is that it filters the data and de-duplicates redundant packets. This is critical for effective and efficient data analysis and troubleshooting – instead of being flooded with data packets that may or may not apply to its interests, the monitoring tool gets just what it needs. If the tool is monitoring VoIP, it only gets VoIP traffic versus everything else that might be traversing the network segment.
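Conceptually, de-duplication means recognizing that the same packet has arrived more than once (for example, because it was tapped at several points in the network) and forwarding only a single copy to the tool. The NTO does this in hardware at line rate; the snippet below is only a simplified software illustration of the idea, with a hypothetical 50 ms look-back window.

```python
import hashlib
import time

# Simplified illustration of packet de-duplication: drop copies of a packet that
# has already been seen within a short time window. The real NTO does this in
# hardware at line rate; the 50 ms window here is a hypothetical example.

DEDUP_WINDOW_S = 0.050
_seen = {}   # packet digest -> time last seen

def is_duplicate(packet, now=None):
    now = time.monotonic() if now is None else now
    digest = hashlib.sha1(packet).digest()   # identical payload => identical digest
    last = _seen.get(digest)
    _seen[digest] = now
    return last is not None and (now - last) <= DEDUP_WINDOW_S

packets = [b"RTP seq=1", b"RTP seq=1", b"RTP seq=2"]   # second packet is a duplicate copy
forwarded = [p for p in packets if not is_duplicate(p)]
print(forwarded)   # only one copy of the duplicated packet is forwarded
```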

Kevin points out another interesting feature of a network monitoring switch – its ability to load balance, which is particularly handy as carriers move to Higher Speed Ethernet (HSE). Whereas most networks today run at 1G or 10G, the future is 40G/100G. This introduces the problem of not being able to monitor HSE networks because the tools haven’t caught up yet. With the network monitoring switch, you can extend the life of your existing tools by allowing them to share the load of the higher-speed networks.

The dogs in Kevin’s analogy illustrate load balancing; as he pointed out, the stick represents an HSE network that neither dog would be capable of carrying on its own. Moving away from the dog analogy, the network monitoring switch allows multiple tools to share the monitoring workload, so even 1G/10G-capable tools can work together to provide monitoring for 40G networks, for example.
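A minimal sketch of that idea (the tool names, port counts and hashing scheme below are illustrative assumptions, not the NTO’s actual algorithm): traffic from one high-speed link is spread across several lower-speed tools, and hashing on the flow 5-tuple keeps every packet of a given conversation on the same tool so the analysis stays coherent.

```python
import zlib

# Illustrative flow-aware load balancing: spread traffic from one 40G link
# across four 10G-capable tools. Hashing the flow 5-tuple keeps all packets
# of a conversation on the same tool. (Not the NTO's actual algorithm.)

TOOLS = ["tool-A", "tool-B", "tool-C", "tool-D"]

def tool_for_flow(src_ip, dst_ip, src_port, dst_port, proto):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return TOOLS[zlib.crc32(key) % len(TOOLS)]

print(tool_for_flow("10.0.0.5", "10.0.1.9", 51000, 5060, "UDP"))
print(tool_for_flow("10.0.0.5", "10.0.1.9", 51000, 5060, "UDP"))  # same tool as above
print(tool_for_flow("10.0.0.7", "10.0.2.3", 44321, 443, "TCP"))
```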

This concludes my interview with Kevin, which gives you a high-level overview of the network monitoring switch and how it can help mobile carriers keep up with increasing customer demands and expectations as network technologies change.

To read more, please visit us to learn all the technical details.

Fifth Annual Global State of the Network Study


Key statistics from the study done by Anue partner Network Instruments:

  • Moving apps to the cloud: 60% anticipate half of their apps will run in the cloud within 12 months
  • Video is mainstream: 70% will implement video conferencing within a year
  • Bandwidth demand driven by video: 25% expect video will consume half of all bandwidth in 12 months
  • Chief application challenge: 83% were most challenged by identifying the problem source
  • Increased bandwidth demands: 33% expect bandwidth consumption to increase by more than 50% in next two years

It was especially interesting to me to read about Performance and Bandwidth Management: 

As applications become more complex and tiered, the challenge of resolving service delivery issues grows. Eighty-three percent of respondents said the largest application troubleshooting challenge was identifying the problem source, and more than two-thirds of respondents predicted network traffic demands would increase by 25% to 50% within two years.

View the whole study:
http://www.networkinstruments.com/assets/pdf/statenetwork_study_2012.pdf

How does your network team deal with the challenge of quickly locating the problem source?  Join Anue and Network Instruments with partner M&S Technologies for the Gaining Data Center Visibility webinar on April 18, where you’ll learn strategies and tips for:

  • Improving visibility in your datacenter
  • Developing cost-effective network monitoring
  • Streamlining problem resolution

Space is limited, so register today.

Gloom, Doom and the Chief Cloud Officer (CCO)

I was having lunch with a friend and telling him about some of the conversations I had at the 2012 RSA Conference in San Francisco. The attitude seemed to be, “the war is over and the good guys lost.” While I do believe an SMB trying to do its own information security is like a child armed with a broken toothpick fighting a pack of grown thugs with AK-47s, I don’t think a defeatist attitude is the best idea. Still, there was a palpable feeling of dread that the “bad guys” are getting more evil faster than the good guys can ever hope to react. Here is a somewhat humorous collection of woeful quotes collected by NetworkWorld at RSA:

Click here to read NetworkWorld RSA quotes

Some of my favorites are:

“People in our line of work have been going through hell.”

Art Coviello, executive chairman, RSA

Or

Check this one out: “Security moves from failure to failure.”

Whitfield Diffie, on the Zen of security.

Add to this the fact that SMBs are not bastions of great security practices.  It is not uncommon to have privileged server passwords written on a whiteboard in plain sight for everyone’s convenience.

Third-party cloud computing is becoming the only way SMBs can attain reasonable information security. It’s funny: way back when cloud computing emerged, security was the big worry. IT professionals were concerned about the lack of protection for their data in the cloud. The reality turned out to be far different – current commercial cloud offerings provide world-class security that SMBs, even conscientious ones, can’t achieve on their own.

Now, large enterprises can and do take security seriously.  I don’t need to explain to anyone that this doesn’t stop large enterprises from being exploited.  In fact, it’s becoming hard to find a large enterprise that hasn’t had an embarrassing security or privacy “incident.”  Those are just the ones that are made public – it’s probably like roaches where there are 100 for every one you see.

Here is an interesting argument that risk management types were out in force at RSA: http://www.readwriteweb.com/enterprise/2012/03/redrawing-the-battle-lines-wha.php. These are business types interested in architecting secure solutions from the outset rather than “fixing” them later. It’s also a more pragmatic way to think about information security – kind of like accepting that the good guys lost and figuring out how that is going to be OK.

Enter the need for a Chief Cloud Officer (CCO), responsible for managing multiple third party cloud vendors, monitoring of performance and security and ownership of service agreements.  It’s not a completely crazy idea.  In fact, if you Google “Chief Cloud Officer” you will find this: http://money.cnn.com/galleries/2011/fortune/1106/gallery.csuite_executives_future.fortune/3.html

The CCO will be a business guy more than a tech geek.  He will need a team of people to help him do things like:

  • Define and implement cloud security policy
  • Monitor network performance and security
  • Monitor application performance
  • Bill cloud vendors when they do not meet their SLAs
  • Bill cloud vendors when they have security incidents
  • Manage cloud vendor relationships
  • Deal with security and privacy issues and incidents
  • And so on

 

This creates a new category of jobs in IT security. That is the cheerful scenario – a grimmer version of this future has cloud management reporting into General Counsel. That would mean a lawyer would essentially be running IT! But if you think about cloud security issues, they are going to become more and more a matter of business and legal concern.

Let’s put that aside for the moment.  This CCO notion introduces an opportunity for companies like Anue Systems to enter into a new market allowing multi-tenant cloud network security monitoring.  We can solve the problem of lack of visibility into cloud implementations.  It is an intriguing idea, because it allows enterprises to use all the security and performance monitoring tools they have now and give up very little logical control.  The only control they lose is physical control, which is not that interesting any more now that everything is becoming virtualized.

Contrast this with the cowboy mentality rampant in enterprises now – lines of business running around spinning up virtual machines at will and engaging with cloud third parties as they like, with no concern for policy or governance.  At least this way, the security team will be able to establish and enforce a proper policy, and continue to add value with security and compliance expertise.

An even bigger reason than Green to reduce energy consumption in the data center

According to the Forrester report, Updated Q3 2011: Power and Cooling Heat Up the Data Center, energy costs make up approximately 70% of the operating costs of a data center facility. Perhaps more interestingly, the report states that the data center rack is becoming a bottleneck to forward progress as operators push for increased density, lower energy consumption and greater space efficiency.

Data center racks aren’t the most socially shared topic around, but it is interesting that with all of our impressive technology, the rack itself may be hampering our progress in achieving improved density.

Standard rack cooling has limits due to airflow constraints in data centers, as confirmed by the Forrester research. A standard rack can mount equipment that draws in excess of 25 kW when populated with dense servers or blade technology, yet in practice it is difficult to cool a rack that dissipates more than 8 kW due to airflow limitations. This often forces data centers to become space-inefficient, wasting rack space in order to achieve adequate heat dissipation.

Imagine explaining to your boss why your data center racks are only one third full, but you still need to buy more in order to accommodate additional equipment. This would clearly qualify as a “difficult conversation.”
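The “one third full” figure follows directly from the numbers above (a rough estimate based on the 25 kW draw of a fully populated rack and the 8 kW practical cooling limit):

```python
# Rough rack-utilization estimate from the figures above.
cooling_limit_kw = 8.0     # practical airflow-limited heat dissipation per rack
dense_load_kw = 25.0       # power draw if the rack were fully populated with dense gear

usable_fraction = cooling_limit_kw / dense_load_kw
print(f"Only about {usable_fraction:.0%} of the rack can be populated")  # ~32%, i.e. one third
```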

With these requirements in mind, Anue Systems designed its 5288 Net Tool Optimizer™ (NTO) as a high-density, energy-efficient and space-saving solution for network monitoring in data centers. The 5288 NTO provides 1G, 10G and 40G Ethernet support with a non-blocking architecture and 640 Gbps of total bandwidth in a 2U form factor. Its 64 10G ports can be used for network connections or monitoring tool connections, with a total power consumption of 260 watts. That’s about 4 watts per port.

In a 40G network, the 5288 NTO can provide 16 40G ports for network connections or monitoring tool connections. In addition, network monitoring switches like the 5288 NTO are typically installed in data center racks, and at 260 watts or less, depending on the network configuration, the 5288 NTO contributes only a small amount of heat to be dissipated in the rack.

So with an airflow-dictated budget of 8,000 watts for a 16U rack, the 5288 NTO in its 2U form factor consumes only about 3.2% of the energy budget that can be dissipated in the rack, according to the Forrester research on standard data center racks. A comparable network monitoring switch from a well-known competitor of Anue’s requires 895 watts to power 96 ports and occupies 14U of rack space.
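Working through the numbers quoted above:

```python
# Power comparison using the figures quoted in this post.
nto_watts, nto_ports, nto_ru = 260, 64, 2
competitor_watts, competitor_ports, competitor_ru = 895, 96, 14
rack_budget_watts = 8_000          # airflow-dictated dissipation budget cited above

print(f"5288 NTO:   {nto_watts / nto_ports:.1f} W/port, "
      f"{nto_watts / rack_budget_watts:.2%} of the rack budget, {nto_ru}U")
print(f"Competitor: {competitor_watts / competitor_ports:.1f} W/port, "
      f"{competitor_watts / rack_budget_watts:.2%} of the rack budget, {competitor_ru}U")
# 5288 NTO:   4.1 W/port, 3.25% of the rack budget, 2U
# Competitor: 9.3 W/port, 11.19% of the rack budget, 14U
```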

Monitoring Virtual Data Centers: It’s Business as Usual Now

Virtualization is wonderful. It is green, it saves space, and it saves money.

There is a flip side to all this goodness. While highly efficient, virtualization creates a major problem for network engineers managing the performance and service level of the network. That problem is a lack of data access for monitoring and debugging.  To date, virtualized environments have been veritable “black holes” from a network engineering perspective.

In vSphere 5.0, VMware greatly enhanced the vNetwork Distributed Switch (VDS) with NetFlow support and Port Mirroring – the ability to SPAN virtual traffic out to the physical world. This allows monitoring in a virtual environment, both intra-host and inter-host.

What does this mean?

Network engineers can now use existing network monitoring and security tools in virtualized environments.

With this new feature, monitoring tools now have visibility into both the physical and virtual. Monitoring can be set up at Ingress or Egress to the VM.  If you want to monitor traffic going out of a virtual machine towards the VDS, it’s Ingress traffic.

There’s a really nice blog on the VMware site that offers how-to information on setting up the vDS: http://blogs.vmware.com/networking/2011/08/vsphere-5-new-networking-features-port-mirroring.html.

There’s also a nice video that shows Wireshark monitoring VM-VM traffic using the vDS:  http://www.ntpro.nl/blog/archives/1825-Video-How-to-setup-a-vSphere-5-Port-Mirror.html

Monitoring Optimization for Virtual Switches

While network engineers have been struggling to get visibility into virtualized data centers, the need for monitoring, compliance and security has actually  increased.  Anue Systems offers a network monitoring switch, the Net Tool Optimizer (NTO), which provides improved network visibility by aggregating data for network performance and security tools.  VMware’s new port mirroring functionality allows the Anue NTO to combine both physical and virtual data for a holistic network view.

Let’s have a look at how this might work.

Configuring the Anue control panel to aggregate input from the VDS

This is the main diagram: I set up a VDS network port (P03).



Then I configure the VDS network port.

I can set up specific filter criteria.

Just like in physical networks, filtering is important. The NTO’s sophisticated filtering capabilities make it possible to deliver just the data needed for analysis to network tools.

One problem we see with port mirroring in a virtual switch is the generation of redundant packets. The NTO offers line-rate packet de-duplication, which is the ideal solution to this problem.  The NTO also provides packet trimming, which helps enhance security by removing unnecessary payload before delivering data to security and monitoring tools.
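Packet trimming simply truncates each packet after the protocol headers the tools actually need, so sensitive payload never reaches the monitoring or security tool. The sketch below is a simplified software illustration (it assumes an untagged Ethernet frame carrying IPv4/TCP; the NTO performs trimming in hardware and handles many more cases):

```python
# Simplified packet trimming: keep Ethernet + IPv4 + TCP headers, drop the payload.
# Assumes an untagged Ethernet frame carrying IPv4/TCP; real hardware also handles
# VLAN tags, IPv6, UDP, tunnels, etc.

ETH_HEADER_LEN = 14

def trim_packet(frame: bytes) -> bytes:
    ip_header_len = (frame[ETH_HEADER_LEN] & 0x0F) * 4            # IPv4 IHL field, in bytes
    tcp_offset = ETH_HEADER_LEN + ip_header_len
    tcp_header_len = ((frame[tcp_offset + 12] >> 4) & 0x0F) * 4   # TCP data offset field
    return frame[:tcp_offset + tcp_header_len]                    # headers only, payload removed

# Example: 14-byte Ethernet header + 20-byte IPv4 header (IHL=5) + 20-byte TCP header
# (data offset=5) + 100 bytes of payload shrinks to 54 bytes of headers.
frame = bytes(14) + bytes([0x45]) + bytes(19) + bytes(12) + bytes([0x50]) + bytes(7) + bytes(100)
print(len(trim_packet(frame)))   # 54
```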

So, with virtualization now “Business as Usual”, we have a holistic view of the physical and virtual, and a way to get just the right data to the right monitoring, debugging and security tools.

Nuts and Bolts of Performance Testing

Performance testing has always been a part of software testing, whether explicitly stated or not. The world of software testing is becoming smarter and more mature, and the changing landscape of computing, with an industry-wide paradigm shift toward SOA and web-based applications, has brought performance testing to the forefront. The expectation from this kind of testing is measurable numbers.

Performance testing is about measuring the performance of an application under test (AUT), whether the application is an embedded, desktop, or distributed enterprise application. Enterprise applications and architectures, however, are where the prime focus of performance testing lies. The expectation is to measure the AUT’s performance numbers and to ensure that they conform to expectations.

Goals of Performance Testing
The business goal of performance testing is to measure the application’s performance and ensure that the numbers conform to the Service Level Agreements (SLAs). Goals can be internal (if the application is an in-house project) or external (when SLAs define the objectives).

External Goals
The external goal is conforming to the Service Level Agreements (SLAs). At the highest level, SLAs consist of the following parameters (whether or not explicitly stated):

  • Application response time
  • Application throughput
  • Maximum number of concurrent users
  • Resource utilization in terms of various performance counters, for example CPU, RAM, network I/O and disk I/O
  • Soak tests (see the note below) under varied workload patterns, including normal load conditions, excessive load conditions and conditions in between; this can include increases in the number of users, the amount of data and so on

Internal Goals
Application crash: this translates into a condition where the application either hangs or stops responding to requests. Symptoms of the breaking point include 503 errors with a “Server Too Busy” message, and errors in the application event log indicating that the ASP.NET worker process recycled because of potential deadlocks. Other internal goals are understanding the symptoms and causes of application failure under stress conditions, and verifying recoverability options – whether or not the application recovers after a crash.

Note: soak testing is about measuring application performance over the long test runs one would typically expect in a real production/live environment.

More importantly, the goal is to ensure that there is no data loss when the application crashes and that the application recovers gracefully, and to document known issues/bugs in the AUT.

Performance Objectives
Most performance tests depend on a set of predefined, documented and agreed-upon performance objectives. Knowing the objectives from the beginning helps make the testing process more efficient: you can evaluate your application’s performance by comparing it with your performance objectives. One should not just run ad-hoc tests at random without any specific objectives (remember the old principle: how many bugs were discovered just by executing the test cases?).

As a rule of thumb, the following are the performance expectations from the Application Under Test:

  1. Application response time. This is the most fundamental parameter and should be second nature to the performance tester. Application response time is the amount of time taken to respond to a request. You can measure response time at the server or at the client, as follows.
  2. Response time at the server. This is the time taken by the server to complete the execution of a request. It does not include the client-to-server latency, which adds the additional time for the request and response to cross the network.
  3. Response time at the client. The latency measured at the client includes the request queue, the time taken by the server to complete the execution of the request, and the network latency. You can measure this latency in various ways. Two common approaches are the time taken for the first byte of the response to reach the client (time to first byte, TTFB) and the time taken for the last byte of the response to reach the client (time to last byte, TTLB). Generally, you should test this using various network bandwidths between the client and the server.

By measuring latency, you can gauge whether your application takes too long to respond to client requests.
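Here is a minimal client-side sketch of measuring TTFB and TTLB with Python’s standard library (the URL is a placeholder, and dedicated load-generation tools measure this far more precisely):

```python
import time
import urllib.request

# Client-side latency sketch: time to first byte (TTFB) and time to last byte (TTLB).
# The URL is a placeholder; real performance tests use dedicated load-generation tools.
URL = "http://example.com/"

start = time.perf_counter()
with urllib.request.urlopen(URL) as response:
    # Note: urlopen() returns once the response headers have arrived, so this
    # TTFB is approximate.
    first_byte = response.read(1)
    ttfb = time.perf_counter() - start
    response.read()                  # drain the rest of the response body
    ttlb = time.perf_counter() - start

print(f"TTFB: {ttfb*1000:.1f} ms, TTLB: {ttlb*1000:.1f} ms")
```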

Application Throughput
Throughput is the number of requests the application under test can serve per unit of time. It is measured in terms such as transactions per second or orders per second. Throughput varies largely with the type and volume of load applied. Examples include credit card transactions per second, the number of concurrent users, download volumes and so on.

A large factor, however, is also the network connection. For example, say there are 1,000 users, each requesting an average of 5 KB of page data every 5 minutes. The required throughput would be 1,000 × (5 × 1024 × 8) / (5 × 60) bits per second.
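Working that example through (same assumptions as above: 1,000 users, 5 KB per request, one request per user every 5 minutes):

```python
# Throughput from the example above: 1,000 users, 5 KB per page request,
# one request per user every 5 minutes.
users = 1_000
request_bits = 5 * 1024 * 8        # 5 KB expressed in bits
period_s = 5 * 60

throughput_bps = users * request_bits / period_s
print(f"{throughput_bps:,.0f} bits/s (~{throughput_bps/1000:.1f} kbps)")
# 136,533 bits/s, roughly 137 kbps
```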

______________________

Article Source: http://EzineArticles.com/

Virtualization: Monitoring Is A Huge Challenge

Earlier this week, Network World Senior editor Denise Dubie published a very insightful article, “How far has virtualization come?“. In it, she reviews the results of three annual surveys conducted at Interop, all focused on virtualization technology deployments and adoption in the data center.

The survey, which has been conducted by Network Instruments for the past two years (NetQoS and NI conducted it together in 2008), revealed some very interesting data points.

  1. Virtualization, as we would have anticipated, has picked up a great deal of momentum both in the data center and on the desktop.
  2. The primary reason for adopting virtualization is cost savings. Yet respondents also revealed that implementation costs are too high.
  3. The biggest virtualization-related challenges are a lack of visibility into data streams, an inability to secure the infrastructure, and a lack of monitoring tools optimized for virtualized environments.

Based on the interplay of the first two key findings above, it is clear that high implementation cost is not a long-term problem. Obviously, today’s decision makers can see the forest for the trees, so to speak: the expected long-term cost savings will easily dwarf the high cost of entry.

What jumps out at me the most is the monitoring problem. Monitoring is not a challenge unique to virtualized environments, but virtualization itself is causing some head-scratching about how to do it. And really, for data centers that are already burdened with escalating tool costs, shrinking staffs and a shortage of physical access points, it’s no wonder some of them might feel desperate to fix the monitoring quandary.

The reality of the situation is that there is a very straightforward answer to this problem: monitoring optimization. Monitoring optimization involves the adoption of an aggregation and replication switch, which can solve all of these problems if it is designed with a fully integrated GUI. The key is to think in terms of monitoring specific data flows or traffic, not specific links, switches or access points.

With servers and other hardware switching over to “virtual” servers and hardware, one of the biggest issues is knowing where to place tools in the physical infrastructure. If you stick with the “old way” of placing a tool everywhere monitoring is required, you’ll need more tools than any reasonable enterprise can afford to buy or manage!

What you really need is a “virtual tool farm” to go with your virtual infrastructure. Don’t get me wrong; you’ll still be using the same hardware/software-based tools you have already deployed. But with monitoring optimization, you can extend a single tool to cover multiple data streams, and you can share traffic from a single data stream with any or all of the tools you desire. The best part is that you can do it all in a fully integrated GUI without ever having to use a time-intensive, cryptic IOS-like CLI (likely a proprietary command language, as most of our competitors offer) to manage your most challenging filtering – it’s all drag-and-drop and Windows-style form entry.

If you count yourself among those who said monitoring is their biggest headache, I urge you to learn more about the Anue 5200 Series Net Tool Optimizer. Then reach out to us via our Contact Me form, and we’ll show you exactly how much easier this can be; you will see a positive impact on your tool usage and staff effort immediately upon deploying our solution.