Since I have covered TAPs, SPAN/monitoring switch ports, and some so-called “filtering TAP devices” that are nothing but a SPAN device in sheep’s clothing in other posts, I will explain what a VACL is and how a VACL can or cannot be used. Briefly, I see it as an expensive, complex, and extremely limited data access technology. I do not see it as cost-effective, nor as having real viability as a packet or security visualization solution!
Unstructured data accounts for as much as 80% of most companies’ network traffic and stored information. This “Big Data” traditionally took too long or cost too much to process and analyze. However, emerging Big Data initiatives stand to transform vast untapped resources from a costly storage challenge into vital business intelligence used in marketing, product development, stock trading, genetic research, and more.
Big Data promises huge gains in productivity and competitive advantage by distributing the massive workload of preparing data for analysis among large numbers of servers. To make it all work, IT departments need greater visibility into networks and applications in order to prioritize, filter, and synthesize information.
Introducing Splunk with Ixia Anue NTO
Big Data is being captured and analyzed as never before by a new generation of software vendors such as Splunk. Splunk gathers and collates massive amounts of data from disparate sources and provides the ability to search that data for information of interest. Millions and millions of data points are available for analysis.
Such searches sift through the massive amounts of Big Data. The results are invaluable for isolating network troubles. Capturing the actual source packets is also helpful for root cause analysis. However, due to the enormous number of data points gathered it can be impractical, cumbersome, and expensive to correlate and store source packets relating to every single data point.
The ability to capture packets for targeted search results is ideal, and Ixia Anue Network Visibility Solutions has developed integration for Splunk which enables such intelligent packet capture.
Using the integration, Splunk is able to signal the Anue Net Tool Optimizer (NTO) as to when, where, and what it should forward to a packet recorder tool. Search strings of interest are identified by the Splunk user, and when they occur, Splunk automatically passes the desired search string argument (e.g., IP address of rogue host) to the Anue NTO which then dynamically filters and forwards only the relevant network packets to selected packet recorder/analyzer.
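To make the mechanics concrete, here is a minimal, hypothetical sketch of what such an alert-driven handoff can look like: a Splunk alert script receives the matched field value (the rogue host IP) and turns it into a filter rule aimed at a recorder port. The field names and port label below are invented for illustration only; the real Anue NTO API is not shown here.

```python
import json

# Hypothetical sketch only: the rule fields and the "PB07" tool-port label
# are invented for illustration. A real integration would use the actual
# Anue NTO API, which is not reproduced here.

def build_capture_filter(rogue_ip, tool_port="PB07"):
    """Build a filter rule directing traffic for one host to a recorder port."""
    return {
        "name": f"splunk-triggered-{rogue_ip}",
        "criteria": {"ipv4_src_or_dst": rogue_ip},   # match either direction
        "destination_ports": [tool_port],            # packet recorder/analyzer
    }

if __name__ == "__main__":
    # Splunk passes the matched search field as an argument to the alert script.
    print(json.dumps(build_capture_filter("10.1.2.3")))
```

The point of the shape: the search result itself becomes the filter criterion, so only packets relevant to the triggering event are forwarded and stored.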
With this capability, an audit trail of the needed packets is now available (without using up unnecessary storage resources). In addition, the integration maintains a log file of the targeted captures, making it easier to find the relevant packets at a later date.
A new document, Technical Note: Integration of Anue with Splunk Overview, is available for interested parties. This document offers insight into using the integrated products for targeted searches.
While attending Interop New York a couple weeks ago, I caught myself reflecting on Interop experiences throughout my career. Since my first Interop in San Jose, CA in October, 1991 (I was only twelve at the time – really!) tons of things have changed, but many remain the same. For one, Interop was and still is a really big deal. As a huge repository for vendor-neutral technology forums, educational sessions, and wide-ranging product displays, Interop has helped us experience scads of new and cool “stuff” right there on the show room floor. In my fledgling days as a Systems Engineer, my company’s senior SEs were so emphatic that we attend Interop that we would have paid for the trip ourselves if the company wouldn’t sponsor us!
This year at Interop New York, IXIA|Anue had a presentation: Big Data, Big Visibility, Big Control by Larry Hart, Chip Webb and Todd Kaloudis of Opnet. Check it out here, or download the whitepaper on Big Data. Contrast this with Interop ‘91, where one of the coolest exhibitions was the SNMP Managed Toaster with its very own toaster MIB. This wasn’t just the original toaster from the year before, which controlled when to cook the bread; 1991’s toaster MIB had been extended to show off SNMP Get-Next requests, so a little Lego crane could put the bread into the toaster, all via network management software from the showroom floor.
Twenty-one years ago we had monitored toasters. Today, we use Network Monitoring Switches like the Net Tool Optimizer to direct traffic to multiple monitoring devices doing Application Performance Monitoring, SIEM attack detection, network diagnostics, Web Customer Experience monitoring and more. We manage localized tool farms in huge data centers or distributed multi-interface devices like LTE MMEs and probes. We have an incredibly cool Enterprise MIB of our own that won’t toast whole wheat, but will tell you how many PCI non-compliant protocol packets just floated in off your core network TAP, and will let you redirect that traffic to additional tools while changing the filter based on your conditions with our flexible API. Get-Next on that table! Quite the Interop evolution – from toasters to network clouds.
I also had the chance to talk to people from, well, everywhere. Literally. Maybe it was because of the Big Apple, or because companies understand that Interop is the place to go. Here management and engineers alike can listen and learn from the best in the industry and view products and trends in network technologies. The show is loaded with users from all walks of life, and Interop provides the valuable chance to see and speak to others doing the things they need to do too.
In ’91, Interop seemed like a lot of Silicon Valley types; today, it’s gone global and anyone who’s anyone has spread their wings with an international presence. Today, global is local. Whether it’s WebEx meetings about moving Big Data around the cloud or sitting on a marshmallow stool in our cool IXIA|Anue booth helping someone from Azerbaijan design a monitoring infrastructure to meet their needs, Interop lets people like me share experiences with others like we were next door neighbors.
Check out the Ixia booth at Interop NYC, as photographed with the nifty CamWow application. The booth featured the IXIA|Anue Net Tool Optimizer, and we announced our Advanced Feature Module 16 (AFM16) at the show.
Interop was great and NYC was friendly to us – we spoke with a lot of network engineers and data center managers, some of whom were already familiar with the network monitoring switch. Those who were not familiar were very interested, noting the power it puts at their fingertips.
Ixia Network Visibility Solutions welcomes a guest blogger today, Tim O’Neill from LoveMyTool.
Just as a doctor uses your body temperature, blood pressure, pulse rate, etc. to evaluate your health, a network manager should have parameters, statistics or other value references to quickly evaluate the health and success of their network. Referred to as benchmarking, this practice allows you to establish acceptable statistics or values that can be used to monitor the “well being” of your network. Once network levels operate outside the tested “norms”, the network manager, just like a doctor, can treat the “condition” accordingly.
Some use the OSI stack for consideration and layer differentiation, while others use the TCP/IP stack for differentiation. It really does not matter as long as you get definitive and comparative results. The graphic below shows the OSI model compared to the TCP/IP stack and associated protocols all the way up to the application layers – after all, applications are the reason we have a network in the first place.
The first level to consider is the physical, or transport, layer. To evaluate this level you must have a promiscuous NIC for the layer you are reviewing to establish your criteria for success. Some of the different physical layers are Ethernet, WAN (i.e., Frame Relay, T/E1, T/E3, cable, xDSL, etc.) and WiFi. You need to review each of these physical layers individually or at a conversion point, such as the TAP (Telecommunications Access Point) at your main Ethernet connection. This is the most common visualization point for evaluating a network.
When evaluating and establishing any measure point or stat, one must consider delta times, time of day, day of week, day of month, etc. as the evaluation points can change due to many variables. For example, maybe the accounting segment is usually very busy at the close of a month or quarter. Just as you know that your pulse rate will be higher just after you run, for your network it’s important that every evaluation point have all the necessary variables accounted for in the comparison equation.
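The reasoning above can be sketched in a few lines: group utilization samples by hour of day (or day of month, quarter-end, etc.), compute a per-bucket norm, and flag readings that stray too far from that bucket’s history. This is a generic illustration, not a product feature; the three-sigma threshold is an assumption.

```python
from statistics import mean, stdev

# Minimal sketch of time-aware baselining: samples are grouped by hour of
# day so that "busy at month close" style patterns are compared against
# like-for-like history, not a single global average.

def build_baseline(samples):
    """samples: list of (hour_of_day, utilization_pct) tuples."""
    by_hour = {}
    for hour, util in samples:
        by_hour.setdefault(hour, []).append(util)
    # Keep only hours with enough history to compute a deviation.
    return {h: (mean(v), stdev(v)) for h, v in by_hour.items() if len(v) > 1}

def is_anomalous(baseline, hour, util, n_sigma=3):
    """Flag a reading more than n_sigma deviations from that hour's norm."""
    avg, sd = baseline[hour]
    return abs(util - avg) > n_sigma * sd
```

The same bucketing idea extends to any variable the text mentions (day of week, day of month): the key is that each measurement is only ever compared against its own peer group.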
When looking at the physical layer with a real TAP for access, make sure you are using a competent datascope (i.e. Network Instruments or Garland Tech). The screen shot below contains some of the values one might want to consider at the Ethernet physical layer.
What is important in the Physical Layer?
First, if you are using a monitor port or SPAN off of a switch, you will not get all, or possibly any, of the errors, as the switch bus grooms and drops all errored frames. This can cause a number of issues. The timing and basic stats will also be offset; however, the good frame counts will be close enough for comparison.
Frame errors really belong to the second layer of the OSI stack: the data link layer is really the frame layer, but there is more to it, as you will see below.
Each physical layer will have its own special set of important values, and their relevance depends on how the information was acquired and which method was used to acquire it.
Below are some WiFi physical layer stats for comparison, from an AirPcap card from CACE Technologies (now Riverbed Technology) with Wireshark (Gerald Combs).
This is an example of the basic Wireless information from Wireshark that can be available for later comparison and evaluation.
A Flow Graphic overview from Wireshark:
The next layer to consider is the data link layer. In the TCP/IP model, the physical layer and data link layer are considered as one layer. This is acceptable as long as there is not an intermediate physical layer like WiFi. The data link layer also has many valuable statistics for consideration above the physical layer statistics, such as VLANs with channel info, subnets, IP addresses, and packet/protocol types.
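As an illustration of the kind of data link information available, here is a small sketch that parses an Ethernet frame header and pulls out the 802.1Q VLAN ID and EtherType, the raw inputs behind per-VLAN and per-protocol counts. It is a teaching sketch, not a capture tool.

```python
import struct

# Parse an Ethernet frame header, detect an 802.1Q VLAN tag, and extract
# the VLAN ID and EtherType so frames can be tallied per VLAN and protocol.

def parse_ethernet(frame):
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    vlan_id = None
    offset = 14
    if ethertype == 0x8100:                       # 802.1Q tagged frame
        tci, ethertype = struct.unpack("!HH", frame[14:18])
        vlan_id = tci & 0x0FFF                    # low 12 bits carry the VLAN ID
        offset = 18
    return {"vlan": vlan_id, "ethertype": ethertype, "payload": frame[offset:]}
```

Feeding each captured frame through a parser like this, then counting the resulting (VLAN, EtherType) pairs, gives exactly the kind of comparative statistics discussed above.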
Once you get above the physical and data links you get into the heart of the TCP/IP stack. There are any number of statistics that you can use for benchmarking from the basic to the complex. See below for examples.
Connection and protocol:
You can use any or all (although all is a bit much) of these as benchmarks for your network. You can use a SPAN/monitor port for a benchmark of who is connected to whom, and the types of protocols being used. The SPAN can also be used for many other studies, except those that are time based or those that require checking frame errors. A TAP is the best and most technically sound method for accessing your frames for benchmarking.
A very good stat on % packets from Wireshark:
So much potential … so little time!
The best starting point is to find what is important in your network from the physical layer up and create evaluation statistics to regularly monitor the health of your network.
Find the “pulse”, “temperature” etc. of your network and compare those baseline statistics against the different levels of your network, at any given time, to see if there are issues you need to address. The next step is to determine why your baseline changed.
Do not be blindfolded… get into your network, look at it as many ways as you can and find the essential weighting points. Use them regularly to evaluate and verify the success of your network while being alerted to potential issues. There are no set rules just look at your network layers with a good analyzer or even several different analyzers – from Wireshark to commercial analyzers. Filter your view with a tool like the Ixia NTO and make it your network stethoscope! Regular visits to your doctor make sense. Keeping tabs on the ongoing health of your network does too.
I wish you great success….
Here’s a little bit about Tim:
Tim O’Neill - The “Oldcommguy™”
Technology Website - www.lovemytool.com
Committee Chairman for Cyber Law Enforcement training and Cyber Terrorism
For Georgia State Senator John Albers
Please honor and support our Troops, Law Enforcement and First Responders!
All Gave Some – Some Gave All – All deserve our Respect and Support!
The hype surrounding the iPhone 5 release demonstrates one thing – the seemingly unlimited demand from users to do more with their mobile devices is nowhere near satiated. With more and more use of cloud networks, application delivery, and networks-over-the-Internet, the promise and problems of Big Data paradigms will become obvious and well known to all operators – challenges that include capture, storage, search, sharing, analysis, and visualization.
Big Data means amounts of information that exceed the capacity of a typical network because of the size and speed of the data traveling over it. Big Data is different from traditional IT in many ways, but it still requires monitoring. Key focus areas when managing Big Data are application behavior, network performance, and security. The measure of a network monitoring solution lies in its ability to optimize the traffic sent to monitoring tools, improving their effectiveness and performance.
Network monitoring is critical to network visibility in the enterprise data centers and the IP-based segments of telecommunication service provider networks. The importance of monitoring is reflected in the large investments large enterprises and service providers make in monitoring tools and the staff to manage them. These network monitoring teams face several challenges, including the following:
- Demand for higher data rates outpaces the ability of monitoring tools to keep up
- Complying with strict privacy regulations
- Increased scrutiny of network performance
- Containing the cost of network monitoring
- 1. Demand for Higher Data Rates Outpaces Gains in Monitoring Tool Performance
With networks growing in speed and little budget to upgrade tools, network engineers are looking for ways to get better performance from their existing monitoring tools. Some of the issues that drain tool performance include:
- Duplicate packets
- Inspecting data irrelevant to the tool
More than 50% of the packets arriving at a monitoring tool could be duplicates. Also, some tools only need the packet header for analysis, in which case most of the data that arrives at the tool is useless. The challenge is to remove the performance-robbing packets from the traffic before they reach the monitoring tool.
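Packet deduplication itself is conceptually simple; the hard part is doing it at line rate in hardware. As a software illustration only, the idea looks something like this: hash each packet and drop repeats seen within a window of recent digests. The window size here is an arbitrary assumption.

```python
import hashlib

# Illustrative sketch of packet deduplication, the kind of offload an
# advanced packet broker performs in hardware: hash each packet and drop
# repeats seen within a sliding window of recent digests.

def deduplicate(packets, window=1024):
    recent, order, out = set(), [], []
    for pkt in packets:
        digest = hashlib.sha256(pkt).digest()
        if digest in recent:
            continue                       # duplicate: do not forward
        recent.add(digest)
        order.append(digest)
        if len(order) > window:            # age out the oldest digest
            recent.discard(order.pop(0))
        out.append(pkt)
    return out
```

A real implementation would hash only the invariant parts of each packet (duplicates picked up at different SPAN points can differ in TTL or checksum), but the windowed-digest idea is the same.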
- 2. Complying with Strict Privacy Regulations
Businesses that handle sensitive user information are obligated (by SOX, HIPAA, PCI, etc.) to keep such data secure. Some tools provide the ability to trim sensitive data from packets before they are analyzed or stored. However, this comes at the expense of precious computing and tool bandwidth resources. The challenge is to offload the privacy function from tools so they can focus all resources on the analysis and storage for which they were intended.
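The offloaded privacy function amounts to trimming or masking bytes before packets reach the tools. A minimal software sketch, with an assumed 64-byte header length and an illustrative field position, looks like this:

```python
# Sketch of packet trimming and masking for privacy compliance: keep only
# the leading header bytes, or blank out a sensitive field, so payloads
# carrying cardholder or patient data never reach the analysis tools.
# The 64-byte header length is an assumption for illustration.

def trim_payloads(packets, header_len=64):
    """Truncate every packet to its headers before forwarding to tools."""
    return [pkt[:header_len] for pkt in packets]

def mask_field(pkt, start, length, fill=0x00):
    """Overwrite a sensitive byte range (e.g. an account number) with fill bytes."""
    return pkt[:start] + bytes([fill]) * length + pkt[start + length:]
```

Done upstream of the tools, this kind of trimming also reduces the bandwidth each tool must absorb, which ties back to the performance challenge above.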
- 3. Increased Scrutiny of Network Performance
Network performance is under great scrutiny in some network applications, such as high-frequency trading. For such applications, there are purpose-built monitoring tools that can time each packet as it traverses the network. These latency-sensitive tools depend on timing the packets as close to the source as possible. However, such proximity implies exclusive access to the network, which is not practical. The challenge is to deliver both the packet and its timing data to the latency-sensitive tools without compromising access for other network monitoring tools.
- 4. Containing the Cost of Network Monitoring
Network engineers are under pressure to do more with less, including in network monitoring. Successful network monitoring teams not only find ways to save costs, they also find smart ways to leverage their current investment.
There is a rich selection of tools on the market to monitor IP networks. However, in certain parts of the IP-based service provider networks, most of these tools are rendered useless, because the IP traffic is encapsulated using MPLS, GTP, and VNTag. The challenge is to find a way to expose the tunneled IP traffic so widely available tools, including tools the organization already owns, can be deployed.
Ixia Anue has just introduced its Anue Advanced Feature Module 16, which provides advanced packet processing technologies to eliminate redundant network traffic, secure sensitive data and enhance monitoring tool effectiveness. It is the industry’s first high-density, high-capacity advanced packet processing solution designed specifically to enhance network and application performance at large enterprises and data centers.
To learn more about Ixia’s Anue AFM16 and the Anue Net Tool Optimizer, stop by booth #531 at Interop New York, currently taking place at the Javits Convention Center. At the event, Ixia’s Larry Hart and Chip Webb will lead an industry conversation titled “Big Data, Big Visibility, Big Control” on Oct. 3 at 2 p.m. ET.
For more information on the AFM16 module, see the press release
I spent a few days at ConSec ’12 this week and heard a lot about Bring Your Own Device (BYOD). It is a rapidly growing phenomenon that enterprise security experts are grappling with. BYOD is becoming accepted by many companies of all sizes. Interestingly, it often begins when a senior executive pops by IT with an iPad or a Mac and insists on using that device instead of a corporate standard. Then the floodgates open. People tend to like the freedom of choice and the convenience of BYOD.
Security risk with BYOD
Did you know that when you access corporate email on a mobile device you own, there are countless security risks? For example, if your phone is stolen, it is surprisingly easy to gain access to all the data on the device. If you have the email password stored, well, all of your email is available to the hacker. They can steal anything and, even worse, impersonate you. In fact, a good hacker in possession of your device can decrypt your stored passwords in a matter of minutes.
If you think that a remote wipe will take care of this – think again. A remote wipe requires that the device is powered on. So, if the bad guy powers it off and removes the SIM card, remote wipe won’t be wiping anything.
If you use your device for personal purposes, you might download some fun apps and games. There is nothing that guarantees these applications are not malware. And it’s possible they behave well for 6 months and then become malware.
Employee-owned devices are extremely difficult to control or trust. The key seems to be to develop a strategy where the device is known and expected to be EVIL. Enterprise IT needs to focus on protecting what really matters – the corporate network, applications, and most of all, business-critical data.
Monitor for anomalies
Enterprises need to focus on monitoring for anomalies that can strike its key assets:
- The corporate network
- Business-critical applications
- Business-critical data
With BYOD, the risk of network contamination and information leakage significantly increases due to poorly developed or malicious apps, the increased attack surface of all of these devices and fun-loving human nature. Ixia is in the business of providing network visibility with products such as the Anue NTO, which can really help secure production networks.
In the past, IT managed users with a work-owned device, which was most likely configured and locked down. Today, IT is faced with users with as many as three devices: laptops, iPads, and smartphones, all out of their control. That is triple the devices, and all present tasty attack surfaces, plus an increase in network bandwidth requirements. Oh dear.
So, you might develop a policy that IT must control and monitor all devices that are used for business purposes. Good luck with that; the privacy and legal issues in the US get sticky. In EMEA and other regions with stricter privacy policies for their citizens, forget about it. Scenario: you have a security incident and you need to force-wipe an employee’s iPhone, and you wipe out the last picture of grandpa before he died. The jury would tear up right there.
And do you really want to deal with the drama around confiscating an employee’s personal device and invading his privacy and finding scantily-clad pictures of his fiancée? Oh dear.
The answer is to focus on securing what really matters: enterprise data, network and applications. Lock down and monitor what really counts to your business. Expect employee-owned devices to be Evil, and you will not be disappointed.
Having said all that, there is a new category of products called Mobile Device Management (MDM) that can enforce device policy, encrypt local data and secure contained partitions. It is a nascent category, but there are already over 40 companies moving in to solve mobile device security concerns. In addition, at ConSec ’12 AT&T was talking about a new technology to provide a “toggle” feature, where there are two settings – one for work purposes and one for personal purposes. With this, you might be able to effectively carry out information security practices for the device.
More to come soon…
We did a survey at the Ixia booth at VMworld ’12 to find out more about what VMware practitioners want and need from vendors like us. We surveyed over 150 people, and some of our findings were surprising given the audience: people clearly drinking the virtualization Koolaid. I have to admit, I expected far more hard-line VMware practitioners, viewing anything physical as a complete waste of time. In fact, about 20-25% of the people I spoke with were network engineers! I had several great conversations about the abyss that used to exist between network and security pros and the “VMware guys.”
The abyss is disappearing. Surprise! Not really, considering that enterprises are moving from the old days where the virtualization team was off “doing their own thing” to the new reality where networking and security pros have to be brought into the fold, because mission critical apps are being virtualized now. Gone are the days where virtualization teams held network engineers in contempt and network engineers responded by dropping a big trunk for the virtualization team and walking away from the whole situation.
Here are our survey results from VMworld 2012:
- 98% of all respondents thought visibility into VMware environments is critical to their success. This was not that much of a surprise.
- Moving forward, 82.4% of respondents plan on using a mix of physical and virtual monitoring tools, which was a surprise to me at this event – I expected a far greater number to answer purely virtual.
- A whopping 32.4% were already using the vSphere Distributed Switch, which was only introduced last year with vSphere 5.0. Only 9.4% never plan to use it, and only 23.6% were unfamiliar with it. This was a bit of a surprise, since VDS is only available with Enterprise Plus licensing, and it’s still pretty new.
- When asked if they would use a virtual TAP from a third party versus the capabilities provided by VMware and Cisco to acquire information from a virtual environment for analysis with physical tools like IDS, another surprise: only 13.5% would use a third-party vTap. In conversation, the only reason people gave for using a third party was better support.
- 84.6% saw a network monitoring switch as a critical infrastructure component for virtualization.
- Our theory was that the issue in moving packets from the VMware environment to the physical environment would hinge on the host’s physical NIC. 31.7% were concerned with utilization of the limited pNICs on a host. 41.2% were concerned with bandwidth. From conversations with practitioners, it appears that the density of VMs on hosts drove the response – if they had higher VM density they were more concerned with bandwidth. Only 8.1% of respondents didn’t plan to move packets from VMware to physical tools.
Good event for us, and I love Surprises like this!
I just got back from a Mediterranean cruise, where I went to both Greece (Athens, Corfu, Santorini and Mykonos) and Croatia (Dubrovnik), among other places. The difference between the Greek cities and Dubrovnik was remarkable. The Greek people I met seemed defeated, while the Croatians struck me as the most enterprising and optimistic people I have encountered, which shows the importance of optimism and visibility.
It seemed everyone in Dubrovnik “had a shingle out” to make money. If they had a boat, they were looking for tourists to take on a tour. If they had a house, they were looking for a boarder. If they had neither, they were looking to sell handmade crafts for a profit. What was remarkable was their resilience in a tough economy and their willingness to just work hard.
Contrast this with people in the Greek cities. They seemed defeated and despondent. There were very few entrepreneurs actively vying for the tourist dollars. The desperation and hopelessness were palpable, which is striking, given that the tours we were taking highlighted the grandeur of ancient Greek culture. Greece’s per capita income is impressive, but it appears that the statistic may be deceptive. A climate of limited visibility and perceived deception can’t be a good long-term strategy.
So why am I blogging about this? From my perspective, the Greek people have lost visibility and corruption has ensued.
A couple of months ago Ixia acquired Anue Systems for the express purpose of adding production network visibility into their portfolio of products. While we can’t solve Greece’s problems, I sure wish we could. Visibility could get rid of that “deceptive image.”
The Croatians are painfully aware of political strife, and have had their share of suffering. While they have been through significant negativity and dispute, what is remarkable is their response. It seems they choose to view the glass as half full, and to endeavor to get that glass full.
Unfortunately, for global politics, a network visibility vendor like us can’t offer the technology to give the Greeks the visibility they need. It is admittedly an oversimplification of a very complex political situation, but visibility and clarity into any situation – be it network performance or a national economy – has extreme value. Deception and lack of visibility is a poor long-term strategy.
When there is a lack of visibility, it appears corruption follows – according to Wikipedia, “Greece has the EU’s second worst Corruption Perceptions Index after Bulgaria, ranking 80th in the world, and lowest Index of Economic Freedom and Global Competitiveness Index, ranking 119th and 90th respectively. Corruption, together with the associated issue of poor standards of tax collection, is widely regarded as both a key cause of the current troubles in the economy and a key hurdle in terms of overcoming the country’s debt problem.”
As a technology vendor, we bring value to our customers by providing improved network visibility. Increased network visibility is beneficial in reducing internal enterprise politics, which can become a sort of internal corruption.
In looking at this situation and the contrast between Croatia and Greece, simply put the way to defeat useless politics and corruption is to see the facts clearly. Ixia’s network visibility solutions help network engineers and security professionals see the true facts and avoid “internal corruption.” Hopefully the Greeks will find a way to do likewise and regain some of their ancient grandeur.
Common network monitoring issues can make even the toughest member of your team cry. The continually increasing demands for a more secure network is putting a strain on IT capabilities. How can data centers increase network security without adding expensive infrastructure? The key to delivering higher quality network security without making large capital investments into IT infrastructure is to get more out of the network monitoring equipment that you already have in place.
That can be easier said than done, however. Improving network security will require a dedicated effort to collect and analyze information about network operation so that areas for improvement can be recognized and implemented. As outlined in this white paper, realizing significant network monitoring performance improvement comes with its own set of challenges.
First, network switches typically only have one SPAN port that can be used to connect network monitoring tools. This severe limitation in the ability to connect monitoring tools means that network traffic cannot be captured or analyzed in a comprehensive way because only one network monitoring tool at a time can be connected to the network switch. One monitoring tool simply cannot provide the thorough and complete network traffic analysis needed to realize significant performance improvement. It takes a full complement of monitoring tools to fully analyze how a network operates.
Network speeds continue to increase, but sometimes monitoring tools are not upgraded to keep up with the network; for example, a 10G monitoring tool connected to a 40G network link. Outdated network monitoring tools can be overwhelmed, leading to lost information that could be critical in analyzing network security.
Duplicate packets generated by SPAN ports are another common issue leading to overwhelmed monitoring tools. Many SPAN ports generate duplicate network traffic packets that are then sent to monitoring tools. This excess data causes problems for many monitoring tools, such as capture devices, significantly reducing their capability and effectiveness. It also makes it more difficult for network engineers to successfully use monitoring devices, since so much of the data consists of duplicate packets that provide no value and can lead to inaccurate reports.
Using a network monitoring switch can solve common networking issues by providing features to:
- Connect diverse monitoring tools to a single SPAN port
- Remove duplicate packets generated by the SPAN port
- Direct the right traffic to the right tool to prevent data overflows and dropped packets using filter and traffic controls
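The “right traffic to the right tool” idea boils down to rule matching. As a toy illustration only (the rule fields and tool names here are invented, not product configuration), a first-match-wins filter table can be sketched as:

```python
# Toy sketch of filter-and-direct: each rule matches on simple packet
# attributes and names the tool that should receive the traffic.
# First matching rule wins; an empty match acts as a catch-all.

RULES = [
    {"match": {"proto": "HTTP"}, "tool": "web-analyzer"},
    {"match": {"dst_port": 443}, "tool": "ssl-recorder"},
    {"match": {},                "tool": "default-ids"},   # catch-all
]

def route(packet_attrs, rules=RULES):
    for rule in rules:
        if all(packet_attrs.get(k) == v for k, v in rule["match"].items()):
            return rule["tool"]
```

In a monitoring switch this matching happens in hardware at line rate, but the logical model is the same: classify, then steer each flow to the tool that needs it.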
The right investment into improved monitoring can bring big returns and ensure you get the most out of your network monitoring efforts – painless and without tears!
Most people in business focus on application performance, results, bottom-line stuff. Security people, while they truly want to be business enablers, have to be the yang to the performance yin to balance the equation. White hat (good guy) security people are a unique breed – smart enough to be the bad guy and make a lot of money, but compelled by morals, ethics, something in their makeup, to instead choose to foil the bad guys. Sadly, in many cases they are perceived as bad guys by users for their efforts to maintain security. Put that one in the “life is unfair” category.
I attended SANS cloud security training in Austin a few weeks ago. It was taught by Vern Williams (all-around great security guy), and attended by the likes of Northrop Grumman, Veterans Affairs, and Electronic Arts, plus some consultants, and even a CPA and an attorney. However, knowing security guys, and having presented to NSA in the past, it would not surprise me if some of the attendees were not from where they said, or if they did not use their real names. Read on to understand why this is actually a good thing.
There is not an IT security soul out there who is not frustrated and appalled by the behavior of some IT users. Users do really bad things. They write down passwords, or cleverly put them in a text file named “passwords,” and, worst of all, are susceptible to social engineering, in addition to being gullible and willing to click on “OK” or a link in an email, no matter what the offer is, in order to get their jobs done. Business users are the yin. They need performance and results, stat.
While sometimes unpleasant, somebody has to put a stop to users inflicting damage on themselves and the business. Enter the security guys. Security people have a native Deny All perspective. They are the yang to the “busy bee” user.
Looking for What’s Wrong
So, to illustrate this point, I’ll use a class exercise we did evaluating a prospective cloud provider’s contract. The amazing thing was that one of the other teams remarked on the fact that our team had a slide on what was positive about the proposed contract. The thought of a positive aspect of a contract had never entered their minds.
The reason is simple: Perspective. Security guys are trained to look for what is wrong or suspicious. The only reason our team had that slide is that I’m a product management type, looking for a balanced view. Security guys should not have a balanced view. They need to relentlessly hunt for vulnerabilities, flaws, loopholes, badly written code, suspicious behavior, anomalous events, human error, cyber terrorism, exploits, evil intentions – I think you get the picture. Somebody has to do this, as the bad guys are getting more and more evil, and it’s not for kicks anymore – the bad guys are after your money and reputation.
The class reinforced my assumption that the Cloud is a Very Good thing for SMBs from a security perspective. SMBs typically view security as a part-time job for the poor guy who is maintaining the network and applications. Security needs to be a full-time job, and cloud service providers (CSPs) typically have legions of dedicated security professionals. They know what they are doing, and it’s their reputation on the line.