
Will Technology Replace Security Analysts?

Posted on September 15, 2014 by Joshua Goldfarb in Security

Recently, at a round table discussion, I heard someone make the statement, “In five years, there will be no more security analysts. They will be replaced by technology.” This is not the first time I have heard a statement along these lines. I suppose that these sorts of statements are attention grabbing and headline worthy, but I think they are a bit naïve to say the least.

Taking a step back, it seems to me that this statement is based on the belief or assumption that operational work being performed within the known threat landscape of today can be fully automated within five years. I don’t know enough about the specific technologies that would be involved in that endeavor to comment on whether or not that is an errant belief. However, I can make two observations, based on my experience, which I believe are relevant to this specific discussion:

• Operational work tends to focus on known knowns

• The threat landscape of today, both known and unknown, will not be the threat landscape of tomorrow

The work that is typically performed in a security operations setting follows the incident response process of: Detection, Analysis, Containment, Remediation, Recovery, and Lessons Learned. Detection is what kicks off this process and what drives the day-to-day workflow in a security operations environment. If we think about it, in order to detect something, we have to know about it. It doesn’t matter if we learn of it via third party notification, signature-based detection, anomaly detection, or any other means.

The bottom line is that if we become aware of something, it is by definition “known”. But what percentage of suspicious or malicious activity that may be present within our organizations do we realistically think is known? I don’t know of a good way to measure this, since it involves a fair amount of information that is unknowable. I do, however, think we would be naïve to think it is anywhere near 100%.

If we take a step back, the ramifications of this are quite striking. In essence, most of the work we are performing today involves what is likely a mere fraction of what ought to concern us. Even if technology could automate all of today’s security operations functions within five years’ time, that still leaves quite a bit of work undone.

I think we would also be naïve to think that the threats of today, both known and unknown, will be the threats of tomorrow. If I think back five or ten years, I’m not sure how many of us foresaw the degree to which intrusions involving large-scale payment card theft would become an almost regular occurrence. Granted, theft of sensitive information has been an issue for quite some time, but not to the degree that it has been in the recent past. Payment card theft is now a threat that many organizations take very seriously, whereas five or ten years ago, it may have been a threat that only certain specific organizations would have taken seriously. This is merely an example, but my main point here is that we can’t view today’s threat landscape as a base upon which to build predictions and make assertions for the future.

In my experience, analysts can provide unique value that may not be obvious to those who have not worked in the role. For those who don’t know, I worked as an analyst for many years before moving over to the vendor side. It is from that experience that I make this point.

In a mature security operations setting, there will be a certain workflow and process. Some organizations will be in a more mature place, while other organizations will be in a less mature place. Regardless of where an organization finds itself, there will always be room to improve. Alongside performing the tasks required by the process, a good analyst will make the process better and improve the maturity of the organization. This can happen in many ways, but here are a few different approaches that I have often seen:

• Improving existing alerting

• Identifying automation opportunities

• Performing gap analysis

• Implementing new alerting

Any work queue will have both signal (true positives) and noise (false positives). A mature, efficient security operations program will have a high enough signal-to-noise ratio to allow a reasonable chance at timely detection of incidents. Regardless of the signal-to-noise ratio, the alerting that populates the work queue can always be improved. As the people most familiar with the ins and outs of the various alerts, analysts play an important role here. The analyst can provide unique perspective on tuning and improving alerts to make them less noisy and to ensure they keep up with the times.
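To make the tuning point concrete, here is a minimal sketch of how an analyst might measure per-rule precision from historical alert dispositions. The rule names, disposition labels, and data source are hypothetical, not taken from any particular product.

```python
from collections import Counter

# Hypothetical alert dispositions exported from a ticketing system:
# each record is (rule_name, disposition), where disposition is
# "true_positive" or "false_positive" as judged by an analyst.
alerts = [
    ("suspicious_powershell", "true_positive"),
    ("suspicious_powershell", "false_positive"),
    ("dns_tunneling_heuristic", "false_positive"),
    ("dns_tunneling_heuristic", "false_positive"),
    ("dns_tunneling_heuristic", "true_positive"),
]

def rule_precision(alerts):
    """Return {rule: (precision, total_alerts)} so noisy rules stand out."""
    totals = Counter(rule for rule, _ in alerts)
    trues = Counter(rule for rule, disp in alerts if disp == "true_positive")
    return {rule: (trues[rule] / count, count) for rule, count in totals.items()}

# Rules with low precision and high volume are the best tuning candidates.
for rule, (precision, count) in sorted(rule_precision(alerts).items(),
                                       key=lambda kv: kv[1][0]):
    print(f"{rule}: precision={precision:.2f} over {count} alerts")
```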

It is certainly true that some alerts follow a nearly identical sequence of events each time they are vetted, qualified, and investigated. These cases are good candidates for automation, but they won’t identify themselves. A skilled analyst is needed to identify those repetitive manual tasks best suited for automation. Automation is a good thing and should be leveraged whenever appropriate, but it will never replace the analyst.

With automation comes newly liberated analyst cycles. Those cycles can and should be used to hunt, dig, and perform gap analysis. Hunting and digging help to identify unknown unknowns – network traffic or endpoint activity for which the true nature is unknown. Gap analysis serves to identify points within the organization where proper network and endpoint telemetry may not exist. All these activities help provide a window into the unknown. After all, today’s unknown may be tomorrow’s breach.
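As a rough illustration of the gap-analysis idea, the sketch below compares a hypothetical asset inventory against the hosts actually reporting endpoint telemetry and lists the silent ones; a real environment would pull both sets from a CMDB and a SIEM rather than hard-coded values.

```python
# Hypothetical inputs: an asset inventory and the set of hosts that have
# reported endpoint telemetry to the SIEM in the last 24 hours.
asset_inventory = {"web01", "web02", "db01", "hr-laptop-17", "build-srv"}
hosts_reporting_telemetry = {"web01", "db01", "build-srv"}

# Hosts in the inventory that are not sending telemetry are visibility gaps;
# anything happening on them is, by definition, unknown.
telemetry_gaps = sorted(asset_inventory - hosts_reporting_telemetry)

for host in telemetry_gaps:
    print(f"no endpoint telemetry from {host} in the last 24h")
```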

When unknown unknowns are discovered, they should be studied to understand their true nature. This process turns them into new known knowns. And it is from this pile that new alerting is continuously formulated. The analyst is an invaluable resource in turning unknown unknowns into known knowns. Based on my experience, there is no shortage of unknown unknowns waiting to be investigated.

A good analyst is hard to find and is a critical resource within a mature security operations function. Although it may be tempting to envision a world where the analyst has been fully automated, this does not seem particularly reasonable. Rather, the work of the analyst can and must evolve over time to keep pace with the changing threat landscape.

Joshua Goldfarb (Twitter: @ananalytical) is Chief Security Strategist of the Enterprise Forensics Group at FireEye and has over a decade of experience building, operating, and running Security Operations Centers (SOCs). Before joining nPulse Technologies (acquired by FireEye) as its Chief Security Officer (CSO), he worked as an independent consultant, advising numerous clients in both the public and private sectors at strategic and tactical levels. Earlier in his career, Goldfarb served as Chief of Analysis for US-CERT, where he built from the ground up, and subsequently ran, the network, physical media and malware analysis/forensics capabilities. Goldfarb holds a B.A. in Physics and an M.Eng. in Operations Research and Information Engineering from Cornell University.


OpenDNS Adds Targeted Attack Protection to Umbrella Security Service

Posted on July 9, 2014 by Eduard Kovacs in Security

OpenDNS has enhanced its cloud-based network security service Umbrella with new capabilities designed to protect organizations against targeted attacks, the company announced on Tuesday.

The company says its monitoring systems are capable of detecting malicious traffic from the first stages of a potential targeted attack by comparing customers’ traffic to activity on OpenDNS’s global network. By providing predictive intelligence on the attackers’ network infrastructure, OpenDNS enables organizations to block attacks before any damage is caused.

Many organizations are capable of identifying single-stage, high-volume cyberattacks, but the “noise” generated by these types of attacks makes it more difficult to detect highly targeted operations, the company explained.

According to OpenDNS, its services address this issue by providing real-time reports on global activity and detailed information for each significant event. Enterprises can use the reports to identify ongoing or emerging targeted attacks based on whether the threats have a large global traffic footprint or are being detected for the first time.
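The article does not describe how OpenDNS implements this internally, but the underlying idea (flagging destinations that are either newly seen globally or queried by very few customers) can be sketched roughly as follows; every field name and threshold here is an assumption for illustration only.

```python
from datetime import datetime, timedelta

# Hypothetical per-domain statistics: when the domain was first resolved on
# the global network, and how many distinct customers have queried it.
domain_stats = {
    "update.example-cdn.com": {"first_seen": datetime(2013, 2, 1), "customers": 90_000},
    "x7f3a.badactor-staging.net": {"first_seen": datetime(2014, 7, 8), "customers": 2},
}

def looks_targeted(stats, now, new_window=timedelta(days=7), prevalence_floor=25):
    """Flag domains that are brand new globally or queried by very few customers."""
    newly_seen = now - stats["first_seen"] <= new_window
    low_prevalence = stats["customers"] < prevalence_floor
    return newly_seen or low_prevalence

now = datetime(2014, 7, 9)
for domain, stats in domain_stats.items():
    if looks_targeted(stats, now):
        print(f"review {domain}: possible targeted-attack infrastructure")
```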

In order to make it easier for security teams to investigate an incident, OpenDNS provides information on the users, devices and networks from which malicious requests are sent. Information on the attackers’ infrastructure can be useful for predicting future threats and for blocking components that are being prepared for new attacks. 

“Enterprises today are challenged to keep up with the volume of attacks that are targeting their networks. Not only is the efficacy of today’s security tools declining, but when they do identify a threat they lack the context that is critical to blocking it,” said Dan Hubbard, CTO of OpenDNS. “The ability to determine the relevance and prevalence of an attack is key to prioritizing response, remediating infected hosts, and understanding the scope of the threat.”

The new capabilities are available as part of the Umbrella service based on a per user, per year subscription.


Insider vs. Outsider Threats: Can We Protect Against Both?

Posted on June 26, 2014 by Marc Solomon in Security

Media reports affirm that malicious insiders are real. But unintentional or negligent actions can introduce significant risks to sensitive information too. Some employees simply forget security best practices or shortcut them for convenience reasons, while others just make mistakes.

Some may not have received sufficient security awareness training and are oblivious to the ramifications of their actions or inactions. They inadvertently download malware, accidentally misconfigure systems, or transmit and store sensitive data in ways that place it at risk of exposure.

Personnel change too. Companies hire new employees, and promote and transfer individuals to new roles. They augment staff with temporary workers and contractors. New leadership comes onboard. Many of these insiders require legitimate access to sensitive information, but needs differ with changing roles, tenure, or contract length. It’s extremely challenging to manage user identities and access privileges in this environment, not to mention the people themselves. A person who was once trustworthy might gradually become an insider threat – while another becomes a threat immediately, overnight.

New technologies and shifting paradigms further complicate matters. The evolving trends of mobility, cloud computing and collaboration break down the traditional network perimeter and create complexity. While these new tools and business models enhance productivity and present new opportunities for competitive advantage, they also introduce new risks.

At the same time, you can’t ignore outsider threats which are responsible for the lion’s share of breaches. Since 2008, the Verizon Data Breach Investigations Report has shown that external actors – not insiders – are responsible for the vast majority of the breaches they investigated. Some of the top reasons why breaches were successful include: weak credentials, malware propagation, privilege misuse, and social tactics. These are precisely the types of weaknesses that trace back to the actions (or inactions) of insiders.

The question isn’t whether to focus on the insider or outsider threat. The question is how to defend against both – equally effectively.

What’s needed is a threat-centric approach to security that provides comprehensive visibility, continuous control, and advanced threat protection regardless of where the threat originates. To enable this new security model, look for technologies that are based on the following tenets:

Visibility-driven: Security administrators must be able to accurately see everything that is happening. When evaluating security technologies, breadth and depth of visibility are equally important to gain knowledge about environments and threats. Ask vendors if their technologies will allow you to see and gather data from a full spectrum of potential attack vectors across the network fabric, endpoints, email and web gateways, mobile devices, virtual environments, and the cloud. These technologies must also offer depth, meaning the ability to correlate that data and apply intelligence to understand context and make better decisions.

Threat-focused: Modern networks extend to wherever employees are, wherever data is, and wherever data can be accessed from. Keeping pace with constantly evolving attack vectors is a challenge for security professionals and an opportunity for insider and outsider threats. Policies and controls are essential to reduce the surface area of attack, but breaches still happen. Look for technologies that can also detect, understand, and stop threats once they’ve penetrated the network and as they unfold. Being threat-focused means thinking like an attacker, applying visibility and context to understand and adapt to changes in the environment, and then evolving protections to take action and stop threats.

Platform-based: Security is now more than a network issue; it requires an integrated system of agile and open platforms that cover the network, devices, and the cloud. Seek out a security platform that is extensible, built for scale, and can be centrally managed for unified policy and consistent controls. This is particularly important since breaches often stem from the same weaknesses regardless of whether they result from insider actions or an external actor. This constitutes a shift from simply deploying point security appliances that create security gaps to integrating a true platform of scalable services and applications that are easy to deploy, monitor, and manage.

Protecting against today’s threats – whether they originate from the inside or the outside – is equally challenging. But they have a lot in common – tapping into many of the same vulnerabilities and methods to accomplish their missions. There’s no need to choose which to prioritize as you allocate precious resources. With the right approach to security you can protect your organization’s sensitive information from both insiders and outsiders.

Marc Solomon, Cisco’s VP of Security Marketing, has over 15 years of experience defining and managing software and software-as-a-service platforms for IT Operations and Security. He was previously responsible for the product strategy, roadmap, and leadership of Fiberlink’s MaaS360 on-demand IT Operations software and managed security services. Prior to Fiberlink, Marc was Director of Product Management at McAfee, responsible for leading a $650M product portfolio. Before McAfee, Marc held various senior roles at Everdream (acquired by Dell), Deloitte Consulting and HP. Marc has a Bachelor’s degree from the University of Maryland, and an MBA from Stanford University.


Cyber Risk Intelligence: What You Don’t Know is Most Definitely Hurting You

Posted on June 20, 2014 by Jason Polancich in Security


Growing up, one of my father’s favorite sayings was “luck favors the prepared.”

I must have heard it a thousand times over the years. It was almost always spoken just after some sad scenario where I had failed to stay alert, informed and aware, thus my ending up at a loss. Sometimes a big loss. It was his belief that, if you’re always broadly observant of things that affect your life, good things have a better chance of happening to you. He has always been right.

Nowadays, I find myself applying this lesson to cybersecurity and cyberdefense.

More than just nifty tools and solutions, robust IT budgets, threat intelligence firehoses and rigid security policies, I’m learning over and over again that practical, habitual day-in/day-out awareness is invaluable at helping you avoid becoming a victim of cybercrime – and lessening the impact when cybercrime inevitably happens to you and your organization.

Cybercrime is all around us.

One day it may become second nature to stay constantly informed about cyber risks facing us and our businesses. We’re certainly not there yet. Sooner or later, we may all need to get used to the idea of constantly consuming data about our risks and vulnerabilities in order to act safer. It’s likely sooner rather than later. To really accomplish this type of awareness, though, takes the right levels of information. Not just data. In fact, we’re all awash in data. But more on that later.

What we need is high-quality cybercrime information that’s comprehensive, yet also focused and simple to digest. Information that’s current, consistent, intuitive, continuous and, most importantly, easy to draw conclusions from that have meaning specific to you, your business and the decisions you face. It’s what I call “complete context.”

And there’s more.

To truly benefit from this sort of information takes more than just the info itself. Just as my father also told me, it takes focus, effort and commitment. Every day. Something he just called “hard work.”

Current Data + Contextually-Relevant Info + Continuous Awareness + Hard Work = Practical Solutions

Of course, the familiar modern-day version of my father’s favorite is “Chance favors a prepared mind,” attributed to Louis Pasteur, the French microbiologist, father of Pasteurization, and father of the Germ Theory of Disease. For Pasteur, the saying meant that, by staying diligently informed of all things surrounding your problem space, you’ll see solutions to tough problems more quickly.

For years and years he labored at the microscope, observing, collecting data and analyzing. But it was his devotion to basic research on more than just the problem itself – and the quick delivery of practical applications based on what he learned –  that led him to his biggest breakthroughs against unseen and deadly illnesses. Eventually, thanks to Pasteur’s way of working, we developed critical medicines such as antibiotics.

Studying a problem from every angle and every level always leads to more practical solutions and quicker (re)action.

Although Pasteur labored in the medical and biological fields, his work was in many ways analogous to modern cybersecurity. Today, scientists and researchers battle similar unseen forces, all around us, making us sick in various ways. Our networks and computers and mobile devices are constantly exposed to harmful pathogens and viruses. And, with the Target breach and things like Heartbleed, real people now know these things are fatal in their own way.

But in today’s world, we seem to have gone off track a bit in trying to cure our cyber ills.

Much as in Pasteur’s day, many smart people today labor to observe, collect data and draw conclusions. However, most of them, unlike Pasteur, are not able to arrive at real practical breakthroughs that change the world.

Why is this the case?

For me, it’s mostly a simple answer:

We focus so much on looking down the barrel of individual microscopes that we get lost in all the low-level noise, which is far too focused on only a few dimensions of the problem.

Let me use Pasteur again to explain more simply.

Had Pasteur only observed the smallest bits floating around under his glass, he would’ve likely not been remembered in history. Instead, Pasteur gathered data about sick people, who they were, where they lived, how old they were, what gender, what symptoms they had, what prior illnesses they had been subject to, what their jobs were and what they had in common.

He observed animals, how they behaved, how long it took for them to become sick when they did, what they ate, where they lived and more. He even observed how rotting meat behaved, how it decomposed, how it compared to other plant and animal matter and on and on. He focused on all sides of the issue; the causes, the victims and, of course, their symptoms. Pasteur observed every facet of his problem set from high level to low, and turned basic data collection – from many dimensions at once and from all angles – into information he could use to draw practical conclusions.

Put simply, Pasteur had complete context by performing “intelligence gathering.” But, by focusing on more than just the threat itself, Pasteur was one of the first practitioners of risk analysis, or risk intelligence. It’s something we’ve only just begun to really apply to cyberdefense.

Continuous awareness of our own cyber risks compared to what’s possible and what’s happening around us right now is one of the missing pieces in current cyberdefense practices.

Today, we spend most of our cybersecurity efforts and dollars gathering massive amounts of data from millions of “microscoped” sources, but we rarely change perspectives or levels. We want to know what’s threatening us, but can’t seem to understand that the picture is much bigger. Too rarely do we push back from the lenses trained only on data sets inside our specific organizations to pick our heads up and look around.

I like to call it “cyber navel gazing.”

You see, outside the microscope, there’s just so much other useful data – mostly not being stored and analyzed – that can be turned into helpful information, then into practical solutions.

Yet, we continuously employ tens of thousands of tools, solutions and applications that comb through huge bins of raw packet data and endless streams of netflow and long-term signature repositories and terabytes of log files and interface dumps and more.

In fact, it’s as if all we do is peer through the scopes at our own micro worlds and draw conclusions that themselves lead to other tools begetting other massive piles of micro data.

Are these things all bad? Of course not. And they’re all part of fighting the fight against cyber disease. But in all of this we miss out on the bigger picture. Rarely do we store data, day in and day out, on what we’re getting hit with, how threats are occurring and what’s happening as a result. Neither are we matching that up to what our specific, individual symptoms are, who we are as targets, where we’re from, what types of companies we are, who our customers are, what technologies we’re using and on and on.

What would Pasteur say to us now if he were brought in to consult on our cyber sickness?

He’d probably just say, “Luck favors the prepared.” Then he’d tell us to start over. From the top this time.

Jason Polancich is founder and Chief Architect at SurfWatch Labs. He is a serial entrepreneur focused on solving complex internet security and cyber-defense problems. Prior to founding SurfWatch Labs, Mr. Polancich co-founded Novii Design, which was sold to Six3 Systems in 2010. In addition to completing numerous professional engineering and certification programs through the National Cryptologic School, Polancich is a graduate of the University of Alabama, with degrees in English, Political Science and Russian. He is a distinguished graduate of the Defense Language Institute (Arabic) and has completed foreign study programs through Boston University in St. Petersburg, Russia.


Mobile Ad Libraries Put Enterprise Data at Risk, Firm Says

Posted on June 4, 2014 by Eduard Kovacs in Security

Mojave Networks Introduces Mobile Application Reputation Feature

Mojave Networks has added a new feature to the company’s professional and enterprise services in an effort to help organizations minimize the risks posed by the mobile applications used by their employees.

According to the company, organizations can use the new feature to discover potential risks by analyzing data collected and transmitted from mobile apps, and create policies for data loss prevention based on the information.

The new mobile application reputation offering, which is available immediately, includes features like customizable analytics, categorization of apps by risk level, application tracking, and integration with device management and network security solutions.

“The ‘bring your own device’ (BYOD) trend is transitioning to ‘bring your own applications’ (BYOA) as users download more and more apps to share data, increase productivity and stay connected,” noted  Garrett Larsson, CEO and co-founder of Mojave Networks.

“If any application running on a mobile device connected to the network is insecure, it can put highly sensitive corporate data at risk. Our new application reputation feature can help enterprises improve their mobile security posture by eliminating the risk of insecure applications.”

The company analyzes over 2,000 mobile apps every day by tracking 200 individual risk factors in 15 different categories. In addition to static and dynamic analysis, Mojave Networks said that it uses data from real-world usage of the tested applications to determine if an application is safe.
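Mojave has not published how those factors are weighted, so the sketch below only shows the general shape of a category-weighted risk score mapped to a coarse risk level; the categories, weights, and cutoffs are invented for illustration.

```python
# Invented example: each app gets per-category risk factor counts; categories
# are weighted and the total is mapped to a coarse risk level.
CATEGORY_WEIGHTS = {"data_loss": 3.0, "ad_libraries": 2.0, "permissions": 1.0}

def risk_level(factor_counts, weights=CATEGORY_WEIGHTS):
    """Map per-category risk factor counts to a low/medium/high label."""
    score = sum(weights.get(cat, 1.0) * n for cat, n in factor_counts.items())
    if score >= 10:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

print(risk_level({"data_loss": 2, "ad_libraries": 2, "permissions": 1}))  # high
print(risk_level({"permissions": 2}))                                     # low
```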

One risk that’s particularly problematic for enterprises is when private data is collected and sent to remote Web APIs, the company warned.

“Some of the most significant risk factors affecting corporate employees and individual mobile users, such as data loss and PII collection, occur not by the application itself, but within mobile advertising libraries and other library components such as social media or analytic tools,” Ryan Smith, Mojave’s lead threat engineer, explained in a blog post.

Based on the analysis of more than 11 million URLs to which mobile apps connect, Mojave Threat Labs determined that business users connect to at least as many data-gathering libraries as consumers. During its analysis, the company found that 65% of applications downloaded by business users connect to an advertising network, and 40% of them connect to a social network API.

“It is critically important that users and IT Administrators understand what data is being collected from their devices, where it is being sent, and how it is being used. Given that the majority of the sensitive data being collected occurs within these third party libraries such as ad networks, social media APIs, and analytics tools, it is therefore important to fully understand each of the libraries included in your mobile apps,” Smith noted.

Founded in San Mateo, CA in 2011, Mojave Networks raised a $5 million round of funding in November 2013, in addition to launching a cloud-based, enterprise-grade solution that protects mobile devices starting at the network level.


Automated Traffic Log Analysis: A Must Have for Advanced Threat Protection

Posted on May 8, 2014 by Aviv Raff in Security

If there is a silver lining to the series of high-profile targeted attacks that have made headlines over the past several months, it is that more enterprises are losing faith in the “magic bullet” invulnerability of their prevention-based network security defense systems.

That is, they are recognizing that an exclusively prevention-focused architecture is dangerously obsolete for a threat landscape where Advanced Persistent Threats (APTs) using polymorphic malware can circumvent anti-virus software, firewalls (even “Next Generation”), IPS, IDS, and Secure Web Gateways — and sometimes with jarring ease. After all, threat actors are not out to win any creativity awards. Most often, they take the path of least resistance; just ask Target.

As a result of this growing awareness, more enterprises are wisely adopting a security architecture that lets them analyze traffic logs and detect threats that have made it past their perimeter defenses – months or possibly even years ago. It is not unlike having extra medical tests spot an illness that was not captured by routine check-ups. Even if the news is bad (and frankly, it usually is), knowing is always better than not knowing for obvious reasons.


However, while analyzing traffic logs is a smart move, enterprises are making an unwelcome discovery on their road to reliable threat detection: manual analysis is not a feasible option. It is far too slow, incomplete, and expensive, and finding qualified professionals in today’s labor market is arguably harder than finding some elusive APTs; at last look on the Indeed job board, there were over 27,000 unfilled security engineer positions in the US alone.

The average 5,000-person enterprise can expect its FW/IPS/SWG to generate over 10 gigabytes of data each day, consisting of dozens of distinct incidents that need to be processed in order to determine if and how bad actors have penetrated the perimeter. All of this creates more than a compelling need for automated analysis of traffic logs, which allows enterprises to:

● Efficiently analyze logs that have been collected over a long period of time

● Process logs at every level: user, department, organization, industry, region

● Correlate the logs with malware communication profiles that are derived from a learning set of behaviors and represent a complete picture of how malware acts in a variety of environments

● Use machine learning algorithms to examine statistical features, domain and IP reputation, DGA detection, botnet traffic correlation, and more (a toy sketch of one such statistical feature follows this list)

● Adapt by using information about different targeted and opportunistic attacks from around the world (“crowdsourcing”) in order to get a perspective on the threat landscape that is both broader and clearer

● Integrate credible and actionable threat data with other security devices in order to protect, quarantine, and remediate actual threats

● Get insight on how the breach occurred in order to aid forensic investigations and prevent future attacks
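As a toy illustration of the statistical-feature bullet above (and not any vendor's actual detection logic), the sketch below computes a character-entropy feature over domains pulled from traffic logs; long, high-entropy labels are the kind of signal DGA detectors weigh alongside reputation and traffic correlation. The thresholds and example domains are assumptions.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Character-level Shannon entropy of a domain label, in bits."""
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def dga_like(domain: str, entropy_threshold=3.5, length_threshold=15) -> bool:
    """Crude heuristic: a long, high-entropy leftmost label looks DGA-like."""
    label = domain.split(".")[0]
    return len(label) >= length_threshold and shannon_entropy(label) >= entropy_threshold

# Hypothetical domains extracted from a day's traffic logs.
for domain in ["mail.google.com", "xkqjzhw3f9t7pbl2.info", "intranet.corp.local"]:
    print(domain, "->", "suspicious" if dga_like(domain) else "ok")
```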

With this being said, does this mean that enterprises will finally be able to prevent 100% of the targeted attacks? No; there has never been a magic bullet, and this is unlikely to change in our lifetime. Any belief to the contrary plays directly into the hands of threat actors.

However, automated traffic log analysis can help enterprises reduce the number of infections, including those they do not yet know about that are unfolding in their networks right now, before the compromise becomes a breach. And considering that it only takes one successful breach to create a cost and reputation nightmare that can last for years, the question is not whether automated analysis makes sense, but rather how enterprises can hope to stay one step ahead of the bad guys without it.

Related Reading: The Next Big Thing for Network Security: Automation and Orchestration

Related Reading: Network Security Considerations for SDN

Related Reading: Making Systems More Independent from the Human Factor

Related Reading: Software Defined Networking – A New Network Weakness?

Aviv Raff is Co-Founder and Chief Technology Officer at Seculert. He is responsible for the fundamental research and design of Seculert’s core technology and brings with him over 10 years of experience in leading software development and security research teams. Prior to Seculert, Aviv established and managed RSA’s FraudAction Research Lab and worked as a senior security researcher at Finjan’s Malicious Code Research Center. Before joining Finjan, Aviv led software development teams at Amdocs. He holds a B.A. in Computer Science and Business Management from the Open University (Israel).
