
Automated Traffic Log Analysis: A Must Have for Advanced Threat Protection

Posted on May 8, 2014 by Aviv Raff in Security

If there is a silver lining to the series of high-profile targeted attacks that have made headlines over the past several months, it is that more enterprises are losing faith in the “magic bullet” invulnerability of their prevention-based network security defense systems.

That is, they are recognizing that an exclusively prevention-focused architecture is dangerously obsolete for a threat landscape where Advanced Persistent Threats (APTs) using polymorphic malware can circumvent anti-virus software, firewalls (even “Next Generation”), IPS, IDS, and Secure Web Gateways — and sometimes with jarring ease. After all, threat actors are not out to win any creativity awards. Most often, they take the path of least resistance; just ask Target.

As a result of this growing awareness, more enterprises are wisely adopting a security architecture that lets them analyze traffic logs and detect threats that have made it past their perimeter defenses – months or possibly even years ago. It is not unlike having extra medical tests spot an illness that was not captured by routine check-ups. Even if the news is bad (and frankly, it usually is), knowing is always better than not knowing for obvious reasons.

Network Security Automation

However, while analyzing traffic logs is a smart move, enterprises are making an unwelcome discovery on their road to reliable threat detection: manual analysis is not a feasible option. It is far too slow, incomplete, and expensive, and finding qualified professionals in today’s labor market is arguably harder than finding some elusive APTs; at last look on the “Indeed” job board, there were over 27,000 unfilled security engineer positions in the US alone.

The average 5,000-person enterprise can expect its FW/IPS/SWG to generate over 10 gigabytes of log data each day, containing dozens of distinct incidents that need to be processed in order to determine if and how bad actors have penetrated the perimeter. All of this creates a compelling need for automated analysis of traffic logs, which allows enterprises to:

● Efficiently analyze logs that have been collected over a long period of time

● Process logs at every level: user, department, organization, industry, region

● Correlate the logs with malware communication profiles that are derived from a learning set of behaviors and represent a complete picture of how malware acts in a variety of environments (a simplified sketch follows this list)

● Use machine learning algorithms that weigh statistical features, domain and IP reputation, DGA detection, botnet traffic correlation, and more (a second sketch follows this list)

● Adapt by using information about different targeted and opportunistic attacks from around the world (“crowdsourcing”) in order to get a perspective on the threat landscape that is both broader and clearer

● Feed credible and actionable threat data to other security devices in order to protect against, quarantine, and remediate actual threats

● Get insight into how a breach occurred in order to aid forensic investigations and prevent future attacks
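
To make the correlation bullet above more concrete, here is a minimal Python sketch of one simple communication profile: a host contacting the same domain at suspiciously regular intervals (beaconing). The record format, host names, and thresholds are illustrative assumptions, not Seculert’s implementation.

```python
# Minimal sketch (illustrative only): flag hosts that beacon to a domain at
# regular intervals -- one simple "malware communication profile".
from collections import defaultdict
from statistics import mean, pstdev

# Hypothetical parsed proxy-log records: (epoch_seconds, source_host, destination_domain)
LOG_RECORDS = [
    (1399500000, "ws-042", "update.example-cdn.com"),
    (1399500600, "ws-042", "update.example-cdn.com"),
    (1399501200, "ws-042", "update.example-cdn.com"),
    (1399501800, "ws-042", "update.example-cdn.com"),
    (1399500123, "ws-017", "news.example.org"),
    (1399507777, "ws-017", "news.example.org"),
]

def find_beacons(records, min_events=4, max_jitter_ratio=0.1):
    """Return (host, domain) pairs whose inter-request gaps are suspiciously regular."""
    timestamps = defaultdict(list)
    for ts, host, domain in records:
        timestamps[(host, domain)].append(ts)

    suspects = []
    for key, times in timestamps.items():
        if len(times) < min_events:
            continue
        times.sort()
        gaps = [later - earlier for earlier, later in zip(times, times[1:])]
        avg = mean(gaps)
        # Low variance relative to the average gap suggests automated check-ins.
        if avg > 0 and pstdev(gaps) / avg <= max_jitter_ratio:
            suspects.append((key, avg))
    return suspects

if __name__ == "__main__":
    for (host, domain), interval in find_beacons(LOG_RECORDS):
        print(f"{host} contacts {domain} roughly every {interval:.0f}s")
```

A real system would learn these profiles from large sets of observed malware behavior across many environments rather than hard-coding a single pattern.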
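
The machine-learning bullet can be illustrated the same way. The sketch below scores domain names pulled from traffic logs with a few lexical features that a DGA classifier might weigh; the features and weights are hand-tuned for illustration and are not a production detector or Seculert’s algorithm.

```python
# Minimal sketch (illustrative only): score domains for DGA-like randomness
# using simple lexical features a real classifier might consume.
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Character entropy in bits; random-looking strings score higher."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def dga_score(domain: str) -> float:
    """Combine simple lexical features into a rough 0..1 suspicion score."""
    label = domain.split(".")[0].lower()  # left-most label only, for simplicity
    if not label:
        return 0.0
    entropy = shannon_entropy(label)
    digit_ratio = sum(ch.isdigit() for ch in label) / len(label)
    vowel_ratio = sum(ch in "aeiou" for ch in label) / len(label)
    # Hand-tuned illustrative weighting: high entropy, many digits, few vowels.
    score = 0.2 * min(entropy / 4.0, 1.0) + 0.4 * digit_ratio + 0.4 * (1.0 - vowel_ratio)
    return round(score, 2)

if __name__ == "__main__":
    for d in ["securityweek.com", "x3k9q7zt1m4b.info", "mail.google.com"]:
        print(d, dga_score(d))
```

In practice, features like these would feed a trained model alongside domain and IP reputation and cross-customer botnet telemetry, rather than a fixed formula.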

That said, does this mean that enterprises will finally be able to prevent 100% of targeted attacks? No; there has never been a magic bullet, and that is unlikely to change in our lifetime. Any belief to the contrary plays directly into the hands of threat actors.

However, automated traffic log analysis can help enterprises reduce the number of infections, including those they do not yet know about but that are unfolding in their networks right now, before a compromise becomes a breach. And considering that it only takes one successful breach to create a cost and reputation nightmare that can last for years, the question is not whether automated analysis makes sense, but rather, how can enterprises hope to stay one step ahead of the bad guys without it?

Related Reading: The Next Big Thing for Network Security: Automation and Orchestration

Related Reading: Network Security Considerations for SDN

Related Reading: Making Systems More Independent from the Human Factor

Related Reading: Software Defined Networking – A New Network Weakness?

Aviv Raff is Co-Founder and Chief Technology Officer at Seculert. He is responsible for the fundamental research and design of Seculert’s core technology and brings with him over 10 years of experience leading software development and security research teams. Prior to Seculert, Aviv established and managed RSA’s FraudAction Research Lab and worked as a senior security researcher at Finjan’s Malicious Code Research Center. Before joining Finjan, Aviv led software development teams at Amdocs. He holds a B.A. in Computer Science and Business Management from the Open University (Israel).
