PCI Security Standards Council Releases Tokenization Product Guidelines

Posted on April 3, 2015 in Security

The PCI Security Standards Council announced on Thursday the availability of guidelines designed to help organizations develop tokenization products.

Tokenization is the process in which sensitive information, such as payment card data, is replaced with a randomly generated unique token or symbol. Tokenization products, which can be software applications, hardware devices or service offerings, can help merchants reduce the risk of having their customers’ financial information stolen by malicious actors.
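To make the mechanics concrete, here is a minimal Python sketch of the token-vault idea (illustrative only; the class and method names are invented for this example, and real products add format preservation, encryption of the stored card number, and strict access controls):

    import secrets

    class TokenVault:
        """Toy token vault: maps random tokens back to card numbers (PANs)."""

        def __init__(self):
            self._vault = {}   # in a real product: a hardened, access-controlled store

        def tokenize(self, pan: str) -> str:
            # The token is random, so it has no mathematical relationship to the PAN
            # and cannot be reversed without access to the vault.
            token = secrets.token_hex(8)
            self._vault[token] = pan
            return token

        def detokenize(self, token: str) -> str:
            return self._vault[token]

    vault = TokenVault()
    token = vault.tokenize("4111111111111111")    # test PAN
    print(token)                     # safe to pass to downstream systems
    print(vault.detokenize(token))   # only the vault can recover the original value

Merchant systems that only ever handle the token never store the actual cardholder data, which is the narrowing of exposure described below.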

“Tokenization is one way organizations can limit the locations of cardholder data (CHD). A smaller subset of systems to protect should improve the focus and overall security of those systems, and better security will lead to simpler compliance efforts,” explained PCI SSC Chief Technology Officer Troy Leach.

There are several challenges to implementing tokenization, but reliable solutions already exist and representatives of the merchant community believe this could be an efficient approach to preventing payment card fraud and identity theft.

The Tokenization Product Security Guidelines released by the PCI Council have been developed in collaboration with a dedicated industry taskforce. The report focuses on the generation of tokens, using and storing tokens, and the implementation of solutions that address potential attack vectors against each component. The document also contains a classification of tokens and their use cases.

The recommendations in the guidelines are addressed to tokenization solution and product vendors, tokenization product evaluators, and organizations that want to develop, acquire or use tokenization products and solutions.

“Minimizing the storage of card data is a critical next step in improving the security of payments. And tokenization does just that,” said PCI SSC General Manager Stephen Orfei. “At the Council, we are excited about the recent advancements in this space. Helping merchants take advantage of tokenization, point-to-point encryption (P2PE) and EMV chip technologies as part of a layered security approach in current and emerging payment channels has been a big focus at this week’s PCI Acquirer Forum.”

The PCI Council has pointed out that the guidelines are supplemental and do not supersede or replace any of the requirements detailed in the PCI Data Security Standard (PCI DSS).

PCI DSS 3.0, which focuses on security instead of compliance, went into effect on January 1. Version 3.1 of the PCI DSS, expected to be released this month, targets the SSL (Secure Sockets Layer) protocol. Organizations must ensure that they or their service providers don’t use the old protocol.

Last week, the PCI Council published new guidance to help organizations conduct penetration testing, which is considered a critical component of the PCI DSS.

The Tokenization Product Security Guidelines are available for download in PDF format.


XSS, XFS, Open Redirect Vulnerabilities Found on About.com

Posted on February 3, 2015 in Security

About.com, the online resource website visited by tens of millions of users each month, is plagued by several types of potentially dangerous vulnerabilities, a researcher revealed on Monday.

According to Wang Jing, a PhD student at the Nanyang Technological University in Singapore, a large majority of the pages on About.com are vulnerable to cross-site scripting (XSS) and cross-frame scripting (XFS/iFrame injection) attacks.

The expert tested close to 95,000 About.com links with a script he developed and determined that at least 99.88% of them are vulnerable. The search field on the website’s homepage is also plagued by an XSS flaw which, according to Jing, means that all the domains related to about.com are vulnerable to XSS attacks.
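Jing's script has not been published, but reflected-XSS checks of this kind typically work by injecting a unique probe into a URL parameter and looking for it echoed back unencoded. A rough Python sketch of that idea (the endpoint and parameter name below are hypothetical, not About.com specifics):

    import urllib.parse
    import urllib.request

    MARKER = '"><xsstest123>'   # harmless probe string

    def reflects_unescaped(base_url: str, param: str) -> bool:
        """Return True if the page echoes the probe back without HTML-encoding it."""
        query = urllib.parse.urlencode({param: MARKER})
        with urllib.request.urlopen(f"{base_url}?{query}", timeout=10) as resp:
            body = resp.read().decode(errors="replace")
        # A raw echo of the marker means the parameter is reflected unencoded and is
        # a likely XSS candidate; an encoded copy (&quot;&gt;...) would be fine.
        return MARKER in body

    # Example, against a test endpoint you are authorized to scan:
    # print(reflects_unescaped("http://test.example/search", "q"))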

In order to exploit XSS vulnerabilities, an attacker needs to convince the victim to click on a specially crafted link. XSS attacks can be used to alter the appearance of a website, access potentially sensitive information, and spy on users.

XFS attacks can be used to steal data from websites accessed by the victim. For the attack to work, a malicious actor must get the user to access a Web page he controls. Such vulnerabilities can also be exploited for distributed denial-of-service (DDoS) attacks, the expert noted.

Jing has also identified open redirect bugs on several About.com pages. The vulnerabilities can be leveraged to trick users into visiting phishing and other malicious websites by presenting them with a link that apparently points to an about.com page.
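The standard defense against open redirects is to validate the target before issuing the redirect. A minimal sketch, assuming a hypothetical redirect parameter and allowlist:

    from urllib.parse import urlparse

    ALLOWED_HOSTS = {"about.com", "www.about.com"}   # hypothetical allowlist

    def safe_redirect_target(next_url: str, default: str = "/") -> str:
        """Allow only relative paths or absolute URLs on explicitly trusted hosts."""
        parsed = urlparse(next_url)
        if not parsed.scheme and not parsed.netloc:
            return next_url                                  # relative path, e.g. "/account"
        if parsed.scheme in ("http", "https") and parsed.netloc in ALLOWED_HOSTS:
            return next_url
        return default                                       # anything else falls back home

    print(safe_redirect_target("/profile"))                   # -> "/profile"
    print(safe_redirect_target("http://evil.example/phish"))  # -> "/"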

“The vulnerabilities can be attacked without user login. Tests were performed on Microsoft IE (10.0.9200.16750) of Windows 8, Mozilla Firefox (34.0) & Google Chromium 39.0.2171.65-0 ubuntu0.14.04.1.1064 (64-bit) of Ubuntu (14.04), Apple Safari 6.1.6 of Mac OS X Lion 10.7,” the researcher said in a blog post.

About.com was notified of the existence of the vulnerabilities back in October 2014, but so far the company hasn’t done anything to address them, the researcher said. About.com hasn’t responded to SecurityWeek’s requests for comment.

Proof-of-concept (PoC) videos for the XSS vulnerability on the About.com homepage and the open redirect flaw have been published by the researcher.


Industry Reactions to Devastating Sony Hack

Posted on December 5, 2014 in Security

The systems of entertainment giant Sony have been hacked once again, and although the full extent of the breach is not yet known, the incident will likely be added to the list of most damaging cyberattacks.

Feedback Friday for December 5, 2014

A group of hackers called GOP (Guardians of Peace) has taken credit for the attack and they claim to have stolen terabytes of files. Sony admitted that a large amount of information has been stolen, including business and personnel files, and even unreleased movies.

On Friday, security firm Identity Finder revealed that the attackers leaked what appears to be sensitive personal data on roughly 47,000 individuals, including celebrities.

North Korea is considered a suspect, but the country’s officials have denied any involvement, and Sony representatives have not confirmed that the attack was traced back to the DPRK.

Researchers from various security firms have analyzed a piece of malware that appears to have been used in the Sony hack. The threat is designed to wipe data from infected systems.

The FBI launched an investigation and sent out a memo to a limited number of organizations, warning them about a destructive piece of malware that appears to be the same as the one used in the attack against Sony.

Some experts believe the FBI sent out the alert only to a few organizations that were likely to be affected. Others have pointed out that the FBI doesn’t appear to have a good incident response plan in place.

And the Feedback Begins…

Cody Pierce, Director of Vulnerability Research at Endgame:

“The latest FBI ‘flash’ report warning U.S. businesses about potentially destructive attacks references malware that is not highly advanced. Initial reports associate the alert with malware that overwrites user data and critical boot information on the hard drive, rendering the computer effectively useless. Based on analysis of the assumed malware sample, no technology exists within the sample that would warrant a larger alert to corporations. Additional information, either present in the malware–like IP address or host information–or during the investigation, also likely made it clear who required advance notification. Because of the malware’s low level of sophistication as well as the reportedly targeted nature of the attacks, it is entirely reasonable that the FBI would only inform a small number of companies.

The goal of these coordinated alerts is to raise awareness to the most likely targets so that they can ensure their security readiness, without unnecessary burden to those unlikely to be affected. In this case, because the malware is targeted and not sufficiently advanced, the FBI’s approach is justified. Conversely, in the event that more sophisticated malware or a new attack vector had been discovered, greater communication would have been necessary. Based on the information available, the FBI made the right decision in issuing this particular alert.” 

Mark Parker, Senior Product Manager, iSheriff:

“For many organizations in the midst of a breach investigation, decisions are often made very quickly. Without the luxury of planning meetings and impact analysis, some things are done in a ‘from the cuff’ manner based upon the evidence in hand, which may in fact be incomplete. In the case of the FBI memo that was sent out, it was clearly done hastily. The threat posed by the malware was significant and a quick decision was made to send out an alert.

 

While I wasn’t in the room, I am fairly certain from having been in similar rooms, and in similar situations, that a list of who should receive the alert was not a very long conversation, and the point was to get the information out as soon as possible. What this demonstrates is that both Sony and the FBI do not have a good incident response plan in place for this type of incident. All organizations should have an incident response plan in place that lays out this sort of information in advance so that time is not spent on such issues. A clear process for key decisions is a very important part of any incident response plan, as is a list of who should be contacted in different situations.”

Steve Lowing, Director of Product Management, Promisec:

“Given that Sony Pictures is releasing a movie next month that satirizes assassinating North Korea’s supreme leader Kim Jong-Un, and that the country declared war on the company after learning about the release last June, it’s widely held that the North Korean government is behind the attack. It’s likely that this is true, at least at a sponsorship level, given the number of attacks on South Korean banks and various businesses over the course of the last year, with the likely attackers being the country’s cyber warfare army known as Unit 121.

Unit 121 is believed to be operating out of a luxury hotel in Shenyang, China, giving it easy access to the world while remaining within arm’s reach of North Korea. The main reasons for this are China’s close proximity to North Korea, North Korea’s almost non-existent internet access, and China’s far superior network and cyber hacking resources. This is yet another example of state-sponsored hacktivism targeting companies directly.”

Jonathan Carter, Technical Director, Arxan Technologies:

“So far, the evidence seems to suggest that the Sony hack was accomplished via execution of malicious malware. Hackers typically conduct these attacks by somehow tricking the user into executing something that is malicious in nature from within a system that is sensitive in nature. The recent iOS Masque and WireLurker vulnerabilities clearly illustrate that the delivery and execution of malicious code can take some very clever approaches. In light of these recent revelations, it is reasonable to expect to see a rise in distribution of malware (disguised as legitimate B2E apps that have been modified) via mobile devices owned by employees that have access to sensitive backend systems.”

Vijay Basani, CEO of EiQ Networks:

“It is possible that the hackers accessed not only unreleased movies, but also user accounts, celebrity passport details, sensitive trade secrets and know-how. This demonstrates that in spite of significant investments in traditional and next-gen security technologies, any network can be compromised. What is truly required is a total commitment from senior management to building a comprehensive security program that delivers proactive and reactive security and a continuous security posture.”

Craig Williams, Senior Technical Leader and Security Outreach Manager for Cisco’s Talos team: 

“The recent FBI ‘flash alert’ was published covering the dangers of a new wiper Trojan that has received quite a bit of media attention. There are a few key facts that seem to be overlooked by many of the early news accounts of this threat:

• Cisco’s Talos team has historic examples of this type of malware going back to 1998.

• Data *is* the new target; this should not surprise anyone, yet it is also not the end of the world. Recent examples of malware effectively “destroying” data – putting it out of victims’ reach – also include Cryptowall and Cryptolocker, common ransomware variants delivered by exploit kits and other means.

• Wiping systems is also an effective way to cover up malicious activity and make incident response more difficult, such as in the case of the DarkSeoul malware in 2013.

• Any company that introduced proper back-up plans in response to recent ransomware like Cryptolocker or Cryptowall should already be protected to a degree against these threats detailed by the FBI. Defense-in-depth can also detect and defeat this type of threat.”

Carl Wright, general manager at TrapX Security:

“The FBI and other national government organizations have an alerting process that we are sure they followed to the letter. It is important for them to provide an early warning system for these types of attacks, especially in the case of the Sony breach, because of the severe damage that could ultimately be used against our nation’s critical infrastructure.

Timely information sharing must be completely reciprocal in nature, meaning, corporations also have to be willing to share their cyber intelligence with the government.

 

When we look at the significant incidents of 2014 and in particular Sony, we see that most enterprises are focusing efforts and investments on breach prevention. 2014 has clearly highlighted the need for corporations and government to include additional technological capabilities that better detect and interdict breaches before they can spread within an organization.”

Ian Amit, Vice President, ZeroFOX:

“The Sony breach is a tricky situation. How it occurred is still up for debate – possibly nation state? Possibly an insider? Possibly a disgruntled employee? Regardless, it’s clear the breach goes very deep. It has gotten to the point that Sony is outright shutting down its network. This means even the backups are either nonexistent or compromised, and the hackers likely got just about everything, making this one of the worst breaches ever at an organization of this size. The attack touches anyone involved with Sony – auditors, consultants, screenwriters, contractors, actors and producers. The malware might be contained on Sony’s servers, but the data loss is much further reaching. Make no mistake, this breach is a big one.

I am skeptical that this is a nation state-level attack. The idea that North Korea is retaliating against Sony for an upcoming film is a wildly sensationalist explanation. Hackers regularly cover their trails by leaving red herrings for the cleanup crew – indications that the Russians, Chinese, Israelis, North Koreans and your grandmother were all involved. A small script of Korean language is hardly damning evidence. Code can be pulled from a variety of sources and there is no smoking gun (yet) in the case of the Sony breach.”

Oliver Tavakoli, CTO, Vectra Networks:

“Any malware that destroys its host will have limited impact unless it is part of a larger coordinated attack. One or two laptops being wiped at Sony would be a nuisance, but large numbers of devices being wiped all at once is devastating. The latter style of attack requires an attacker to achieve a persistent network-level compromise of the organization before the wiper malware even becomes relevant.

The information released as part of the FBI alert bears this out. The malware sample detailed in the alert was compiled only days before it was used. This is a strong sign that Sony was compromised well before the time the malware was built, and the wiper malware was the coup de grâce at the end of the breach.

This is particularly significant when evaluating the FBI alert. Sharing indicators of compromise (IoC) is a good thing, and the industry needs more of this sharing. But we need to keep in mind that these particular indicators represent the absolute tail end of a much longer and widespread attack. In fact, some of the IoCs detailed in the alert are only observable once the wiper malware has begun destroying data. Obviously, this sort of indicator is much too late in the game, but too often is the only indicator that is available. What the industry needs badly are indicators of attack that reveal the compromise of the organization’s network at a point when security teams can still prevent damage.”
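As a rough illustration of how such late-stage indicators get consumed, the sketch below sweeps a directory for files whose hashes match published IoCs (the hash values and path are placeholders, not indicators from the FBI alert):

    import hashlib
    from pathlib import Path

    # Placeholder SHA-256 indicators; a real sweep would load these from an alert feed.
    KNOWN_BAD_SHA256 = {"0" * 64, "f" * 64}

    def sweep(root: str):
        """Yield file paths whose SHA-256 digest matches a known-bad indicator."""
        for path in Path(root).rglob("*"):
            try:
                if not path.is_file():
                    continue
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
            except OSError:
                continue                     # unreadable file: skip rather than abort the sweep
            if digest in KNOWN_BAD_SHA256:
                yield path

    for hit in sweep("/var/tmp"):            # hypothetical sweep location
        print("IoC match:", hit)

As Tavakoli notes, a hit from a sweep like this often arrives only after the destructive stage has begun, which is why earlier, network-level indicators of attack matter more.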

Kenneth Bechtel, Tenable Network Security’s Malware Research Analyst:

“This type of attack is not new; it’s been around for a long time, with multiple examples. The most recent similarity is the ransomware that’s been attacking systems. These attacks are often difficult to detect prior to the execution of the payload. The best thing is a good backup scheme as part of your response. Many times the answer to modern malware infections is to reimage the system. If this occurs on your system, a reimage is often the best response. The only thing that reimaging would not solve is having the most current data, like documents and spreadsheets. It’s this combination of reimaging and restoring backups that is the most efficient response to the attack. While this ‘fixes’ the host, network forensics should be done to identify the attack and create defenses against it in the future.”

Jon Oberheide, CTO, Duo Security:

“I don’t believe that the limited distribution of the FBI warning was improper. But, I think the scope and focus on data-destroying malware was a bit misguided.

 

Certainly data loss can have a big impact on the operations of a business. We saw that big time back in 2012 with the Saudi Aramco attack by data-wiping malware. But, regardless of whether the data loss is intentional or inadvertent, it’s vital to have proper disaster recovery and business continuity processes in place to be able to recover and continue operation. However, when considering a sophisticated cyber-attack, disaster recovery processes must assume that an attacker has more capabilities and reach than standard inadvertent data loss events. For example, an attacker may have access to your data backup infrastructure and be able to destroy backups as well. So, modern organizations may have to revisit their DR/BC models and take into account these new threat models.

The real impact of the Sony breach is not the destruction of data, but the longer term effects of confidentiality and integrity of their data and infrastructure. Rebuilding all their infrastructure post-breach in a trusted environment is an incredibly challenging and arduous task. The disclosure of credentials, infrastructure, critical assets, employee PII, and even things like RSA SecurID token seeds will have a much longer-term, but more under-the-radar, impact on Sony’s business.

Most importantly, in the modern day, breaches don’t only impact the directly-affected organization, but they tend to sprawl out and negatively impact the security of all organizations and the Internet ecosystem as a whole. A breach doesn’t happen in a vacuum: stolen credentials are re-used to gain footholds in other organizations, stolen source code is used to find vulnerabilities to assist future attacks, and information and experience is gleaned by attackers to hone their tactics, techniques, and procedures.”

Idan Tendler, CEO of Fortscale:

“The traditional concept for security was to keep the most important resources, i.e. the vaults with the cash (or in Sony’s case, films) safe. What we’re seeing with breaches of this magnitude is that the harm now goes far beyond any immediate and limited capital damage. Leaked sensitive information regarding employee salary and healthcare has the potential to cause enormous reputational harm and internal turmoil within a workforce. Revealing that kind of data can lead to jealousy, resentment and distrust among workers and create a very toxic work environment.

With news of passwords to sensitive documents also being leaked, Sony will need to be more vigilant in securing user access to resources by constantly monitoring and analyzing user activity for possible credential abuse.”

Clinton Karr, Senior security specialist at Bromium:

“These attacks are troublesome, but not surprising. Earlier this year we witnessed Code Spaces shut down after a successful attack destroyed its cloud back-ups. Likewise, the evolution of crypto-ransomware suggests attackers are targeting the enterprise with destructive attacks. These attacks are unlike the “cat burglary” of Trojan attacks; they are much more brute force, like a smash-and-grab or straight vandalism.”

Ariel Dan, Co-Founder and Executive VP, Porticor:

“Reporting the technical details of a specific attack is a sensitive topic. Attack details can and will be used by new hackers against new targets. On the other hand, companies can’t do much to defend against a type of attack they know very little about. One relevant example of such a potential attack was around a severe security bug in the Xen virtualization system that exposed cloud users of Amazon Web Services, Rackspace and other cloud providers. The cloud vendors had stealthily patched affected systems, issued a vague notification to their users of an immediate restart action, and only after it was all done was the attack realized and publicized. Reporting the bug prior to fixing the problem would have a devastating effect on cloud users.

 

Back to the Sony attack: I personally believe that reporting the entire details of a security breach can do more harm than good, but there should be a way to communicate enough meaningful information without empowering the bad guys. Blogs like KrebsOnSecurity provided additional details, including a Snort signature to detect this specific attack. Such data is meaningful for the defender and does not help an attacker. From this information we learned that organizations should embrace an “encrypt everything” approach as we step into 2015. We should be able to guarantee that data is not exposed even if an organization has been infiltrated.”

Tim Keanini, CTO at Lancope:

“I think the question being asked here is a great opportunity to describe the threats of yesterday versus the threats we face today.  In the past, broad advisories on technical flaws were effective mainly because the problem was universal.  Attackers would automate tools to go after technical flaws and there was no distinction between exploitation of a large corporation or your grandmother. If the vulnerability existed, the exploitation was successful.  In the case of Sony, we are talking about a specific adversary (Guardians of Peace) targeting Sony Pictures and with specific extortion criteria.  With this type of advanced threat, warnings sent out by the FBI on the investigation itself will be less prescriptive and more general making its timeliness less of a priority. 

From everything we have seen disclosed so far, it is difficult to assess and advise on the information security practice when some of the flaws exploited seem to suggest very little security was in place. The analogy would be: it would be hard to assess how the locks were compromised when the doors to host the locks were not even present. For example, some of the disclosure on reddit earlier in the week suggests that some files named ‘passwords’ were simply in the clear and stored unencrypted in txt and xls files. The investigation will determine the true nature of all of this speculation but I use this as an example because the FBI could issue a warning every day of the week that said “Don’t do stupid things” and be just as effective.

The lesson learned here is that if you are connected to the Internet in any shape or form, this type of security breach happening to you and your company is a very real risk. Step up your game before you become the subject of another story just like this. It would be weird but Sony Pictures should write a movie on how a cybercrime group completely compromised and held an entertainment company for cyber extortion – categorized under non-fiction horror.”

Kevin Bocek, Vice President of Security Strategy & Threat Intelligence at Venafi:

“As the FBI, DHS and others investigating the Sony hack work furiously to uncover the details and the threat actors behind this breach, it’s important that we recognize the attack patterns that are right in front of our face: cybercriminals are and will continue to use the same attack blueprint over and over again. Why? Because they use what works.

In April 2011, Sony’s PlayStation Network was breached where asymmetric keys were stolen, compromising the security of 77 million users’ accounts. Now, nearly four years later, Sony is still facing the same threat — only this time it’s directed on Sony Pictures Entertainment. In this latest breach, cybercriminals successfully gained access to dozens of SSH private keys – the same way they stole private keys in the Mask, Crouching Yeti and APT18 attacks. Once these keys are stolen, the attackers can get access to other systems — and then it just goes from bad to worse. It’s critical that incident response and security teams realize that the only way that the attackers can *truly* be stopped from accessing these systems is by replacing the keys and certificates. Until then, they will continue to wreak havoc and cause more damage with elevated privileges, the ability to decrypt sensitive data in transit, and spoof systems and administrators. All it takes is one compromised key or vulnerable certificate to cause millions in damages. Hopefully, Sony will learn its lesson this go round.”
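Acting on that advice starts with knowing which keys are deployed where. The sketch below (a simplified illustration, assuming standard OpenSSH authorized_keys files) computes the OpenSSH-style SHA256 fingerprint of each authorized key so it can be compared against an inventory of keys slated for replacement:

    import base64
    import hashlib
    from pathlib import Path

    def key_fingerprints(authorized_keys_path: str):
        """Yield (fingerprint, comment) for each entry in an authorized_keys file."""
        for line in Path(authorized_keys_path).read_text().splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            fields = line.split()
            # Entries look like: [options] key-type base64-blob [comment]
            blob = next((f for f in fields if f.startswith("AAAA")), None)
            if blob is None:
                continue
            digest = hashlib.sha256(base64.b64decode(blob)).digest()
            fingerprint = "SHA256:" + base64.b64encode(digest).decode().rstrip("=")
            comment = fields[-1] if fields[-1] != blob else ""
            yield fingerprint, comment

    # Hypothetical path; flag any fingerprint that appears on the rotation list.
    for fp, comment in key_fingerprints("/home/deploy/.ssh/authorized_keys"):
        print(fp, comment)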

Until Next Friday… Have a Great Weekend!


Feedback Friday: Hackers Infiltrate White House Network – Industry Reactions

Posted on November 3, 2014 in Security

Welcome back to Feedback Friday! An unclassified computer network at the White House was breached recently and the main suspects are hackers allegedly working for the Russian government.

Feedback Friday: White House Network Breached

The incident came to light earlier this week when an official said they had identified “activity of concern” on the unclassified network of the Executive Office of the President (EOP) while assessing recent threats. The official said the attackers didn’t cause any damage, but some White House users were temporarily disconnected from the network while the breach was dealt with.

Experts have pointed out that while the attackers breached an unclassified network, it doesn’t necessarily mean that they haven’t gained access to some useful data, even if it’s not classified. They have also outlined the methods and strategies used by both the attackers and the defenders in such a scenario.

And the Feedback Begins…

Amit Yoran, President at RSA:

“The breach underscores the constant siege of attacks on our government and businesses. Fortunately — by definition — information with grave or serious impact to national security is classified and would not be found on an unclassified network. That said, there is most likely information on unclassified networks that the White House would not like public or for 3rd party consumption.

As for the profile of the adversary, the White House uses the latest security technologies making them a very challenging target to breach. Top secret clearances are required for access to networks and personnel are continuously and rigorously vetted. As such — and acknowledging that until a thorough investigation is completed, speculation can be dangerous — a standard botnet or phishing malware is a less likely scenario than a focused adversary with time and expertise in developing customized exploits, malware and campaigns.”

Mark Orlando, director of cyber operations at Foreground Security. Orlando previously worked at the EOP where he led a contract team responsible for building and managing the EOP Security Operations Center under the Office of Administration:

“Sophisticated attackers constantly alter their approach so as to evade detection and they will eventually succeed. The best a defender can do in this case is to identify and respond to the attack as quickly and effectively as possible. It isn’t at all unusual for an attack like this one to be discovered only after a malicious email has been identified, analyzed, and distilled into indicators of compromise (subject lines, source addresses, file names, and related data elements) used to hunt for related messages or attacks that were initially missed. White House defenders routinely exchange this kind of data with analysts across the Federal Government to facilitate those retrospective investigations. That may have been how this compromise was discovered and that doesn’t amount to a ‘miss’.

While the media points to outages or delays in major services like email at the White House, this is also not an unusual side effect of proper containment and eradication of a threat like this one, especially if there are remote users involved. Incidents exactly like this one occur all over the Federal government and increasingly in the private sector as well; the only thing different about this attack that makes it more newsworthy than those other incidents is that it occurred at EOP.
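The retrospective hunting Orlando describes can be reduced to replaying newly distilled indicators against stored mail metadata. A toy sketch follows (the indicator values and CSV column names are hypothetical):

    import csv

    # Hypothetical indicators distilled from an analyzed malicious email.
    IOC_SUBJECTS = {"Invoice #4729 overdue"}
    IOC_SENDERS = {"billing@badsite.example"}
    IOC_ATTACHMENTS = {"invoice_4729.scr"}

    def hunt(mail_log_csv: str):
        """Scan a CSV export of mail metadata for messages matching known indicators."""
        with open(mail_log_csv, newline="") as fh:
            for row in csv.DictReader(fh):   # expects subject, sender, attachment columns
                if (row.get("subject") in IOC_SUBJECTS
                        or row.get("sender") in IOC_SENDERS
                        or row.get("attachment") in IOC_ATTACHMENTS):
                    yield row

    for match in hunt("mail_metadata.csv"):  # hypothetical export from the mail gateway
        print("possible missed message:", match)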

Tom Kellermann, Trend Micro chief cybersecurity officer and former commissioner on The Commission on Cyber Security for the 44th Presidency:

“Geopolitical tensions are now manifested through cyberattacks. The enemies of the state conduct tremendous reconnaissance on their targets granting them situational awareness as to our defenses in real time. This reality allows for elite patriotic hackers to bypass our defenses.”

Irene Abezgauz, VP Product Management, Quotium:

“Security, cyber or physical, relies heavily on risk management. With a large operation, it is difficult to secure everything on the same level, priority is often given to the more sensitive networks. In the case of the White House hack, the breached network was unclassified, meaning it probably has slightly different security measures than classified networks.

Government systems are prime targets for hackers. Even if the breached network is unclassified and no sensitive information was exposed, all government network breaches draw attention. In public opinion, attackers gaining access to government computer systems, no matter whether classified or not, reflects badly on the ability of the US to defend itself, especially when foreign nationals are suspected. In addition, availability and integrity must be maintained in systems that involve any kind of government decision making, more than in most other systems.

The bottom line is that high profile targets must maintain a high level of security on all networks. Hackers, private and state-funded, are continuously attempting attacks on these systems. Such attacks must be blocked in order to protect data within as well as assure the public of the ability of the government to protect its cyber systems.”

John Dickson, Principal at the Denim Group:

“Although initial reports emphasize the unclassified nature of the system and networks, security experts know that successful attacks against certain unclassified systems can, in fact, still be gravely serious. Given that this concerns perhaps the most high-visibility target in the world – the White House – you potentially have a genuinely difficult situation.

On one hand, you have the issue of public confidence in our institutions of government. ‘If the attackers can compromise the White House, what else can they possibly get into?’ is a perfectly valid question from citizens who may not recognize the distinction between unclassified and classified systems. Also, sensitive information that is unclassified may traverse these systems and give attackers more context to allow them to put together a larger picture of what’s happening at the White House. Military folks refer to this as Operational Security, or OPSEC, and it is always a worry for those protecting the President, the White House, and the operations of the Executive Branch of government.

From a defensive standpoint, when you face a sophisticated attacker with substantial resources you have to be constantly vigilant and assume certain systems will fail. It’s far too early to editorialize on theories of ‘what might have happened’ at the White House, but we always recommend a defense in depth approach to application and system design that ‘fails open,’ so that if an attacker compromises one type of defense, it doesn’t compromise the entire ecosystem.”

Ian Amit, Vice President at ZeroFOX:

“Much of the conversation surrounding the recent White House hack centers on the nature of the compromised network. The network is ‘unclassified,’ leading many people to believe the affected information is non-critical or innocuous. It’s important to note however that enough unclassified information, when aggregated and correlated, quickly becomes classified. Isolated data points might not mean much by themselves, but enough time spent passively listening to unclassified chatter can reveal some very sensitive intelligence.

So how much time was the hacker on the network? It’s difficult to tell. Security officials alerted on ‘suspicious activity.’ This phrase doesn’t give us much insight into how long the network was compromised. The hacker could have been active on the network for months without doing anything to sound the alarms. It’s one thing if a hacker is caught in the act of breaking in or stealing data. That kind of event information generally gives a clear indication of the attack timeline. Triggering on passive behavior makes this much more difficult.

With that said, it’s commendable that White House security officials are looking for behavioral cues rather than overt events to detect malicious activity. Soft indicators are much more difficult to detect, which means the security officials are using some advanced tools to understand traffic on the network.”

Anup Ghosh, CEO of Invincea:

“The disclosure of breach from the White House this week was remarkable for its differences from a similar disclosure in 2012. It’s clear from recent press releases from security companies, that Russia is the New Black now. In fact, if you get hacked by the Chinese now, it’s almost embarrassing because they are considered less sophisticated than the Russians. So now, every breach seems to be attributed to Russians, though largely without any evidence.

A little more than two years ago in October 2012, the White House acknowledged a breach of its unclassified networks in the White House Military Office (which also manages the President’s nuclear ‘football’). The talking points at the time were: 1. Chinese threat, 2. Non-sophisticated attack method (spear-phish), 3. Unclassified network, so no harm. This week, the talking points are: 1. Russian government threat, 2. Sophisticated attack method (spear-phish), and 3. Deep concern over breach of unclassified network. The similarities between the two breaches are remarkable, but the reaction couldn’t be more different.

Before we indict the Russians for every breach now, it would be great to see some bar set for attribution to a particular group. It would also be great to not use a “sophisticated” threat or the Russians as a scapegoat for not properly addressing spear-phishing threats with technology readily available off the shelf (and shipped with every Dell commercial device).”

Michael Sutton, VP of Security Research for Zscaler:

“The breach of a compromised White House computer reported this week is simply the latest in ongoing and continual attacks on government networks. While such breaches periodically hit the headlines thanks to ‘unnamed sources’, it’s safe to assume that the general public only has visibility into the tip of the iceberg. White House officials admitted that this latest breach was discovered ‘in the course of assessing recent threats’, suggesting that following the trail of breadcrumbs for one attack led to another.

In September, there were reports of yet another successful attack, this one leveraging spear phishing and compromising a machine on an unclassified network, and earlier this month details of the Sandworm attacks emerged, which leveraged a then-0day Microsoft vulnerability to target NATO and EU government agencies. All of these recent attacks have been attributed to groups in Russia and it’s likely that they’re tied together. All Internet-facing systems face constant attack, but the White House understandably presents a particularly attractive target.

While all G20 nations have advanced cyber warfare capabilities and conduct offensive operations, Russia and China have been particularly aggressive in recent years, often conducting bold campaigns that are sure to be uncovered at some point.”

Zach Lanier, Senior Security Researcher at Duo Security:

“U.S. government and defense networks are often the target of attackers — and the White House is without a doubt very high on that list, regardless of the breached network reportedly being ‘unclassified’. Everyone from hacktivists to foreign intelligence agencies has sought access to these networks and systems, so this intrusion isn’t a huge surprise.”

Carl Wright, General Manager of North America for TrapX Security:

“When it comes to our military, government and its supporting national defense industrial complex, the American public’s expectation is and should be significantly higher. The Senate Armed Services Committee (SASC) findings in September highlighted how nation-state actors were targeting contractors working with the federal government, so it is to be expected that actual government bodies are also being targeted.

95 percent of the security market is signature based and thus will not detect a targeted zero-day. We must operate under the notion that networks are already compromised and focus defenses on monitoring lateral movements within data centers and private networks as that is how hackers escalate their attack and access. Unfortunately, existing security technologies focus from the outside in, trying to understand the entire world of cyber terrorists’ behaviors which inundate security teams with alerts and false-positives.

These breaches demonstrate how traditional security tools alone don’t do enough and both enterprises and government organizations need to constantly evaluate and improve their security posture to thwart today’s nation-states or crime syndicates whether foreign or domestic. With the United States President’s intranet being compromised, it truly shows the poor state of our national cyber defense capabilities.”

Nat Kausik, CEO at Bitglass:

“Organizations whose security models involve ‘trusted devices’ are naturally prone to breaches. Employees take their laptops on the go, get hacked at public WIFI networks, and come back to the office where the device is treated as trusted and allowed to connect to the network.

The compromised device enables the hacker to gain a broader and more permanent foothold inside the network. Government entities have long favored the ‘trusted devices’ model and are actually more prone to breaches than organizations that treat all user devices as suspect.”

Greg Martin, CTO at ThreatStream:

“It’s public knowledge that Russia has been very active in sponsored cyber espionage and attacks but has recently turned up the volume since both the Ukrainian conflict and the Snowden leaks, which in my opinion have given Russia and China an open door to be even more bold in their offensive cyber programs.

Recent cyberattacks on retailers and financial institutions have been riddled with anti-US propaganda. This makes it increasingly difficult to pinpoint the backers, as the activity is a heavy blend of criminal, hacktivist and state-sponsored activity. As seen in recent reports, Russian APT attacks have been prevalent in targeting U.S. interests, including the financial sector.

ThreatStream believes organizations should accelerate their policy of sharing cyber threat information and look at how they currently leverage threat and adversary intelligence in their existing cyber defense strategies.”

Until Next Friday… Happy Halloween and have a Great Weekend!


Feedback Friday: ‘Shellshock’ Vulnerability – Industry Reactions

Posted on September 28, 2014 in Security

The existence of a highly critical vulnerability affecting the GNU Bourne Again Shell (Bash) has been brought to light this week. The security flaw is considered by some members of the industry as being worse than the notorious Heartbleed bug.

Feedback Friday

GNU Bash is a command-line shell used in many Linux, Unix and Mac OS X operating systems. The vulnerability (CVE-2014-6271) has been dubbed “Bash Bug” or “Shellshock” and it affects not only Web servers, but also Internet-of-Things (IoT) devices such as DVRs, printers, automotive entertainment systems, routers and even manufacturing systems.

By exploiting the security hole, an attacker can execute arbitrary commands and take over the targeted machine. Symantec believes that the most likely route of attack is through Web servers that use CGI (Common Gateway Interface). There have already been reports of limited, targeted attacks exploiting the vulnerability.

A patch has been made available, but it’s incomplete. Until a permanent fix is rolled out, several organizations have launched Shellshock detection tools. Errata Security has started scanning the Web to find out how many systems are affected, and Symantec has published a video to demonstrate how the flaw can be exploited.
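Most of those detection tools follow the same basic pattern: send a crafted HTTP header to a CGI endpoint and watch for evidence that the injected command executed. A simplified, non-destructive Python sketch of that approach (the target URL is hypothetical, and the payload only echoes a marker):

    import urllib.request

    CANARY = "shellshock-test-7f3a"
    # CGI servers copy request headers into environment variables, which is exactly
    # where a vulnerable bash mis-parses the exported-function syntax.
    PAYLOAD = f"() {{ :; }}; echo; echo {CANARY}"

    def looks_vulnerable(cgi_url: str) -> bool:
        req = urllib.request.Request(cgi_url, headers={"User-Agent": PAYLOAD})
        with urllib.request.urlopen(req, timeout=10) as resp:
            body = resp.read().decode(errors="replace")
        # The marker appearing in the response suggests the injected command ran.
        # (A page that simply echoes the User-Agent back would also match, so treat
        # hits as leads to verify, not proof.)
        return CANARY in body

    # Example, against an endpoint you are authorized to test:
    # print(looks_vulnerable("http://test.example/cgi-bin/status.cgi"))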

The security community warns that the vulnerability can have serious effects, and points out that it could take a long time until all systems are patched.

And the Feedback Begins…

Ian Pratt, Co-founder and EVP at Bromium:

 “The ‘shellshock’ bash vulnerability is a big deal. It’s going to impact large numbers of internet-facing Linux/Unix/OS X systems as bash has been around for many years and is frequently used as the ‘glue’ to connect software components used in building applications. Vulnerable network-facing applications can easily be remotely exploited to allow an attacker to gain access to the system, executing with the same privilege the application has. From there, an attacker would attempt to find a privilege escalation vulnerability to enable them to achieve total compromise.

Bash is a very complex and feature-rich piece of software that is intended for interactive use by power users. It does way more than is typically required for the additional role for which it is often employed in gluing components together in applications. Thus it presents an unnecessarily broad attack surface — this likely won’t be the last vulnerability found in bash. Application developers should try to avoid invoking shells unless absolutely necessary, or use minimalist shells where required.”

 

Mark Parker, Senior Product Manager at iSheriff:

 “This bash vulnerability is going to prove to be a much bigger headache than Heartbleed was. In addition to the general Mac OS X, Linux and Unix systems that need to be patched, there are also thousands upon thousands of Internet connected Linux and Unix based embedded devices, such as DVRs, home automation systems, automotive entertainment systems, mobile phones, home routers, manufacturing systems and printers.

Most of these devices will be susceptible because most Linux-based devices run bash; it is such an integral part of the Linux OS. I anticipate that we will continue to see the fallout from this vulnerability for a long time to come.”

Carl Wright, General Manager of TrapX Security:

“We feel that industry will take this very seriously and come out with patches for this vulnerability ASAP. It could take us years to understand how many systems were compromised and how many were used to escalate privileges into systems without this vulnerability. The transitive trust nature of directory architectures and authentication systems could mean we are living with this far beyond patching the current systems if this exploit has been taken advantage of even at a small 1% level.”

Coby Sella, CEO of Discretix:

“This is the second time over the last six months when a key infrastructure component used by billions of connected things across a variety of industries has been compromised. We see this problem only getting worse as more and more unsecured or not adequately secured things are rolled out without any comprehensive security solution that reaches all the way down to the chipset. Real solutions to this problem must cover every layer from the chipset to the cloud enabling companies to remotely insert secrets into the chipset layer via secured connections within their private or cloud infrastructure.”

Nat Kausik, CEO, Bitglass:

“Enterprises with ‘trusted endpoint’ security models for laptops and mobile devices are particularly vulnerable to this flaw.  Malware can exploit this vulnerability on unix-based laptops such as Mac and Chromebook when the user is away from the office, and then spread inside the corporate network once the user returns to the office.”

Steve Durbin, Managing Director of the Information Security Forum:

“The Bash vulnerability simply stresses the point that there is no such thing as 100% security and that we all need to take a very circumspect and practical approach to how we make use of the devices that we use to share data both within and outside the home and our businesses. I have my doubts on whether or not this will lead to a wave of cyber-attacks, but that is not to say that the vulnerability shouldn’t be taken seriously. It is incumbent upon all of us as users to guard our data and take all reasonable precautions to ensure that we are protecting our information as best as we are realistically able.”

Steve Lowing, Director of Product Management, Promisec:

 “Generally, the Bash vulnerability could be really bad for systems, such as smart devices including IP cameras, appliances, embedded web servers on routers, etc… which are not updated frequently. The exposure for most endpoints is rapidly being addressed in the form of patches to all flavors of UNIX including Redhat and OS X. Fortunately for Microsoft, they avoid much of this pain since most Windows systems do not have Bash installed on them.

For vulnerable systems, depending on how they are leveraging the Bash shell, the results could be grave. For example, a webserver that uses CGI would likely be configured to use Bash as the shell for executing commands, and compromising such a system via this vulnerability is fairly straightforward. The consequences could be the deletion of all web content, which could mean service level agreements (SLAs) are not met because of a complete outage; defacement of the site, which tarnishes your brand; or even a point of infiltration for a targeted attack, which could mean the loss of IP and/or sensitive customer information.

The IoT is likely under the biggest risk, since many of these devices and appliances are not subject to frequent software updates the way a desktop, laptop or server would be. This could leave many places for an attacker to break into and lie in wait for sensitive information to come their way.”

Jason Lewis, Chief Collection and Intelligence Officer, Lookingglass Cyber Solutions:

 “The original vulnerability was patched by CVE-2014-6271. Unfortunately this patch did not completely fix the problem. This means even patched systems are vulnerable.

 

Several proof of concepts have been released.  The exploit has the ability to turn into a worm, so someone could unleash an exploit to potentially infect a huge number of hosts.”

Ron Gula, Chief Executive Officer and Chief Technical Officer, Tenable Network Security: 

 “Auditing systems for ShellShock will not be like scanning for Heartbleed. Heartbleed scans could be completed by anyone with network access with high accuracy. With ShellShock, the highest form of accuracy to test for this is to perform a patch audit. IT auditing shops that don’t have mature relationships with their IT administrators may not be able to audit for this.

 

Detecting the exploit of this is tricky. There are network IDS rules to detect the attack on unencrypted (non-SSL) web servers, but IDS rules to look for this attack over SSL or SSH won’t work. Instead, solutions which can monitor the commands run by servers and desktops can be used to identify commands which are new, anomalistic and suspect.”
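A toy illustration of the command-baselining idea Gula describes (the file names and one-command-per-line log format are assumptions for the example; real deployments would draw on auditd, EDR or similar telemetry):

    import json

    def new_commands(baseline_path: str, todays_log_path: str):
        """Report command lines seen today that never appeared in the baseline."""
        with open(baseline_path) as fh:
            baseline = set(json.load(fh))        # e.g. ["/usr/sbin/httpd -k start", ...]
        seen_today = set()
        with open(todays_log_path) as fh:
            for line in fh:                      # one executed command line per log line
                seen_today.add(line.strip())
        return sorted(seen_today - baseline)

    for cmd in new_commands("baseline_commands.json", "commands_today.log"):
        print("anomalous command:", cmd)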

Mike Spanbauer, Managing Director of Research, NSS Labs:

“Bash is an interpretive shell that makes a series of commands easy to implement on a Unix derivative. Linux is quite prevalent today throughout the Web, both as a commerce platform and as a commercial website platform. Bash happens to be the default script shell for Unix, Linux, well… you get the picture.

The core issue is that the vulnerability highlights the ease with which an attacker might take over a Web server running CGI scripting and ultimately ‘get shell’, which offers the attacker the means to reconfigure the access environment, get to sensitive data or compromise the victim machine in many ways.

As we get to the bottom of this issue, it will certainly be revealed just how bad this particular discovery is – but there is a chance it’s bigger than Heartbleed, and that resulted in thousands of admin hours globally applying patches and fixes earlier this year.”

Contrast Security CTO and co-founder Jeff Williams:

 “This is a pretty bad bug. The problem happens because bash supports a little used syntax for ‘exported functions’ – basically a way to define a function and make it available in a child shell.   There’s a bug that continues to execute commands that are defined after the exported function.

So if you send an HTTP request with a referrer header that looks like this: Referer:() { :; }; ping -c 1 11.22.33.44. The exported function is defined by this crazy syntax () { :; };  And the bash interpreter will just keep executing commands after that function.  In this case, it will attempt to send a ping request home, thus revealing that the server is susceptible to the attack.

Fortunately there are some mitigating factors.  First, this only applies to systems that do the following things in order: 1) Accept some data from an untrusted source, like an HTTP request header, 2) Assign that data to an environment variable, 3) Execute a bash shell (either directly or through a system call).

If they send in the right data, the attacker will have achieved the holy grail of application security: ‘Remote Command Execution.’  An RCE basically means they have completely taken over the host.

Passing around data this way is a pretty bad idea, but it was the pattern back in the CGI days.  Unfortunately, there are still a lot of servers that work that way.  Even worse, custom applications may have been programmed this way, and they won’t be easy to scan for.  So we’re going to see instances of this problem for a long long time.”
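The exported-function behavior Williams describes can be verified locally with the widely circulated one-liner; the Python wrapper below is a sketch of the same check, assuming bash is on the PATH: it plants a crafted environment variable and sees whether a child bash executes the smuggled command.

    import os
    import subprocess

    def bash_is_vulnerable() -> bool:
        """Run the classic CVE-2014-6271 check against the local bash binary."""
        env = dict(os.environ, testvar="() { :; }; echo VULNERABLE")
        result = subprocess.run(
            ["bash", "-c", "echo harmless"],
            env=env, capture_output=True, text=True,
        )
        # A patched bash prints only 'harmless'; a vulnerable bash also executes the
        # command smuggled in after the exported-function definition.
        return "VULNERABLE" in result.stdout

    print("vulnerable bash" if bash_is_vulnerable() else "bash looks patched")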

Tal Klein, Vice President of Strategy at Adallom:

 “What I don’t like to see is people comparing Shellshock to Heartbleed. Shellshock is exponentially more dangerous because it allows remote code execution, meaning a successful attack could lead to the zombification of hosts. We’ve already seen one self-replicating Shellshock worm in the wild, and we’ve already seen one patch circumvention technique that requires patched Bash to be augmented in order to be ‘truly patched’. What I’m saying is that generally I hate people who wave the red flag about vulnerabilities, but this is a 10 out of 10 on the awful scale and poses a real threat to core infrastructure. Take it seriously.”

Michael Sutton, Vice President of Security Research at Zscaler:

“Robert Graham has called the ‘Shellshock’ vulnerability affecting bash ‘bigger than Heartbleed.’ That’s a position we could defend or refute; it all depends upon how you define bigger. Will more systems be affected? Definitely. While both bash and OpenSSL, which was impacted by Heartbleed, are extremely common, bash can be found on virtually all *nix systems, while the same can’t be said for OpenSSL, as many systems simply wouldn’t require SSL communication. That said, we must also consider exploitability, and here is where I don’t feel that the risk posed by Shellshock will eclipse Heartbleed.

Exploiting Heartbleed was (is) trivially easy. The same simple malformed ‘heartbeat’ request would trigger data leakage on virtually any vulnerable system. This isn’t true for Shellshock as exploitation is dependent upon influencing bash environment variables. Doing so remotely will depend upon the exposed applications that interact with bash. Therefore, this won’t quite be a ‘one size fits all’ attack. Rather, the attacker will first need to probe servers to determine not only those that are vulnerable, but also how they can inject code into bash environment variables.

The difference here is that we have to take application logic into account with Shellshock, and that was not required with Heartbleed. That said, we’re very much in the same boat, having potentially millions of vulnerable machines, many of which will simply never be patched. Shellshock, like Heartbleed, will live on indefinitely.”

Mamoon Yunus, CEO of Forum Systems: 

“The Bash vulnerability has the potential to be much worse than Heartbleed. Leaking sensitive data is obviously bad but the Bash vulnerability could lead to losing control of your entire system.

The Bash vulnerability is a prime example of why it’s critical to take a lockdown approach to open, free-for-all shell access, a practice that is all too common for on-premise and cloud-based servers. Mobile applications have caused an explosion in the number of services being built and deployed. Such services are hosted on vanilla Linux OS variants with little consideration given to security and are typically close to the corporate edge. Furthermore, a large number of vendors use open Linux OSes, install their proprietary functionality, and package commercial network devices that live close to the network edge at Tier 0. They do so with full shell access instead of building a locked-down CLI for configuration.

The Bash vulnerability is a wake-up call for corporations that continue to deploy business functionality at the edge without protecting their services and API with hardened devices that do not provide a shell-prompt for unfettered access to OS internals for anyone to exploit.”

Jody Brazil, CEO of FireMon:

“This is the kind of vulnerability that can be exploited by an external attacker with malicious intent. So, how do those coming from the Internet, partner networks or other outside connections gain access to this type of exposure?

An attack vector analysis that considers network access through firewalls and address translation can help identify which systems are truly exposed. Then, determine if it’s possible to mitigate the risk by blocking access, even temporarily. In those cases where this is not an option, prioritizing patching is essential. In other cases, for example where there is remote access to a vulnerable system that is not business-critical, access can be denied using existing firewalls.

This helps security organizations focus their immediate patching efforts and maximize staffing resources. It’s critical to identify the greatest risk and then prioritize remediation activities accordingly. Those are key best practices to address Bash or any vulnerability of this nature.”

Mark Stanislav, Security Researcher at Duo Security:

“While Heartbleed eventually became an easy vulnerability to exploit, it was ultimately time consuming, unreliable and rarely resulted in ‘useful’ data output. Shell Shock, however, effectively gives an attacker remote code execution on any impacted host with a much easier means to exploit than Heartbleed and greater potential results for criminals.

Once a web application or similarly afflicted application is found to be vulnerable, an attacker can do anything from download software, to read/write system files, to escalating privilege on the host or across internal networks. More damning, of course, is that the original patch to this issue seems to be flawed and now it’s a race to get a better patch released and deployed before attackers leverage this critical bug.”

Rob Sadowski, Director of Technology Solutions at RSA:

“This is a very challenging vulnerability to manage because the scope of potentially affected systems is very large, and can be exploited in a wide variety of forms across multiple attack surfaces. Further, there is no single obvious signature to help comprehensively detect attempts to exploit the vulnerability, as there are so many apps that access BASH in many different ways.

Because many organizations recently had to manage a vulnerability of similarly broad scope in Heartbleed, they may have improved their processes to rapidly identify and remediate affected systems, which they can leverage in their efforts here.”

Joe Barrett, Senior Security Consultant, Foreground Security:

“Right now, Shellshock is making people drop everything and scramble to deploy patches. Security experts are still expanding the scope of the vulnerability, finding more devices and more methods by which it can be exploited. But so far I haven’t seen anyone get hacked and be able to turn around, point, and say ‘it was because of Shellshock.’

If you have a Linux box, patch it. Now. Do you have a Windows box using Cygwin? Update Cygwin to patch it. And then start trying to categorize all of the ‘other’ devices on the network and determine whether they might be vulnerable. Because chances are a lot of them are.

Unfortunately, vendors probably will never release patches to solve this for most appliances, because most [Internet-connected] appliances don’t even provide a way to apply such an update. But for the most part all you can do is try to identify affected boxes and move them behind firewalls and out of the way of anyone’s ability to reach them. Realistically, we’ll probably still be exploiting this bug in penetration tests in 8 years. Not to mention all of the actual bad guys who will be exploiting this.”
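For individual Linux or Cygwin hosts, the widely circulated local test for the original CVE-2014-6271 flaw can be scripted as well. A minimal Python sketch follows; the variable name and marker are arbitrary, and it only covers the first of the related bash CVEs, so a clean result does not mean the follow-up patches can be skipped:

    import os
    import subprocess

    # A bash vulnerable to CVE-2014-6271 imports the function definition from
    # any environment variable and also executes the trailing command.
    payload = "() { :; }; echo VULNERABLE"
    env = dict(os.environ, testvar=payload)

    # Requires bash on the PATH (e.g. /bin/bash, or Cygwin's bash.exe on Windows).
    result = subprocess.run(["bash", "-c", "echo probe"], env=env,
                            capture_output=True, text=True)

    if "VULNERABLE" in result.stdout:
        print("This bash still executes trailing commands: patch immediately")
    else:
        print("Trailing command ignored; CVE-2014-6271 at least appears fixed")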

Until Next Friday…Have a Great Weekend!

Related Reading: What We Know About Shellshock So Far, and Why the Bash Bug Matters


Will Technology Replace Security Analysts?

Posted on September 15, 2014 by Joshua Goldfarb in Security

Recently, at a round table discussion, I heard someone make the statement, “In five years, there will be no more security analysts. They will be replaced by technology.” This is not the first time I have heard a statement along these lines. I suppose that these sorts of statements are attention-grabbing and headline-worthy, but I think they are a bit naïve, to say the least.

Taking a step back, it seems to me that this statement is based on the belief or assumption that operational work being performed within the known threat landscape of today can be fully automated within five years. I don’t know enough about the specific technologies that would be involved in that endeavor to comment on whether or not that is an errant belief. However, I can make two observations, based on my experience, which I believe are relevant to this specific discussion:

• Operational work tends to focus on known knowns

• The threat landscape of today, both known and unknown, will not be the threat landscape of tomorrow

The work that is typically performed in a security operations setting follows the incident response process of: Detection, Analysis, Containment, Remediation, Recovery, and Lessons Learned. Detection is what kicks off this process and what drives the day-to-day workflow in a security operations environment. If we think about it, in order to detect something, we have to know about it. It doesn’t matter if we learn of it via third party notification, signature-based detection, anomaly detection, or any other means.

The bottom line is that if we become aware of something, it is by definition “known”. But what percentage of suspicious or malicious activity that may be present within our organizations do we realistically think is known? I don’t know of a good way to measure this, since it involves a fair amount of information that is unknowable. I do, however, think we would be naïve to think it is anywhere near 100%.

If we take a step back, the ramifications of this are quite striking. In essence, most of the work we are performing today involves what is likely a mere fraction of what ought to concern us. Even if technology could automate all of today’s security operations functions within five years’ time, that still leaves quite a bit of work undone.

I think we would also be naïve to think that the threats of today, both known and unknown, will be the threats of tomorrow. If I think back five or ten years, I’m not sure how many of us foresaw the degree to which intrusions involving large-scale payment card theft would become an almost regular occurrence. Granted, theft of sensitive information has been an issue for quite some time, but not to the degree that it has been in the recent past. Payment card theft is now a threat that many organizations take very seriously, whereas five or ten years ago, it may have been a threat that only certain specific organizations would have taken seriously. This is merely an example, but my main point here is that we can’t view today’s threat landscape as a base upon which to build predictions and make assertions for the future.

In my experience, analysts can provide unique value that may not be obvious to those who have not worked in the role. For those who don’t know, I worked as an analyst for many years before moving over to the vendor side. It is from that experience that I make this point.

In a mature security operations setting, there will be a certain workflow and process. Some organizations will be in a more mature place, while other organizations will be in a less mature place. Regardless of where an organization finds itself, there will always be room to improve. Alongside performing the tasks required by the process, a good analyst will make the process better and improve the maturity of the organization. This can happen in many ways, but here are a few different approaches that I have often seen:

• Improving existing alerting

• Identifying automation opportunities

• Performing gap analysis

• Implementing new alerting

Any work queue will have both signal (true positives) and noise (false positives). A mature, efficient security operations program will have a high enough signal-to-noise ratio to allow a reasonable chance of timely incident detection. Regardless of the signal-to-noise ratio, the alerting that populates the work queue can always be improved. As the people most familiar with the ins and outs of the various alerts, analysts play an important role here. They can provide a unique perspective on tuning and improving alerts to make them less noisy and to ensure they keep up with the times.
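As a rough illustration of what that tuning work looks like in practice, an analyst might track per-rule precision from a week of triage outcomes and flag the noisiest rules for rework. A minimal sketch; the rule names, counts and the 5% threshold are entirely hypothetical:

    # Hypothetical weekly triage outcomes per alert rule.
    weekly_triage = {
        "Inbound port scan":  {"true_positive": 2, "false_positive": 480},
        "Outbound beaconing": {"true_positive": 9, "false_positive": 41},
    }

    for rule, counts in weekly_triage.items():
        total = counts["true_positive"] + counts["false_positive"]
        precision = counts["true_positive"] / total
        # Arbitrary threshold: rules below 5% signal get flagged for tuning.
        action = "tune or retire" if precision < 0.05 else "leave as-is"
        print(f"{rule}: {precision:.1%} signal -> {action}")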

It is certainly true that some alerts follow a nearly identical sequence of events each time they are vetted, qualified, and investigated. These cases are good candidates for automation, but they won’t identify themselves. A skilled analyst is needed to identify those repetitive manual tasks best suited for automation. Automation is a good thing and should be leveraged whenever appropriate, but it will never replace the analyst.
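To make the point concrete, the kind of repetitive decision an analyst might hand to automation can be as simple as auto-closing alerts that always resolve the same way. A minimal sketch; the field names, signature and allowlist are invented for illustration and would come from whatever alert pipeline an organization actually runs:

    # Hypothetical allowlist of internal vulnerability scanners that routinely
    # trip the same detection and are closed as benign every time.
    KNOWN_BENIGN_SCANNERS = {"10.0.5.12", "10.0.5.13"}

    def triage(alert: dict) -> str:
        """Auto-close a well-understood repetitive pattern; escalate the rest."""
        if (alert.get("signature") == "Inbound port scan"
                and alert.get("src_ip") in KNOWN_BENIGN_SCANNERS):
            return "auto-close"      # identical outcome every time a human vets it
        return "escalate"            # anything else still needs analyst judgment

    for alert in [
        {"signature": "Inbound port scan", "src_ip": "10.0.5.12"},
        {"signature": "Outbound beaconing", "src_ip": "192.0.2.44"},
    ]:
        print(alert["signature"], "->", triage(alert))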

With automation comes newly liberated analyst cycles. Those cycles can and should be used to hunt, dig, and perform gap analysis. Hunting and digging help to identify unknown unknowns – network traffic or endpoint activity for which the true nature is unknown. Gap analysis serves to identify points within the organization where proper network and endpoint telemetry may not exist. All these activities help provide a window into the unknown. After all, today’s unknown may be tomorrow’s breach.

When unknown unknowns are discovered, they should be studied to understand their true nature. This process turns them into new known knowns. And it is from this pile that new alerting is continuously formulated. The analyst is an invaluable resource in turning unknown unknowns into known knowns. Based on my experience, there is no shortage of unknown unknowns waiting to be investigated.

A good analyst is hard to find and is a critical resource within a mature security operations function. Although it may be tempting to envision a world where the analyst has been fully automated, this does not seem particularly reasonable. Rather, the work of the analyst can and must evolve over time to keep pace with the changing threat landscape.

Joshua Goldfarb (Twitter: @ananalytical) is Chief Security Strategist of the Enterprise Forensics Group at FireEye and has over a decade of experience building, operating, and running Security Operations Centers (SOCs). Before joining nPulse Technologies (acquired by FireEye) as its Chief Security Officer (CSO), he worked as an independent consultant, advising numerous clients in both the public and private sectors at strategic and tactical levels. Earlier in his career, Goldfarb served as Chief of Analysis for US-CERT, where he built from the ground up and subsequently ran the network, physical media and malware analysis/forensics capabilities. Goldfarb holds a B.A. in Physics and an M.Eng. in Operations Research and Information Engineering from Cornell University.


Insider vs. Outsider Threats: Can We Protect Against Both?

Posted on June 26, 2014 by Marc Solomon in Security

Media reports affirm that malicious insiders are real. But unintentional or negligent actions can introduce significant risks to sensitive information too. Some employees simply forget security best practices or shortcut them for convenience reasons, while others just make mistakes.

Some may not have received sufficient security awareness training and are oblivious to the ramifications of their actions or inactions. They inadvertently download malware, accidentally misconfigure systems, or transmit and store sensitive data in ways that place it at risk of exposure.

Personnel change too. Companies hire new employees, and promote and transfer individuals to new roles. They augment staff with temporary workers and contractors. New leadership comes on board. Many of these insiders require legitimate access to sensitive information, but needs differ with changing roles, tenure, or contract length. It’s extremely challenging to manage user identities and access privileges in this environment, not to mention the people themselves. A person who was once trustworthy might gradually become an insider threat, while another becomes a threat immediately, overnight.

New technologies and shifting paradigms further complicate matters. The evolving trends of mobility, cloud computing and collaboration break down the traditional network perimeter and create complexity. While these new tools and business models enhance productivity and present new opportunities for competitive advantage, they also introduce new risks.

At the same time, you can’t ignore outsider threats, which are responsible for the lion’s share of breaches. Since 2008, the Verizon Data Breach Investigations Report has shown that external actors – not insiders – are responsible for the vast majority of the breaches it investigated. Some of the top reasons why breaches were successful include weak credentials, malware propagation, privilege misuse, and social tactics. These are precisely the types of weaknesses that trace back to the actions (or inactions) of insiders.

The question isn’t whether to focus on the insider or outsider threat. The question is how to defend against both – equally effectively.

What’s needed is a threat-centric approach to security that provides comprehensive visibility, continuous control, and advanced threat protection regardless of where the threat originates. To enable this new security model, look for technologies that are based on the following tenets:

Visibility-driven: Security administrators must be able to accurately see everything that is happening. When evaluating security technologies, breadth and depth of visibility are equally important to gain knowledge about environments and threats. Ask vendors if their technologies will allow you to see and gather data from a full spectrum of potential attack vectors across the network fabric, endpoints, email and web gateways, mobile devices, virtual environments, and the cloud. These technologies must also offer depth, meaning the ability to correlate that data and apply intelligence to understand context and make better decisions.

Threat-focused: Modern networks extend to wherever employees are, wherever data is, and wherever data can be accessed from. Keeping pace with constantly evolving attack vectors is a challenge for security professionals and an opportunity for insider and outsider threats. Policies and controls are essential to reduce the surface area of attack, but breaches still happen. Look for technologies that can also detect, understand, and stop threats once they’ve penetrated the network and as they unfold. Being threat-focused means thinking like an attacker, applying visibility and context to understand and adapt to changes in the environment, and then evolving protections to take action and stop threats.

Platform-based: Security is now more than a network issue; it requires an integrated system of agile and open platforms that cover the network, devices, and the cloud. Seek out a security platform that is extensible, built for scale, and can be centrally managed for unified policy and consistent controls. This is particularly important since breaches often stem from the same weaknesses regardless of whether they result from insider actions or an external actor. This constitutes a shift from simply deploying point security appliances that create security gaps, to integrating a true platform of scalable services and applications that are easy to deploy, monitor, and manage.

Protecting against today’s threats – whether they originate from the inside or the outside – is challenging either way. But the two have a lot in common, tapping into many of the same vulnerabilities and methods to accomplish their missions. There’s no need to choose which to prioritize as you allocate precious resources. With the right approach to security, you can protect your organization’s sensitive information from both insiders and outsiders.

Marc Solomon, Cisco’s VP of Security Marketing, has over 15 years of experience defining and managing software and software-as-a-service platforms for IT Operations and Security. He was previously responsible for the product strategy, roadmap, and leadership of Fiberlink’s MaaS360 on-demand IT Operations software and managed security services. Prior to Fiberlink, Marc was Director of Product Management at McAfee, responsible for leading a $650M product portfolio. Before McAfee, Marc held various senior roles at Everdream (acquired by Dell), Deloitte Consulting and HP. Marc has a Bachelor’s degree from the University of Maryland, and an MBA from Stanford University.
