A Strategic Sea-Change in Protecting the Security of Private Data
Posted on March 4, 2014 by Kara Dunlap in Security
Balancing data privacy and data security is a long-standing information security challenge. Historically, companies focused their response efforts on establishing strong perimeter and endpoint controls; data was considered at risk from external actors and protected by encryption, DLP, and network controls, but often left open to insiders regardless of role or need to know. Success and failure were measured in terms of data access: if an outsider was able to read company data, the security program had failed.
The public cloud has changed this model, however. The very market forces that sparked the explosive adoption of public cloud platforms (mobile technology, a robust app market, consumerization of IT, and the technological convergence of our personal and professional lives) have rewritten the rules for how and where users are accessing and sharing their information. In allowing employees to bring their devices to work, organizations have created expectations around access and efficiency that are radically different from the top-down control model that dominated the previous decade. More importantly, the decision as to whether to implement public cloud technologies such as SaaS applications has been made already, by those very users; fail to address their needs, and they will simply use consumer-grade alternatives of their own accord.
As security professionals, our initial response (simply blocking all applications coming in from a cloud environment) is no longer the most appropriate or most effective way to respond to the market’s demands for information protection and security. Where companies establish restrictive controls, end users are presented with myriad options for circumventing them. Collaboration technologies were once the domain of IT, but they have become democratized; end users familiar with traditionally consumer-focused apps such as Dropbox or Box are likely to bring those technologies into play if sanctioned alternatives like Google Apps or Salesforce are locked down by organizational policies that prevent them from working at maximum efficiency.
In response, organizations need to rethink how they approach the challenge of data management. Engaging the user when working through data security is something that most companies have come to accept; the question that remains is how they can also enforce data privacy rules, through which highly sensitive information is protected from inadvertent exposure and external threat, without driving users “underground” into consumer-grade file-sharing applications.
A Change in Expectations
End users often feel comfortable working with familiar apps that have not been subject to a security review because they do not see evidence of risk. As an industry trend, this is understandable; even catastrophic data breaches often go undetected by IT and InfoSec teams for months prior to discovery.
The delay in detection is not equivalent to a delay in damage, however. Even if a given file is only theoretically externalized, and no indicators suggest that sensitive or regulated data has been viewed by a malicious party, the exposure itself can be a data breach sufficient to warrant regulatory response.
Are your people the problem, or the solution?
What needs to change is the perception that IT’s primary role is safeguarding data and blocking outsiders from viewing it. Treating the company’s employees as the source of risk is counterproductive when formulating a solution. Given the tremendous autonomy that the cloud grants the typical user today, especially when users own and control the endpoint devices being used to access organizational information, it is clear that security needs to make everyone who interacts with sensitive data and systems a participant in (and even a custodian of) information security.
Putting the Pieces Together
Training is a fundamental part of the change process. Information security threats are constantly evolving and changing; to assume that your people inherently have a full understanding of the risks they are confronted with and the appropriate skills to respond is foolhardy. Make them aware of the risks, make them aware of the practices they should follow to protect data security, and importantly, make them aware that their performance in safeguarding information assets can and will be measured.
Supporting this effort requires the implementation of a risk appropriate response framework: content awareness to differentiate sensitive and mundane data, encryption where it makes sense, and the ability to easily and efficiently monitor your total risk space. Consider the following elements:
– Content Awareness: the ability to discover and classify information assets on the network that belong inside the secure perimeter, right down to the level of individual words and numbers. This allows you to flag files containing potentially sensitive data such as social security numbers, health information, credit card data, or internal IP, without manually parsing the contents.
– Risk-appropriate Encryption: Encryption is a tool, and a necessary component to a good security framework, but it is not a solution in itself. It should be an iterative response, one that builds on the content-aware policies that an organization puts in place; ideally, users will be able to self-select which files should be encrypted, to add a defense-in-depth security layer to their sharing activities. This might then be extended by policy-driven encryption actions, which can automatically encrypt files considered highly sensitive; note that this is different from universally applied encryption designed to establish a perimeter, but without any means of protecting against insider threat.
– Consolidated Security View: As mentioned above, one of the primary challenges around information security is how to narrow the gap between an incident and its detection. Any strategy designed to support a cloud security model should address this; a particularly effective approach will entail the consolidation of incidents into a single interface, highlighting policy violations, end-user data access activities, geo-awareness regarding logins and data access, and application risk in a single view.
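The content-awareness and policy-driven encryption elements above can be sketched in a few lines. This is a minimal illustration only, not a product implementation: the patterns and the encrypt/allow policy are invented here for demonstration, and real DLP tools rely on validation checks, context, and proximity rules rather than bare regexes.

```python
import re

# Illustrative detectors only; patterns here are assumptions for the sketch.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(text: str) -> set:
    """Return the set of sensitive-data categories detected in the text."""
    return {name for name, rx in PATTERNS.items() if rx.search(text)}

def policy_action(text: str) -> str:
    """Map the content classification to a risk-appropriate handling action."""
    return "encrypt" if classify(text) else "allow"
```

In this sketch, flagged content triggers encryption automatically while mundane data passes through untouched, mirroring the iterative, content-aware policy approach described above.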
Importantly, by enlisting information workers as part of the data security system, this total-solution approach changes the equation in security management. The organization’s staff can become a vital part of protecting sensitive information assets rather than working at cross-purposes with InfoSec efforts; instead of being pushed out of the environment and into consumer apps, users can become security perimeters unto themselves.
The cloud is already here; talking about adoption in 2014 is passé, because users have found, and will continue to find, ways to move your data into cloud platforms, and will do so even more quickly when forced by overly coercive policies. Instead of trying to obfuscate and block, or worse, attempting to solve for a threat model that no longer applies (perimeter-only security), change your focus. We as an industry are on the cusp of a technological paradigm shift; you need to decide whether you will embrace that change, or be cast aside by it.
RSA: The Cyber Security Gap in Education
Posted on February 26, 2014 by Kara Dunlap in Security
SAN FRANCISCO – In last year’s workforce study from (ISC)², 56 percent of those surveyed said their security organization was short-staffed. A year later, figuring out what to do about that remains a challenge, and it is one not far from the minds of some of the attendees at the RSA Conference.
One answer may be to make sure that all aspects of IT consider security as a critical part of their operation. But that process often gets off to a rocky start for aspiring IT professionals, as many universities are not doing a good enough job of educating students on security – particularly those not going directly into the security field, argued Jacob West, HP’s CTO of Enterprise Security Products.
“Honestly I think we’re doing almost nothing at the university level today to teach security,” he told SecurityWeek at the conference, where he presented on the topic earlier in the day.
For those pursuing a career in cyber-security, there is at least a clear career path and opportunities, he said. But for anyone seeking a career in IT where security is not their primary responsibility, the danger of security falling through the cracks is very real.
“[Developers] are not getting realistic expectations placed on them at the university level around the kind of coding that they do,” he said. “They are basically asked to provide certain functionality…and are supposed to provide it with a certain level of performance perhaps – some cases not even that – but they’re not expected to provide it in a robust way. They are not graded against frankly the same standards that code in the real world is graded against today, which is being in an adversarial environment and where a small mistake can lead to a huge security problem.”
Adding to the challenge of preparing a workforce is the dynamic realities of IT security, where change is perhaps the only constant. In a panel discussion, representatives from security certification body (ISC)² stressed that seeking professional certifications can help not only bolster an employee’s credentials, but also serve as proof of expertise regarding real-world situations.
The test for the group’s CISSP certification is updated with new questions every few months, and the test must be retaken every three years for the credential to stay in good standing, Vehbi Tasar, director of professional programs development for (ISC)², explained to SecurityWeek. When it comes to education, he said, the best learning usually comes on the job.
“All good security people learned their job doing the job,” he said. “They didn’t learn at the university. That is a big gap in my opinion because universities are teaching just the basic stuff. They are not necessarily teaching different angles that people will encounter. They cannot really; you cannot expect them to do it.”
West said during his presentation he would like to see additional programs from both the government and the tech industry to support those seeking to get into the field, and added later that it was critical to recruit women, who he said as a group continue to be underrepresented in IT security. To that end, earlier in the week, HP announced it was making $250,000 available in scholarships for women studying information security.
“It’s not as simple as adding a new class on security,” he said. “It’s the idea that we have to build security and the requirements of robust programming into everything we teach at the university level, and that’s a much broader problem.”
Asus Patches Firmware Security Vulnerability
Posted on February 18, 2014 by Kara Dunlap in Security
It is not uncommon for vendors to issue security advisories. This time, however, it appears a hacker gave at least one victim an unexpected heads-up.
According to Ars Technica, a user of an Asus router uncovered a text file on his external hard drive. The message read as follows: “This is an automated message being sent out to everyone effected. Your Asus router (and your documents) can be accessed by anyone in the world with an Internet connection.”
The note also instructed the user to read information on how to protect against the attack, which took advantage of a vulnerability uncovered last year by researcher Kyle Lovett. According to Lovett, the issue allows hackers to “traverse to any external storage plugged in through the USB ports on the back of the router.”
Asus did not respond to a request for comment on the issue. However, Softpedia reported that the vulnerability was addressed last week in a firmware update by Asus.
Earlier this month, a list of nearly 13,000 IP addresses reportedly tied to the vulnerable routers was posted on the Internet. The names of files stored on impacted users’ hard drives were reportedly published as well.
The list of impacted routers includes RT-N66U, RT-N66R, RT-AC56U, RT-N56R, RT-N56U, RT-N14U, RT-N16, RT-N16R, RT-AC66R and RT-AC66U. More information about the updates for each model can be found here.
Just recently, researchers at the SANS Institute warned about a worm exploiting a vulnerability in several Linksys routers. The worm, dubbed ‘TheMoon’, takes advantage of a flaw that has since been patched by Linksys. Users are advised to apply the relevant updates.
Introduction and Welcome – Security Metrics
Posted on February 13, 2014 by Kara Dunlap in Security
This is the beginning of a series of postings I’ll be doing on security metrics. It’s a topic that I don’t think we, as a community, have a particularly good grasp of – probably because security, as a field, is only just beginning to professionalize to the point where (in some markets) it’s getting more than a nod as a necessary evil. I can’t even imagine the number of times in my career that I have heard a security practitioner say something like, “We have to speak to executives in the language of business!” which often gets mistaken for “use lots of PowerPoint and buzzwords” but which really means: Be able to quantify what you’re talking about. And that’s where metrics come in.
Lord Kelvin – If you really understand metrics, maybe you’ll have a unit of measurement named after you, like he did. What is the unit of measurement for computer security, anyway?
During the course of this series I’m going to hit on a range of topics, from why metrics are important and what they are, to bottom-up analysis of your business process and top-down analysis of your mission, then the problems of normalization and data-sharing, as well as suggestions on how to present data. I’m not going to pretend to you that metrics are insanely exciting as a field, because they aren’t. On the other hand, metrics are how you learn where you’re going and, as the great quip goes, “If you don’t know where you’re going, how will you know when you get there?” William Thomson, Lord Kelvin, once observed that “If you can not measure it, you can not improve it.” That, in short, is almost all we’d have to say about metrics in computer security – our goal is to improve, and right now many of us are following popular fads or traditions, instead of seriously studying what we’re doing.
Thomson also said, “There is nothing new to be discovered in physics now. All that remains is more and more precise measurement,” which goes to show you that it’s a bad idea to think that your field of study is immune to change. Computer security is hardly immune to change – in fact it’s more characterized by constant flux than anything else – which is what makes it so hard: we are chasing a mixture of configurations and practices that surround a variety of applications and protocols, all of which are mutating at a very high rate. When I started in this field, the first firewall I built needed to handle only 5 protocols (DNS, NNTP, SMTP, FTP, and TELNET), and it didn’t even need to support a full command-set for those protocols. Today, the complexity of security has grown out of all proportion to the number of applications and protocols.
In other words, it’s a good time to be alive. Enjoy the ride.
Next up: Why should you care about metrics? We’ll look at where metrics fit into the organizational structure and why they are an important part of executive knowledge.
Samsung KNOX Security Software Embedded in Galaxy S4 Vulnerable, Researchers Say
Posted on December 26, 2013 by Kara Dunlap in Security
Researchers have reportedly found a vulnerability in a security system embedded in Samsung’s Galaxy S4 smartphone that could allow an attacker to steal data.
Security researchers at Ben-Gurion University of the Negev in Israel uncovered vulnerabilities in Samsung’s KNOX security solution. The findings were first reported by the Wall Street Journal, which noted that KNOX is currently being reviewed by the U.S. Department of Defense and other government agencies for potential use. Aimed at Google Android devices, KNOX includes the ability to enforce the separation of information through containerization as well as a secure boot and kernel monitoring capabilities.
According to researchers at BGU’s Cyber Security Labs, the issue makes interception of data communications between the secure container and the external world – including file transfers and emails – relatively easy.
“To us, Knox symbolizes state-of-the-art in terms of secure mobile architectures and I was surprised to find that such a big ‘hole’ exists and was left untouched,” Ph.D. student Mordechai Guri said in a statement. “The Knox has been widely adopted by many organizations and government agencies and this weakness has to be addressed immediately before it falls into the wrong hands. We are also contacting Samsung in order to provide them with the full technical details of the breach so it can be fixed immediately.”
Guri, who is part of a team of BGU researchers that focus on mobile security and other cyber-issues, uncovered the vulnerability while performing an unrelated research task. According to BGU, KNOX’s secure container is supposed to ensure that all data and communications that take place within it are protected; even if a malicious application attacks an area outside the secure container, the protected data should remain inaccessible under all circumstances.
However, researchers found that that is not the case.
“To solve this weakness, Samsung may need to recall their devices or at least publish an over the air software fix immediately,” said Dudu Mimran, chief technology officer of the BGU labs, in the statement. “The weakness found may require Samsung to re-think a few aspects of their secure architecture in future models.”
Samsung did not respond to a request for comment from SecurityWeek. However, the company told the Wall Street Journal that it was investigating the matter, and that preliminary investigation has found that the researchers’ work seems to be based on a device that was not equipped with features that a corporate client would use alongside Knox.
“Rest assured, the core Knox architecture cannot be compromised or infiltrated by such malware,” the Samsung spokesperson told the Wall Street Journal.
Alleged NSA Payment to RSA Raises New Fears of Gov’t Undermining Crypto Security
Posted on December 23, 2013 by Kara Dunlap in Security
During the past several months, leaks about the NSA’s electronic surveillance operations have pooled into a river that has spilled into calls for reform.
The most recent drop in that river is a report from Reuters that the NSA paid RSA $10 million to ensure a vulnerable encryption algorithm was used by default in RSA’s BSAFE toolkit. RSA, now a division of EMC, denied ever entering into a contract or being involved in any project with the intention of weakening its products. Still, the report, which was based on sources familiar with the contract, has sparked additional questions about collusion between the tech industry and intelligence agencies.
“The bad part is – if the story is true – the very, very large downside is that it’s compromising a security product,” said John Pescatore, director of emerging security trends at SANS Institute. “It’s one thing if somebody buys a switch or a typewriter or whatever you are not expecting it to sort of protect you…crypto, you are. You’re buying security products with the assumption that the company selling them to you is selling the most secure products. So if NSA has been successful at getting companies like RSA or Microsoft or any of them to compromise the security of their products, that’s sort of taking it to a different level than we have seen in the past.”
In September, leaks by former NSA contractor Edward Snowden led to media reports that the NSA had engaged in an effort to insert vulnerabilities into commercial encryption systems so that it could more easily decrypt communications. Last week, Reuters reported the agency created a backdoor in the Dual Elliptic Curve Deterministic Random Bit Generator (Dual EC DRBG) that could be exploited, and then pushed for RSA to adopt it. Problems with the algorithm have been known for several years, though RSA continued to use it in BSAFE until NIST [National Institute of Standards and Technology] withdrew its support for the standard in September in the wake of growing concerns.
Last week, the Obama administration’s Review Group on Intelligence and Communications Technologies released a report in which it recommended the NSA abandon efforts to undermine cryptographic standards.
“The US Government should take additional steps to promote security, by (1) fully supporting and not undermining efforts to create encryption standards; (2) making clear that it will not in any way subvert, undermine, weaken, or make vulnerable generally available commercial encryption; and (3) supporting efforts to encourage the greater use of encryption technology for data in transit, at rest, in the cloud, and in storage,” according to the report.
“Recent press coverage has asserted that RSA entered into a ‘secret contract’ with the NSA to incorporate a known flawed random number generator into its BSAFE encryption libraries,” RSA said in a statement. “We categorically deny this allegation. We have worked with the NSA, both as a vendor and an active member of the security community. We have never kept this relationship a secret and in fact have openly publicized it. Our explicit goal has always been to strengthen commercial and government security.”
RSA also said it made the decision to use Dual EC DRBG back in 2004, two years before the NSA approached the company with the alleged deal described in the Reuters report.
“We no longer know whom to trust,” blogged noted cryptographer Bruce Schneier today. “This is the greatest damage the NSA has done to the Internet, and will be the hardest to fix.”
Pescatore, who has worked for the NSA and U.S. Secret Service in the past, said it is a mistake for the NSA to be charged with both the offensive and defensive aspects of the cyber-war, and that the conflicting priorities of those roles can create a mindset where injecting security flaws into encryption standards make sense. Currently, both the NSA and the US Cyber Command are under the direction of NSA Director Gen. Keith Alexander.
The possibility of strong encryption getting into the wrong hands, however, should not be reason enough for the intelligence community to undermine it, Pescatore said. After all, if the NSA can find the backdoor, others can as well, he argued.
“I do not think that there needs to be sort of reduced strength [in] security products in case the bad guys get a hold of them any more than I think people’s houses should use easy to pick locks just in case the police need to get in,” he said.
Providers at 2013 CSA Congress tout cloud security over conventional IT
Posted on December 7, 2013 by Kara Dunlap in Security
At the 2013 CSA Congress, professionals from Microsoft and AWS made the case for why cloud provider security is superior to conventional IT security.
SearchSecurity: Security Wire Daily News
Despite cloud computing security risks, infosec pros learn their role
Posted on October 8, 2013 by Kara Dunlap in Security
As business demands and rogue users introduce cloud computing security risks into numerous organizations, infosec pros know they need to be enablers.
SearchSecurity: Security Wire Daily News