Feedback Friday: ‘Shellshock’ Vulnerability – Industry Reactions
Posted on September 28, 2014 by Kara Dunlap in Security
The existence of a highly critical vulnerability affecting the GNU Bourne Again Shell (Bash) has been brought to light this week. The security flaw is considered by some in the industry to be worse than the notorious Heartbleed bug.
GNU Bash is a command-line shell used in many Linux, Unix and Mac OS X operating systems. The vulnerability (CVE-2014-6271) has been dubbed “Bash Bug” or “Shellshock” and it affects not only Web servers, but also Internet-of-Things (IoT) devices such as DVRs, printers, automotive entertainment systems, routers and even manufacturing systems.
By exploiting the security hole, an attacker can execute arbitrary commands and take over the targeted machine. Symantec believes that the most likely route of attack is through Web servers that use CGI (Common Gateway Interface). There have already been reports of limited, targeted attacks exploiting the vulnerability.
A patch has been made available, but it’s incomplete. Until a permanent fix is rolled out, several organizations have launched Shellshock detection tools. Errata Security has started scanning the Web to find out how many systems are affected, and Symantec has published a video to demonstrate how the flaw can be exploited.
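Alongside those detection tools, a simple local test circulated widely at the time. It is a quick sketch, assuming only that `bash` is on the PATH: it defines an environment variable that looks like an exported function definition with a command smuggled in after the body.

```shell
# On an unpatched bash, the trailing `echo vulnerable` runs as soon as the
# child shell starts, so both lines print; a patched bash treats the
# variable as inert data and prints only "this is a test".
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
```

If the output includes the word "vulnerable", the installed bash still honors the flawed function-import behavior and should be updated.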
The security community warns that the vulnerability can have serious effects, and points out that it could take a long time until all systems are patched.
And the Feedback Begins…
Ian Pratt, Co-founder and EVP at Bromium:
“The ‘shellshock’ bash vulnerability is a big deal. It’s going to impact large numbers of internet-facing Linux/Unix/OS X systems as bash has been around for many years and is frequently used as the ‘glue’ to connect software components used in building applications. Vulnerable network-facing applications can easily be remotely exploited to allow an attacker to gain access to the system, executing with the same privilege the application has. From there, an attacker would attempt to find a privilege escalation vulnerability to enable them to achieve total compromise.
Bash is a very complex and feature-rich piece of software intended for interactive use by power users. It does far more than is typically required for the secondary role in which it is often employed, gluing components together in applications. Thus it presents an unnecessarily broad attack surface — this likely won’t be the last vulnerability found in bash. Application developers should try to avoid invoking shells unless absolutely necessary, or use minimalist shells where required.”
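One way to act on Pratt’s advice when a shell truly must be spawned is to strip the inherited environment first, so variables an attacker may have influenced never reach bash. A minimal sketch (the whitelisted PATH value is an assumption; patching remains the real fix):

```shell
# `env -i` clears the inherited environment; only the explicitly listed
# variables survive into the child shell, so attacker-controlled values
# set in the parent process are discarded before bash starts.
env -i PATH=/usr/bin:/bin bash -c 'env'
```

The child shell sees only a handful of variables (the explicit PATH plus the few bash sets itself), rather than the full, possibly tainted, parent environment.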
Mark Parker, Senior Product Manager at iSheriff:
“This bash vulnerability is going to prove to be a much bigger headache than Heartbleed was. In addition to the general Mac OS X, Linux and Unix systems that need to be patched, there are also thousands upon thousands of Internet-connected, Linux- and Unix-based embedded devices, such as DVRs, home automation systems, automotive entertainment systems, mobile phones, home routers, manufacturing systems and printers.
Most of these devices will be susceptible because most Linux-based devices run bash; it is such an integral part of the Linux OS. I anticipate that we will continue to see fallout from this vulnerability for a long time to come.”
Carl Wright, General Manager of TrapX Security:
“We feel that the industry will take this very seriously and come out with patches for this vulnerability ASAP. It could take us years to understand how many systems were compromised and how many were used to escalate privileges into systems without this vulnerability. The transitive-trust nature of directory architectures and authentication systems could mean we will be living with this far beyond patching the current systems, if this exploit has been taken advantage of at even a small 1% level.”
Coby Sella, CEO of Discretix:
“This is the second time in the last six months that a key infrastructure component used by billions of connected things across a variety of industries has been compromised. We see this problem only getting worse as more and more unsecured, or inadequately secured, things are rolled out without any comprehensive security solution that reaches all the way down to the chipset. Real solutions to this problem must cover every layer from the chipset to the cloud, enabling companies to remotely insert secrets into the chipset layer via secured connections within their private or cloud infrastructure.”
Nat Kausik, CEO, Bitglass:
“Enterprises with ‘trusted endpoint’ security models for laptops and mobile devices are particularly vulnerable to this flaw. Malware can exploit this vulnerability on Unix-based laptops such as Macs and Chromebooks when the user is away from the office, and then spread inside the corporate network once the user returns to the office.”
Steve Durbin, Managing Director of the Information Security Forum:
“The Bash vulnerability simply stresses the point that there is no such thing as 100% security and that we all need to take a very circumspect and practical approach to how we make use of the devices that we use to share data both within and outside the home and our businesses. I have my doubts on whether or not this will lead to a wave of cyber-attacks, but that is not to say that the vulnerability shouldn’t be taken seriously. It is incumbent upon all of us as users to guard our data and take all reasonable precautions to ensure that we are protecting our information as best as we are realistically able.”
Steve Lowing, Director of Product Management, Promisec:
“Generally, the Bash vulnerability could be really bad for systems that are not updated frequently, such as smart devices: IP cameras, appliances, embedded web servers on routers, and so on. The exposure for most endpoints is rapidly being addressed in the form of patches to all flavors of Unix, including Red Hat and OS X. Fortunately for Microsoft, it avoids much of this pain, since most Windows systems do not have Bash installed.
For vulnerable systems, the results could be grave, depending on how they leverage the Bash shell. For example, a web server that uses CGI would likely be configured to use Bash as the shell for executing commands, and compromising such a system via this vulnerability is fairly straightforward. The consequences could include deleting all web content, which could mean service-level agreements (SLAs) are not met because of a complete outage; defacing the site, which tarnishes your brand; or serving as a point of infiltration for a targeted attack, which could mean loss of IP and/or sensitive customer information.
IoT devices are likely at the biggest risk, since many of these devices and appliances are not subject to frequent software updates the way a desktop, laptop or server would be. This could give an attacker many places to break in and lie in wait for sensitive information to come their way.”
Jason Lewis, Chief Collection and Intelligence Officer, Lookingglass Cyber Solutions:
“The original vulnerability (CVE-2014-6271) was patched, but unfortunately the patch did not completely fix the problem. This means even patched systems may remain vulnerable.
Several proofs of concept have been released. The exploit could be turned into a worm, so someone could unleash it to potentially infect a huge number of hosts.”
Ron Gula, Chief Executive Officer and Chief Technical Officer, Tenable Network Security:
“Auditing systems for ShellShock will not be like scanning for Heartbleed. Heartbleed scans could be completed with high accuracy by anyone with network access. With ShellShock, the most accurate way to test is to perform a patch audit. IT auditing shops that don’t have mature relationships with their IT administrators may not be able to audit for this.
Detecting exploitation of this flaw is tricky. There are network IDS rules to detect the attack on unencrypted (non-SSL) web servers, but IDS rules looking for this attack over SSL or SSH won’t work. Instead, solutions that can monitor the commands run by servers and desktops can be used to identify commands that are new, anomalous and suspect.”
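To illustrate Gula’s point about unencrypted traffic: many early network and log signatures simply looked for the characteristic `() {` marker in HTTP headers. A minimal sketch over a fabricated log line (the file path and request are hypothetical, and real IDS rules were considerably more robust than a literal string match):

```shell
# Write one sample access-log line containing the telltale marker, then
# count how many lines match the literal string "() {".
printf 'GET /cgi-bin/status HTTP/1.1 User-Agent: () { :; }; /bin/ping -c 1 x\n' > /tmp/sample_access.log
grep -c '() {' /tmp/sample_access.log   # prints 1
```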
Mike Spanbauer, Managing Director of Research, NSS Labs:
“Bash is an interpretive shell that makes a series of commands easy to run on a Unix derivative. Linux is quite prevalent today throughout the Web, both as a commerce platform and as a commercial website platform. Bash happens to be the default script shell for Unix, Linux, well… you get the picture.
The core issue is the ease with which an attacker might take over a Web server running CGI scripting and ultimately ‘get shell,’ which offers the attacker the means to reconfigure the access environment, get at sensitive data, or compromise the victim machine in many other ways.
As we get to the bottom of this issue, it will certainly be revealed just how bad this particular discovery is – but there is a chance it’s bigger than Heartbleed, and that resulted in thousands of admin hours globally applying patches and fixes earlier this year.”
Contrast Security CTO and co-founder Jeff Williams:
“This is a pretty bad bug. The problem happens because bash supports a little-used syntax for ‘exported functions’ – basically a way to define a function and make it available in a child shell. There’s a bug that causes bash to continue executing any commands that appear after the exported function definition.
So if you send an HTTP request with a Referer header that looks like this: Referer: () { :; }; ping -c 1 11.22.33.44 – the exported function is defined by this crazy syntax ‘() { :; };’ and the bash interpreter will just keep executing the commands after that function. In this case, it will attempt to send a ping request home, thus revealing that the server is susceptible to the attack.
Fortunately there are some mitigating factors. First, this only applies to systems that do the following things in order: 1) Accept some data from an untrusted source, like an HTTP request header, 2) Assign that data to an environment variable, 3) Execute a bash shell (either directly or through a system call).
If they send in the right data, the attacker will have achieved the holy grail of application security: ‘Remote Command Execution.’ An RCE basically means they have completely taken over the host.
Passing around data this way is a pretty bad idea, but it was the pattern back in the CGI days. Unfortunately, there are still a lot of servers that work that way. Even worse, custom applications may have been programmed this way, and they won’t be easy to scan for. So we’re going to see instances of this problem for a long long time.”
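The three conditions Williams lists can be sketched harmlessly with a stand-in header value (hypothetical; on a patched bash the variable is just inert data, while on an unpatched one a crafted value would execute):

```shell
# 1) Data arrives from an untrusted source (a stand-in Referer value).
REFERER='http://example.com/'
# 2) The server assigns it to an environment variable, as CGI does.
export HTTP_REFERER="$REFERER"
# 3) A bash child is spawned and inherits the variable.
bash -c 'echo "referer was: $HTTP_REFERER"'   # prints: referer was: http://example.com/
```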
Tal Klein, Vice President of Strategy at Adallom:
“What I don’t like to see is people comparing Shellshock to Heartbleed. Shellshock is exponentially more dangerous because it allows remote code execution, meaning a successful attack could lead to the zombification of hosts. We’ve already seen one self-replicating Shellshock worm in the wild, and we’ve already seen one patch circumvention technique that requires patched Bash to be augmented in order to be ‘truly patched’. What I’m saying is that generally I hate people who wave the red flag about vulnerabilities, but this is a 10 out of 10 on the awful scale and poses a real threat to core infrastructure. Take it seriously.”
Michael Sutton, Vice President of Security Research at Zscaler:
“Robert Graham has called the ‘Shellshock’ vulnerability affecting bash ‘bigger than Heartbleed.’ That’s a position we could defend or refute; it all depends upon how you define bigger. Will more systems be affected? Definitely. While both bash and OpenSSL, which was impacted by Heartbleed, are extremely common, bash can be found on virtually all *nix systems, while the same can’t be said for OpenSSL, as many systems simply don’t require SSL communication. That said, we must also consider exploitability, and here is where I don’t feel that the risk posed by Shellshock will eclipse Heartbleed.
Exploiting Heartbleed was (is) trivially easy. The same simple malformed ‘heartbeat’ request would trigger data leakage on virtually any vulnerable system. This isn’t true for Shellshock as exploitation is dependent upon influencing bash environment variables. Doing so remotely will depend upon the exposed applications that interact with bash. Therefore, this won’t quite be a ‘one size fits all’ attack. Rather, the attacker will first need to probe servers to determine not only those that are vulnerable, but also how they can inject code into bash environment variables.
The difference here is that we have to take application logic into account with Shellshock, and that was not required with Heartbleed. That said, we’re very much in the same boat, with potentially millions of vulnerable machines, many of which will simply never be patched. Shellshock, like Heartbleed, will live on indefinitely.”
Mamoon Yunus, CEO of Forum Systems:
“The Bash vulnerability has the potential to be much worse than Heartbleed. Leaking sensitive data is obviously bad but the Bash vulnerability could lead to losing control of your entire system.
The Bash vulnerability is a prime example of why it’s critical to take a lockdown approach to open, free-for-all shell access, a practice that is all too common for on-premise and cloud-based servers. Mobile applications have caused an explosion in the number of services being built and deployed. Such services are hosted on vanilla Linux OS variants with little consideration given to security and are typically close to the corporate edge. Furthermore, a large number of vendors use open Linux OSes, install their proprietary functionality, and package commercial network devices that live close to the network edge at Tier 0. They do so with full shell access instead of building a locked-down CLI for configuration.
The Bash vulnerability is a wake-up call for corporations that continue to deploy business functionality at the edge without protecting their services and API with hardened devices that do not provide a shell-prompt for unfettered access to OS internals for anyone to exploit.”
Jody Brazil, CEO of FireMon:
“This is the kind of vulnerability that can be exploited by an external attacker with malicious intent. So, how would attackers coming from the Internet, partner networks or other outside connections gain access to this type of exposure?
An attack vector analysis that considers network access through firewalls and address translation can help identify which systems are truly exposed. Then, determine whether it’s possible to mitigate the risk by blocking access, even temporarily. In cases where this is not an option, prioritizing patching is essential. In other cases, where, for example, there is remote access to a vulnerable system that is not business-critical, access can be denied using existing firewalls.
This helps security organizations focus their immediate patching efforts and maximize staffing resources. It’s critical to identify the greatest risk and then prioritize remediation activities accordingly. Those are key best practices to address Bash or any vulnerability of this nature.”
Mark Stanislav, Security Researcher at Duo Security:
“While Heartbleed eventually became an easy vulnerability to exploit, it was ultimately time-consuming, unreliable and rarely resulted in ‘useful’ data output. Shellshock, however, effectively gives an attacker remote code execution on any impacted host, with a much easier means of exploitation than Heartbleed and greater potential results for criminals.
Once a web application or similarly afflicted application is found to be vulnerable, an attacker can do anything from download software, to read/write system files, to escalating privilege on the host or across internal networks. More damning, of course, is that the original patch to this issue seems to be flawed and now it’s a race to get a better patch released and deployed before attackers leverage this critical bug.”
Rob Sadowski, Director of Technology Solutions at RSA:
“This is a very challenging vulnerability to manage because the scope of potentially affected systems is very large, and can be exploited in a wide variety of forms across multiple attack surfaces. Further, there is no single obvious signature to help comprehensively detect attempts to exploit the vulnerability, as there are so many apps that access BASH in many different ways.
Because many organizations had to recently manage a vulnerability with similar broad scope in Heartbleed, they may have improved their processes to rapidly identify and remediate affected systems which they can leverage in their efforts here.”
Joe Barrett, Senior Security Consultant, Foreground Security:
“Right now, Shellshock is making people drop everything and scramble to apply patches. Security experts are still expanding the known scope of the vulnerability, finding more devices and more methods by which it can be exploited. But so far I haven’t seen anyone get hacked and be able to turn around and say, ‘It was because of Shellshock.’
If you have a Linux box, patch it. Now. Do you have a Windows box using Cygwin? Update Cygwin to patch it. And then start trying to categorize all of the ‘other’ devices on the network and determining if they might be vulnerable. Because chances are a lot of them are.
Unfortunately, vendors probably will never release patches to solve this for most appliances, because most [Internet-connected] appliances don’t even provide a way to apply such an update. But for the most part all you can do is try to identify affected boxes and move them behind firewalls and out of the way of anyone’s ability to reach them. Realistically, we’ll probably still be exploiting this bug in penetration tests in 8 years. Not to mention all of the actual bad guys who will be exploiting this.”
Until Next Friday…Have a Great Weekend!
Related Reading: What We Know About Shellshock So Far, and Why the Bash Bug Matters
Microsoft Shutting Down Trustworthy Computing Unit
Posted on September 23, 2014 by Kara Dunlap in Security
As part of its reorganization efforts, Microsoft has decided to shut down its Trustworthy Computing (TwC) unit that has been focusing on improving customers’ trust in the company’s commercial products.
While TwC will no longer function as a standalone business unit, its general manager, John Lambert, noted on Twitter that they’re just moving to a new home and that “SDL [Security Development Lifecycle], operational security, pentest, MSRC [Microsoft Security Response Center], Bluehat are just under a new roof.”
Some members of the TwC team are among the 2,100 employees laid off by Microsoft last week. However, most of the team will join the company’s Cloud and Enterprise Division or the Legal and Corporate Affairs group.
“I will continue to lead the Trustworthy Computing team in our new home as part of the Cloud and Enterprise Division. Significantly, Trustworthy Computing will maintain our company-wide responsibility for centrally driven programs such as the Security Development Lifecycle (SDL) and Online Security Assurance (OSA),” Scott Charney, corporate vice president of Trustworthy Computing said in a blog post on Monday. “But this change will also allow us to embed ourselves more fully in the engineering division most responsible for the future of cloud and security, while increasing the impact of our critical work on privacy issues by integrating those functions directly into the appropriate engineering and legal policy organizations.”
“I was the architect of these changes. This is not about the company’s loss of focus or diminution of commitment. Rather, in my view, these changes are necessary if we are to advance the state of trust in computing,” Charney added.
Microsoft’s Trustworthy Computing initiative was announced back in 2002 by Bill Gates, who emphasized at the time the need for such a platform.
“Every week there are reports of newly discovered security problems in all kinds of software, from individual applications and services to Windows, Linux, Unix and other platforms. We have done a great job of having teams work around the clock to deliver security fixes for any problems that arise. Our responsiveness has been unmatched – but as an industry leader we can and must do better,” Gates said in a memo to employees.
Brad Hill, Web security technologist at eBay, explained in a post on Google+ the importance of TwC and its impact on the security landscape over the past years.
“That Trustworthy Computing diaspora today constitutes a big part of the core of the modern information security industry. Veterans of TwC are security leaders at Yahoo, Google, PayPal, Facebook, Adobe, VMWare and dozens of other companies,” Hill said. “From the hapless, hopeless position the industry found ourselves in a dozen years ago, we’re today starting to stand up credible defenses against nation-state level attackers. And while the heavyweight SDL processes of five years ago have been streamlined even at Microsoft, every security program today has some of the DNA of Trustworthy Computing in it and thinks about the job it exists to do in a different way because of it.”
In addition to shutting down the Trustworthy Computing unit, Microsoft is closing down its research facility in Silicon Valley.
The organization plans on cutting a total of 18,000 jobs, representing 14% of its workforce. Roughly 12,500 of the job cuts are related to the recently acquired mobile device manufacturer Nokia.
FireEye Unveils On Demand Security Service, Threat Intelligence Suite
Posted on September 20, 2014 by Kara Dunlap in Security
Threat protection firm FireEye has announced new offerings designed to provide customers with on-demand access to its cyber defense technology, intelligence, and analyst expertise on a subscription basis.
Designed to help enterprises scale their defense strategies, the new offerings provide customers with a single point of contact to meet their needs before, during or after a security incident.
The new FireEye as a Service offering is an on-demand security management offering that allows organizations to leverage FireEye’s technology, intelligence and expertise to discover and thwart cyber attacks.
The second new offering, FireEye Advanced Threat Intelligence, provides access to threat data and analytical tools that help identify attacks and provide context about the tactics and motives of specific threat actors, FireEye said.
Combined, the solutions are designed to equip enterprise security teams so they can implement an Adaptive Defense security model, an approach for defending against advanced threat actors that scales up or down based on the unique needs of each security organization.
“The new FireEye Advanced Threat Intelligence offering adds two new capabilities to complement FireEye’s existing Dynamic Threat Intelligence subscription,” the company explained in its announcement. “First, when the FireEye Threat Prevention Platform identifies an attack, users will now be able to view intelligence about the attackers and the malware. Security teams will be able to see who the associated threat actor is, what their likely motives are, and get information about the malware and other indicators they can use to search for the attackers.”
Additionally, a new threat intelligence research service allows customers to subscribe to ongoing research including dossiers, trends, news and analysis on advanced threat groups as well as profiles of targeted industries, including information about the types of data that threat groups target.
Other highlights of FireEye as a Service include:
• Detection of Adversaries and their Actions – FireEye analysts staff an around-the-clock global network of security operations centers to hunt for attackers in an environment, using FireEye technology and advanced analytics that identify outliers and correlate them with behaviors of known attackers. By finding high-risk threats at the earliest stages of an attack, FireEye minimizes the risk of a breach.
• Ability to Pivot to Incident Response – With FireEye as a Service, organizations can quickly engage a Mandiant incident response team when needed.
• Access to Personalized Intelligence Reports — FireEye as a Service customers get access to key intelligence findings and judgments specific to their organization from the FireEye intelligence team. This includes identification of attackers specifically targeting their industry, typical attack methodologies used by relevant adversaries, and key business or financial data that motivates attackers to target their organization.
“We need to analyze the environment to address the attacks that penetrate an organization’s perimeter and bypass preventive measures,” FireEye COO, Kevin Mandia, wrote in a blog post. “And then ultimately, when we understand an attack well enough, contain it to get back to normal business operations. To succeed in today’s cyber-threat environment this cycle must shrink – from alert to fix in months, to alert to fix in minutes – in order to eliminate the consequences of a security breach.”
With FireEye as a Service, customers have the option to manage their own security operations, offload security operations to FireEye, or co-manage operations with FireEye or a FireEye partner.
Both new offerings are available as a subscription to customers that have purchased FireEye products. Pricing for ongoing monitoring starts at $10,000 per month for smaller clients needing full support; for larger organizations, the price is much higher.
Organizations pay a subscription fee and account for the service as an operational expense or pay up front and account for it as a capital expense, FireEye said.
Will Technology Replace Security Analysts?
Posted on September 15, 2014 by Kara Dunlap in Security
Recently, at a round table discussion, I heard someone make the statement, “In five years, there will be no more security analysts. They will be replaced by technology.” This is not the first time I have heard a statement along these lines. I suppose that these sorts of statements are attention grabbing and headline worthy, but I think they are a bit naïve to say the least.
Taking a step back, it seems to me that this statement is based on the belief or assumption that operational work being performed within the known threat landscape of today can be fully automated within five years. I don’t know enough about the specific technologies that would be involved in that endeavor to comment on whether or not that is an errant belief. However, I can make two observations, based on my experience, which I believe are relevant to this specific discussion:
• Operational work tends to focus on known knowns
• The threat landscape of today, both known and unknown, will not be the threat landscape of tomorrow
The work that is typically performed in a security operations setting follows the incident response process of: Detection, Analysis, Containment, Remediation, Recovery, and Lessons Learned. Detection is what kicks off this process and what drives the day-to-day workflow in a security operations environment. If we think about it, in order to detect something, we have to know about it. It doesn’t matter if we learn of it via third party notification, signature-based detection, anomaly detection, or any other means.
The bottom line is that if we become aware of something, it is by definition “known”. But what percentage of suspicious or malicious activity that may be present within our organizations do we realistically think is known? I don’t know of a good way to measure this, since it involves a fair amount of information that is unknowable. I do, however, think we would be naïve to think it is anywhere near 100%.
If we take a step back, the ramifications of this are quite striking. In essence, most of the work we are performing today involves what is likely a mere fraction of what ought to concern us. Even if technology could automate all of today’s security operations functions within five years’ time, that still leaves quite a bit of work undone.
I think we would also be naïve to think that the threats of today, both known and unknown will be the threats of tomorrow. If I think back five or ten years, I’m not sure how many of us foresaw the degree to which intrusions involving large-scale payment card theft would become an almost regular occurrence. Granted, theft of sensitive information has been an issue for quite some time, but not to the degree that it has been in the recent past. Payment card theft is now a threat that many organizations take very seriously, whereas five or ten years ago, it may have been a threat that only certain specific organizations would have taken seriously. This is merely an example, but my main point here is that we can’t view today’s threat landscape as a base upon which to build predictions and make assertions for the future.
In my experience, analysts can provide unique value that may not be obvious to those who have not worked in the role. For those who don’t know, I worked as an analyst for many years before moving over to the vendor side. It is from that experience that I make this point.
In a mature security operations setting, there will be a certain workflow and process. Some organizations will be in a more mature place, while other organizations will be in a less mature place. Regardless of where an organization finds itself, there will always be room to improve. Alongside performing the tasks required by the process, a good analyst will make the process better and improve the maturity of the organization. This can happen in many ways, but here are a few different approaches that I have often seen:
• Improving existing alerting
• Identifying automation opportunities
• Performing gap analysis
• Implementing new alerting
Any work queue will have both signal (true positives) and noise (false positives). A mature, efficient security operations program will have a high enough signal-to-noise ratio so as to allow for a reasonable chance at timely detection of incidents. Regardless of the signal-to-noise ratio, the alerting that populates the work queue can always be improved. As the people most familiar with the ins and outs of the various alerts, analysts play an important role here. The analyst can provide a unique perspective on tuning and improving alerts to make them less noisy and ensure they keep up with the times.
It is certainly true that some alerts follow a nearly identical sequence of events each time they are vetted, qualified, and investigated. These cases are good candidates for automation, but they won’t identify themselves. A skilled analyst is needed to identify those repetitive manual tasks best suited for automation. Automation is a good thing and should be leveraged whenever appropriate, but it will never replace the analyst.
With automation comes newly liberated analyst cycles. Those cycles can and should be used to hunt, dig, and perform gap analysis. Hunting and digging help to identify unknown unknowns – network traffic or endpoint activity for which the true nature is unknown. Gap analysis serves to identify points within the organization where proper network and endpoint telemetry may not exist. All these activities help provide a window into the unknown. After all, today’s unknown may be tomorrow’s breach.
When unknown unknowns are discovered, they should be studied to understand their true nature. This process turns them into new known knowns. And it is from this pile that new alerting is continuously formulated. The analyst is an invaluable resource in turning unknown unknowns into known knowns. Based on my experience, there is no shortage of unknown unknowns waiting to be investigated.
A good analyst is hard to find and is a critical resource within a mature security operations function. Although it may be tempting to envision a world where the analyst has been fully automated, this does not seem particularly reasonable. Rather, the work of the analyst can and must evolve over time to keep pace with the changing threat landscape.
Dropbox Got Up to 249 National Security Requests in First Half of 2014
Posted on September 12, 2014 by Kara Dunlap in Security
Dropbox released another transparency report on Thursday and announced that moving forward, it will do so every six months in an effort to keep the public informed of its interactions with authorities.
Bart Volkmer, a lawyer with the company, revealed in a blog post that Dropbox received 268 requests for user information from law enforcement agencies between January and June of this year. In addition, while he could not specify an exact number due to legal restrictions, the Dropbox representative said there had been 0-249 national security requests.
The company received a total of 120 search warrants and provided content (files stored in users’ accounts) and non-content (subscriber information) in 103 cases. In response to 109 subpoenas, the company did not provide law enforcement with any content, but it did produce subscriber details in 89 cases. While many of the requests came from the United States, the report shows that a total of 37 requests came from agencies in other countries.
Volkmer pointed out that while these numbers are small considering that the company has 300 million customers, Dropbox only complies with such requests if all legal requirements are satisfied. He says cases in which agencies request too much information or haven’t followed proper procedures are “pushed back.”
The report also shows that the rate of data requests from governments remains steady. An interesting aspect is that agencies keep asking Dropbox not to notify targeted users. However, customers are notified as per the company’s policies, except in cases where a valid court order prohibits notification. A total of 42 users were notified when the file sharing service was presented with search warrants, and 47 individuals were informed in the case of subpoenas.
There haven’t been any requests from governments targeting Dropbox for Business accounts, the company said.
“We’ll push for greater openness, better laws, and more protections for your information. A bill currently in Congress would do just that by reining in bulk data collection by the US government and allowing online services to be more transparent about the government data requests they receive,” Volkmer said. “Another would make it clear that government agencies must get a warrant supported by probable cause before they may demand the contents of user communications. We’ll continue to lend our support for these bills and for real surveillance reform around the world.”
While many companies publish transparency reports to keep the public informed of requests from governments, interesting details can also emerge from court documents. A perfect example is the series of recently unsealed documents showing that US authorities threatened to fine Yahoo $250,000 a day if it failed to comply with PRISM, the notorious surveillance program whose existence was brought to light last year by former NSA contractor Edward Snowden.
Google to Sunset SHA-1 Crypto Hash Algorithm
Posted on September 9, 2014 by Kara Dunlap in Security
Google has announced plans to begin sunsetting the SHA-1 cryptographic hash algorithm in the upcoming version of its Chrome browser.
In Chrome 39, which is slated to come in November, HTTPS sites whose certificates use SHA-1 and are valid past January 1, 2017, will no longer appear to be fully trustworthy in Chrome’s user interface.
“The SHA-1 cryptographic hash algorithm has been known to be considerably weaker than it was designed to be since at least 2005 — 9 years ago,” blogged Google’s Chris Palmer and Ryan Sleevi. “Collision attacks against SHA-1 are too affordable for us to consider it safe for the public web PKI. We can only expect that attacks will get cheaper.”
The use of SHA-1 has been deprecated since 2011, when the CA/Browser Forum published its Baseline Requirements for SSL, Palmer and Sleevi noted. The requirements recommended that all certificate authorities (CAs) move away from SHA-1 as soon as possible.
“We have seen this type of weakness turn into a practical attack before, with the MD5 hash algorithm,” the two explained. “We need to ensure that by the time an attack against SHA-1 is demonstrated publicly, the web has already moved away from it. Unfortunately, this can be quite challenging. For example, when Chrome disabled MD5, a number of enterprises, schools, and small businesses were affected when their proxy software — from leading vendors — continued to use the insecure algorithms, and were left scrambling for updates. Users who used personal firewall software were also affected.”
“We plan to surface, in the HTTPS security indicator in Chrome, the fact that SHA-1 does not meet its design guarantee,” they wrote. “We are taking a measured approach, gradually ratcheting down the security indicator and gradually moving the timetable up.”
In Chrome 40, sites with end-entity certificates that expire between June 1, 2016, and Dec. 31, 2016, and include a SHA-1-based signature as part of the certificate chain will be treated as “secure, but with minor errors.” Sites with end-entity certificates that expire on or after Jan. 1, 2017, and include a SHA-1-based signature as part of the certificate chain will be considered “neutral, lacking security.”
The current visual display for “neutral, lacking security” is a blank page icon, and is used in other situations, such as HTTP, the two stated.
In Chrome 41, sites with end-entity certificates that expire between the start of 2016 and Dec. 31, 2016, and include a SHA-1-based signature as part of the certificate chain will be treated as “secure, but with minor errors.” Sites with end-entity certificates that expire on or after Jan. 1, 2017, and include a SHA-1-based signature as part of the certificate chain will meanwhile be treated as “affirmatively insecure.” Subresources from such domains will be treated as “active mixed content,” according to Google.
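The rollout schedule described above can be summarized as a small decision function. This is a simplified model of the policy as reported (it only considers chains that contain a SHA-1-based signature, ignores the subresource/mixed-content rules, and the function name and labels are illustrative):

```python
from datetime import date

def chrome_sha1_treatment(chrome_version: int, cert_expiry: date) -> str:
    """Approximate UI treatment for an HTTPS site whose certificate chain
    includes a SHA-1-based signature, per Google's staged rollout."""
    if cert_expiry >= date(2017, 1, 1):
        # Long-lived SHA-1 certs are penalized hardest, and earlier.
        if chrome_version >= 41:
            return "affirmatively insecure"
        if chrome_version >= 40:
            return "neutral, lacking security"
        if chrome_version >= 39:
            return "not fully trustworthy"
    elif cert_expiry >= date(2016, 1, 1):
        # Certs expiring during 2016 get a milder warning, phased in.
        if chrome_version >= 41:
            return "secure, but with minor errors"
        if chrome_version >= 40 and cert_expiry >= date(2016, 6, 1):
            return "secure, but with minor errors"
    # Certs expiring before 2016, or older Chrome versions: no change yet.
    return "secure"
```

Note how each release both widens the affected expiry range and downgrades the indicator, which is the "gradually ratcheting down" approach Palmer and Sleevi describe.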
Bush-era Memos: President Can Wiretap Americans at all Times
Posted on September 7, 2014 by Kara Dunlap in Security
WASHINGTON – The US Justice Department has released two memos detailing the Bush administration’s legal justification for monitoring the phone calls and emails of Americans without a warrant.
The documents, released late Friday, relate to a secret program dubbed Stellar Wind that began after the September 11, 2001 attacks.
It allowed the National Security Agency to obtain communications data within the United States when at least one party was a suspected member of Al-Qaeda or an Al-Qaeda affiliate, and at least one party to the communication was located overseas.
“Even in peacetime, absent congressional action, the president has inherent constitutional authority … to order warrantless foreign intelligence surveillance,” then-assistant attorney general Jack Goldsmith said in a heavily redacted 108-page memo dated May 6, 2004.
“We believe that Stellar Wind comes squarely within the commander in chief’s authority to conduct the campaign against Al-Qaeda as part of the current armed conflict and that congressional efforts to prohibit the president’s efforts to intercept enemy communications through Stellar Wind would be an unconstitutional encroachment on the commander in chief’s power.”
The document was obtained by the American Civil Liberties Union rights group through a Freedom of Information Act lawsuit.
Goldsmith at the time also headed the Justice Department’s Office of Legal Counsel under then-attorney general John Ashcroft and then-deputy attorney general James Comey, who now heads the FBI.
According to Goldsmith, Congress’s authorization for the use of force passed shortly after 9/11 provided “express authority” for Stellar Wind.
“In authorizing ‘all necessary and appropriate force,’ the authorization necessarily included the use of signals intelligence capabilities (wiretapping), which are a critical, and traditional, tool for finding the enemy so that destructive force can be brought to bear on him,” Goldsmith wrote.
He suggested that the congressional approval granted the president authority that “overrides the limitations” of the Foreign Intelligence Surveillance Act (FISA), a law requiring a court order to monitor the communications of any American or person on US soil.
The second memo, dated July 16, 2004, pointed to a Supreme Court decision handed down just over two weeks earlier as providing additional justification for Stellar Wind.
Goldsmith noted that five of the Supreme Court justices agreed that the detention of US citizen Yaser Esam Hamdi, who was captured while fighting in Afghanistan, was authorized because it was a “fundamental” and “accepted” incident of waging war.
“Because the interception of enemy communications for intelligence purposes is also a fundamental and long-accepted incident of war, the Congressional Authorization likewise provides authority for Stellar Wind targeted content,” he added.
The program was brought under FISA court supervision in 2007, six years into its existence. It was first revealed by The New York Times in 2005.
Microsoft Preps Critical Internet Explorer Security Update for Patch Tuesday
Posted on September 4, 2014 by Kara Dunlap in Security
Microsoft is set to release four security bulletins next Tuesday covering issues in Windows, Internet Explorer and other products.
Only one of the bulletins – the one dealing with Internet Explorer – is rated ‘Critical.’ The other three are classified by Microsoft as ‘Important.’
“Looks like a very light round of Microsoft patching this month,” said Ross Barrett, senior manager of security engineering at Rapid7. “Only four advisories, of which only one is critical. The sole critical issue this month is the expected Internet Explorer roll-up affecting all supported (and likely some unsupported) versions. This will be the top patching priority for this month.”
Many organizations do not routinely stay up-to-date with the latest version of the browser, noted Eric Cowperthwaite, vice president of advanced security and strategy at Core Security.
“I checked with a couple recently and they are still running two or three versions of IE behind the current version,” he said. “The IE vulnerabilities are likely to impact significant portions of the enterprise computing space. Clearly the IE vulnerabilities that allow remote code execution on every desktop OS and most server OSes should be addressed first. Because the update is so widespread and requires system restarts, this is going to be challenging for most IT organizations.”
The three non-critical bulletins address issues in Windows, the .NET Framework and Microsoft Lync Server. Two of the bulletins deal with denial of service issues, while the other addresses an escalation of privilege.
“The small number of patches expected out next week doesn’t mean you can take a pass on patching this month, however,” noted Russ Ernst, director of product management at Lumension. “The critical class patch is for at least one remote code execution vulnerability in IE – likely another cumulative update for the browser.”
The updates are slated to be released Tuesday, Sept. 9.
Tor-Enabled Bifrose Variant Used in Targeted Attack
Posted on September 1, 2014 by Kara Dunlap in Security
A new variant of the Bifrose backdoor has been used in a cyberattack aimed at an unnamed device manufacturer, Trend Micro reported.
The threat, detected by the security firm as BKDR_BIFROSE.ZTBG-A, is more evasive than previous variants because it uses the Tor anonymity network for command and control (C&C) communications.
After infecting a device, the backdoor allows its masters to perform various tasks, including downloading and uploading files, creating and deleting folders, executing files and commands, capturing keystrokes, capturing screenshots and webcam images, terminating processes, collecting system information and manipulating windows.
“BIFROSE is mostly known for its keylogging routines, but it is capable of stealing far more information than just keystrokes,” Trend Micro threat response engineer Christopher Daniel So explained in a blog post. “It can also send keystrokes and mouse events to windows, which means that the attacker may be able to conduct operations as the affected user without having to compromise their accounts. For example, the attacker can log into internal systems or even send messages to other users in the network.”
While C&C communications via Tor can make the threat more elusive, those same communications also give IT administrators a way to detect an attack. More precisely, they can identify malicious activity by monitoring the network for Tor traffic. Many organizations don’t use Tor for regular operations, so any traffic associated with the anonymity network could indicate a cyberattack.
Another method recommended by Trend Micro for detecting Bifrose, in addition to the use of security solutions, involves checking for a file named klog.dat, which is used for the threat’s keylogging routines. Verifying network and mail logs could also help IT admins in detecting the malware.
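A crude version of the Tor-monitoring idea can be sketched as a log filter. This is a heuristic illustration only, not Trend Micro's method: the port list below covers common Tor relay and client defaults (an assumption on my part), and real Tor traffic can ride on 443 or 80, so serious detection would need published relay lists or protocol fingerprinting.

```python
# Default ports commonly associated with Tor: 9001 (ORPort), 9030 (DirPort),
# 9050/9051 (client SOCKS/control), 9150 (Tor Browser SOCKS). Heuristic only.
DEFAULT_TOR_PORTS = {9001, 9030, 9050, 9051, 9150}

def flag_possible_tor(log_lines):
    """Each line is 'src_ip dst_ip dst_port'; return lines worth a closer look
    because the destination port matches a default Tor port."""
    flagged = []
    for line in log_lines:
        parts = line.split()
        if len(parts) == 3 and parts[2].isdigit() \
                and int(parts[2]) in DEFAULT_TOR_PORTS:
            flagged.append(line)
    return flagged
```

In an environment where Tor has no legitimate use, even a simple filter like this turns the backdoor's evasion technique into a detection signal.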
Bifrose has been around since at least September 2008. One interesting campaign leveraging this particular threat was launched in 2010, when cybercriminals distributed the backdoor with the aid of a mail worm. The operation, dubbed “Here You Have,” was initially aimed at the human resource departments of organizations like NATO and the African Union. This old campaign demonstrates Bifrose’s potential for targeted attacks.
The “Here You Have” campaign was so successful that it caused a global outbreak.