On the VEP: Regulatory Inefficiencies in Cybersecurity
In April 2017, anonymous hackers calling themselves “The Shadow Brokers” released a set of the National Security Agency’s hacking tools, called exploits, onto the internet. Within weeks, experts had identified multiple strains of malicious code, called malware, based on the leaked NSA code. These strains, with names like “WannaCry,” targeted individuals, hospital systems, and corporate infrastructure throughout the U.S. and Europe, alarming even staunch defenders of the NSA like its former director, General Michael Hayden, and reinvigorating calls for reforms to the way the NSA handles vulnerabilities.[1],[2] In this case, the need for a more robust process that is consistent across executive agencies is readily apparent. The current process is an example of the executive branch regulating itself with a very light touch, too fragile to effectively limit the NSA or ensure that exploits are handled responsibly.
Like the executive branch as a whole, the NSA regularly finds itself balancing the security of domestic systems with its need to infiltrate targeted computers. Each new vulnerability is not only a potential tool that the NSA can use to collect intelligence, but also an attack vector that can be used against Americans. Aside from the demonstrated risk of attacks using stolen NSA tools, these vulnerabilities can also be independently discovered and exploited. The NSA’s stock of so-called “zero-day” exploits, which have not yet been reported to the software developer, is especially damaging because even the latest version of the software, with the latest security patches, is vulnerable. Thus, an attacker can compromise any system they have access to until the vendor develops and deploys a patch that closes the vulnerability.[3]
The best way to prevent this threat is to report vulnerabilities to the vendor so that they can be patched before an attack occurs. Of course, that means that these especially useful vulnerabilities cannot be used for intelligence-gathering: the same vendor patch that protects American systems also protects the systems that the NSA wants to infiltrate. This tradeoff becomes particularly important when a vulnerability is found in core infrastructure and can have widespread consequences if exploited. A responsible decision on each vulnerability, then, is one that accounts for both of these risks.
The key governmental process that attempts to balance these concerns is an internal executive-branch procedure called the “Vulnerabilities Equities Process,” or “VEP.” Through this process, participating agencies attempt to make informed decisions about whether an exploit should be disclosed to the vendor or kept internally for future use. The existence, but not the details, of the VEP first became public in a 2014 Obama Administration blog post in the wake of a serious security vulnerability called “Heartbleed,” one that the NSA was apparently unaware of.[4] More information about the process came out gradually, most notably after a Freedom of Information Act request and lawsuit by the Electronic Frontier Foundation.[5] The VEP requires participating agencies to submit vulnerabilities they discover. An interagency group called the “Equities Review Board” then decides whether to report or exploit them. This decision is ideally reached by consensus, but, if that’s not possible, it may also be settled by a majority vote.[6]
While this process is theoretically beneficial, it is not robust enough to ensure good decision-making that balances these dual responsibilities. Regardless of what the optimal level of vulnerability disclosure is, several features of the VEP leave it incapable of finding that point or of reining in excessive secrecy.
For one, the VEP is simply an agreement among participating agencies. This legal structure can have unfortunate consequences. First of all, there are no penalties for agencies that choose not to participate.[7] While the NSA participates in the VEP and is very likely to continue to do so, uneven regulation across federal agencies has real consequences. For example, it means the considerations of non-participating agencies aren’t included in the VEP, and agencies that do not participate may end up working at cross purposes to those that do. It’s also possible that agencies like the NSA that are bound by the VEP could exploit the different standards to engage in activities that they otherwise could not.
This is similar in principle to the practice of parallel construction, with the role of the NSA reversed: in parallel construction cases, the NSA shares with another agency information that it collected using surveillance programs the other agency doesn’t have access to. This information is offered on the condition that it is not introduced in court as evidence, insulating the NSA program from a legal challenge while giving the other agency potentially essential clues.[8] Similarly, in this case, the NSA might be able to insulate itself from its VEP reporting and deliberation requirements by partnering with a non-participating agency. By having the non-participating agency maintain control of the vulnerability, the NSA could keep vulnerabilities out of the VEP without having to give up any use of these tools.
This is not, however, even the most serious circumvention problem within the VEP framework. We’ve seen the FBI, a participating agency, successfully circumvent it in the high-profile case of the 2015 San Bernardino shooting. The FBI sought to access the encrypted data stored on the shooter’s iPhone and, after Apple refused to decrypt the device, hired a contractor to unlock it by exploiting a previously unknown vulnerability in the iPhone software.[9] An unknown software vulnerability is exactly what the VEP is for, but in this case the FBI didn’t have to submit it at all, because it carefully avoided learning what the vulnerability was. Since only the contractor knew, and the VEP doesn’t impose any requirements on contracted work, the FBI could entirely skirt the process.[10] This approach has been extraordinarily effective. It removed the possibility that the FBI would be required to report the vulnerability to Apple, which could patch it, while maintaining complete secrecy – the U.S. District Court for the District of Columbia ruled that the FBI did not have to release even the name of the contractor or the price paid in response to a Freedom of Information Act request.[11] Given this success, it would be strange if this technique did not become more common and spread to other agencies. Perhaps it already has – it’s unlikely that similar NSA conduct would ever become public.
This is a clear violation of the spirit of the VEP and undermines its central purpose. In fact, it should be concerning regardless of whether one supports or opposes the underlying conduct. As long as there are some vulnerabilities that should be reported (that is, the cost-benefit calculus does not always favor stockpiling and exploiting them), a decision needs to be made on the merits of each case. A strategy of circumvention guarantees that this decision is never made and that vulnerabilities are always kept internal, even when that may be grossly inappropriate. To the extent that someone believes the offensively minded decision to stockpile vulnerabilities is usually the better choice, they can and should advocate for a VEP that acknowledges the sound reasons behind it, not for no VEP at all.
Of course, it is possible to argue that the existing VEP is too eager to disclose vulnerabilities, and that circumventive strategies are therefore necessary, but the facts of this case make that unlikely. While the Obama Administration’s public communication about the VEP stressed that the process is “biased toward responsibly disclosing” vulnerabilities and that it does so “in the majority of cases,” this is not the relevant metric.[12] What matters is how many significant vulnerabilities – those that are very useful and, for the same reason, extremely dangerous – the government chooses to disclose. The public evidence suggests that vulnerabilities of this class are not being disclosed. For example, the exploits released by the Shadow Brokers used vulnerabilities in Windows that the NSA disclosed only after it discovered that its exploits had been stolen and could potentially be published.[13] Other vulnerabilities, in widely used enterprise networking equipment, were not disclosed until after the attacks emerged.[14] While it is still possible that there are vulnerability-disclosure errors at the margin, there does not seem to be a large enough problem to support such a radical runaround.
Finally, the VEP suffers from the typical problems of executive branch self-regulation: it is easily amended, and its standards can change from one administration to the next. In fact, because the VEP is internal policy and not an Executive Order, there is no reason we would know if it had already been changed, either to accommodate some particular situation or as part of a general review of regulations under the Trump Administration.
In sum, the Vulnerabilities Equities Process is a deeply unsatisfying regulatory structure that does not do enough to constrain how the executive branch handles vulnerabilities and exploits. This is far from the only instance of poor executive oversight, but it is one that should be especially concerning: the NSA hack has given us some indication of the consequences of software vulnerabilities, and much worse – attacks on critical infrastructure or malware specifically designed to sow chaos – is certainly possible. The decisions the executive makes in this area matter. The VEP’s goal is laudable, even if its current procedure is not, and a more rigorous process that forces these hard decisions to be made thoughtfully is in everyone’s interest.
[1] Timberg, Craig, Griff Witte, and Ellen Nakashima. “Malware, described in leaked NSA documents, cripples computers worldwide.” The Washington Post. May 12, 2017. Accessed October 9, 2017. https://www.washingtonpost.com/world/hospitals-across-england-report-it-failure-amid-suspected-major-cyber-attack/2017/05/12/84e3dc5e-3723-11e7-b373-418f6849a004_story.html.
[2] Shane, Scott. “Malware Case Is Major Blow for the N.S.A.” The New York Times. May 16, 2017. Accessed October 9, 2017. http://www.nytimes.com/2017/05/16/us/nsa-malware-case-shadow-brokers.html.
[3] Ghappour, Ahmed. “Searching Places Unknown: Law Enforcement Jurisdiction on the Dark Web.” Stanford Law Review 69, no. 4 (April 2016): 1075-136.
[4] Daniel, Michael. “Heartbleed: Understanding When We Disclose Cyber Vulnerabilities.” National Archives and Records Administration. April 28, 2014. Accessed October 9, 2017. https://obamawhitehouse.archives.gov/blog/2014/04/28/heartbleed-understanding-when-we-disclose-cyber-vulnerabilities.
[5] Complaint for Injunctive Relief for Violation of the Freedom of Information Act, No. 3:14-cv-0301, Electronic Frontier Foundation v. National Security Agency, Office of the Director of National Intelligence (N.D. California, filed July 1, 2014). https://www.eff.org/document/eff-v-nsa-odni-complaint
[6] Commercial and Government Information Technology and Industrial Control Product or System Vulnerabilities Equities Policy and Process (Redacted). https://www.eff.org/document/vulnerabilities-equities-process-redactions
[7] Schwartz, Ari and Rob Knake. Government’s Role in Vulnerability Disclosure: Creating a Permanent and Accountable Vulnerability Equities Process. Cambridge: Belfer Center for Science and International Affairs, 2016. Accessed October 9, 2017. https://www.belfercenter.org/sites/default/files/legacy/files/vulnerability-disclosure-web-final3.pdf
[8] Grayson, Amanda Claire. “Parallel Construction: Constructing the NSA out of Prosecutorial Records.” Harv. L. & Pol’y Rev. Online 9 (2015): S25.
[9] Nakashima, Ellen. “FBI paid professional hackers one-time fee to crack San Bernardino iPhone.” The Washington Post. April 12, 2016. Accessed October 18, 2017. https://www.washingtonpost.com/world/national-security/fbi-paid-professional-hackers-one-time-fee-to-crack-san-bernardino-iphone/2016/04/12/5397814a-00de-11e6-9d36-33d198ea26c5_story.html.
[10] Healey, Jason. The U.S. Government and Zero-Day Vulnerabilities: From Pre-Heartbleed to Shadow Brokers. New York: School of International and Public Affairs, Journal of International Affairs, 2016. Accessed October 9, 2017.
[11] Associated Press et al. v. Federal Bureau of Investigation, Civil Action No. 1:16-cv-01859-TSC (D.D.C. September 30, 2017). https://www.documentcloud.org/documents/4064163-FOIA-case-vs-FBI-phone-unlock.html
[12] Daniel, Michael. “Heartbleed: Understanding When We Disclose Cyber Vulnerabilities.” National Archives and Records Administration. April 28, 2014. Accessed October 9, 2017. https://obamawhitehouse.archives.gov/blog/2014/04/28/heartbleed-understanding-when-we-disclose-cyber-vulnerabilities.
[13] Goodin, Dan. “Mysterious Microsoft patch killed 0-days released by NSA-leaking Shadow Brokers.” Ars Technica. April 15, 2017. Accessed October 20, 2017. https://arstechnica.com/information-technology/2017/04/purported-shadow-brokers-0days-were-in-fact-killed-by-mysterious-patch/.
[14] Greenberg, Andy. “The Shadow Brokers Mess Is What Happens When the NSA Hoards Zero-Days.” Wired. June 03, 2017. Accessed October 20, 2017. https://www.wired.com/2016/08/shadow-brokers-mess-happens-nsa-hoards-zero-days/. Reitman, Rainey. “The Shadow Brokers Publish NSA Spy Tools, Demonstrating Possible Flaws in the NSA’s Approach to Security Vulnerabilities.” Electronic Frontier Foundation. September 07, 2016. Accessed October 20, 2017. https://www.eff.org/deeplinks/2016/09/shadow-brokers-publish-powerful-nsa-spy-tools-demonstrating-flaws-nsas-approach.