Full disclosure (computer security)

In the field of computer security, independent researchers often discover flaws in software that can be abused to cause unintended behaviour; these flaws are called vulnerabilities. The process by which the analysis of these vulnerabilities is shared with third parties is the subject of much debate, and is referred to as the researcher's disclosure policy. Full disclosure is the practice of publishing analysis of software vulnerabilities as early as possible, making the data accessible to everyone without restriction. The primary purpose of widely disseminating information about vulnerabilities is so that potential victims are as knowledgeable as those who attack them.[1]

In his 2007 essay on the topic, Bruce Schneier stated "Full disclosure – the practice of making the details of security vulnerabilities public – is a damned good idea. Public scrutiny is the only reliable way to improve security, while secrecy only makes us less secure".[2] Leonard Rose, co-creator of an electronic mailing list that has superseded Bugtraq to become the de facto forum for disseminating advisories, explains "We don't believe in security by obscurity, and as far as we know, full disclosure is the only way to ensure that everyone, not just the insiders, have access to the information we need."[3]

The vulnerability disclosure debate

The controversy around the public disclosure of sensitive information is not new. The issue of full disclosure was first raised in the context of locksmithing, in a 19th-century controversy regarding whether weaknesses in lock systems should be kept secret in the locksmithing community, or revealed to the public.[4] Today, there are three major disclosure policies under which most others can be categorized:[5] Non Disclosure, Coordinated Disclosure, and Full Disclosure.

The major stakeholders in vulnerability research have their disclosure policies shaped by various motivations; it is not uncommon to observe campaigning, marketing, or lobbying for their preferred policy to be adopted, and the chastising of those who dissent. Many prominent security researchers favor full disclosure, whereas most vendors prefer coordinated disclosure. Non disclosure is generally favored by commercial exploit vendors and blackhat hackers.[6]

Coordinated vulnerability disclosure

Coordinated vulnerability disclosure is a policy under which researchers agree to report vulnerabilities to a coordinating authority, which then reports them to the vendor, tracks fixes and mitigations, and coordinates the disclosure of information with stakeholders including the public.[7][8] In some cases the coordinating authority is the vendor. The premise of coordinated disclosure is typically that nobody should be informed about a vulnerability until the software vendor says it is time.[9][10] While there are often exceptions or variations of this policy, distribution must initially be limited and vendors are given privileged access to nonpublic research.

The original name for this approach was "responsible disclosure", based on the essay by Microsoft Security Manager Scott Culp, "It's Time to End Information Anarchy"[11] (referring to full disclosure). Microsoft later called for the term to be phased out in favor of "Coordinated Vulnerability Disclosure" (CVD).[12][13]

Although the reasoning varies, many practitioners argue that end-users cannot benefit from access to vulnerability information without guidance or patches from the vendor, so the risks of sharing research with malicious actors are too great for too little benefit. As Microsoft explains, "[Coordinated disclosure] serves everyone's best interests by ensuring that customers receive comprehensive, high-quality updates for security vulnerabilities but are not exposed to malicious attacks while the update is being developed."[13]

To prevent vendors from indefinitely delaying disclosure, a common practice in the security industry, pioneered by Google,[14] is to publish all details of a vulnerability after a deadline, usually 90 or 120[15] days, reduced to 7 days if the vulnerability is under active exploitation.[16]
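
As an illustration, the following Python sketch computes a publication date under such a deadline policy. It is a minimal model of the figures cited above, assuming a 90-day baseline and a 7-day window for actively exploited flaws; the function and constant names are purely illustrative and do not correspond to any vendor's actual tooling.

    from datetime import date, timedelta

    # Illustrative sketch only: the 90-day baseline and 7-day active-exploitation
    # window follow the figures cited in the text; names and defaults are
    # assumptions, not taken from any vendor's actual disclosure tooling.
    BASELINE_DAYS = 90          # some programs use 120 days instead
    ACTIVE_EXPLOIT_DAYS = 7     # shortened window when exploitation is seen in the wild

    def disclosure_deadline(reported_on: date, actively_exploited: bool) -> date:
        """Return the date on which vulnerability details may be published."""
        days = ACTIVE_EXPLOIT_DAYS if actively_exploited else BASELINE_DAYS
        return reported_on + timedelta(days=days)

    # Example: a vulnerability reported on 15 January 2024
    print(disclosure_deadline(date(2024, 1, 15), actively_exploited=False))  # 2024-04-14
    print(disclosure_deadline(date(2024, 1, 15), actively_exploited=True))   # 2024-01-22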

Full disclosure

Full disclosure is the policy of publishing information on vulnerabilities as early as possible, making the information accessible to the general public without restriction. In general, proponents of full disclosure believe that the benefits of freely available vulnerability research outweigh the risks, whereas opponents prefer to limit the distribution.

The free availability of vulnerability information allows users and administrators to understand and react to vulnerabilities in their systems, and allows customers to pressure vendors to fix vulnerabilities that vendors may otherwise feel no incentive to solve. There are some fundamental problems with coordinated disclosure that full disclosure can resolve.

  • If customers do not know about vulnerabilities, they cannot request patches, and vendors experience no economic incentive to correct vulnerabilities.
  • Administrators cannot make informed decisions about the risks to their systems, as information on vulnerabilities is restricted.
  • Malicious researchers who also know about the flaw have a long period of time to continue exploiting the flaw.

Discovery of a specific flaw or vulnerability is not a mutually exclusive event; multiple researchers with differing motivations can and do discover the same flaws independently.

There is no standard way to make vulnerability information available to the public; researchers often use mailing lists dedicated to the topic, academic papers, or industry conferences.

Non disclosure

Non disclosure is the policy that vulnerability information should not be shared, or should be shared only under a non-disclosure agreement (whether contractual or informal).

Common proponents of non-disclosure include commercial exploit vendors, researchers who intend to exploit the flaws they find,[5] and proponents of security through obscurity.

Debate

In 2009, Charlie Miller, Dino Dai Zovi and Alexander Sotirov announced the "No More Free Bugs" campaign at the CanSecWest conference, arguing that companies profit from and take advantage of security researchers by not paying them for disclosing bugs.[17] The announcement made the news and opened a broader debate about the problem and its associated incentives.[18][19]

Arguments against coordinated disclosure

Researchers in favor of coordinated disclosure believe that users cannot make use of advance knowledge of vulnerabilities without guidance from the vendor, and that the majority is best served by limiting the distribution of vulnerability information. Advocates argue that low-skilled attackers can use this information to perform sophisticated attacks that would otherwise be beyond their ability, and that the potential benefit does not outweigh the potential harm caused by malevolent actors. Only when the vendor has prepared guidance that even the most unsophisticated users can digest should the information be made public.

This argument presupposes that vulnerability discovery is a mutually exclusive event, that only one person can discover a vulnerability. In fact, there are many examples of vulnerabilities being discovered simultaneously, often being exploited in secrecy before discovery by other researchers.[20] Full disclosure advocates also contend that the argument reflects contempt for the intelligence of end users: while some users cannot benefit directly from vulnerability information, those concerned with the security of their networks are in a position to hire an expert to assist them, just as they would hire a mechanic to help with a car.

Arguments against non disclosure

Non disclosure is typically used when a researcher intends to use knowledge of a vulnerability to attack computer systems operated by their enemies, or to sell knowledge of a vulnerability for profit to a third party, which will typically use it to attack its own enemies.

Researchers practicing non disclosure are generally not concerned with improving security or protecting networks. However, some proponents[who?] argue that they simply do not want to assist vendors, and claim no intent to harm others.

While full and coordinated disclosure advocates declare similar goals and motivations, simply disagreeing on how best to achieve them, non disclosure is entirely incompatible with those goals.

References

  1. Heiser, Jay (January 2001). "Exposing Infosecurity Hype". Information Security Mag. TechTarget. http://infosecuritymag.techtarget.com/articles/january01/columns_curmudgeons_corner.shtml. 
  2. Schneier, Bruce (January 2007). "Damned Good Idea". CSO Online. https://www.schneier.com/essay-146.html. 
  3. Rose, Leonard. "Full-Disclosure". A lightly-moderated mailing list for the discussion of security issues. https://lists.grok.org.uk/mailman/listinfo/full-disclosure. 
  4. Hobbs, Alfred (1853). Locks and Safes: The Construction of Locks. London: Virtue & Co. 
  5. Shepherd, Stephen. "Vulnerability Disclosure: How do we define Responsible Disclosure?". SANS GIAC SEC PRACTICAL VER. 1.4B (OPTION 1). SANS Institute. https://www.sans.org/reading_room/whitepapers/threats/define-responsible-disclosure_932. 
  6. Moore, Robert (2005). Cybercrime: Investigating High Technology Computer Crime. Matthew Bender & Company. p. 258. ISBN 1-59345-303-5. 
  7. "Software Vulnerability Disclosure in Europe" (in en-US). 2018-06-27. https://www.ceps.eu/ceps-publications/software-vulnerability-disclosure-europe-technology-policies-and-legal-challenges/. 
  8. Weulen Kranenbarg, Marleen; Holt, Thomas J.; van der Ham, Jeroen (2018-11-19). "Don't shoot the messenger! A criminological and computer science perspective on coordinated vulnerability disclosure" (in en). Crime Science 7 (1): 16. doi:10.1186/s40163-018-0090-8. ISSN 2193-7680. 
  9. "Project Zero: Vulnerability Disclosure FAQ". https://googleprojectzero.blogspot.com/p/vulnerability-disclosure-faq.html. 
  10. Christey, Steve. "Responsible Vulnerability Disclosure Process". IETF. p. 3.3.2. https://tools.ietf.org/html/draft-christey-wysopal-vuln-disclosure-00. 
  11. Culp, Scott. "It's Time to End Information Anarchy". Technet Security. Microsoft TechNet. http://www.microsoft.com/technet/treeview/default.asp?url=/technet/columns/security/noarch.asp. 
  12. Goodin, Dan. "Microsoft imposes security disclosure policy on all workers". The Register. https://www.theregister.co.uk/2011/04/19/microsoft_vulnerability_disclosure_policy/print.html. 
  13. Microsoft Security. "Coordinated Vulnerability Disclosure". https://www.microsoft.com/security/msrc/report/disclosure.aspx. 
  14. "About Google's App Security - Google" (in en). https://about.google/intl/ALL_us/appsecurity/. 
  15. "Policy | Zero Day Initiative". https://zerodayinitiative.com/. 
  16. "Reviewing 90 Day Responsible Disclosure Policies in 2022" (in en). 2022-08-30. https://www.tenable.com/podcast/reviewing-90-day-responsible-disclosure-policies-in-2022. 
  17. "Dailydave: No more free bugs (and WOOT)" (in en). https://seclists.org/dailydave/2009/q2/17. 
  18. ""No more free bugs"? There never were any free bugs" (in en). https://www.zdnet.com/article/no-more-free-bugs-there-never-were-any-free-bugs/. 
  19. "No more free bugs for software vendors" (in en). 2009-03-23. https://threatpost.com/no-more-free-bugs-software-vendors-032309/72484/. 
  20. B1tch3z, Ac1d. "Ac1db1tch3z vs x86_64 Linux Kernel". http://seclists.org/fulldisclosure/2010/Sep/268.