
The new security disclosure landscape

Rain Forest Puppy (rfp@wiretrip.net)

Security disclosure has always been a contested topic, pitting “those who find the bugs” against “those who are responsible for the bugs.” In the days before security disclosure became a formal topic, people who gave credence to some sort of moral compass often sought to follow a “gentleman’s code” that typically involved an earnest attempt to disclose the problem to the vendor and give the vendor a chance to fix it. There eventually came a time when this unofficial disclosure code of conduct became subject to different ideologies, which made it hard to know what ideology a discloser or vendor embraced. So the logical move was to write down the unwritten rules of disclosure, and that’s where RFPolicy came from. Note that the goal was not to specify _the_ rules of disclosure, but rather to document one specific disclosure ideology. Having something documented made it easier to convey the intended ‘rules of engagement’, so to speak, when a discloser approached a vendor with a security problem. Others were encouraged to modify and document their own disclosure ideologies; the important aspect was not the process by which one made the disclosure, but that the process was communicated to and understood by both parties. This highlights the underpinning of the entire disclosure process: communication. The exact disclosure process taken is largely irrelevant as long as both parties are communicating effectively, and that particularly includes communicating expectations regarding timelines, anticipated updates, and so on. A written disclosure policy is thus just a foundation that facilitates communication between both parties so they can eventually reach a resolution regarding fixing the security problem.

This all worked well back when these policies were first drafted; however, all of those disclosure ideologies were built upon an assumption that was appropriate for that era but no longer holds true. That is, those ideologies all assume disclosure of security problems contained in disseminated software, discovered in closed environments. Performing a security assessment on a piece of software installed on your own system is generally assumed not to affect any other independently installed copy of that software; the security researcher is essentially performing their research in a vacuum, with (theoretically) no ramifications of that research immediately affecting anyone other than themselves. The aim of disclosing security problems in this context is to get the vendor to update the software, so other users of the software aren’t subject to the same security problems in their installations. Again, this was appropriate thinking for that era. However, now that we are in the Web 2.0 era, this all starts to break down. Whether it’s a popular web site, a SaaS (software as a service) offering, or the like, one thing is immediately different: security researchers are no longer acting upon a closed environment, where the ramifications of their acts are limited to themselves. The nature of what’s being assessed has changed, and thus we need a new type of disclosure process to accommodate the Web 2.0 paradigm.

Unfortunately it’s not just a simple matter of adjusting expected disclosure timelines and rewording some disclosure policy passages to say “site updates” rather than “software patches.” There’s a lot more at play now. And more importantly, there’s a lot more at risk now. No, I’m not referring to vendors and corporations realizing they have a big security problem on their hands that must be dealt with. No, I’m not referring to all the ramifications that occur when you have a security problem (publicity, monetary, etc.). You see, the tables have turned: security researchers are the ones at risk now. Reviewing an installed piece of software in your own closed environment, while conceptually subject to copyright and other intellectual property restrictions, is benign enough within that exact context. However, reviewing someone else’s production web site (without their permission, of course) for security problems is essentially a criminal activity. What is the real difference between looking for a vulnerability in a web site to help make it more secure and looking for a vulnerability in a web site for malicious purposes? In the initial stages, both involve the exact same technical activity and process. The only difference is the actor’s intent, and intent is just a subjective frame of mind that can easily be (mis)interpreted in a court of law.

There’s no real practical way to change the act of looking for security problems in a third-party hosted web site such that it is 100% clear the act and intent are not malicious (with the exception of gaining permission to perform such activity ahead of time). Further, the laws and precedents of many countries are very clear regarding cybercrime…and they directly define, encompass, and punish the very activity that many well-meaning security researchers believe they can perform against third-party web sites regardless. Yes, these well-meaning security researchers may not have malicious intent, but intent is often an after-the-fact determination that may not be weighed before the activity is investigated and prosecuted as cybercrime. So, simply put: NO MATTER YOUR INTENTIONS, LOOKING FOR SECURITY VULNERABILITIES IN THIRD-PARTY WEB SITES (without permission) IS ILLEGAL PER THE LAWS OF YOUR COUNTRY. Period. That statement is so important, I will repeat it: NO MATTER YOUR INTENTIONS, LOOKING FOR SECURITY VULNERABILITIES IN THIRD-PARTY WEB SITES (without permission) IS ILLEGAL PER THE LAWS OF YOUR COUNTRY.

The law is the law, and changing it is a long, drawn-out process. While many may not agree with the law, it is what it is for the time being. And if the laws in your country address cybercriminal activity, then it is likely that looking for security vulnerabilities in a third-party hosted web site is not differentiated in any way from exploiting that web site for malicious purposes. Thus disclosure policies and ideologies that aim to describe how to disclose problems found in third-party web sites are a bit of a misnomer, because researchers should generally be discouraged from looking in the first place: the research activity itself is likely to be considered criminal!

Of course, that’s a dismal view: researchers just leave things be, and we all go blissfully unaware of the security problems running rampant in our world, all due to the fear of prosecution. For the sake and security of the Internet, there needs to be something else (that doesn’t involve sweeping global law changes to favor or accommodate security research). Fortunately, I think there is.

Remember: the difference between a well-meaning researcher and a cybercriminal is their general intent. Therefore, a good solution to this conundrum would be for the researcher to make their intent known to the vendor/third-party responsible for the target web site. Since the vendor/third-party decides whether to pursue a criminal investigation, knowing the intent of the researcher could presumably change their decision to launch an investigation. While simple on the surface, there are still lots of caveats:

· The vendor/third-party must be willing to recognize the different intents (well-meaning security researcher vs. cybercriminal) and treat them accordingly

· There needs to be a reconcilable way for a researcher to establish intent to the vendor/third-party

· The method of establishing intent should dissuade cybercriminals from using it to disguise their true (malicious) intent

· The entire process should not interfere with or otherwise hinder any incident response processes of the vendor/third-party, which are still necessary to handle true cybercrime incidents in a timely fashion

That’s a tall order, but I believe it’s possible. However, in order to work, it’s going to require another paradigm shift: vendors/third-parties will have to proactively decide their level of participation, while security researchers will be subject to the vendor’s decisions in a more absolute manner. This is contrary to the traditional disclosure ideologies of eras past, which gave the upper hand to the researchers. However, if researchers truly want to be able to assess third-party hosted web sites for security problems without being subject to criminal penalties, they need to do so under the circumstances the third-party is willing to operate within.

The details of how I believe this can be done are still in the draft stages. Hopefully in the next few weeks, after receiving and incorporating feedback, I will discuss the details of an approach that can hopefully provide direction on how to handle these types of disclosure. Keep tabs on this blog or www.wiretrip.net for details.

——————

Editor’s Note: Microsoft has long understood the importance of thanking and acknowledging responsible researchers. We have an acknowledgement policy that includes online vulnerability finders, and a FAQ that explains how it works.

For more information on Microsoft’s online finder acknowledgement, see: http://www.microsoft.com/technet/security/acknowledge/default.mspx and http://www.microsoft.com/technet/security/acknowledge/faq.mspx.

