Towards better vendor security assessments

By Hongyi Hu • Mar 27, 2019

Addressing vendor security is a significant and inescapable problem for any modern company. Like many other companies, Dropbox has external third-party integrations with our products, and we also use vendors for internal services, from HR workflows to sales, marketing, and IT. In many ways, vendors play a critical part in Dropbox’s overall security posture and thus require appropriate scrutiny from our security team based on the risk posed by the vendor and feasible mitigations.

Today, we’re sharing the results of an experiment to improve vendor security assessments—directly codifying reasonable security requirements into our vendor contracts. We’re also sharing our model security legal terms and making them freely available for anyone to use and modify. We hope that more companies adopting this approach will help incentivize vendors to prioritize security and lead to broader security improvements among vendors.

Would Dropbox sign these security terms when we are the vendor in question? Of course! We can only demand our vendors commit to a top-tier security posture if we have done the same ourselves.


Like other companies, we have historically used security questionnaires to evaluate vendors as part of our overall vendor risk assessment process. Over time, we realized that questionnaires generally provided little value because they often yielded frustratingly vague answers that were difficult to verify. In many cases, this resulted in unnecessary back-and-forth discussion, delays for the internal stakeholders who wanted to use the vendor, and ultimately a lack of useful signal on vendor security posture. Questionnaires also tended to favor vendors who could provide answers that sounded good rather than vendors that actually had a strong security program.

So we tried to do better.

Principles for more useful vendor security requirements

No list of security requirements will be perfect. Instead of aiming for perfection, we tried to follow these principles:

  • Encourage transparency. We believe that transparency, such as having a permissive vulnerability disclosure policy (VDP) that encourages security research, is a key characteristic of a good, mature security program.
  • Prioritize decision-making speed. Slowing down our internal stakeholders to make a security decision is a bad result and runs counter to our culture as a security team. We wanted to ensure that the process gave stakeholders a much faster outcome than before and also quickly revealed failure modes that we could rapidly address and iterate upon.
  • Protect ourselves and security researchers. In the modern SaaS world, we must consider vendors to be within our security perimeter. As a security team, we need to be able to test their security posture in order to protect our users’ data. Because anti-hacking laws have become an over-broad and complex patchwork, we need to ensure that we protect ourselves and external security researchers that might be looking for bugs in Dropbox or bugs in our vendors that impact Dropbox.
  • Walk the talk. We should be able to commit to and meet the same security standard that we hold vendors to. We should also keep requirements flexible by capturing the spirit of basic security best practices rather than providing an unnecessarily detailed and rigid list of requirements such as “you must use CVSS to determine severity.”
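On the VDP point above: one widely used way to publish a vulnerability disclosure policy is a security.txt file (a convention since standardized as RFC 9116) served from /.well-known/security.txt. A minimal sketch, with every URL and address an illustrative placeholder:

```text
# Illustrative /.well-known/security.txt (RFC 9116); all values are placeholders
Contact: mailto:security@example.com
Expires: 2026-01-01T00:00:00Z
Policy: https://example.com/security/disclosure-policy
Acknowledgments: https://example.com/security/hall-of-fame
Preferred-Languages: en
```

A published policy like this makes the "encourage transparency" principle easy to verify from the outside, for both assessors and security researchers.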

We decided to translate our core security requirements into legal terms that we would automatically add to our standard vendor contract if a vendor posed enough risk. While this might seem like an odd approach, it makes sense for us for several reasons.

First, we already needed to impose certain security and privacy contractual obligations on vendors, so we needed to periodically revise existing language to accurately reflect our needs.

Second, we needed to secure legal authorization for us and for our bug bounty participants to conduct careful, good-faith security testing. That gives us the flexibility to test for bugs ourselves and helps avoid the “snapshot in time” problem of relying on a single initial penetration test. It also allows us to incentivize bug reports from the external security research community and promote more transparency into vendor bug reports. For vendor bug reports that affect the security of Dropbox, we can use our bug bounty program with competitive payouts or top up the vendor’s bounty payouts. And bug bounty reports provide valuable, concrete data points if we need to argue for cancelling a vendor contract or mitigating risk in other ways when a vendor has demonstrated poor security practices and does not meaningfully improve.

Third, putting our security requirements into our contract speeds up the vendor security assessment process by centralizing all of our requirements in one format instead of incurring the overhead of a separate process for addressing security questionnaires.

Fourth, we’ve seen vendors try to bluff their way to good answers to a vendor security questionnaire. Is it the sales team or the security team leading the discussions? Sometimes it’s hard to tell. By engaging directly with the vendor’s legal team, we get less bluffing and more direct discussion.

Finally, if vendors flat out refuse to agree to certain provisions, that’s a potentially useful signal as to their overall commitment level to security, as well as to what risk areas to dig into further.

Of course, this method is not perfect, and we’re still iterating on the language, the scope, and best practices for addressing vendors who sign on but demonstrate unacceptable security practices and do not improve. But we think this is the right direction for our industry to go. We hope that as more companies experiment with and adopt this idea, it will become normal and expected for vendors to have more transparency and higher standards around security.

Share and Experiment!

If you would like to experiment with our model security legal terms, please do so! We are unable to provide legal advice to you, and we highly recommend working with your own legal team to determine how to best employ the model language and modify it to suit your particular needs. Every security program has different challenges and risks, and we hope this is a useful starting recipe.

And if you have any feedback or pull requests to share, please let us know. We would love to see a set of standardized legal templates for vendor security that are useful and well-tested in the future.

Early results

The early results from this experiment are encouraging. We’d like to thank our vendors and partners who have signed our new legal terms. They understand and welcome the benefits of transparency and have committed to protecting good-faith security research, and we expect their lead to become the industry norm over time. Specific benefits of our legal terms so far include:

  • One vendor discovered missing 2FA on an important surface and contractually committed to fixing it. It is likely this gap was only discovered because of the diligence involved in negotiating the legal terms, and that a standard vendor security questionnaire would have glossed over it.
  • These vendors are in scope for our various hackathon events, such as the H1-3120 event we previously blogged about. This event resulted in the discovery (and quick patching) of an RCE-class vulnerability in one of our vendors.
  • Dropbox has a mature bug bounty program with top-tier VIPs. Coming under the umbrella of the Dropbox program, these vendors—even those with existing private bounty programs—have received dozens of high-value vulnerability reports, including many XSS, CSRF, IDOR, and SQLi issues. Some of these were serious, and discovering them so they can be patched is in everyone’s best interest.

These results are win-win, so we will continue with our new legal terms and encourage other parties to also adopt them. On the flip side, we have refused to do business with vendors who have rejected significant portions of our security legal terms. We see refusal to commit to things like transparency and protecting security researchers as significant red flags that the vendor’s commitment to security falls below our standards.


Special thanks to Becca Friedman, Caroline Bontia, Chris Evans, Dan Cook, Devdatta Akhawe, Mangesh Kulkarni, Rajan Kapoor, Romeo Gonzalez, and everyone else on the legal and security teams at Dropbox for all their help and advice on implementing this experiment.
