This is the first of four posts on our experience deploying Content Security Policy at Dropbox. If this sort of work interests you, we are hiring! We will also be at AppSec USA this week. Come say hi!
At Dropbox, we are big fans of Content Security Policy, or CSP. For those not familiar with the specification, I recommend reading Mike West’s excellent introduction to CSP. A quick recap: at its core, CSP is a declarative mechanism for whitelisting content sources (for scripts, objects, images, and so on) in a web application.
Setting a CSP policy mitigates injection risk in the web application by limiting content sources. For example, here’s the script-src directive in the content security policy for a request I made to the Dropbox homepage:
'unsafe-eval' 'self' 'unsafe-inline' 'nonce-w+NfAhVZiaXzHT1RmBqS'
The directive lists all the trusted URIs (including the full path for browsers supporting it) where we could possibly load script code from. When a web browser supporting CSP sees a script tag, it checks the src attribute and matches it against the whitelist provided by the script-src directive of the CSP policy. If the script source is not included in the whitelist (maybe because of HTML injection), the browser will block the request.
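As a toy illustration of that browser-side check (the whitelist hosts below are invented, and real source matching per the CSP specification is more involved, covering schemes, wildcards, ports, paths, and nonces):

```python
# Toy model of script-src matching. The whitelisted hosts are invented for
# this example; real CSP matching also handles schemes, wildcards, ports,
# paths, and nonces.
SCRIPT_SRC_WHITELIST = [
    "https://www.dropbox.com/",
    "https://static.example-cdn.com/js/",
]

def is_allowed(src):
    # A script tag's src must match a whitelisted source; otherwise the
    # browser blocks the load (and can send a violation report).
    return any(src.startswith(source) for source in SCRIPT_SRC_WHITELIST)
```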
The Dropbox CSP policy provides a strong mitigation against XSS and content injection attacks. But deploying a strong CSP policy at scale has a number of challenges. We hope that this four-part series sharing the lessons we learned provides value to the broader community. Today’s post discusses how to set up a report filtering pipeline to identify errors in the policy; in the second post, we will discuss how we deployed nonces and mitigated the ‘unsafe-inline’ in the policy above. In the third post, we will discuss our efforts to mitigate the risk from ‘unsafe-eval’, including open-sourcing patches we wrote. Finally, we will discuss how we reduced the risk of third-party integrations with privilege separation.
Filtering CSP Violation Reports
Identifying and enforcing a CSP header for a modern, complex web application is a difficult task. Thankfully, Content Security Policy supports a feature to help you roll it out: report-only mode. In report-only mode, a website can test out policies and see their impact via violation reports sent to an endpoint of the policy author’s choosing. For example, you could set a report-only policy of script-src ‘none’ to learn all the places you include scripts from.
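As a minimal sketch of what that looks like on the wire (the directive value and the /csp_log endpoint here are illustrative, not Dropbox’s actual configuration), the policy is delivered as a response header:

```python
# Illustrative sketch: building a report-only CSP header. The directive
# value and the /csp_log reporting endpoint are made up for this example.
def report_only_header(report_uri="/csp_log"):
    policy = "script-src 'none'; report-uri {}".format(report_uri)
    return ("Content-Security-Policy-Report-Only", policy)

name, value = report_only_header()
# With this header, the browser enforces nothing, but POSTs a JSON
# violation report to /csp_log whenever a script load would have been
# blocked under the policy.
```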
Report-only mode holds great promise for deploying CSP: you keep iterating on the policy in report-only mode until you stop receiving violation reports, and then flip the switch to enforcement. This is often the recommended first step before turning on CSP in enforcement mode. Similarly, at a recent event I attended, the panel on adopting modern security mechanisms stressed how CSP’s report-only mode can provide a useful crutch for deploying CSP, allowing you to evaluate possible policies before deploying them in enforcement mode.
This is true: CSP reporting is an irreplaceable tool for getting actionable feedback on deployed policies. At Dropbox, we deployed CSP in report-only mode for months before flipping the switch and going to "block" mode. But, at scale, one of the first lessons of deploying CSP is the sheer noise in the reports, which makes the default reporting mechanism unusable.
We found the biggest source of noise to be browser extensions that insert scripts into the page, along with other programs that modify its HTML. Recall that CSP blocks any unknown content from running on your page, so content injected by extensions will likely get blocked by the browser too. If we simply log every report that reaches us, the logs will contain these errors as well. Since we have no control over these extensions, the end goal of “no more violation reports” mentioned above is unreachable.
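To make this concrete, here is roughly what an extension-triggered report looks like (the field names follow the CSP violation report format; the specific values are invented for illustration):

```python
# Hypothetical violation report caused by a browser extension injecting a
# script into the page. Field names follow the CSP report format; the
# values are invented for illustration.
extension_report = {
    "csp-report": {
        "document-uri": "https://www.dropbox.com/home",
        "violated-directive": "script-src",
        "blocked-uri": "chrome-extension://abcdefgh/inject.js",
    }
}

# Nothing in the application loads this script; the report is pure noise.
blocked = extension_report["csp-report"]["blocked-uri"]
```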
Given our experience deploying CSP at scale, we have over the last year fine-tuned a filtering mechanism to ignore common false-positive violation reports. Our reporting pipeline filters out these reports before logging them to our analytics backend. In the spirit of encouraging adoption of CSP, we are sharing these filtering techniques and hope that you find them useful. The list started off from Neil Matatall’s brilliant, detailed list that we strongly recommend reading too.
At first glance, filtering violation reports sounds weird. Why would you not want to know when ad-injectors and spammers are modifying your web application? But, recall that we are talking about the pre-rollout phase of CSP. At this stage, the focus is on making sure that the CSP content whitelist isn’t breaking the web application. Filtering out the noise lets you focus on places where CSP enforcement might be a breaking change and fix appropriately. Once you enable CSP enforcement, the browser will block all the loads in the filtered list anyhow.
The filtering is twofold: first, we filter based on the URI scheme of the blocked URI.
_ignored_blockedURI_schemas = [
    "http:",                  # we never use HTTP content, so not a legit report
    "mxaddon-pkg",            # Maxthon addon packages
    "jar:",                   # Firefox addons
    "file:",                  # we never insert file URIs into the page
    "tmtbff://",              # Ticketmaster toolbar?
    "safari",                 # Safari extensions
    "chrome",                 # stuff like chromenull:, chrome://, chrome-extension://
    "webviewprogressproxy:",  # added by browsers in webviews
]
If the scheme of the blocked URI starts with any member of this list, we ignore the report. The second step of the filtering looks at the host component of the blocked URI:
"akamaihd.net", #Dropbox doesn't use Akamai CDN