Mon, 10 Sep 2018

Security Bugs in Practice: SSRF via Request Splitting

One of the most interesting (and sometimes scary!) parts of my job at Mozilla is dealing with security bugs. We don't always ship perfect code – nobody does – but I'm privileged to work with a great team of engineers and security folks who know how to deal effectively with security issues when they arise. I'm also privileged to be able to work in the open, and I want to start taking more advantage of that to share some of my experiences.

One of the best ways to learn how to write more secure code is to get experience watching code fail in practice. With that in mind, I'm planning to write about some of the security-bug stories that I've been involved in during my time at Mozilla. Let's start with a recent one: Bug 1447452, in which some mishandling of unicode characters by the Firefox Accounts API server could have allowed an attacker to make arbitrary requests to its backend data store.

The bug: corruption of unicode characters in HTTP request path

It started when I was debugging an unrelated unicode-handling issue that eventually led me to a bug report against the Node.js `http` module, where the reporter noted that:

When making a request using `http.get` with the path set to `'/café🐶'`, the server receives a request for `/café=6`

In other words, the reporter was asking Node.js to make a HTTP request to a particular path, but the outgoing request was actually directed at a different path! Digging into the details, it turned out that this issue was caused by a lossy encoding of unicode characters when Node.js was writing the HTTP request out to the wire.

Although users of the `http` module will typically specify the request path as a string, Node.js must ultimately output the request as raw bytes. JavaScript has unicode strings, so converting them into bytes means selecting and applying an appropriate unicode encoding. For requests that do not include a body, Node.js defaults to using "latin1", a single-byte encoding that cannot represent high-numbered unicode characters such as the 🐶 emoji. Such characters are instead truncated to just the lowest byte of their internal JavaScript representation:

> v = "/caf\u{E9}\u{01F436}"
'/café🐶'
> Buffer.from(v, 'latin1').toString('latin1')
'/café=6'
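
To see where the "=6" comes from, here's a runnable sketch of what latin1 truncation does to each UTF-16 code unit; the 🐶 emoji is stored internally as the surrogate pair 0xD83D 0xDC36, and only the low byte of each unit survives:

```javascript
// Sketch of the lossy encoding: latin1 keeps only the low 8 bits of each
// UTF-16 code unit. The dog emoji is the surrogate pair 0xD83D 0xDC36.
const path = '/caf\u{E9}\u{01F436}'; // '/café🐶'

const truncated = path
  .split('')
  .map((ch) => String.fromCharCode(ch.charCodeAt(0) & 0xff))
  .join('');

console.log(truncated); // '/café=6'  (0xD83D -> 0x3D '=', 0xDC36 -> 0x36 '6')

// Buffer does exactly the same thing when asked to encode as latin1:
const viaBuffer = Buffer.from(path, 'latin1').toString('latin1');
console.log(viaBuffer === truncated); // true
```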

Data corruption when handling user input is frequently a red flag for an underlying security issue, and I knew that our codebase made outgoing HTTP requests that could include user input in the path. So I immediately filed a confidential security bug in Bugzilla, reached out to the node security team for more info, and dove in to look for places where we might be constructing URLs based on user-provided unicode strings.

The vulnerability: SSRF via Request Splitting

The specific vulnerability I was worried about was an attack called request splitting, to which text-based protocols like HTTP are often vulnerable. Consider a server that takes some user input and includes it in a request to an internal service exposed over HTTP, like this:

GET /private-api?q=<user-input-here> HTTP/1.1
Authorization: server-secret-key

If the server does not properly validate the user input, it may be possible for an attacker to inject protocol control characters directly into the outgoing request. Suppose in this case that the server accepted a user input of:

"x HTTP/1.1\r\n\r\nDELETE /private-api HTTP/1.1\r\n"

When making its outgoing request, the server might write this out to the wire directly as:

GET /private-api?q=x HTTP/1.1

DELETE /private-api HTTP/1.1
Authorization: server-secret-key

The receiving service would interpret this as two separate HTTP requests, a `GET` followed by a `DELETE`, with no way to know that this isn't what the caller intended.
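
The splitting can be demonstrated in a few lines of Node.js. The request template and secret header below are the illustrative ones from above, not a real API:

```javascript
// Sketch of request splitting: naive string interpolation turns one request
// template into two requests on the wire. The template and the secret header
// are illustrative, not a real API.
const userInput = 'x HTTP/1.1\r\n\r\nDELETE /private-api HTTP/1.1\r\n';

const rawRequest =
  `GET /private-api?q=${userInput} HTTP/1.1\r\n` +
  'Authorization: server-secret-key\r\n\r\n';

// The backend parses the byte stream line by line, so it sees two request
// lines where the caller intended only one:
const requestLines = rawRequest
  .split('\r\n')
  .filter((line) => /^(GET|DELETE) /.test(line));

console.log(requestLines);
// [ 'GET /private-api?q=x HTTP/1.1', 'DELETE /private-api HTTP/1.1' ]
```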

In effect, this specially-crafted user input would trick the server into making an additional outbound request, a situation known as Server-Side Request Forgery or "SSRF". The server may have privileges that the attacker does not, such as access to internal networks or secret API keys, which would increase the severity of the issue.

Good-quality HTTP libraries will typically include mitigations to prevent this behaviour, and Node.js is no exception: if you attempt to make an outbound HTTP request with control characters in the path, they will be percent-escaped before being written out to the wire:

> http.get('http://example.com/\r\n/test').output
[ 'GET /%0D%0A/test HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' ]

Unfortunately, the above bug in handling unicode characters means that these measures can be circumvented. Consider a URL like the following, which contains some unicode characters with diacritics:

> 'http://example.com/\u{010D}\u{010A}/test'
'http://example.com/čĊ/test'

When Node.js version 8 or lower makes a `GET` request to this URL, it doesn't escape these characters, because they are not HTTP control characters:

> http.get('http://example.com/\u010D\u010A/test').output
[ 'GET /čĊ/test HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' ]

But when the resulting string is encoded as latin1 to write it out to the wire, these characters get truncated into the bytes for "\r" and "\n" respectively:

> Buffer.from('http://example.com/\u{010D}\u{010A}/test', 'latin1').toString()
'http://example.com/\r\n/test'

Thus, by including carefully-selected unicode characters in the request path, an attacker could trick Node.js into writing HTTP protocol control characters out to the wire.
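
U+010D and U+010A are not the only dangerous code points: any character whose low byte is 0x0D or 0x0A will alias to CR or LF under this truncation. A quick sketch enumerating a few of them:

```javascript
// Sketch: enumerate code points up to U+03FF whose low byte is 0x0D (CR)
// or 0x0A (LF), i.e. characters that alias to CR/LF under latin1 truncation.
const aliasesToCRLF = [];
for (let cp = 0x100; cp <= 0x3ff; cp++) {
  const low = cp & 0xff;
  if (low === 0x0d || low === 0x0a) {
    aliasesToCRLF.push('U+' + cp.toString(16).toUpperCase().padStart(4, '0'));
  }
}
console.log(aliasesToCRLF);
// [ 'U+010A', 'U+010D', 'U+020A', 'U+020D', 'U+030A', 'U+030D' ]
```

Note that the list includes combining diacritical marks, which attach invisibly to a preceding letter and so look entirely harmless in a string.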

The behaviour has been fixed in the recent Node.js 10 release, which will throw an error if the request path contains non-ascii characters. But for Node.js versions 8 or lower, any server that makes outgoing HTTP requests may be vulnerable to an SSRF via request splitting if it:

  • Accepts unicode data from user input, and
  • Includes that input in the request path of an outgoing HTTP request, and
  • The request has a zero-length body (such as a GET or DELETE).
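
A defensive guard in the spirit of the Node.js 10 behaviour (this is a sketch, not the actual patch) is to reject any request path containing characters outside printable ASCII before it ever reaches `http.request`:

```javascript
// Sketch of a guard in the spirit of the Node.js 10 fix (not the actual
// patch): refuse any path containing non-printable-ASCII characters.
function assertSafePath(path) {
  if (/[^\u0021-\u007e]/.test(path)) {
    throw new TypeError('Request path contains unescaped characters');
  }
  return path;
}

assertSafePath('/email/74657374406578616d706c652e636f6d'); // fine
try {
  // U+010D U+010A would truncate to "\r\n" on the wire under latin1.
  assertSafePath('/\u010D\u010A/test');
} catch (err) {
  console.log(err.message); // 'Request path contains unescaped characters'
}
```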

The impact: forging requests to the FxA data store

We audited the FxA server stack to look for places where it makes HTTP requests with a zero-length body and user-provided data in the request path, and we found three places where the above bug could be triggered.

The first was in our support for WebPush. A signed-in client can provide a https URI at which to receive notification of account status changes, which the server will deliver by making a zero-length `PUT` request. Fortunately, the requests made by the server in this case do not carry any special privileges or include any API tokens. The bug could be exploited here to trick the FxA server into making an unintended request to the webpush notification host, but that request would not be any more powerful than one which the attacker could have made directly.

The second was in checking the authenticity of BrowserID certificates, where the FxA server parses a hostname out of a user-provided JSON blob, and then fetches the signing keys for that host by making a `GET` request like:

GET /.well-known/browserid?domain=<hostname>

In our development environment, this bug could thus be exploited to trick the server into making arbitrary requests to arbitrary hostnames. Fortunately, in our production environment these requests are all sent via the squid caching proxy, which is configured with strict validation rules to block any unexpected outgoing requests, and which prevented the bug from being exploited in this case.

The third was in making HTTP requests to our backend data store, and it's here that we had a real exploitable issue in practice.

As a bit of background, the Firefox Accounts production server is split between a web-facing API server and a separate internal datastore service that talks to a MySQL database, like this:

+--------+        +--------+        +-----------+       +----------+
| Client |  HTTP  |  API   |  HTTP  | DataStore |  SQL  |  MySQL   |
|        |<------>| Server |<------>|  Service  |<----->| Database |
+--------+        +--------+        +-----------+       +----------+

The API server talks to the datastore service over plain old HTTP, and it turned out that there was one single place where unicode data from user input could make its way into the path of one of these requests.

Many of our data storage requests are keyed by email address, and email addresses are allowed to contain unicode characters. To avoid issues with unicode encoding and decoding between the two services, most email-related operations in our datastore API accept the email as a hex-encoded utf8 string. For example, the API server would fetch the account record for email "test@example.com" by making a HTTP request to the data store like this:

GET /email/74657374406578616d706c652e636f6d
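
Hex-encoding sidesteps this whole class of problem, because only the characters 0-9 and a-f can ever appear in the path, no matter what the input looks like. A sketch of the scheme (`hexEncodeEmail` is an illustrative helper, not the actual FxA code):

```javascript
// Illustrative helper (not the actual FxA code): hex-encode the utf8 bytes
// of an email address so the request path only ever contains [0-9a-f].
function hexEncodeEmail(email) {
  return Buffer.from(email, 'utf8').toString('hex');
}

console.log(hexEncodeEmail('test@example.com'));
// '74657374406578616d706c652e636f6d'
```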

By a simple historical oversight, there was one operation that accepted the email address as a raw string. Deleting an email from an account with id "xyz" was done via a request like:

DELETE /account/xyz/emails/test@example.com

This is inconsistent, but it's not obvious from casual inspection that it could cause a security problem — we carefully validate all user input coming into the system, so the email address can't contain any HTTP control characters, and even if it did they would be automatically escaped by the `http` module. But the email address can contain unicode characters.

In a test environment, I was able to create an account and add the following strange-but-valid email address to it:

x@̠ňƆƆɐį1̮1č̊č̊ɆͅƆ̠įaccountįf9f9eebb05ef4b819b0467cc5ddd3b4aįsessions̠ňƆƆɐį1̮1č̊č̊.cc

The non-ascii characters here are carefully chosen so that, when lowercased and encoded in latin1, they will produce the raw bytes for various HTTP control characters:

> v = 'x@̠ňƆƆɐį1̮1č̊č̊ɆͅƆ̠įaccountįf9f9eebb05ef4b819b0467cc5ddd3b4aįsessions̠ňƆƆɐį1̮1č̊č̊.cc'
> Buffer.from(v.toLowerCase(), "latin1").toString()
'x@ HTTP/1.1\r\n\r\nGET /account/f9f9eebb05ef4b819b0467cc5ddd3b4a/sessions HTTP/1.1\r\n\r\n.cc'

By adding this email address to an account and then deleting it, I could cause the API server to make an HTTP request to the datastore like:

DELETE /account/f9f9eebb05ef4b819b0467cc5ddd3b4a/email/x@̠ňɔɔɐį1̮1č̊č̊ɇͅɔ̠įaccountįf9f9eebb05ef4b819b0467cc5ddd3b4aįsessions̠ňɔɔɐį1̮1č̊č̊.cc

Which, thanks to the above bug in Node.js, would be written out to the wire as:

> console.log(Buffer.from('DELETE /account/f9f9eebb05ef4b819b0467cc5ddd3b4a/email/x@̠ňɔɔɐį1̮1č̊č̊ɇͅɔ̠įaccountįf9f9eebb05ef4b819b0467cc5ddd3b4aįsessions̠ňɔɔɐį1̮1č̊č̊.cc', 'latin1').toString())
DELETE /account/f9f9eebb05ef4b819b0467cc5ddd3b4a/email/x@ HTTP/1.1

GET /account/f9f9eebb05ef4b819b0467cc5ddd3b4a/sessions HTTP/1.1

.cc

That's an SSRF, causing the API server to make an extra `GET` that it did not intend.

This specific `GET` request would be harmless, but it was enough to convince me that the bug was exploitable and could potentially be used to trick the API server into making a wide variety of fraudulent requests to the datastore API — say, to create an account for an email address that the user did not control, or to reset the password on another user's account, or just about any operation that could be expressed within the 255-unicode-character length limit that Firefox Accounts imposes on an email address.

Fortunately, we do not have any evidence of this bug being actively exploited in our production environment.

It's also important to note that it would not have been possible for an attacker to exploit this bug to access a user's Firefox Sync data. Firefox Sync uses strong client-side encryption to ensure that only someone who knows your account password can access your synced data.

The quick fix: encoding the email address

Upon first encountering the underlying Node.js issue, I had reached out to the node security team for information and guidance. They were very responsive, and confirmed that this was a known behaviour that couldn't be changed for backwards-compatibility reasons, but would be fixed in the then-upcoming release of Node.js 10. In other words: we would have to ship a fix in our application.

In a bit of a twist, it turned out that we had already noticed this discrepancy in the behaviour of the email-deletion endpoint, and our fantastic Outreachy intern Deepti had fixed it to hex-encode the email address as a matter of general code cleanliness. Unfortunately that fix had not yet shipped to production, so we had to enact our "chemspill" process to ship it to production as quickly as possible.

We maintain a private github fork of all Firefox Accounts code repositories for exactly this purpose, so in practice the process of releasing the fix involved:

  • Syncing the private repo with the latest release branch from the public repo.
  • Cherry-picking the fix into the private release branch, and requesting review.
  • Making a new release tag in the private repo, and allowing CircleCI to build docker images for deployment.
  • Deploying the new release to our staging environment and running a suite of both manual and automated tests to guard against regressions.
  • Rolling out the fix to our production environment.

All up, it took us a little less than 24 hours to go from initial awareness of the underlying Node.js bug through to having a fix deployed in production. That's including the time spent on analysis, auditing, code review, QA and deployment, and I think it's a pretty solid turnaround time! I'm very proud of everyone on the Firefox Accounts team for their quick and professional response to this issue.

The followup: adding additional mitigations

With any security-related issue, it's important not to just push out a fix and then walk away. Instead, try to figure out what circumstances led to the issue and whether similar issues can be prevented or mitigated in the future.

In this case, the ultimate cause of the issue was HTTP's text-based nature making it vulnerable to injection-style attacks such as request splitting. This particular Node.js bug is just one example of how things can go wrong when constructing HTTP requests; the recent Black Hat presentation "A New Era of SSRF" provides many more examples in a variety of programming languages.

In my opinion, the best long-term mitigation will be for us to move away from using HTTP for internal service requests, and towards something more structured like gRPC. However, that's not feasible in the short-term.

Instead, we borrowed a page from the playbook of another classic text-based protocol with a long history of injection-style attacks: SQL. A modern web app should never be building SQL queries from user input by hand, but should instead be using techniques such as parameterized queries or programmatic query builders. Bugs like this one show that we should not be constructing any part of a HTTP request by hand either.

Once we were confident that the initial fix was stable and working in production, we refactored all outgoing HTTP requests in the API server to use a thin wrapper around the safe-url-assembler package. This should ensure that the final URL string is assembled from properly-encoded components, providing an extra layer of protection against any similar bugs that may arise in the future.
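
The principle is easy to sketch. The `buildPath` helper below is hypothetical and is not the actual safe-url-assembler API, but it shows the idea: every parameter is percent-encoded before it is ever joined into the URL string, so the kind of crafted email shown earlier becomes harmless:

```javascript
// Hypothetical buildPath helper showing the idea behind a URL-builder layer
// such as safe-url-assembler (this is not that package's API): parameters
// are percent-encoded before being joined into the URL.
function buildPath(template, params) {
  return template.replace(/:(\w+)/g, (match, name) => {
    if (!(name in params)) {
      throw new Error('missing param: ' + name);
    }
    return encodeURIComponent(params[name]);
  });
}

console.log(buildPath('/account/:uid/emails/:email', {
  uid: 'xyz',
  email: 'x@\u010D\u010A.cc', // would have truncated to CR/LF on the wire
}));
// '/account/xyz/emails/x%40%C4%8D%C4%8A.cc'
```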

If you run a server that can make outgoing HTTP requests that include any sort of user input, I highly recommend taking a look at the "A New Era of SSRF" presentation to get a sense of all the ways this can go wrong. It's eye-opening stuff, and it makes the small overheads of an extra safety layer like safe-url-assembler seem very worthwhile for some extra peace of mind.


Thanks to Shane Tomlinson, Chris Karlof, and Greg Guthe for reviewing initial drafts of this post, and to the entire Firefox Accounts team for this among many other adventures.