all that jazz

james' blog about scala and all that jazz

Semantic discoverability security

A well-known security issue in HTTP is that if you return 404 Not Found for a resource that doesn't exist, but return 401 Unauthorized or 403 Forbidden for a resource that you're not allowed to access, then you might be giving away information that an attacker could use: namely, that the resource exists. While in some cases that's no big deal, in other cases it's a security problem.

The usual solution to this is that when you're not permitted to access something, the server should respond as if it doesn't exist, by returning 404 Not Found. Thus you receive the same response whether the resource exists or not, and are unable to determine whether it does exist. I'd like to argue that this is the wrong approach: it is not semantic.

Before we go on, I want to clarify: while returning 401 Unauthorized may be a semantic way to tell a client that it needs to authenticate before it can continue, on the web it's not a practical way to do so, since the HTTP spec requires that it be paired with a user agent aware authentication method, such as HTTP Basic authentication. But typically, sites get people to log in using forms and cookies, so the practical way to tell a web user that they need to authenticate before they can continue is to send a redirect to the login page. For the remainder of this blog post, when I refer to the 401 status code, I mean informing the user that they need to authenticate, which could be done by actually sending a redirect.

So, let's think about what we're trying to achieve here when we say we want to protect discoverability of resources. What we're saying is that we want to prevent users who don't have discoverability permission from finding out whether something does or doesn't exist. The 404 Not Found status code says that a resource doesn't exist. So if I'm not allowed to know whether a resource exists, but the server sends me a 404 Not Found, that's a contradiction, isn't it? It's certainly not semantic. Of course, it's not a security issue, because the server will also send me a 404 Not Found if the resource does exist, but that's not semantic either: the server is in fact lying to me.

Sending a 404 Not Found in every case is not the only solution. There's another solution where the server can say what it really means, i.e. be semantic, while still protecting discoverability. If a resource doesn't exist, but I'm not allowed to find out whether it does or doesn't exist, then the semantic response is not to tell me it doesn't exist, it's to tell me that I'm not allowed to find out whether it exists. This means sending a 401 Unauthorized if I'm not authenticated, or a 403 Forbidden if I am authenticated but am still not allowed to find out. The server has told me the truth: I'm not allowed to know anything about this resource, which may or may not exist. And if the resource does exist and I'm not allowed to access it, the server will do the same, sending a 401 or 403 response. In either case, whether the resource exists or not, the response code will be the same, and so can't be exploited to discover resources.
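
As a sketch, the decision logic might look something like this in Scala. The types and permission checks here are hypothetical stand-ins for whatever your application actually uses:

case class User(name: String)
case class Resource(id: String)

sealed trait Response
case object NotFound extends Response
case object Unauthorized extends Response // in practice, a redirect to the login page
case object Forbidden extends Response
case class Ok(resource: Resource) extends Response

// Hypothetical permission checks, application specific
def canDiscover(user: User): Boolean = true
def canAccess(user: User, resource: Resource): Boolean = true

def serve(user: Option[User], resource: Option[Resource]): Response = user match {
  // Not authenticated: not allowed to find out whether the resource exists
  case None => Unauthorized
  // Authenticated, but still not allowed to find out
  case Some(u) if !canDiscover(u) => Forbidden
  // Only once discoverability is established is Not Found semantic
  case Some(u) => resource match {
    case None => NotFound
    case Some(r) if canAccess(u, r) => Ok(r)
    case Some(_) => Forbidden
  }
}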

In this way, we have implemented discoverability protection, but we have done so semantically. We haven't lied to the user, we haven't told them that something doesn't exist that actually does, we have simply told them that they're not allowed to find out whether it exists. Nor have we violated our own contract by telling a user something is not found when they're not allowed to know whether it's found or not.

In practice, using this approach also provides a much better user experience. If I am not logged in, but I click a link somewhere to the resource, it's much better to be redirected to the login screen, so that I can log in and be redirected back to the resource, than to simply be told that the resource doesn't exist. If the resource doesn't actually exist, then after logging in I can be redirected back to it and get a 404 Not Found. This is semantic and makes sense: it's not until I log in that I can actually get a Not Found out of the server, and that's what discoverability means. This is exactly my argument in a Jenkins issue, where Jenkins currently returns a 404 Not Found screen (with no option to click log in) for builds that I don't have permission to access until I log in.

A practical solution to the BREACH vulnerability

Two weeks ago CERT released an advisory for a new vulnerability called BREACH. In the advisory they say there is no practical solution to this vulnerability. I believe that I've come up with a practical solution that we'll probably implement in Play Framework's CSRF protection.

Some background

First of all, what is the BREACH vulnerability? I recommend you read the advisory, there's no point in me repeating it here, but for those that are lazy, here is a summary. The prerequisites for exploiting this vulnerability are:

  1. The target page must be served over HTTPS, preferably with a stream cipher (e.g. RC4), though it is possible to exploit when block ciphers with padding are used (e.g. AES).
  2. The target page must be using HTTP level compression, e.g. gzip or deflate.
  3. The target page must produce responses with a static secret in them. A typical example would be a CSRF token in a form.
  4. The target page must also reflect a request parameter in the response. It may also be possible to exploit if it reflects POSTed form body values in the response.
  5. Responses must be otherwise reasonably static. Dynamic responses, particularly ones that vary the length of the response, are much more expensive to exploit.
  6. The attacker must be able to eavesdrop on the connection, and specifically, measure the length of the encrypted responses.
  7. The attacker must be able to coerce the victim's browser to request the target web page many times.

To exploit the vulnerability, the attacker gets the victim's browser to submit specially crafted requests. These requests contain repeat patterns that the compression algorithm will compress. If the pattern matches the first part of the secret, then the response will be shorter than if it doesn't, since that part of the secret will also be compressed along with the repeat patterns. Character by character, the attacker can thus determine the secret.
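
To see the length side channel in action, here is a toy Scala illustration using DEFLATE directly. The page template and secret are made up; this demonstrates the principle only, it is not a real exploit:

import java.util.zip.Deflater

// Compress a response body and return the compressed length
def compressedLength(body: String): Int = {
  val deflater = new Deflater()
  deflater.setInput(body.getBytes("UTF-8"))
  deflater.finish()
  val buffer = new Array[Byte](1024)
  var total = 0
  while (!deflater.finished()) total += deflater.deflate(buffer)
  total
}

// A hypothetical page that reflects a request parameter and embeds a secret
val secret = "csrf=4f9a21c8"
def page(reflected: String) = s"<p>You searched for $reflected</p><form>$secret</form>"

// A guess sharing a prefix with the secret compresses better, so the
// response is typically shorter than for a non-matching guess
println(compressedLength(page("csrf=4"))) // matching prefix: usually shorter
println(compressedLength(page("csrf=x"))) // no match: usually longer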

Some work arounds

The advisory mentions some workarounds. Whether these workarounds are effective depends greatly on the specific application, and none of them can be effectively done by a framework without potentially breaking the application.

Probably the most effective of the workarounds is randomising the secret on each request. In the case of CSRF protection tokens, which many frameworks provide, this would prevent a user from using the application from multiple tabs at the same time. It would also cause issues when a user uses the back button.

I would like to propose a variant of randomised tokens that should work for most framework provided CSRF protection mechanisms, and that, pending feedback from the internet on whether my approach will be effective, we will probably implement in Play Framework.

Signed nonces

The idea is to use a static secret, but combine it with a nonce, sign the secret and the nonce together, and do this afresh for every response that the secret is sent in. The signature effectively creates a token that varies in each response, thus violating the third prerequisite above, that the secret be static.

The nonce does not need to be generated in a cryptographically secure way; it may be a predictable value, such as a timestamp. The important thing is that the nonce should change sufficiently frequently, and repeat old values sufficiently infrequently, that it is not possible to collect many responses that use the same nonce. The signature is the unpredictable part of the token.

Application servers will need to have a mechanism for signing the nonce and the secret using a shared secret. For applications served from many nodes, the secret will need to be shared between all nodes.

The application will represent secrets using two types of tokens: "raw tokens", which are just the raw secret, and "signed tokens", for which a new nonce is generated on each use; the nonce is concatenated with the raw token, and the result is signed. An algorithm to do this in Scala might look like this:

def createSignedToken(rawToken: String): String = {
  // A predictable but frequently changing nonce is fine here
  val nonce = System.currentTimeMillis
  // Note: the raw token itself must not contain "-", since it's the delimiter
  val joined = rawToken + "-" + nonce
  joined + "-" + hmacSign(joined)
}

where hmacSign is a function that signs the input String with the application's shared secret using the HMAC algorithm. HMAC is not the only signing algorithm that could be used, but it is a very common choice for these types of use cases.
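
The post leaves hmacSign abstract; as a minimal sketch, it might be implemented with the JDK's built-in HMAC-SHA256 support, with applicationSecret standing in for however your framework exposes its shared secret:

import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Stand-in for the application wide shared secret
val applicationSecret: Array[Byte] = "change-me".getBytes("UTF-8")

def hmacSign(input: String): String = {
  val mac = Mac.getInstance("HmacSHA256")
  mac.init(new SecretKeySpec(applicationSecret, "HmacSHA256"))
  // Hex encode the MAC so the signed token stays a plain printable String
  mac.doFinal(input.getBytes("UTF-8")).map("%02x".format(_)).mkString
}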

Each time a token is sent in a response, it must be a newly generated signed token. While it would be okay to publish the raw token in HTTP response headers, to avoid confusion about which incoming tokens must be signed and which can be raw, I recommend always publishing and only accepting signed tokens. When comparing tokens, the signature should be verified on each token, and only if that passes need the raw parts of the tokens be compared. An algorithm to extract the raw token from a signed token created using the above algorithm might look like this:

def extractRawToken(signedToken: String): Option[String] = {
  // Expect exactly three parts: raw token, nonce, signature
  signedToken.split("-", 3) match {
    case Array(rawToken, nonce, signature)
        if thetaNTimeEquals(signature, hmacSign(rawToken + "-" + nonce)) =>
      Some(rawToken)
    case _ => None // malformed or tampered with
  }
}

where thetaNTimeEquals does a String comparison in Θ(n) time when the lengths of the Strings are equal, to prevent timing attacks (a sketch of such a function follows the comparison code below). Verifying that two tokens match might look like this:

def compareSignedTokens(tokenA: String, tokenB: String): Boolean = {
  val maybeEqual = for {
    rawTokenA <- extractRawToken(tokenA)
    rawTokenB <- extractRawToken(tokenB)
  } yield thetaNTimeEquals(rawTokenA, rawTokenB)
  // If either token fails signature verification, treat the pair as not matching
  maybeEqual.getOrElse(false)
}
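
For completeness, a minimal sketch of thetaNTimeEquals might look like this; it inspects every character even after a mismatch is found, so the time taken depends only on the lengths of the inputs, not on where they first differ:

def thetaNTimeEquals(a: String, b: String): Boolean = {
  if (a.length != b.length) false
  else {
    // Accumulate differences rather than returning early on the first one
    var result = 0
    for (i <- 0 until a.length) {
      result |= a.charAt(i) ^ b.charAt(i)
    }
    result == 0
  }
}

Putting the pieces together, the key property is that two signed tokens for the same secret differ on the wire, yet still compare equal:

val rawToken = "0123456789abcdef" // e.g. randomly generated; must not contain "-"
val tokenA = createSignedToken(rawToken)
val tokenB = createSignedToken(rawToken)
println(tokenA == tokenB)                    // false, assuming the nonce changed
println(compareSignedTokens(tokenA, tokenB)) // true, both carry the same raw token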

Why this works

When using signed tokens, the attacker can still work out what the raw token is using the BREACH vulnerability, but since the application doesn't accept raw tokens, this is not useful to the attacker. Because the attacker doesn't have the secret used to sign the signed token, they cannot generate a signed token themselves from the raw token. Hence, they need to determine not just the raw token, but an entire signed token. But since signed tokens vary for each response, this breaks the third prerequisite above, that secrets in the response must be static, and so they cannot do a character by character evaluation using the BREACH vulnerability.

Encrypted tokens

Another option is to encrypt the concatenation of the nonce and the raw token. This may result in shorter tokens, and I am not aware of any major performance differences between HMAC and AES for this purpose. However, APIs for HMAC signing tend to be a little easier to use safely than APIs for AES encryption, which is why I've used HMAC signing as my primary example.
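
As a sketch of this variant, reusing the applicationSecret stand-in from the HMAC sketch above, AES-GCM could be used, with its random IV playing the role of the nonce so that only the raw token itself needs encrypting. This is my own illustration, not a prescribed construction:

import java.security.{MessageDigest, SecureRandom}
import java.util.Base64
import javax.crypto.Cipher
import javax.crypto.spec.{GCMParameterSpec, SecretKeySpec}

// Derive a 16 byte AES-128 key from the shared secret (illustrative, not a proper KDF)
val aesKey = new SecretKeySpec(
  MessageDigest.getInstance("SHA-256").digest(applicationSecret).take(16), "AES")

def createEncryptedToken(rawToken: String): String = {
  // The random IV varies per response, playing the role of the nonce
  val iv = new Array[Byte](12)
  new SecureRandom().nextBytes(iv)
  val cipher = Cipher.getInstance("AES/GCM/NoPadding")
  cipher.init(Cipher.ENCRYPT_MODE, aesKey, new GCMParameterSpec(128, iv))
  val encrypted = cipher.doFinal(rawToken.getBytes("UTF-8"))
  // Prepend the IV so the receiver can initialise the cipher to decrypt
  Base64.getUrlEncoder.encodeToString(iv ++ encrypted)
}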

Framework considerations

The main issue that might prevent a framework from implementing this is that it might not readily have a secret available to use for the signing or encrypting. When an application runs on a single node, it may be acceptable to generate a new secret at startup, though this would mean the secret changes on every restart.
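
Generating such a startup secret is trivial; a sketch, assuming a single node deployment where invalidating all tokens on restart is acceptable:

import java.security.SecureRandom

// Generated once at startup; all outstanding signed tokens die with a restart
val startupSecret: Array[Byte] = {
  val bytes = new Array[Byte](32)
  new SecureRandom().nextBytes(bytes)
  bytes
}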

Some frameworks, like Play Framework, do have an application wide secret available to them, and so this solution is practical to implement in framework provided token based protection mechanisms such as CSRF protection.

Why does Chrome consider some ports unsafe?

Today in my Twitter feed I noticed a frustrated tweet from Dan North complaining about why Chrome seems to arbitrarily block connections to some ports, giving the confusing error code "net::ERR_UNSAFE_PORT". I've encountered this before, and have been similarly frustrated, so I decided to do a little research and work out what Google means by unsafe. There are a few discussions on Stack Overflow and in forums about exactly which ports are blocked, but I didn't find much about why they are considered unsafe.

My first assumption was that it was somehow unsafe for Chrome itself to connect to services on these ports, because a service's protocol might confuse Chrome into doing something wrong. However, this didn't make sense, and I then worked out that it's not about protecting Chrome, but about protecting the services themselves running on these ports. Many internet protocols are designed to be very permissive in what they accept, and this presents an interesting opportunity for attackers.

As web developers with a mind for security are well aware, the browser is incredibly obliging to attackers when it comes to making requests of servers on your behalf. XSRF is a perfect example of this, but the good news is we know how to protect against XSRF. However, what if an attacker tricked a web browser into making a connection to something that doesn't speak HTTP? Let's take, for example, an SMTP server. If you've never seen an SMTP session before, here is an example session that I've taken from Wikipedia:

S: 220 smtp.example.com ESMTP Postfix
C: HELO relay.example.org
S: 250 Hello relay.example.org, I am glad to meet you
C: MAIL FROM:<bob@example.org>
S: 250 Ok
C: RCPT TO:<alice@example.com>
S: 250 Ok
C: RCPT TO:<theboss@example.com>
S: 250 Ok
C: DATA
S: 354 End data with <CR><LF>.<CR><LF>
C: From: "Bob Example" <bob@example.org>
C: To: "Alice Example" <alice@example.com>
C: Cc: theboss@example.com
C: Date: Tue, 15 January 2008 16:02:43 -0500
C: Subject: Test message
C:
C: Hello Alice.
C: This is a test message with 5 header fields and 4 lines in the message body.
C: Your friend,
C: Bob
C: .
S: 250 Ok: queued as 12345
C: QUIT
S: 221 Bye

On first glance, this may look nothing like HTTP, except for one important feature: like HTTP, SMTP separates the elements of its protocol with newlines. Furthermore, what happens when we send data to an SMTP server that it doesn't understand? Well, here's a simple HTTP GET request sent to an example SMTP server:

C: GET / HTTP/1.1
S: 500 5.5.1 Command unrecognized: "GET / HTTP/1.1"
C: Host: www.example.com
S: 500 5.5.1 Command unrecognized: "Host: www.example.com"
C: 
S: 500 5.5.1 Command unrecognized: ""

So, obviously our mail server is confused, but an important thing to notice is that it's not outright rejecting our connection; it's telling the client that there was an error, but continuing to accept commands. So can we exploit this? The answer is a resounding yes. What if I, an attacker, were to craft a page with a form that is automatically submitted (using a JavaScript onload event that invokes the submit method, similar to a form based XSRF attack) to an SMTP server? In order to speak SMTP with this server, I need to be able to include newlines in my messages. This is not possible using application/x-www-form-urlencoded, since newlines are URL encoded, but using multipart/form-data, I can create a field containing the client side messages from the above SMTP conversation, and so, using my victim's web browser, I can submit an email to the target SMTP server. The SMTP server would ignore all the HTTP protocol, as in the above example, including the method and headers, as well as the multipart/form-data header content, but once it got to my field, which would contain my HELO, MAIL FROM, RCPT TO and other commands, it would start processing them.
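
To make this concrete, here is a rough, hypothetical sketch of what such a request might look like on the wire; the boundary, addresses and headers are made up, and a real browser would send rather more than this:

C: POST / HTTP/1.1
C: Host: smtp.internal.example
C: Content-Type: multipart/form-data; boundary=xyz
C:
C: --xyz
C: Content-Disposition: form-data; name="message"
C:
C: HELO relay.example.org
C: MAIL FROM:<attacker@example.org>
C: RCPT TO:<victim@example.com>
C: DATA
C: Subject: Forged message
C:
C: Hello from your friendly neighbourhood web browser.
C: .
C: --xyz--

Every line up to the form field's value draws a "command unrecognized" error, but once the field's value arrives, the server executes the SMTP commands it contains.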

But of course, an attacker could just connect directly to the SMTP server, right? For some SMTP servers, yes. However, many SMTP servers are protected by nothing more than a firewall, and especially if the SMTP server uses this as a security mechanism for verifying the authenticity of the source of the email, it may be desirable for an attacker to be able to send email through it. What this means is that an attacker can use a browser running behind a firewall that protects an SMTP server as a proxy for connecting to that SMTP server, and so send messages using it.

As it turns out, some SMTP servers actually detect HTTP requests made on them and immediately close the connection (my server's SMTP server did this when I tried). However, not all SMTP servers do this, because it wasn't envisioned that a piece of client software, such as a web browser, could be used as an open proxy into a protected network. And it's certainly not a good idea to assume that other types of services, such as FTP servers and even IRC servers, will do this too.

So, why does Chrome refuse to connect to some ports? Because the Google engineers have gone through the list of well known ports, worked out how tolerant the protocols that use these ports are to being sent HTTP requests, and where a protocol is tolerant, marked its port as unsafe and blocked it, to prevent Google Chrome from being used as an open proxy into a secured network. Should all web browsers do this? Probably.

About

Hi! My name is James Roper, and I am a software developer with a particular interest in open source development and trying new things. I program in Scala, Java, Go, PHP, Python and JavaScript, and I work for Lightbend as the architect of Kalix. I also have a full life outside the world of IT, enjoy playing a variety of musical instruments and sports, and currently I live in Canberra.