Researcher Discovers New 'HTTP Request Smuggling Attack' Variants (securityweek.com)
Some scary new variants of "HTTP request smuggling" have been discovered by Amit Klein, VP of security research at SafeBreach, reports SecurityWeek:
Specifically, an HTTP request smuggling attack, which can be launched remotely over the internet, can allow a hacker to bypass security controls, gain access to sensitive data, and compromise other users of the targeted app. While the attack method has been known for more than a decade, it still hasn't been fully mitigated. Klein has managed to identify five new attack variants and he has released proof-of-concept (PoC) exploits.
He demonstrated his findings using the Abyss X1 web server from Aprelium and the Squid caching and forwarding HTTP web proxy. The developers of Abyss and Squid have been notified of the vulnerabilities exploited by Klein during his research, and they have released patches and mitigations. One of the attacks bypasses the OWASP ModSecurity Core Rule Set (CRS), which provides generic attack detection rules for ModSecurity or other web application firewalls. OWASP has also released fixes after being notified.
Klein told SecurityWeek ahead of his talk on HTTP request smuggling at the Black Hat conference that an attacker needs to find combinations of web servers and proxy servers with "matching" vulnerabilities in order to launch an attack, which makes it difficult to determine exactly how many servers are impacted. However, an attacker can simply try to launch an attack to determine if a system is vulnerable. "The attack is not demanding resource-wise, so there's no downside to simply trying it," Klein said. In his research, he demonstrated a web cache poisoning attack, in which the attacker forces the proxy server to cache the content of one URL for a request of a different URL.
He says attacks can be launched en masse through a proxy server against multiple different web servers or against multiple proxy servers... While there haven't been any reports of HTTP request smuggling being used in the wild, Klein has pointed out that attacks may have been launched but were not detected by the target.
A quick summary of request smuggling (Score:5, Informative)
Here is a quick summary of request smuggling.
HTTP request smuggling is a family of techniques to get an attack past a proxy firewall such as Squid or the dozens of commercial products based on Squid, or any other web firewall.
This is a pair of valid HTTP requests. The first one posts 44 bytes. The second one requests a resource (GET):
POST /hello.php HTTP/1.1
...
Content-Length: 44

GET /poison.html HTTP/1.1
Host: www.example.com
Something: GET /target.html HTTP/1.1
A defensive proxy could see and block the request for the target. But what if we send a malformed request? Let's send this:
POST /hello.php HTTP/1.1
...
Content-Length: 0
Content-Length: 44

GET /poison.html HTTP/1.1
Host: www.example.com
Something: GET /target.html HTTP/1.1
We have TWO length headers. If the proxy uses the second one and the origin web server uses the first one, we have a problem. To the proxy, Content-Length: 44 makes the poison.html lines the body of the POST (just data, not a request), so there is nothing there to block; the only other request it sees is the innocent-looking GET for target.html at the tail of the "Something:" line. The origin server, using Content-Length: 0, reads a POST of zero bytes followed by a real request for poison.html. The two sides now disagree about which response answers which request.
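A rough way to see the disagreement is to feed the same bytes to two toy parsers that differ only in which Content-Length header they trust. This is a sketch, not a real HTTP parser; the request names and the 44-byte arithmetic are chosen so the split comes out even.

```python
# Two deliberately naive request splitters: one trusts the LAST
# Content-Length header (our "proxy"), one trusts the FIRST (our
# "origin server"). Same bytes, different request boundaries.
raw = (
    "POST /hello.php HTTP/1.1\r\n"
    "Content-Length: 0\r\n"
    "Content-Length: 44\r\n"
    "\r\n"
    # The next 44 bytes: body of the POST, or a whole second request?
    "GET /target.html HTTP/1.1\r\n"
    "Host: a.example\r\n"
)

def requests_seen(data, use_last_length):
    seen = []
    while data:
        head, _, rest = data.partition("\r\n\r\n")
        lines = head.split("\r\n")
        lengths = [int(l.split(":", 1)[1]) for l in lines
                   if l.lower().startswith("content-length")]
        body_len = (lengths[-1] if use_last_length else lengths[0]) if lengths else 0
        seen.append(lines[0])           # record the request line
        data = rest[body_len:]          # skip the body, parse what follows
    return seen

print(requests_seen(raw, use_last_length=True))   # proxy's view: one request
print(requests_seen(raw, use_last_length=False))  # origin's view: two requests
```

The first parser swallows the GET as body data; the second treats it as a fresh request the proxy never inspected.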
Duplicate Content-Length headers aren't the only way to do it. Another example: HTTP requests are supposed to separate lines with \r\n, but servers are supposed to also accept a bare \n as the separator. By mixing and matching \r and \n, we can come up with something that the proxy interprets differently than the origin server does.
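The same split-brain effect can be shown with line endings alone. A sketch, with string methods standing in for real header parsing: a strict parser that only breaks on \r\n sees one header where a lenient parser sees two.

```python
# The bytes between two proper \r\n breaks contain a bare \n.
head = "Content-Length: 0\nContent-Length: 44"

strict = head.split("\r\n")    # breaks on \r\n only: a single, odd-looking header
lenient = head.splitlines()    # also breaks on bare \n: two headers

print(strict)    # ['Content-Length: 0\nContent-Length: 44']
print(lenient)   # ['Content-Length: 0', 'Content-Length: 44']
```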
Any caches in the path, including the protective proxy itself if it caches, will store the wrong page and start serving that content even for perfectly normal requests.
Ps - email has a similar problem (Score:4, Interesting)
By the way, as was mentioned here on Slashdot last week, email has a similar problem. The server that checks whether the from address is spoofed can interpret it differently than Outlook or whatever is displaying email to the user.
Further, with email there are at least three different places where different "from" addresses can be listed. A few days ago I had an email to handle that legitimately had three different "from" addresses - envelope, return path, and resent-from. When you're trying to handle spoofed email, email that claims to be from a different address than it's actually from, which of those three addresses do you use?
Then, once you've decided which address(es) to check against, weird mail headers can lead Outlook to display a different address than the one Mimecast checked.
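The three addresses can be pulled apart with Python's standard email module. The addresses below are made up; "envelope from" itself lives in the SMTP transaction, but receiving servers usually record it in the Return-Path header, which is what this sketch uses.

```python
from email import message_from_string
from email.utils import parseaddr

# One message, three different "from"-ish fields (hypothetical addresses).
msg = message_from_string(
    "Return-Path: <bounce@mailer.example>\n"
    "Resent-From: forwarder@lists.example\n"
    "From: ceo@bank.example\n"
    "Subject: hello\n"
    "\n"
    "body\n"
)

# Which of these should a spoofing check compare against? They all differ,
# and the one a mail client displays need not be the one a filter checked.
for header in ("Return-Path", "Resent-From", "From"):
    print(header, "->", parseaddr(msg.get(header, ""))[1])
```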
Re: Ps - email has a similar problem (Score:2)
Compare with snail mail. *Envelope from* is the return address listed on the back of the envelope whilst *From* is the correspondence address given in the letter. They don't have to match and there are sometimes good reasons for them not matching, but normally they do.
*Return Path* doesn't have a direct analogy, but would be equivalent to a list of where the letter went on its journey to you, e.g. John > post box on the high street > local sorting office > Postman Pat's van > your letter box.
Re: (Score:3)
Here is a quick summary of request smuggling.
HTTP request smuggling is a family of techniques to get an attack past a proxy firewall such as Squid or the dozens of commercial products based on Squid, or any other web firewall.
This is a pair of valid HTTP requests. The first one posts 44 bytes. The second one requests a resource (GET):
POST /hello.php HTTP/1.1 ...
Content-Length: 44
Because plagiarism deserves a +5 mod. If you are quoting someone else you should explicitly indicate as much.
If I knew the first hello world ... (Score:2)
If I knew who first wrote that example, I'd mention it.
The oldest copy I see in a quick 30-second check is Heled 2005. Note I'm not saying that's the first use - it may well have been a common example by 2005.
The most well-known use of the example is probably MITRE.
But just because sometimes I like to poke people, here is an example of programming:
print("Hello world\n");
httpS (Score:5, Informative)
True. Also, 20 years ago, SSL ended (Score:1)
That's true. Someone might have been thinking https (http over a secure channel) isn't http.
Also "for those interested", in 1999-2000 the internet switched from SSL to TLS.
Not that it matters, that's just a pet peeve of mine.
Re: (Score:1)
Well, the quoted example talks about Squid insecurities. I think it should be noted that Squid isn't very useful these days unless you break https and screw around with ssl splicing. This is something homebrew router enthusiasts will glibly encourage everyone to do.
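For reference, that screwing around looks roughly like this in squid.conf. This is a hypothetical fragment in Squid 4-era syntax and the CA certificate path is made up; note that "splice" passes the TLS tunnel through untouched, while "bump" is the mode that actually decrypts traffic (and requires clients to trust your CA).

```
# Hypothetical squid.conf fragment
http_port 3128 ssl-bump tls-cert=/etc/squid/ca.pem
acl step1 at_step SslBump1
ssl_bump peek step1      # look at the TLS ClientHello first
ssl_bump splice all      # then pass through; "bump all" would decrypt
```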
I struggle to understand (Score:3)
The ridiculous popularity of reverse proxies terminating TLS before the application server. People these days even use "cloud"-hosted proxies that forward requests to endpoints across the open Internet, all in the name of "security"... Some of them even terminate client certificates and then assert the result in forwarded headers, protected by nothing more than a check of the source IP address.
All this does is massively increase the attack surface of the system for little to no benefit while creating new sources of insecurity.
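A sketch of the forwarded-header pattern described above (the names and addresses are invented): if the backend's only test is "did this packet arrive from the proxy's IP?", then every header the proxy passes through, including identity claims, is attacker-controlled.

```python
TRUSTED_PROXIES = {"192.0.2.10"}   # the reverse proxy's address (assumed)

def effective_client(peer_ip, headers):
    # Naive pattern: believe the header whenever the packet came via the proxy.
    if peer_ip in TRUSTED_PROXIES and "X-Forwarded-For" in headers:
        return headers["X-Forwarded-For"]
    return peer_ip

# An honest proxy records the real client address...
print(effective_client("192.0.2.10", {"X-Forwarded-For": "203.0.113.7"}))
# ...but if the proxy passes client-supplied headers through unfiltered,
# the client simply claims to be a host on the internal network.
print(effective_client("192.0.2.10", {"X-Forwarded-For": "10.0.0.1"}))
```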
Request smuggling is just one of many examples of predictable failures resulting from this truly idiotic behavior. Whatever the question, terminating TLS in reverse proxies is certainly not the answer. It may be the most convenient answer given whatever constraints you happen to be facing, but it's ultimately the wrong solution.