Apache Request Smuggling Vulnerability Found

An anonymous reader writes "Whitedust is reporting on an HTTP request smuggling vulnerability in Apache. The flaw apparently allows attackers to piggyback valid HTTP requests over the 'Content-Length:' header, which can result in cache poisoning, cross-site scripting, session hijacking and various other kinds of attack. This flaw affects most of the 2.0.x branch of Apache's HTTPD server."
  • by Atario ( 673917 ) on Friday July 08, 2005 @03:45AM (#13011729) Homepage
    After all, it is Apache server.

    Anyway, it'll get a fix available lickety-split. Go, OSS!
  • by hostyle ( 773991 ) on Friday July 08, 2005 @03:47AM (#13011735)
    Damn pirates! They're everywhere.
  • 2.1.6 (Score:5, Informative)

    by DigitumDei ( 578031 ) on Friday July 08, 2005 @03:47AM (#13011738) Homepage Journal
    2.1.6 has been released to fix this. This was responded to quickly, so now it's just up to the webmasters to update their servers.
    • Re:2.1.6 (Score:3, Informative)

      by Anonymous Coward
      http://www.apache.org/dist/httpd/CHANGES_2.1 [apache.org] The 2.1 branch is still classified as alpha software, however.
    • There's something rather odd about this.
      • The current production version of Apache 2.x is 2.0.54 [apache.org]. 2.1 is alpha-quality code, the unstable development branch.
      • The advisory's dated 5th July, but I certainly haven't seen anything on any of the usual lists about it (and I monitor them as part of my job).

      Not to say it's impossible; the HTTP request smuggling attack vector is real enough - the paper is interesting reading, see http://www.watchfire.com/resources/HTTP-Request-Smuggling.pdf [watchfire.com]

      • In reading the paper, and carefully looking at apps I support, I don't see an issue just yet. The paper does outline some very interesting hacks that are possible. Of course, many/most of these require some specific arrangement of servers and server versions to be practical. As noted in other comments, the biggest issue is for those running proxy servers that are a likely vector for the actual cache poisoning attacks.

        If you're pretty sure you have no XSS vulnerabilities in your deployed applications, an
      • Re:2.1.6 (Score:5, Informative)

        by stoborrobots ( 577882 ) on Friday July 08, 2005 @03:55PM (#13016107)
        ...I certainly haven't seen anything on any of the usual lists about it...

        Partly because it's actually a dupe... well disguised, though...

        http://it.slashdot.org/article.pl?sid=05/06/12/1433206 [slashdot.org]
    • Re:2.1.6 (Score:4, Informative)

      by tyler_larson ( 558763 ) on Friday July 08, 2005 @10:06AM (#13013108) Homepage
      2.1.6 has been released to fix this. This was responded to quickly, so now it's just up to the webmasters to update their servers.

      Webmasters don't need to update anything, because there is no vulnerability from their perspective.

      Request smuggling doesn't apply to a single web server, but rather to a combination of a proxy and a web server that use different methods to determine how long a request is. In such an arrangement, an attacker could use smuggling to poison the proxy's cache or what have you, but the only customers who would be affected are others behind the same proxy.

      Because this sort of attack is so limited in scope, the chances of any of us ever even hearing about an actual exploit are very slim. The only people who really need to worry are those who run proxy servers.

      • Lots of sites run a web farm with a proxy/SSL accelerator in front of it. This configuration is potentially vulnerable to the attack.
  • by layer3switch ( 783864 ) on Friday July 08, 2005 @03:50AM (#13011753)
    at least 1.3.x is safe from this, I'll sleep well tonight.
  • Extract: All versions of Apache prior to 2.1.6 are vulnerable to an HTTP request smuggling attack which can allow malicious piggybacking of false HTTP requests hidden within valid content. This method of HTTP request smuggling was first discussed by Watchfire some time ago. The issue has been addressed by an update to version 2.1.6. Editorial Comment: The vulnerability involves a crafted request with a 'Transfer-Encoding: chunked' header and a 'Content-Length' header, which can cause Apache to forward a modified request.
  • Wait a sec.. (Score:5, Interesting)

    by rylin ( 688457 ) on Friday July 08, 2005 @03:52AM (#13011758)
    1.3.x is very stable and production ready
    2.0.x is very stable and production ready, but it hasn't been around for as many years as 1.3.x - and thus doesn't have as widespread deployment.
    2.1.x is alpha-quality, and it has the fix..

    messed up priorities?
    • Re:Wait a sec.. (Score:2, Interesting)

      by name773 ( 696972 )
      maybe the 2.1 series had other code changes that made the fix easier to implement
    • This is a common problem with open source software. The latest "bleeding-edge" version is often actually more stable. Especially for home users (not for servers), the unstable version often works much better than the stable one.
      • Re:Wait a sec.. (Score:2, Insightful)

        by CaptainZapp ( 182233 ) *
        The latest "bleeding-edge" version is often actually more stable.

        I think that the Debian [debian.org] folks may have an issue with this statement.

        • by ion++ ( 134665 )
          The latest "bleeding-edge" version is often actually more stable. I think that the Debian folks may have an issue with this statement.
          Let's rephrase it, then.

          The latest stable version is often actually stale
    • messed up priorities?

      That depends. Wouldn't you rather know that the patch doesn't make YOUR website go down in flames before you patch your company's main webserver?

      Therefore, the testing tree.
      • Re:Wait a sec.. (Score:3, Insightful)

        by rylin ( 688457 )
        I'm not sure what kind of company you work at, but we take precautions when upgrading software.
        For instance, we take a backup of the affected software before installing the new version.

        In other words: yes, I'd rather have my company's webserver down for the minute or two it takes to restore the recently-made backup, instead of having to worry about whether or not someone is trying to compromise our web platform.

        Maybe that's just me though?
  • until Apple releases an update. Probably a month... sigh.
  • by seneces ( 839286 ) <<moc.liamg> <ta> <jlaicepsa>> on Friday July 08, 2005 @04:00AM (#13011783)
    According to SecurityFocus, this bug affects the 2.0.x branch as well as 2.1.x. It says that 2.1.6 has been released to fix it, and that a fix is available in the Subversion repository for 2.0.x. I'd suspect that there will be a new version of 2.0.x out soon.

    Securityfocus article is here [securityfocus.com].
  • eh? (Score:5, Insightful)

    by Anonymous Coward on Friday July 08, 2005 @04:06AM (#13011799)
    How can request smuggling affect ONE product? I thought the attack was based on the different ways TWO or MORE different products interpret the same HTTP request.

    Example:

    Product A (web server) uses the FIRST content-length header.

    Product B (application server) uses the LAST content-length header.

    So you include two content-length headers, to slip by A and attack B.

    Replace A and B with whatever proxying whatever setup you can think up.

    So how does Apache by itself have this problem, and how can apache by itself SOLVE the problem?

    Btw, this is a great example of why "be liberal in what you accept" is BS. You should reject all out-of-spec data.
    • Re:eh? (Score:4, Informative)

      by poor_boi ( 548340 ) on Friday July 08, 2005 @07:24AM (#13012263)
      How can request smuggling affect ONE product?

      You're right in that request smuggling requires two entities. In this particular case, the two entities are:

      1. Apache
      2. An HTTP proxy, HTTP caching proxy, or HTTP-aware firewall

      The reason the security flaw affects one product (Apache) is that the flaw does not require abnormal operation from the proxy, cache, or firewall.

    • Of course there are two products with different interpretations of the data:

      Apache, which interprets things wrong.

      Another product that interprets things correctly.

      Despite there being two products, it should be obvious why Apache is the one at fault and the only one that needs fixing.
    • So how does Apache by itself have this problem, and how can apache by itself SOLVE the problem?

      Any software that can act as an HTTP proxy can solve the problem by refusing to pass on any content-length-related headers (or other cues) that it does not use.

  • by hobotron ( 891379 ) on Friday July 08, 2005 @04:11AM (#13011817)

    by noticing the Apache servers were being forced further and further west
  • Another Dupe (Score:5, Informative)

    by fv ( 95460 ) * <fyodor@insecure.org> on Friday July 08, 2005 @04:28AM (#13011863) Homepage

    This seems to be a duplicate of the June 12 article on HTTP Request Smuggling [slashdot.org]. I don't see anything new here, as the original paper [watchfire.com] also talks about Apache being susceptible to this relatively minor (yet still interesting) issue.

    -Fyodor
    Concerned about your network security? Try the free Nmap Security Scanner. [insecure.org]

    • Re:Another Dupe (Score:3, Interesting)

      by BillEGoat ( 50068 )
      Apache admins who read the Watchfire paper felt fairly safe, as its technique only had limited effect on Apache. The technique described simply used multiple Content-Length headers, which Apache handled effectively. This modified technique incorporates the use of chunked encoding to open Apache up to the wider effects that other servers experienced with the simpler exploit. After reading this, Apache admins should plan their upgrades in short order.
  • by prockcore ( 543967 ) on Friday July 08, 2005 @04:30AM (#13011865)
    Looking at their whitepaper, this seems to only affect a caching service or proxy.

    The attack basically makes the cache think you're requesting one page, but it passes a different request to Apache.

    So unless you have some service between your web server and the public, this vulnerability doesn't seem to affect you.

    To wit: you ask the cache for Page A with a GET for Page B buried in the header. The cache finds that Page A has expired, and passes your request to Apache. Apache instead serves up Page B, and the cache then sticks Page B's data into Page A's cache entry.
    • ``unless you have some service between your web server and the public, this vulnerability doesn't seem to affect you.''

      You mean like any caching server the public may be using? Like proxies at firewalls to cut out porn and such? So suddenly all my web pages in their cache are screwed up?

      That affects me.
  • Anyone else gonna be working all weekend due to this? About 300 non-homogeneous servers with non-stock versions of Apache on 3 or 4 different platforms should take the better part of two working days. I guess it beats the alternative.

    BBH
    • Maybe you want to check whether you need to, first. It seems this issue is not really an issue for most people.
  • by Evets ( 629327 ) on Friday July 08, 2005 @04:33AM (#13011874) Homepage Journal
    If you want to be secure, either downgrade to Apache 1.3 or take a chance on the alpha version of 2.1.

    This is the second major problem in the last several weeks that leaves all the "managed server" users out there very vulnerable (the first being the XML-RPC problem with PHP). Most of the managed servers out there run Apache off of an RPM compatible with their manager of choice (Plesk, cPanel, etc.). And a lot of the companies out there will make you pay extra to update your server, or even wait until RH or Plesk distributes a new RPM.

    I think it's going to become apparent to a lot of people very quickly that it's worth the money to pay for a managed server from a quality company that provides real support rather than the $99/month for a server and a gig of bandwidth shops that will leave your servers wide open to these vulnerabilities.
    • The first being the XML-RPC problem with php

      The problem wasn't with PHP, but with the way some popular PHP scripts used the XML-RPC functionality.
    • The $99/mo 1TB bandwidth shops are NOT managed server companies. They are UNMANAGED servers, and it's up to the customer (who has root access) to decide how to patch their machines.
    • ``I think it's going to become apparent to a lot of people very quickly that it's worth the money to pay for a managed server from a quality company that provides real support rather than the $99/month for a server and a gig of bandwidth shops that will leave your servers wide open to these vulnerabilities.''

      Eh? $99 gets you full root access on a dedicated server... you can upgrade what you like when you like. I don't see the problem. Of course, if you pay $99 for a deal where someone else maintains the software, that's a different matter.
      • Eh? $99 gets you full root access on a dedicated server...you can upgrade what you like when you like. I don't see the problem.

        You said it yourself: you CAN upgrade what you like and when you like... which is "nothing" and "never" for most people who don't want to spend the time and effort required to react quickly to vulnerabilities as they are discovered.
  • This has been discussed before; there was a whitepaper posted on Slashdot previously. As others have said, the patches for this are already in 2.1.6 and just need to be backported to 2.0.x. 2.0.55 has been in testing, and I believe the patches are there. So you could grab the code and backport them yourself, or wait for 2.0.55 to be released, which I would expect to happen very soon.
  • by drspliff ( 652992 ) on Friday July 08, 2005 @04:41AM (#13011898)

    Sure, this effects Apache, but this also effects just about all web servers where the request is first filtered through a cache or proxy...

    What we don't need is people running around like headless chickens screaming 'omg dat aprache server got r00ted.. wher3s the sploit!' as 90% of Apache servers on the internet will be completely uneffected by it.

    It seems the poster didn't read the (very interesting) Watchfire paper before submitting. And editors... do your job, otherwise you'll soon be replaced by monkeys trained to click the 'Accept Article' button all day.

    • by Anonymous Coward
      Sure, this effects Apache.

      So now it's a feature and not a bug? :)

    • And editors... do your job, otherwise you'll soon be replaced by monkeys trained to click the 'Accept Article' button all day.


      I thought that replacement had already happened quite some time back.
    • And editors... do your job, otherwise you'll soon be replaced by monkeys trained to click the 'Accept Article' button all day.

      Actually, that wouldn't work so well for the OSTG or whatever group Slashdot is part of. The humans must be there to accept monetary bribes for the slashvertisements. Trained monkeys would probably just accept fruit.
    • And editors... do your job, otherwise you'll soon be replaced by monkeys trained to click the 'Accept Article' button all day.

      What, that hasn't happened yet? :)

    • Apparently the monkeys you hired to do grammar checking were on banana break when you submitted...

      Parent should read:

      "Sure, this affects Apache,...be completely unaffected by it."

      It is true that both affect and effect can be used as both a verb and a noun. However, in common usage, 99% of the time you're looking for affect as a verb, and effect as a noun.

    • What we don't need is people running around like headless chickens screaming 'omg dat aprache server got r00ted.. wher3s the sploit!' as 90% of Apache servers on the internet will be completely uneffected by it.


      Isn't this what slashbot does when a Microsoft vulnerability comes out?

      Hmm... Seems to me like you want to hide problems in the open source community, rather than be open and transparent about it.
  • by RAMMS+EIN ( 578166 ) on Friday July 08, 2005 @05:17AM (#13011989) Homepage Journal
    To me it seems that this is mostly an attack on proxying servers, causing them to misbehave and send malicious requests to Apache (a bit similar to the old FTP PORT exploit). Then how is this a vulnerability in Apache, if it's the proxy that's compromised and Apache is just handling what it thinks is a legitimate request?

    Or am I completely misunderstanding what's going on?
  • by FireChipmunk ( 447917 ) <{moc.etile-ecrof} {ta} {pihc}> on Friday July 08, 2005 @05:26AM (#13012009) Homepage
    First, 1.3, 2.0, and 2.1 were all vulnerable to some parts of this security issue.

    Second, it is not a major security issue for most users.

    It can only be useful if you are running mod_proxy. And even then, it just allows unfiltered requests to the backend. Most people don't even use mod_proxy. If you do, this could have bad implications, but someone still needs to exploit your backend server. It doesn't give anyone a shell or anything like that.

    2.1.6-alpha was released with a fix. 2.0.55 should be coming out very shortly.
  • by Da w00t ( 1789 ) * on Friday July 08, 2005 @05:27AM (#13012012) Homepage
    No, you're not piggybacking data over the Content-Length: HTTP/1.1 header; you're abusing the HTTP/1.1 header to confuse a required combination of a proxying firewall (or proxy/cache) and a webserver.

    I recently released an internal advisory on this from reading TFA [watchfire.com]. Folks, the sky is not falling. 99% of consumers out there will not be affected. People behind NATing firewalls will not have an issue. People behind proxies (Squid, to name one) and proxying firewalls (Checkpoint, Symantec, etc.) will be the ones "vulnerable" to this "attack".

    The deal is this:

    Proxy A uses Content-Length: header #1, and Webserver A uses Content-Length: header #1 == no problem, no vulnerability.
    Proxy A uses Content-Length: header #1, and Webserver B uses Content-Length: header #2 == problem.

    That is how it's done. TFA says this may be used to bypass intrusion detection systems. Sure, if you don't have defence in depth. Otherwise you're fine.
    • Please refer to page 12 of the whitepaper, where the Apache-specific vulnerability is discussed. The paper discusses many vulnerabilities in many different proxies and web servers. The one you are talking about is NOT the Apache one.

      Also note that the vulnerability is in the Apache proxy, not the Apache web server.
  • by Rogerborg ( 306625 ) on Friday July 08, 2005 @06:12AM (#13012099) Homepage
    Based on the original and detailed exploit report [watchfire.com]. No news on a patch for that, I notice.
  • How is it dangerous? (Score:4, Informative)

    by Vo0k ( 760020 ) on Friday July 08, 2005 @06:34AM (#13012143) Journal
    Well, not very dangerous.
    To affect someone directly, the client browser would have to be compromised to send doctored HTTP requests. If this happens to you, you're already 0wn0red; this little trick might at worst add insult to injury :)

    But imagine this: luser.isp.net connects daily to bank.com through proxy.isp.net
    evil.isp.net has tapped into the same LAN as Luser. evil.isp.net sends a doctored request to secure http://bank.com/login.php [bank.com] with exploit-redirection to insecure http://bank.com/demo.html [bank.com], through proxy.isp.net
    From now on, proxy.isp.net will serve demo.html to anyone who wants to access login.php. So luser happily types his real password and login into the demo submit form (not looking at the lock icon) and happily clicks "submit", while evil.isp.net just sniffs the LAN and captures the unencrypted POST request containing the real password and login.

    That's about as far as it goes. You can't do much if bank.com has DEMO in wide letters across the demo page. You can't redirect to offsite pages, and generally your possibilities are limited...
  • by Temporal ( 96070 ) on Friday July 08, 2005 @09:21PM (#13018281) Journal
    There has been a LOT of confusion among posts here. Let me spell it out:
    1. This vulnerability is in the Apache web proxy version 2.x.
    2. This vulnerability does NOT affect the Apache web server, unless an Apache web proxy is running in front of it.
    3. The vulnerability is discussed on page 12 of the whitepaper. The rest of the whitepaper is about other similar vulnerabilities in other software.


    I read the whitepaper in detail because I have written an HTTP server and wanted to know if I am vulnerable to this attack. The paper actually describes a very large number of attacks, most of which have to do with bugs in old web servers and proxies (not even Apache). Most of the people I see posting here, including those who claim they read the article, are clueless, as they did not read through the whole paper to find the one page related to Apache.

    Well, it turns out that this bug is NOT in the Apache server. It is in the Apache web proxy. So, if you use an Apache web proxy in front of your server (regardless of what actual server software you use), you are vulnerable. Also, if you have clients who use an Apache proxy on their end, they are vulnerable. Server administrators should only worry about the former case, obviously.

    Yes, a lot of people run caching proxies in front of their own web server, such that every single request to the server -- from all clients -- goes through the proxy. This is often done for performance with dynamically generated web sites. If you have not heard of this type of setup, then you clearly don't have one, and you can ignore this vulnerability.

    The following claims, made in other posts, are FALSE:
    - "It's an HTTP vulnerability, not Apache specifically" (Wrong. The Apache proxy clearly mis-handles requests with a Transfer-Encoding header.)
    - "To affect someone directly, the client browser would have to be compromised to send doctored HTTP requests." (Wrong. The paper is about using malformed requests to damage a server. The client would send such requests intentionally, in order to cause such damage.)
    - This entire post. [slashdot.org] (The guy only read the first vulnerability described in the paper, not the Apache-specific one.)
    - "Sure, this effects Apache, but this also effects just about all web servers where the request is first filtered through a cache or proxy..." (No, only ones filtered through an Apache proxy.)
  • by jbminn ( 558726 )
    I RTFA and the white paper. Worth mentioning here (I searched the first 108 comments and saw no mention of this):

    - HTTPS is not affected

    The white paper, while seemingly complete and well written, mentions this almost in passing near the end of the document. That may cause many readers, if they simply skim the paper, to miss this critical point. Further, it discounts using HTTPS as "...an impractical solution".

      If security is engineered into your site from the beginning, there's nothing at all impractical about it.
