Security

Denial of Service via Algorithmic Complexity

dss902 writes "We (Department of Computer Science, Rice University) present a new class of low-bandwidth denial of service attacks that exploit algorithmic deficiencies in many common applications' data structures... Using bandwidth less than a typical dialup modem, we can bring a dedicated Bro server to its knees; after six minutes of carefully chosen packets, our Bro server was dropping as much as 71% of its traffic and consuming all of its CPU. We show how modern universal hashing techniques can yield performance comparable to commonplace hash functions while being provably secure against these attacks."
  • by Zach Garner ( 74342 ) on Saturday May 31, 2003 @07:59PM (#6087143)
    I hate to ruin it for everyone, but the second link has the same content as the first, only in PDF format.
  • by cperciva ( 102828 ) on Saturday May 31, 2003 @08:03PM (#6087150) Homepage
    They claim to present a new method of low-bandwidth denial of service attack, but it looks like they're demonstrating quite an old, high-bandwidth denial of service.
  • by jeffy124 ( 453342 ) on Saturday May 31, 2003 @08:04PM (#6087161) Homepage Journal
    Several years ago Apache had a function that ran in O(n^2) time, and an HTTP GET request could force that function to take a very long time to process, making it easy to DoS an Apache httpd. The interesting part is that the same work could have been done in O(n) time, but wasn't. The Apache team fixed this by substituting the O(n) algorithm.
    • by weezel ( 6011 ) on Saturday May 31, 2003 @08:14PM (#6087213)
      I also remember that problem with Apache. Here's the report [attrition.org] and the code change [apache.org] involved for that particular bug.
      • by kazad ( 619012 ) on Saturday May 31, 2003 @10:39PM (#6087728) Homepage
        For those too lazy to click:

        Problem: remove excess slashes from a URL, so /d1///d2////d3/test.html becomes /d1/d2/d3/test.html

        Original Algorithm:

        If an additional slash is found, copy the remainder of the string over. So

        a///b => a//b => a/b

        This copying, an inner loop, makes the algorithm O(n^2).

        New algorithm:

        Have 2 pointers, and copy from one to the other. One pointer skips additional slashes. At the end, throw on a "\0" to terminate the string.

        a///b => a/b

        The second pointer has a loop to skip over additional slashes, and each slash is only visited once. Thus, the algorithm is O(n).

        Cool stuff ... imagine an input like

        a///...{500 slashes later}...///b

        It could really do some damage to the first algorithm.
        • by Henry Stern ( 30869 ) <henry@stern.ca> on Sunday June 01, 2003 @06:56AM (#6089140) Homepage
          In fact, you can do it with even less memory. All that you need to do is keep an offset stored in a variable.

          The algorithm looks something like:

          offset = 0
          for i = 2 to n do
              if a[i-1] = '/' and a[i] = '/' then
                  offset = offset + 1
              else
                  a[i-offset] = a[i]
              end if
          end for
          a[n-offset+1] = '\0'   // terminate the shortened string

          It saves you the trouble of having to manage the memory containing the cleaned-up string.
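
          For the curious, here is the same in-place compaction as a runnable C sketch; collapse_slashes is an illustrative name, not Apache's actual function:

          #include <stdio.h>

          /* Collapse runs of '/' in place: src scans, dst writes. */
          static void collapse_slashes(char *s) {
              char *src = s, *dst = s;
              while (*src) {
                  *dst++ = *src;           /* keep the current character */
                  if (*src == '/')
                      while (*src == '/')  /* skip the rest of the slash run */
                          src++;
                  else
                      src++;
              }
              *dst = '\0';                 /* each byte is visited once: O(n) */
          }

          int main(void) {
              char url[] = "/d1///d2////d3/test.html";
              collapse_slashes(url);
              printf("%s\n", url);         /* prints /d1/d2/d3/test.html */
              return 0;
          }
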
  • duh... (Score:5, Funny)

    by Anonymous Coward on Saturday May 31, 2003 @08:05PM (#6087163)
    you can use a modem to post a slashdot article with a link to the target computer...
  • by ajuda ( 124386 ) on Saturday May 31, 2003 @08:05PM (#6087164)
    ...But in terms of raw power, nothing can match a slashdotting. Can anyone else read the link?
  • Uh oh! (Score:4, Funny)

    by seizer ( 16950 ) on Saturday May 31, 2003 @08:05PM (#6087165) Homepage
    Speaking of, uh, denial of service...the site's quickly turning into a smoking ruin from where I'm standing. If it had been text only, it might have survived, but all the mathematical symbols are done using images (is that big O or big uh-O ;-) so the server is choking...

    Bring on the MathML.
    • Yes, but this would be a distributed DOS attack, and very high bandwidth.

      It's still just about as funny as an article on how to DoS someone being posted here...
  • by larry bagina ( 561269 ) on Saturday May 31, 2003 @08:05PM (#6087167) Journal
    they don't list slashdotting!
  • by Convergence ( 64135 ) on Saturday May 31, 2003 @08:05PM (#6087169) Homepage Journal
    The project page is http://www.cs.rice.edu/~scrosby/hash/ [rice.edu] and actually has details about individual vulnerable applications and test files for several systems. (And is kinder on the server for those who don't want to download the whole paper.)
  • DOS attack (Score:5, Funny)

    by IO ERROR ( 128968 ) <error@ioe[ ]r.us ['rro' in gap]> on Saturday May 31, 2003 @08:06PM (#6087170) Homepage Journal
    And by posting our links on /. we can bring our departmental WWW server to its knees with a single HTTP POST request.

    Anyone got mirrors yet?

    • by Adam9 ( 93947 ) on Saturday May 31, 2003 @08:11PM (#6087195) Journal
      Since you asked nicely..

      This [darkfire.net] is the HTML version (with lots of images not mirrored, sorry) and this [darkfire.net] is the PDF version.

      If the PDF starts hogging the pipe, don't be surprised if it gets symlinked to the HTML version.
    • Re:DOS attack (Score:3, Interesting)

      by Alomex ( 148003 )

      I have often wondered if the /. effect is not magnified by inefficiencies in the TCP/IP stack. I mean, a lot of people read /., but a server should have enough bandwidth to serve, say, twenty users per second: 1,200 per minute, 72,000 per hour... I doubt there are that many active /. readers at any given time...

    • Just a thought:

      How about using apache's mod_rewrite capability to redirect slashdot users to something a little more interesting rather than flooding the site?

      RewriteCond %{HTTP_REFERER} slashdot\.org [NC]
      RewriteRule ^/$ http://www.mpaa.org/ [R,L]

      (don't jump on me if I got the syntax wrong, I only discovered mod_rewrite a day or two ago) ;P

      N.
  • by fadeaway ( 531137 ) * on Saturday May 31, 2003 @08:06PM (#6087172)
    This doesn't sit well with me. Should students at a University be studying, developing, and releasing improved methods with which to launch DOS attacks..?
    • by MisterFancypants ( 615129 ) on Saturday May 31, 2003 @08:11PM (#6087197)
      How about if they are researching computer security?

      If you RTFA you'd see they also present ways in which to AVOID this problem, and that's the real goal.

    • Yes, they should, because they're also researching how to protect against them. Better we find out now from a university that also explains how to prevent these attacks than on some random day when al Qaeda or some random hacker collective decides to take down the Internet and bring a growing part of the world economy to a screeching halt.
    • by Anonymous Coward on Saturday May 31, 2003 @08:14PM (#6087211)
      Well, maybe they're figuring out how to fix them. Would you rather have a university figure out how to do it, or some random black hat/Chinese hacker?

      With this information it shouldn't be too big of a deal to redesign the software to not be vulnerable anymore.

      What they did is little more than algorithm analysis. They looked at all the different links in the chain then merely decided which algorithms were the weakest links and exploited them.

      This is good because we can now fix those. Robustness abound.
    • ... Or, this information can be used to inform people of the possibility of DOS attacks and possibly prevent them. If function X can be used to DOS a server, rewrite the code for function X or eliminate function X.
      Are people who work at Ford allowed to test whether your car can be hotwired?
      If not, well, the price of used Fords just fell by 75% ;-).
    • YES. (Score:3, Insightful)

      Because it forces the fat pigopolists (www.theregister.co.uk terminology?) into making fixes.

      I understand your question, but it, IMO, is like asking if "the people" should be free to speak about the bad things their King does, and wouldn't that just be bad for the Kingdom?

    • by debrain ( 29228 )
      This doesn't sit well with me. Should students at a University be studying, developing, and releasing improved methods with which to launch DOS attacks..?

      Most certainly yes. Others are studying it with purely malevolent intentions. The incentive of the University is benevolent. Carefully consider the consequences of not having public knowledge reflecting the capabilities of those with malicious intentions.

      The onus is now upon the vendors to adhere to adequate standards, rather than resting upon public c
    • This doesn't sit well with me. Should students at a University be studying, developing, and releasing improved methods with which to launch DOS attacks..?

      Because universities are where this sort of research should go on and solutions should be found.

      I find it quite refreshing that this happened in a house of learning rather than "on the street".
    • I think the question that immediately popped into my cynical mind was: "When will the follow-up Slashdot article telling of the DMCA lawsuit against those students hit the front page?" Though I suppose maybe if they don't mention any specific software issue they might be O.K.
    • "This doesn't sit well with me. Should students at a University be studying, developing, and releasing improved methods with which to launch DOS attacks..?"

      What, would you have preferred that it appeared on an obscure malicious cracker's board? It's better that academia finds it before irresponsible people who would only use it to take down web sites.

      And speaking of DDOS, it looks like the RIAA's site [riaa.org] is down again.

    • Students? I suppose technically, but the paper's principal author [rice.edu] is a PhD candidate at this institution [rice.edu].
      This is a far cry from the implied 'student' moniker everyone has been going on about 'round here. :-/
      These are exactly the sort of people that should be looking at security problems. Especially when their findings include ways to avoid the problem, as their paper did.
  • by St. Vitus ( 26355 ) on Saturday May 31, 2003 @08:06PM (#6087173)
    It's a Mansierre!
  • by Anonymous Coward on Saturday May 31, 2003 @08:08PM (#6087183)
    Denial of Service via Algorithmic Complexity Attacks

    Scott A. Crosby Dan S. Wallach
    scrosby_at_cs.rice.edu dwallach_at_cs.rice.edu

    Department of Computer Science, Rice University

    Abstract:
    We present a new class of low-bandwidth denial of service attacks that exploit algorithmic deficiencies in many common applications' data structures. Frequently used data structures have ``average-case'' expected running time that's far more efficient than the worst case. For example, both binary trees and hash tables can degenerate to linked lists with carefully chosen input. We show how an attacker can effectively compute such input, and we demonstrate attacks against the hash table implementations in two versions of Perl, the Squid web proxy, and the Bro intrusion detection system. Using bandwidth less than a typical dialup modem, we can bring a dedicated Bro server to its knees; after six minutes of carefully chosen packets, our Bro server was dropping as much as 71% of its traffic and consuming all of its CPU. We show how modern universal hashing techniques can yield performance comparable to commonplace hash functions while being provably secure against these attacks.

    Introduction
    When analyzing the running time of algorithms, a common technique is to differentiate best-case, common-case, and worst-case performance. For example, an unbalanced binary tree will be expected to consume O(n log n) time to insert n elements, but if the elements happen to be sorted beforehand, then the tree would degenerate to a linked list, and it would take O(n^2) time to insert all n elements. Similarly, a hash table would be expected to consume O(n) time to insert n elements. However, if each element hashes to the same bucket, the hash table will also degenerate to a linked list, and it will take O(n^2) time to insert n elements.

    While balanced tree algorithms, such as red-black trees [11], AVL trees [1], and treaps [17] can avoid predictable input which causes worst-case behavior, and universal hash functions [5] can be used to make hash functions that are not predictable by an attacker, many common applications use simpler algorithms. If an attacker can control and predict the inputs being used by these algorithms, then the attacker may be able to induce the worst-case execution time, effectively causing a denial-of-service (DoS) attack.

    Such algorithmic DoS attacks have much in common with other low-bandwidth DoS attacks, such as stack smashing [2] or the ping-of-death, wherein a relatively short message causes an Internet server to crash or misbehave. While a variety of techniques can be used to address these DoS attacks, common industrial practice still allows bugs like these to appear in commercial products. However, unlike stack smashing, attacks that target poorly chosen algorithms can function even against code written in safe languages. One early example was discovered by Garfinkel [10], who described nested HTML tables that induced the browser to perform super-linear work to derive the table's on-screen layout. More recently, Stubblefield and Dean [8] described attacks against SSL servers, where a malicious web client can coerce a web server into performing expensive RSA decryption operations. They suggested the use of crypto puzzles [9] to force clients to perform more work before the server does its work. Provably requiring the client to consume CPU time may make sense for fundamentally expensive operations like RSA decryption, but it seems out of place when the expensive operation (e.g., HTML table layout) is only expensive because a poor algorithm was used in the system. Another recent paper [16] describes a toolkit that allows programmers to inject sensors and actuators into a program. When a resource abuse is detected, an appropriate action is taken.

    This paper focuses on DoS attacks that may be mounted from across a network, targeting servers with the data that they might observe and store in a hash table as part of their normal operation. Section 2 details how hash table
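
    The degeneration described in that introduction is easy to reproduce. Here is a minimal C sketch (mine, not the paper's): feeding pre-sorted keys to a plain unbalanced binary search tree turns it into a linked list, so n inserts cost roughly n^2/2 comparisons.

    #include <stdio.h>
    #include <stdlib.h>

    struct node { int key; struct node *left, *right; };

    /* Plain unbalanced BST insert; counts one comparison per level. */
    static struct node *insert(struct node *t, int key, long *steps) {
        if (t == NULL) {
            t = calloc(1, sizeof *t);
            t->key = key;
            return t;
        }
        ++*steps;
        if (key < t->key)
            t->left = insert(t->left, key, steps);
        else
            t->right = insert(t->right, key, steps);
        return t;
    }

    int main(void) {
        struct node *t = NULL;
        long steps = 0;
        for (int i = 0; i < 10000; i++)   /* sorted "attack" input */
            t = insert(t, i, &steps);
        printf("10000 sorted inserts: %ld comparisons (~n^2/2)\n", steps);
        return 0;
    }
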
  • by cperciva ( 102828 ) on Saturday May 31, 2003 @08:09PM (#6087188) Homepage
    Basically, the paper says this: If you have a hash table into which attackers can insert arbitrary keys, you'd better be using a hash function for which they cannot easily generate collisions.

    I don't know if anyone has *published* this before, but it's certainly not new.
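
    To make the collision point concrete, here is a toy sketch (the additive hash is a deliberately weak stand-in, not any particular application's function): because the hash only sums bytes, every anagram of a key lands in the same bucket, so an attacker can enumerate colliding keys at will and degrade a chained table into a linked list.

    #include <stdio.h>

    #define NBUCKETS 1024

    /* Weak, predictable hash: order-insensitive sum of bytes. */
    static unsigned weak_hash(const char *s) {
        unsigned h = 0;
        while (*s)
            h += (unsigned char)*s++;
        return h % NBUCKETS;
    }

    int main(void) {
        /* All permutations of the same bytes collide. */
        const char *keys[] = { "abc", "acb", "bac", "bca", "cab", "cba" };
        for (int i = 0; i < 6; i++)
            printf("%s -> bucket %u\n", keys[i], weak_hash(keys[i]));
        return 0;
    }
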
    • by Shackleford ( 623553 ) on Saturday May 31, 2003 @09:02PM (#6087391) Journal
      Basically, the paper says this: If you have a hash table into which attackers can insert arbitrary keys, you'd better be using a hash function for which they cannot easily generate collisions.

      That is something that the paper said, but they also gave specific examples of software that was vulnerable to that kind of attack. They determined that the Bro intrusion detection system was highly vulnerable, and mentioned different versions of Perl and Python that also experienced a significant performance decrease given input specifically crafted to bring down systems that use hash tables.

      I don't know if anyone has *published* this before, but it's certainly not new.

      It has been done. They mentioned similar related work where it was found that crafted input makes quicksort take O(n^2) time instead of O(n lg n), and that there was a vulnerability in the hash tables of the Linux route-table cache. The concept isn't new; it's just that different software has been found to have this sort of vulnerability.

      And I just couldn't help but laugh at the irony of their PDF file on DOS attack prevention being the victim of the /. effect.


    • The paper goes beyond that. It gives specific examples of such hash tables in Linux systems.

    • by Jucius Maximus ( 229128 ) on Saturday May 31, 2003 @09:59PM (#6087552) Journal
      "Basically, the paper says this: If you have a hash table into which attackers can insert arbitrary keys, you'd better be using a hash function for which they cannot easily generate collisions."

      It's publicised. I was warned about stuff like this in a 2nd year CS class I took several months ago. But I did *not* imagine that it could be used to take down web servers.

      • Actually, what they basically say is that one should never use hash tables where one wants good performance.

        Wish people would stop using hash tables altogether.

        I've never used one in a real program.

        • Huh? Perhaps there's a significantly more complex algorithm which yields better performance (red-black trees compared to standard B-trees), but on the whole, hash tables are extremely efficient and are used everywhere.

          That's like saying, "I wish people would stop using [lists|arrays|trees] where one wants good performance."
    • I always had a slight feeling of unease about hash tables. Like in C++, the standard library provides std::map which has guaranteed O(log N) time for insertion and lookup, where N is the number of elements in the map. But std::map is too slow for many people, who use instead a non-standard hash_map class. Such a hash table can only guarantee O(N) time, but 'usually' insertion and extraction take constant time.

      I mean, you'd have to be really unlucky to get a large number of hash collisions, wouldn't you?
      • I'm interested by what the authors say about 'modern universal hash functions' and how they are immune from attack. Does this mean you pick a random string and XOR against it before computing each hash value, or something like that? It would seem there could still be some way to discover the exact hash function being used by means of timing attacks (you can tell when an input has caused a hash collision because the response time might be slightly slower).

        If you use a strong keyed hash function (eg, keyed
  • Humanity (Score:2, Insightful)

    by DreadSpoon ( 653424 )
    So this is like humans - you don't need a lot of information, just something seemingly complex/intricate, and their brains shutdown for a while?

    (Or maybe this only happens to me? ;-)
  • Say what? (Score:4, Funny)

    by Znonymous Coward ( 615009 ) on Saturday May 31, 2003 @08:12PM (#6087202) Journal
    ...we can bring a dedicated _Bro_ server to its knees...

    Why they always tryin' to bring the black man down?

  • by Anonymous Coward
    The only 'bro' server I can think of is buddyhead.com ... Not much of an attack if you ask me ...
  • Credits (Score:5, Informative)

    by alain1234 ( 540592 ) <alain@onesite.org> on Saturday May 31, 2003 @08:17PM (#6087224) Homepage
    Reading the paper, which begins with "We present a new class of...", it sounds like these students discovered a new concept from nowhere.
    Maybe their genius was triggered by the recent advisory about a DoS exploiting hash collisions in netfilter [secunia.com].
    They analyzed some software but say no word about this known vulnerability. Still a good summary, but not a discovery.
    • Re:Credits (Score:5, Informative)

      by Anonymous Coward on Saturday May 31, 2003 @08:29PM (#6087276)
      We cite Florian's attack in the paper, and predate it by several months. And had you read the actual advisory, you'd already have seen our paper; we've been on the highly recommended reading list in his advisory for the last two weeks.

      Scott
      • Excellent paper (Score:5, Interesting)

        by smiff ( 578693 ) on Sunday June 01, 2003 @02:10AM (#6088358)
        Hi,

        I found your paper very interesting. I'd like to address a couple things.

        1. You point out that MD5 is vulnerable to a brute-force search for bucket collisions. Isn't any deterministic hash-function vulnerable to the same attack? I know you solved the problem using a keyed version of MD5. With Carter and Wegman, you alluded to a randomly chosen constant and vector. I didn't notice you addressing this issue with UMAC.

        2. The abstract says, "We show how modern universal hashing techniques can yield performance comparable to commonplace hash functions while being provably secure against these attacks." The abstract is ambiguous, but it insinuates that you'll provide a proof. I didn't see one. Perhaps it was in the references? Even if it was, certainly your 20 bit Carter-Wegman construction merits a new proof.

        3. You said, "When analyzing algorithmic complexity attacks, we must assume the attacker has access to the source code of the application," I disagree. The attacker doesn't need the source code. They can reverse engineer the compiled binary.

        Also, I wonder if an attacker even needs the program. An attacker could reasonably guess that a Perl script will store a certain string in an associative array. Many websites automatically process their Apache logs with Perl. Instead of requesting real pages, a blackhat could request strings that will hash to the same bucket (assuming the site uses Perl). When cron starts processing the logs, the website could slow to a crawl. Granted, the attacker knows the hash function from Perl, but they don't have access to the website's custom-made script.

        4. You have a spelling error. Your paper should read, "due to the ten times larger queues of unprocessed events".

        • Re:Excellent paper (Score:4, Interesting)

          by Anonymous Coward on Sunday June 01, 2003 @02:44AM (#6088426)
          Yes, deterministic hash functions are subject to guess-and-check attacks.

          Second, the universality of that is from the original Carter & Wegman paper. Our only real trick is to try to encode a useful construction without requiring 64-bit multiplication or modulo operations. There are aspects of the abstract I'm not entirely happy with. The section on solutions with universal hashing was sorta grafted on later. (And I didn't write that sentence.)

          Third, yes, good guessing works too, even without source code.

          Fourth. Damn. Don't you have something better to do? :)

          Scott

    • Re:Credits (Score:5, Informative)

      by The Madpostal Worker ( 122489 ) <abarros@@@gmail...com> on Saturday May 31, 2003 @08:52PM (#6087364)
      If you scroll down to the bottom of the advisory [secunia.com] you'll see a link to the original advisory [theaimsgroup.com]. Here's the beginning of the message:
      List: linux-kernel

      Subject: Route cache performance under stress
      From: Florian Weimer <fw () deneb ! enyo ! de>
      Date: 2003-04-05 16:37:43
      Please read the following paper:

      <http://www.cs.rice.edu/~scrosby/tr/HashAttack.pdf >

      Then look at the 2.4 route cache implementation.

      Short summary: It is possible to freeze machines with 1 GB of RAM and more with a stream of 400 packets per second with carefully chosen source addresses. Not good.
      It seems the advisory stems from the paper, not the other way around.
  • Bugtraq Post (Score:5, Informative)

    by Anonymous Coward on Saturday May 31, 2003 @08:29PM (#6087279)
    Hello. This is to announce a new class of attack which we have named 'Algorithmic Complexity Attack'. These attacks can perform denial of service and/or cause the victim to consume more CPU time than expected. We have a website for our research paper and project and tentative source code illustrating the solution, universal hashing, available at http://www.cs.rice.edu/~scrosby/hash/

    They exploit the difference between 'typical case' behavior and worst-case behavior. For instance, in a hash table, the performance is usually O(1) for all operations. However, in an adversarial environment, the attacker constructs carefully chosen input such that a large number of 'hash collisions' occur. Suitable collisions can be computed even when the attacker is limited to as little as 48 or 32 bits.

    These attacks can occur over a very wide gamut of software, with impacts ranging from devastating to innocuous.

    We have studied and found the following applications possibly vulnerable to a greater or lesser degree:
    • Mozilla 1.3.1
    • DJBDNS 1.05
    • TCL 8.4.3
    • GLIB 2.2.1
    • Python 2.3b1

    For the last two, we have a tentative attack file, but have not constructed a test program to confirm the attack.

    We have constructed attacks and confirmed the degradation on these:

    • Perl 5.6.1
    • Perl 5.8.0
    • Linux 2.4.20 directory cache (dcache)
    • Squid 2.5STABLE1
    • Bro IDS 0.8a20

    Also related is the recent linux 2.4.20 route cache attack by Florian Weimer. David Miller is working on a patch that fixes other similar issues in other places of the networking stack.

    Our paper discusses a new class of denial of service attacks that work by exploiting the difference between average case performance and worst-case performance. In an adversarial environment, the data structures used by an application may be forced to experience their worst case performance. For instance, hash tables are usually thought of as being constant time operations, but with large numbers of collisions will degrade to a linked list and may lead to a 100-10,000 times performance degradation. Because of the widespread use of hash tables, the potential for attack is extremely widespread. Fortunately, in many cases, other limits on the system limit the impact of these attacks.

    To be attackable, an application must have a deterministic or predictable hash function and accept untrusted input. In general, for the attack to be significant, the application must be willing and able to accept hundreds to tens of thousands of 'attack inputs'. Because of that requirement, it is difficult to judge the impact of these attacks without knowing the source code extremely well, and knowing all ways in which a program is used.

    The solution for these attacks on hash tables is to make the hash function unpredictable via a technique known as universal hashing. Universal hashing is a keyed hash function where, based on the key, one of a large set of hash functions is chosen. When benchmarking, we observe that for short or medium length inputs, it is comparable in performance to simple predictable hash functions such as the ones in Python or Perl.

    I highly advise using a universal hashing library, either our own or someone else's. As history has shown, it is very easy to make silly mistakes when attempting to implement your own 'secure' algorithm.

    The abstract, paper, and a library implementing universal hashing is available at http://www.cs.rice.edu/~scrosby/hash/.

    Scott

    ** Affected Products: Extremely widespread. Confirmed vulnerable applications include Perl, the Linux kernel, the Bro IDS, and the Squid HTTP proxy cache. Although unconfirmed, vulnerabilities appear to be in the GLIB utility library, the OpenBSD kernel, DJBDNS cache, TCL, Python, and Mozilla.

    We conjecture that many implementations of hash tables in both closed source and open source software, unless specifically designed to be immune, may be subject to attack. It is likely
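
    A minimal sketch of the keyed-hash idea Scott describes, for fixed-size integer keys (a multiply-shift family; string keys need a vector construction such as Carter-Wegman, and this is an illustration, not the library on the project page):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define BUCKET_BITS 10    /* 2^10 buckets */

    static uint64_t hash_key; /* random odd multiplier, fixed per process */

    static void hash_init(void) {
        /* Seeding from time() is for illustration; use real entropy in practice. */
        srand((unsigned)time(NULL));
        hash_key = (((uint64_t)rand() << 32) | (unsigned)rand()) | 1;
    }

    /* Multiply-shift: take the high bits of key * x. Because the
     * multiplier is secret, an attacker cannot precompute a set of
     * colliding inputs, unlike with a fixed published hash. */
    static unsigned hash_u64(uint64_t x) {
        return (unsigned)((hash_key * x) >> (64 - BUCKET_BITS));
    }

    int main(void) {
        hash_init();
        printf("bucket(42) = %u\n", hash_u64(42)); /* varies run to run */
        return 0;
    }
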

    • Because of that requirement, it is difficult to judge the impact of these attacks without knowing the source code extremely well, and knowing all ways in which a program is used.

      So...can you also say that an attack such as this is nearly impossible to successfully mount against a closed-source system?

      I'm not trolling, just wondering. (And yes, I do know that you can make your program virtually immune by using a more secure hash function, so we don't need to bring that up.)
      • Scott, one of the authors, has stated that it is reasonable to attack without access to the source code. See the post 6088358 [slashdot.org] and Scott's reply at 6088426 [slashdot.org].

        I'm not a security researcher, but I'd guess that disassembling any binary would provide plenty of information to reconstruct any hashing functions used. After all, people have proven themselves pretty clever at modifying closed-source games and finding (other) security problems in closed-source software, having only the binary to work from.

        -Paul Koma
  • by Tsali ( 594389 )
    Wasn't that some command-line prompt game in 1983 somewhere?
  • by spoonist ( 32012 ) on Saturday May 31, 2003 @08:37PM (#6087312) Journal

    I skimmed the Project Page [rice.edu] and aren't a couple of the examples awfully obvious?

    The following one line of code brings every UNIX system I've run it on TO ITS KNEES WITHIN MINUTES!! This is a major vulnerability in EVERY UNIX system! Something must be done!

    main() { while (1) if (fork() == 0) while(1); }
    • Re:glib example (Score:5, Informative)

      by msgmonkey ( 599753 ) on Saturday May 31, 2003 @08:48PM (#6087345)
      Any half decent UNIX system (or should I say admin) would have process accounting enabled that would stop this kind of attack.
      • Any half decent UNIX system (or should I say admin) would have process accounting enabled that would stop this kind of attack.

        Process accounting adds overhead for minimal value (since critical information is left out, like the device/inode of the binary that was exec'd, the full command line, parent process ID, etc).

        It will identify the culprit of a forkbomb, but if deliberate it will not be the individual's own account, and if accidental it won't allow you to take any real action against the user. It

  • by reflective recursion ( 462464 ) on Saturday May 31, 2003 @09:47PM (#6087527)
    should not be considered a Denial of Service. If anything, call it what it truly is: shoddy programming. There is nothing being denied when it is the client that has the problem.

    Another thing: this is nothing new. A number of DoS attacks require the server to do more work than the client. The attacker's cost would be O(n) whereas the server's would be O(n^2) or some such, where n is the size of the communication. The only "new" thing would perhaps be looking at individual programs and algorithms, which is probably applying a very liberal definition of "new." They even admit it themselves by claiming ping-of-death and stack smashing have much in common with what they describe in the article.
  • And this is new? (Score:4, Interesting)

    by thesupraman ( 179040 ) on Saturday May 31, 2003 @10:49PM (#6087761)
    Well, back in 1990 when I started in university computer science, this *was* one of the main things that a denial of service attack was considered to be.

    DoS attacks were mainly either removal of a service (by crashing it and/or stopping its reload) or resource starvation, being any of CPU, disk, memory, network, etc.

    Good to see these people have bothered to flick through a bit of history probably over 15 years old by now and call it something new, yawn.

    Having had enough background in large computer systems, it can become quite depressing watching the constant flow of 'new innovations' that were done many years before on big iron, but are seen as new by people who just don't have the experience to realise it.

    Of course it is easier to DoS a machine by using its own functionality against it rather than just brute force; welcome to computer science 101.

    Just wait six months and someone will be releasing a fantastic new defense against this by limiting the CPU resources of given tasks to defined amounts so they cannot stop the system and only that particular service.

    I mean come on, this was a common undergrad trick on the vaxes we used to get to play with way back then.
    • I'm not going to bother arguing with the first half of this post, since that's been done already. Here's a neat part.

      Just wait six months and someone will be releasing a fantastic new defense against this by limiting the CPU resources of given tasks to defined amounts so they cannot stop the system and only that particular service.
      How do you propose this should be done? How can you tell if input is malicious or not? I think that would be pretty tricky/impossible.
    • by SysKoll ( 48967 ) on Sunday June 01, 2003 @12:45AM (#6088139)
      CPU hogging isn't new. I agree that fixing it on a Unix-like system is as easy as capping the CPU time of user processes... providing it's practical.

      But consider a commercial app where customers can send requests to a J2EE app server running within a JVM. That's a very popular, very common setup (JBoss, BEA Weblogic, IBM WebSphere, etc.). The JVM is a single process. It is not CPU-capped because it's designed to stay up and running. When a Java thread handles a request and bumps into a CPU-hogging attack, it is not going to be terminated by the J2EE app server.

      So this is potentially a problem, because you currently do not have a CPU-capping parameter in the most popular J2EE app servers. A response to this kind of attack would require monitoring the amount of CPU consumed by threads processing incoming requests, which is always delicate.

      CPU-capping shouldn't be done lightly. It can lead to disastrous failures. For instance, I once tried to use a graphical web application rendering some do-it-yourself tee-shirt lettering. The application was running on an older IIS and apparently had a CPU-time cap, because I got a message "sorry, your request took too long to process" when my design became a bit involved. Needless to say, my business went to a competitor. So CPU-capping isn't even a sure-fire solution.

      In summary: Sorry, it is an issue.

      -- SysKoll
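
      For reference, the Unix-style capping mentioned above is a one-call affair; a minimal sketch using setrlimit(2) (the limit values are illustrative):

      #include <stdio.h>
      #include <sys/resource.h>

      int main(void) {
          /* Cap this process at 5s of CPU: the kernel sends SIGXCPU at
           * the soft limit and SIGKILL at the hard limit. */
          struct rlimit rl = { 5, 10 };
          if (setrlimit(RLIMIT_CPU, &rl) != 0) {
              perror("setrlimit");
              return 1;
          }
          for (;;)
              ;   /* CPU hog: killed by the kernel once the cap is hit */
      }
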
      • Your examples only demonstrate that those who architected J2EE platforms didn't do enough homework... I still haven't read the article, nor do I know these servlet applications, but rather than bashing the UNIX capping strategy as "old-hat" I'm more inclined to condemn the newcomers for developing new flashy gizmos without guarding against known problems. Of course if you based your business on IIS it's likely that you went after the flashy fluff; let the lesson sink in and keep away from 'tv commerci
        • Your examples only demonstrate that those who architected J2EE platforms didn't do enough homework.

          No contest that there are deficiencies in the J2EE standard and the commercial app servers. Nobody disputes that. One of the shortcomings is the lack of CPU capping. And the article describes how to turn this into a DoS attack.

          J2EE-compliant servers are running all kinds of web sites, and upgrading them against these attacks would be a major concern. Your daily life could be impacted if, say, you're trading stock

  • Bro IDS info (Score:2, Informative)

    by Fzz ( 153115 )
    In case you were wondering what Bro [icir.org] is.
  • by Anonymous Coward on Sunday June 01, 2003 @12:14AM (#6088041)
    My brain is resistant to attacks using algorithmic complexity

  • by thornist ( 64703 ) on Sunday June 01, 2003 @12:21AM (#6088069) Homepage
    There's been extensive discussion [python.org] of this over on python-dev. Includes some interesting debunking by Python guru Tim Peters, plus another example of the same type of weakness involving backtracking regular expression engines.
  • The author posted to python-dev (the development list for Python) asking for feedback; that feedback was:
    • the suggested replacement for the hash function is slower for typical uses
    • the suggested replacement doesn't actually solve the problem (it just takes longer for the attacker to find DoS-able inputs)
    • the suggested replacement returns different hash values for the same input in different processes (violating a guarantee in Python)
    • input sets large enough to do damage are large enough to be con
  • from the /.ers than this.

    The biggest implication of this is - if you're an open source application, exposing your internals to the outside world, and you are using a weak algorithm, then this sort of attack can be carried out on you.

    Now I'm a fan of open source, but I'm sure that anyone that writes OSS will be taking this on board, so there's no need to worry.

    Just joking, of course - everyone writing an accessible service in OSS needs to read this paper.

    Sorry - rant over, but if you expose your internal wo

  • ...and not by the obvious method of printing a file containing nothing but 500 form feeds.

    When I was at university, about 10 years ago, one of my fellow students came up with a way to tie up one of our shiny new Apple Laserwriters. They were quite expensive to run, and so the university charged us for each page that we printed. The nice thing about this hack, from the evil point of view, was that it produced only one page of output, so the cost to the hacker was negligible - at least in financial terms.

    Wh
