Security Operating Systems Software Unix

Secretly Monopolizing the CPU Without Being Root 250

An anonymous reader writes "This year's Usenix security symposium includes a paper that implements a "cheat" utility, which allows any non-privileged user to run his/her program, e.g. like so: 'cheat 99% program', thereby ensuring that the program gets 99% of the CPU cycles, regardless of the presence of any other applications in the system, and in some cases (like Linux) in a way that keeps the program invisible to CPU monitoring tools (like 'top'). The utility uses only standard interfaces and can be trivially implemented by any beginning non-privileged programmer. Recent efforts to improve the support for multimedia applications make systems more susceptible to the attack. All prevalent operating systems but Mac OS X are vulnerable, though judging by this kerneltrap story, it appears that the new CFS Linux scheduler attempts to address the problems raised by the paper."
  • Google-cache article (Score:3, Informative)

    by Anonymous Coward on Wednesday July 11, 2007 @10:38AM (#19825153)
    For those harboring poisonous grudges against PDFs, the Googlerised HTML version is here [72.14.235.104].
  • by Anonymous Coward on Wednesday July 11, 2007 @10:41AM (#19825191)
    People have been looking for and exploiting *nix vulnerabilities long before Windows was on the scene.
  • Re:A Useful Tool (Score:4, Informative)

    by CastrTroy ( 595695 ) on Wednesday July 11, 2007 @10:42AM (#19825199)
    You could always renice apache and mysql down to a lower priority, possibly in a log-on/log-off script that changes the priorities when you log in and resets them when you log out.
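
    If you wanted to do the same thing programmatically instead of calling renice from the script, a minimal sketch in C using setpriority(2) could look like the following (the PID 1234 is a hypothetical stand-in for apache or mysql; you'd really look it up):

        /* lower a process's priority, as renice would */
        #include <stdio.h>
        #include <sys/types.h>
        #include <sys/resource.h>

        int main(void)
        {
            pid_t pid = 1234;   /* hypothetical apache/mysql PID */

            /* nice +10 = lower priority */
            if (setpriority(PRIO_PROCESS, pid, 10) == -1) {
                perror("setpriority");
                return 1;
            }
            return 0;
        }

    One catch with the log-off part: an unprivileged user can only make the nice value worse, so the "reset" step would have to run as root (or with CAP_SYS_NICE).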
  • Old news (Score:4, Informative)

    by Edward Kmett ( 123105 ) on Wednesday July 11, 2007 @10:44AM (#19825221) Homepage
    Not quite sure what justifies a paper out of this.

    If you check the linux kernel mailing list for Vassili Karpov, you should find test cases that demonstrate this behavior and tools for monitoring actual CPU usage for a variety of platforms, though I notice no mention of any of that in the paper.

    See http://www.boblycat.org/~malc/apc/ [boblycat.org] for the tool and 'invisible CPU hog' test case.
  • Re:What the?! (Score:3, Informative)

    by AKAImBatman ( 238306 ) * <akaimbatman AT gmail DOT com> on Wednesday July 11, 2007 @10:50AM (#19825285) Homepage Journal
    This is a bit different. It's a way to convince the OS to give you more time slices than you'd normally be allocated. E.g., if you ran that program of yours twice at the same priority level, both instances should get ~50% of the CPU time. If one of the instances implemented this privilege-boosting scheme, however, it would get to hog all the CPU time while your other spinlocked program starved.
  • by pauljlucas ( 529435 ) on Wednesday July 11, 2007 @10:54AM (#19825335) Homepage Journal

    What?! I'm really not sure what's being said here. I understand the idea behind this, but the wording of the Slashdot piece is difficult to penetrate, even by Slashdot standards.
    I had a hard time reading it as well, but then I saw it (kind of like when you suddenly "see" the picture in a stereogram). Proper punctuation, whitespace, formatting, and font changes help a lot. It should have been:

    .. allows any non-privileged user to run his/her program, like so:

    cheat 99% program

    thereby ensuring ...

    where cheat is the name of the compiled utility that lets you "cheat", 99% is an argument to cheat, and program is the name of some other program that you want to run at 99% of the CPU. I.e., the command line syntax resembles renice.
  • by brunascle ( 994197 ) on Wednesday July 11, 2007 @11:08AM (#19825447)
    And for those who don't have the time to read the paper...

    It works by avoiding running during the exact moment of a clock tick (which is the moment when per-process CPU usage is checked). Starting to run immediately after a clock tick is (apparently) easy, but stopping before the next tick is harder. The paper suggests using a get_cycles-style assembly instruction (e.g., RDTSC on x86) to count how many CPU cycles there are per clock tick, and using that number to gauge when the next clock tick will occur by counting how many CPU cycles have elapsed.
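
    A minimal sketch of that calibration step (x86 with gcc inline assembly; this assumes a constant-rate TSC and the tick-based behavior the paper describes, where a zero-length sleep wakes you just after the next tick):

        /* estimate TSC cycles per OS clock tick */
        #include <stdint.h>
        #include <stdio.h>
        #include <time.h>

        static inline uint64_t rdtsc(void)
        {
            uint32_t lo, hi;
            __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
            return ((uint64_t)hi << 32) | lo;
        }

        int main(void)
        {
            /* two consecutive zero-length sleeps bracket exactly one
               tick, since each wakes us just after a tick boundary */
            struct timespec zero = {0, 0};
            nanosleep(&zero, NULL);
            uint64_t t0 = rdtsc();
            nanosleep(&zero, NULL);
            printf("~%llu cycles per tick\n",
                   (unsigned long long)(rdtsc() - t0));
            return 0;
        }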
  • How It Works (Score:5, Informative)

    by Shimmer ( 3036 ) on Wednesday July 11, 2007 @11:08AM (#19825453) Journal
    The cheat program hogs the CPU by using it when the host OS isn't looking. As a result, it avoids the scrutiny of the OS's scheduler and is actually given a priority boost by some schedulers because of its good behavior.

    This is accomplished by sleeping for a fixed amount in between OS clock ticks. The timeline looks like this:
    1. Hardware is set to generate a "tick" event every N milliseconds.
    2. Tick event occurs, which is handled by the OS.
    3. OS notes which process is currently running on the CPU and bills it for this tick.
    4. OS wakes up cheating process, which is currently sleeping, and allows it to run.
    5. Cheating process runs for M (< N) milliseconds, then requests to go to sleep for 0 milliseconds. This causes the cheating process to sleep until just after the next tick.
    6. Repeat from step 2 above.
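
    Putting the timeline above into code, a self-contained sketch might look like this (x86 with gcc inline assembly; it assumes the pre-tickless behavior where a zero-length nanosleep is rounded up to the next tick, so it won't work on a tickless kernel):

        /* skeleton of a "cheat" loop: run between ticks, sleep across them */
        #include <stdint.h>
        #include <time.h>

        static inline uint64_t rdtsc(void)
        {
            uint32_t lo, hi;
            __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
            return ((uint64_t)hi << 32) | lo;
        }

        static void sleep_past_tick(void)
        {
            struct timespec zero = {0, 0};
            nanosleep(&zero, NULL);   /* rounded up to the next tick */
        }

        int main(void)
        {
            /* calibrate: two zero-sleeps bracket exactly one tick */
            sleep_past_tick();
            uint64_t t0 = rdtsc();
            sleep_past_tick();
            uint64_t cycles_per_tick = rdtsc() - t0;

            for (;;) {
                uint64_t start = rdtsc();
                /* step 5: compute for ~90% of the tick (M < N) ... */
                while (rdtsc() - start < cycles_per_tick / 10 * 9)
                    ;                 /* real work would go here */
                sleep_past_tick();    /* asleep when the tick fires */
            }
        }

    The busy-wait stands in for the cheater's real workload; the only essential trick is being asleep at every tick.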
  • Re:Old news (Score:5, Informative)

    by Anonymous Coward on Wednesday July 11, 2007 @11:09AM (#19825465)
    Publishing papers takes a lot of time, as anybody who has ever done it knows... For example, the post you mention is from Feb 2007. By then, according to the usenix-security call for papers, the paper had already been submitted. Also, googling around for "cheat" revealed this technical report: http://leibniz.cs.huji.ac.il/anon?View=1&num=1&pid%5B1%5D=870&abstract=1 [huji.ac.il] (seemingly the initial version of the paper), which is dated May 2006.
  • by Wyzard ( 110714 ) on Wednesday July 11, 2007 @11:17AM (#19825553) Homepage

    According to the paper, the reason Mac OS X is not vulnerable is that it uses one-shot timers scheduled for exactly when the next event needs to occur, rather than periodic "ticks" with a fixed interval between them. The "tickless idle" feature introduced in Linux 2.6.21 (currently only on x86, I believe) takes the same approach, and very possibly makes Linux immune too.

    (Ironically, immediately after discussing OSX's ticklessness, the paper mentions that "the Linux 2.6.16 kernel source tree contains 8,997 occurrences of the tick frequency HZ macro, spanning 3,199 files", to illustrate how difficult it is to take a tick-based kernel and make it tickless. But those kernel hackers went and did it anyway.)

    The tickless feature isn't yet implemented on all architectures that Linux supports, though. I think AMD64 support for it is supposed to come in 2.6.23, along with the new CFS scheduler.

  • by cnettel ( 836611 ) on Wednesday July 11, 2007 @11:23AM (#19825651)
    It works by sleeping at the right point in time. You really hack up the timeslices and decrease the overall efficiency (more context switches), so it's only good if you want to steal cycles where you are not really allowed to.
  • by iabervon ( 1971 ) on Wednesday July 11, 2007 @11:26AM (#19825705) Homepage Journal
    They took too long to publish this. Linux 2.6.21 (released in April) added support for using one-shot timers instead of a periodic tick, so it avoids the problem like OS X does. In addition to resolving this issue, tickless is important for saving power (because the processor can stay in a low-power state for long enough to get substantial benefits compared to the power cost of starting and stopping) and for virtual hosting (where the combined load of the guest OS scheduler ticks is significant on a system with a large number of idle guests). As a side effect, while the accounting didn't change at that point, the pattern a task has to use to fool the accounting became impossible to guess.

    The CFS additionally removes the interactivity boost in favor of giving interactive tasks no extra time but rather just quick access to their available time, which is what they really benefit from.
  • Re:What the?! (Score:3, Informative)

    by Random832 ( 694525 ) on Wednesday July 11, 2007 @11:33AM (#19825809)
    "fork while fork" won't have the exponential effect, since fork returns 0 (false) in the child process, terminating the loop and causing growth to only be linear. You'd need fork while true.
  • by KlaymenDK ( 713149 ) on Wednesday July 11, 2007 @11:39AM (#19825893) Journal
    (reply to self after RTFA)

    What 'saved' the Mac OS was its different use of timing triggers. "All" other OS'es use one common steadily ticking clock as a dealer of time slots. This allows the cheat to "skip to the start of the line (queue)" every time it's had its turn.

    OTOH, the Mac uses a stack of alarms set to specific points in the future, and polled in order as they occur. So the difference on Mac OS is that there's no skipping the queue, it's rather "there is no queue, we'll call you when it's your turn".

    I don't know the details of the OpenBSD scheduler, but it's very likely the same (clock tick) method as used by the rest of the susceptible OS'es.
  • Re:sweet! (Score:1, Informative)

    by Anonymous Coward on Wednesday July 11, 2007 @11:41AM (#19825911)
    The article seems to indicate that the cheat gets more throughput than non-cheating threads on Solaris 10. However, it appears that it would be trivial to reveal such a cheat with the dtrace sched provider and one of its probes, such as remain-cpu:

    http://docs.sun.com/app/docs/doc/817-6223/6mlkidll8?a=view [sun.com]
  • by Aaron Isotton ( 958761 ) on Wednesday July 11, 2007 @11:47AM (#19826011)
    The paper is quite long, so here's a summary (take this with a grain of salt; anyone who wants accurate information should still RTFP):

    Most OSes (Linux, Solaris, Windows but not Mac OS X) are tick-based. This means that the kernel is called from hardware periodically (this is the "HZ" value you set in the Linux kernel). Some of them (Linux) simply check which process is running at each tick and compute statistics based on that ("sample-based statistics"). This means that the process running when the tick happens is billed for the entire period of the tick.

    Since ticks are relatively long (typically 1-10 ms on Linux), more than one process may run during this period. In other words, this approach leads to inaccuracies in the process billing. If all programs "play by the rules", though, it works quite well on average.
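
    To put numbers on that: with HZ=100 the tick is 10 ms. A process that wakes just after each tick, computes for 9 ms, and is back asleep when the next tick fires consumes 90% of the CPU yet is billed for zero ticks -- whichever process happens to hold the CPU at the 10 ms boundaries gets billed for the whole tick instead.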

    Next thing: the classic schedulers typically maintain some sort of "priority" value for each process, which decreases whenever the process is running and increases when it's not. This means that a process runs for some time, its priority decreases, and then another process (which hasn't been running for some time) takes over.

    You can exploit that by always sleeping when a tick happens and running only in between ticks. This makes the kernel think that your process is never running and give it a high priority. So, when your process wakes up just after a tick happened, it will have a higher priority than most other processes and be given the CPU. If it goes to sleep again just before the next tick, its priority will not be decreased. Your process will (almost) always run when it wants to, and the kernel will think that it's (almost) never running and keep its priority high. You win!

    Another aspect is that modern kernels (at least Linux and Windows) distinguish between "interactive" (e.g. media players) and "non-interactive" processes. They do so by looking at how many times a process goes to sleep voluntarily. An interactive program (such as a media player) will have many voluntary sleeps (e.g. in between displaying frames) while a non-interactive program (e.g. a compiler or some number-crunching program) will likely never go to sleep voluntarily. The scheduler gives the interactive programs an additional priority boost.
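
    As a toy model of such a heuristic (loosely inspired by the old Linux O(1) scheduler's sleep_avg bookkeeping -- purely illustrative, not actual kernel code):

        /* voluntary sleep builds "interactivity" credit; running burns it */
        enum { MAX_SLEEP_AVG = 1000, MAX_BONUS = 10 };

        struct task {
            int sleep_avg;     /* 0..MAX_SLEEP_AVG */
            int static_prio;   /* base priority from the nice level */
        };

        static void account_sleep(struct task *t, int slept_ms)
        {
            t->sleep_avg += slept_ms;
            if (t->sleep_avg > MAX_SLEEP_AVG)
                t->sleep_avg = MAX_SLEEP_AVG;
        }

        static void account_run(struct task *t, int ran_ms)
        {
            t->sleep_avg -= ran_ms;
            if (t->sleep_avg < 0)
                t->sleep_avg = 0;
        }

        static int effective_prio(const struct task *t)
        {
            /* lower value = higher priority, Unix-style */
            int bonus = t->sleep_avg * MAX_BONUS / MAX_SLEEP_AVG;
            return t->static_prio - bonus;
        }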

    Since the cheating programs go to sleep very often (at every tick) the kernel thinks they're "very interactive", which makes the situation worse.

    Some of the analyzed OSes - even if tick-based - keep accurate (rather than sample-based) usage statistics in the kernel, but still use sample-based statistics for scheduling decisions. So the kernel sees that a process is taking more CPU than it should, yet it keeps on scheduling it anyway.

    Mac OS X is not affected because it has a tickless kernel (i.e., without periodic timer interrupts). Because of that, sample-based statistics don't work and it has to use accurate statistics, which makes it unaffected by the bug.

    This bug can be exploited to (at least)

    - get more CPU than you're supposed to
    - hinder other programs in their normal work
    - hide malicious programs (such as rootkits) which do work in the background

    Here's a list with the OSes (this USED TO BE a nicely formatted table, but the darned Slashdot "lameness filter" forced me to remove much of the nice lines and the "ecode" tag collapses whitespace).

    OS            Process statistics   Scheduler decisions   Interactive decision   Affected
    Linux         sample               sample                yes                    yes
    Solaris       accurate             sample                ?                      yes
    FreeBSD 4BSD  ?                    sample                no?                    yes
    FreeBSD ULE   ?                    sample                yes                    yes
    Windows       accurate             sample                yes                    yes
    Mac OS X      accurate             accurate              not needed?            no

    I guess that Mac OS X doesn't need an interactive/non-interactive distinction because of its different (tickless) approach. I assume that interactive applications can (implicitly or explicitly) be recognized as such in a different way. Does anyone have more information on that?

    How does tickless Linux compare?
  • by redelm ( 54142 ) on Wednesday July 11, 2007 @11:59AM (#19826177) Homepage
    A very cheap, simple patch is to add RDTSC instructions at process restart and at blocking syscalls to count the cycles actually used. That way the extensive tick code doesn't need to be modified.
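
    In pseudo-kernel code (a sketch of the idea, not actual Linux source), that would be something like:

        /* charge tasks for cycles actually used, not for ticks owned */
        #include <stdint.h>

        static inline uint64_t rdtsc(void)
        {
            uint32_t lo, hi;
            __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
            return ((uint64_t)hi << 32) | lo;
        }

        struct task {
            uint64_t cycles_used;    /* exact usage; immune to tick-dodging */
            uint64_t sched_in_tsc;   /* TSC reading when we got the CPU */
        };

        void on_switch_in(struct task *t)    /* task (re)starts running */
        {
            t->sched_in_tsc = rdtsc();
        }

        void on_switch_out(struct task *t)   /* blocking syscall or preemption */
        {
            t->cycles_used += rdtsc() - t->sched_in_tsc;
        }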

  • by swinefc ( 91418 ) * on Wednesday July 11, 2007 @01:43PM (#19827677)
    Windows is affected, but not Vista.
    Vista switched to counting actual CPU cycles via the cycle count register. The goal was to prevent process starvation in high-I/O situations, but it addresses this issue as well.
    http://www.microsoft.com/technet/technetmag/issues/2007/02/VistaKernel/ [microsoft.com]
  • by vtcodger ( 957785 ) on Wednesday July 11, 2007 @07:24PM (#19832249)
    ***I seem to recall usenet discussions about this circa the time of !uucp!newsglop!..... Not exactly smokin' hot off the press.***

    Not exactly. This is a technique that will, in principle, work with any scheduler that prioritizes tasks on the basis of the time ticks previously used by the task. That turns out to be most of them. The technique does not require being an I/O driver or other special task, or having unusual user privileges.

    So yes, it IS news.
