Secretly Monopolizing the CPU Without Being Root
An anonymous reader writes "This year's Usenix security symposium includes a paper that implements a 'cheat' utility, which allows any non-privileged user to run a program, e.g. like so: 'cheat 99% program', thereby ensuring that the program gets 99% of the CPU cycles, regardless of the presence of any other applications in the system, and in some cases (like Linux) in a way that keeps the program invisible to CPU monitoring tools (like 'top'). The utility exclusively uses standard interfaces and can be trivially implemented by any beginner non-privileged programmer. Recent efforts to improve support for multimedia applications make systems more susceptible to the attack. All prevalent operating systems except Mac OS X are vulnerable, though judging by this kerneltrap story, it appears that the new CFS Linux scheduler attempts to address the problems raised by the paper."
Google-cache article (Score:3, Informative)
Re:So, is vista security good enough.... (Score:1, Informative)
Re:A Useful Tool (Score:4, Informative)
Old news (Score:4, Informative)
If you check the Linux kernel mailing list for Vassili Karpov, you should find test cases that demonstrate this behavior, along with tools for monitoring actual CPU usage on a variety of platforms, though I notice no mention of any of that in the paper.
See http://www.boblycat.org/~malc/apc/ [boblycat.org] for the tool and 'invisible CPU hog' test case.
Re:What the?! (Score:3, Informative)
Re:What does this mean? (Score:5, Informative)
Re:Google-cache article (Score:5, Informative)
It works by avoiding running during the exact moment of a clock tick (the moment when per-process CPU usage is sampled). Starting to run immediately after a clock tick is (apparently) easy, but stopping before the next tick is harder. The paper suggests using a cycle-counter instruction (something like get_cycles, i.e. RDTSC on x86) to measure how many CPU cycles there are per clock tick, and then gauging when the next tick will occur by counting how many cycles have elapsed.
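A minimal sketch of that timing logic in Python, with a simulated cycle count standing in for the real get_cycles/RDTSC reading (the CPU frequency, HZ value, and safety margin below are invented numbers, not from the paper):

```python
CYCLES_PER_TICK = 3_000_000  # assumed: 3 GHz CPU with HZ = 1000
SAFETY_MARGIN = 300_000      # stop working this many cycles before the next tick

def should_keep_running(now_cycles, last_tick_cycles):
    """True while we are safely inside the current tick interval.

    The cheater busy-works while this holds, then sleeps so that it is
    not the running process when the next tick (and sample) fires.
    """
    elapsed = now_cycles - last_tick_cycles
    return elapsed < CYCLES_PER_TICK - SAFETY_MARGIN

# Just after a tick: keep computing. Close to the next tick: go to sleep.
last_tick = 12_000_000
print(should_keep_running(last_tick + 100_000, last_tick))    # just woke up
print(should_keep_running(last_tick + 2_800_000, last_tick))  # tick imminent
```

The hard part on real hardware is calibrating CYCLES_PER_TICK; the paper's suggestion is to derive it by counting cycles between observed ticks.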
How It Works (Score:5, Informative)
This is accomplished by sleeping for a fixed amount in between OS clock ticks. The timeline looks like this: the process starts computing right after each tick and is asleep again ("z") by the time the next tick fires, so the sampler never catches it running:

tick          tick          tick
 |[==run==]zzz|[==run==]zzz|[==run==]zzz
Re:Old news (Score:5, Informative)
Linux 2.6.21 is probably immune too (Score:5, Informative)
According to the paper, the reason Mac OS X is not vulnerable is that it uses one-shot timers scheduled for exactly when the next event needs to occur, rather than periodic "ticks" with a fixed interval between them. The "tickless idle" feature introduced in Linux 2.6.21 (currently only on x86, I believe) takes the same approach, and very possibly makes Linux immune too.
(Ironically, immediately after discussing OS X's ticklessness, the paper mentions that "the Linux 2.6.16 kernel source tree contains 8,997 occurrences of the tick frequency HZ macro, spanning 3,199 files", to illustrate how difficult it is to take a tick-based kernel and make it tickless. But those kernel hackers went and did it anyway.)
The tickless feature isn't yet implemented on all architectures that Linux supports, though. I think AMD64 support for it is supposed to come in 2.6.23, along with the new CFS scheduler.
Re:Sounds great in some respects. (Score:3, Informative)
Fixed recently in Linux (Score:5, Informative)
CFS additionally removes the interactivity boost: instead of extra time, interactive tasks get only quick access to the time they are already entitled to, which is what they really benefit from.
Re:What the?! (Score:3, Informative)
Here's the difference (Score:3, Informative)
What 'saved' the Mac OS was its different use of timing triggers. "All" other OSes use one common, steadily ticking clock as a dealer of time slots. This lets the cheat "skip to the start of the line (queue)" every time it's had its turn.
OTOH, the Mac uses a stack of alarms set for specific points in the future, polled in order as they occur. So the difference on Mac OS is that there's no skipping the queue; rather, "there is no queue, we'll call you when it's your turn".
I don't know the details of the OpenBSD scheduler, but it very likely uses the same (clock tick) method as the rest of the susceptible OSes.
Re:sweet! (Score:1, Informative)
http://docs.sun.com/app/docs/doc/817-6223/6mlkidl
Summary and Questions (Score:5, Informative)
Most OSes (Linux, Solaris, Windows but not Mac OS X) are tick-based. This means that the kernel is called from hardware periodically (this is the "HZ" value you set in the Linux kernel). Some of them (Linux) simply check which process is running at each tick and compute statistics based on that ("sample-based statistics"). This means that the process running when the tick happens is billed for the entire period of the tick.
Since ticks are "long" (typically 1-10 ms on Linux), more than one process may run during this period. In other words, this approach leads to inaccuracies in process billing. If all programs "play by the rules", though, it works quite well on average.
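A toy simulation (not real kernel code) makes the billing inaccuracy concrete: whichever process happens to be on the CPU when a tick fires is charged for the whole tick, even if another process did almost all of the work in between. The run pattern and numbers below are invented for illustration:

```python
def sample_bill(schedule, tick_every):
    """Sample-based accounting: schedule is an ordered list of
    (pid, cycles) runs; bill one whole tick to whichever pid is
    running when each tick fires."""
    bills = {}
    now = 0
    next_tick = tick_every
    for pid, cycles in schedule:
        end = now + cycles
        while next_tick <= end:          # every tick that fires during this run
            bills[pid] = bills.get(pid, 0) + 1
            next_tick += tick_every
        now = end
    return bills

# A does 90% of the work, but B always holds the CPU when the tick fires:
schedule = [("A", 90), ("B", 10)] * 4
print(sample_bill(schedule, 100))   # {'B': 4} -- B is billed for everything
```

This is exactly the loophole the cheat utility aims at: arrange to be process A, never B.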
Next thing: the classic schedulers typically maintain some sort of "priority" value for each process, which decreases whenever the process is running and increases when it's not. This means that a process runs for some time, its priority decreases, and then another process (which hasn't been running for some time) takes over.
You can exploit that by always sleeping when a tick happens and running only in between ticks. This makes the kernel think that your process is never running and gives it a high priority. So, when your process wakes up just after a tick, it will have a higher priority than most other processes and be given the CPU. If it goes to sleep again just before the next tick, its priority will not be decreased. Your process will (almost) always run when it wants to, while the kernel thinks it's (almost) never running and keeps its priority high. You win!
Another aspect is that modern kernels (at least Linux and Windows) distinguish between "interactive" (e.g. media players) and "non-interactive" processes. They do so by looking at how often a process goes to sleep voluntarily. An interactive program (such as a media player) will have many voluntary sleeps (e.g. in between displaying frames), while a non-interactive program (e.g. a compiler or some number-crunching program) will likely never go to sleep voluntarily. The scheduler gives the interactive programs an additional priority boost.
Since the cheating programs go to sleep very often (at every tick) the kernel thinks they're "very interactive", which makes the situation worse.
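The heuristic can be caricatured in a few lines (the threshold and classification rule here are invented, just to show the shape of the problem): a process that voluntarily sleeps at nearly every tick is indistinguishable from a media player, so the cheater inherits the boost meant for interactive tasks:

```python
def interactivity_boost(voluntary_sleeps, ticks_observed, threshold=0.5):
    """Toy heuristic: classify a process as interactive (boost = 1)
    if it slept voluntarily in most of the observed tick periods."""
    return 1 if voluntary_sleeps / ticks_observed > threshold else 0

print(interactivity_boost(95, 100))   # media player: many voluntary sleeps
print(interactivity_boost(0, 100))    # compiler: never sleeps voluntarily
print(interactivity_boost(100, 100))  # the cheater sleeps at *every* tick
```

By sleeping at every single tick, the cheater maxes out exactly the signal the scheduler uses to detect interactivity.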
Some of the analyzed OSes - even if tick-based - keep accurate (not sample-based) process statistics in the kernel, but still use sample-based statistics for scheduling decisions. So the kernel sees that a process is taking more CPU than it should, yet keeps on scheduling it anyway.
Mac OS X is not affected because it has a tickless kernel (i.e. without periodic timer interrupts). Because of that, sample-based statistics don't work and it has to use accurate statistics, which makes it unaffected by the bug.
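For contrast with tick sampling, a sketch of accurate accounting (the tickless approach): charge each process for the cycles it actually consumed, measured at every context switch, so hiding between ticks buys nothing. Again a toy model with invented numbers:

```python
def accurate_bill(schedule):
    """Accurate accounting: schedule is an ordered list of (pid, cycles)
    runs; each process is charged exactly the cycles it consumed."""
    bills = {}
    for pid, cycles in schedule:
        bills[pid] = bills.get(pid, 0) + cycles
    return bills

# The same run pattern that evades tick sampling: A does 90% of the work
schedule = [("A", 90), ("B", 10)] * 4
print(accurate_bill(schedule))   # {'A': 360, 'B': 40} -- no hiding possible
```

With this accounting the cheat's careful timing is irrelevant: running between ticks costs exactly as much as running across them.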
This bug can be exploited to (at least):
- get more CPU than you're supposed to
- hinder other programs in their normal work
- hide malicious programs (such as rootkits) which do work in the background
Here's a list with the OSes (this USED TO BE a nicely formatted table, but the darned Slashdot "lameness filter" forced me to remove much of the nice lines and the "ecode" tag collapses whitespace):

OS           | Process statistics | Scheduler decisions | Interactive decision | Affected
Linux        | sample             | sample              | yes                  | yes
Solaris      | accurate           | sample              | ?                    | yes
FreeBSD 4BSD | ?                  | sample              | no?                  | yes
FreeBSD ULE  | ?                  | sample              | yes                  | yes
Windows      | accurate           | sample              | yes                  | yes
Mac OS X     | accurate           | accurate            | not needed?          | no
I guess that Mac OS X doesn't need an interactive/non-interactive distinction because of its different (tickless) approach. I assume that interactive applications can (implicitly or explicitly) be recognized as such in a different way. Does anyone have more information on that?
How does tickless Linux compare?
Re:Linux 2.6.21 is probably immune: RDTSC? (Score:3, Informative)
Re:Summary and Questions (Score:4, Informative)
Vista changed to counting the actual CPU cycle-count register. The goal was to prevent process starvation in high-I/O situations, but it addresses this issue as well.
http://www.microsoft.com/technet/technetmag/issue
Re:It was news,... in 1980 (Score:3, Informative)
Not exactly. This is a technique that will, in principle, work with any scheduler that prioritizes tasks on the basis of time ticks previously used by the task. That turns out to be most of them. The technique does not require being an I/O driver or other special task, or having unusual user privileges.
So yes, it IS news.