Secretly Monopolizing the CPU Without Being Root 250

An anonymous reader writes "This year's Usenix security symposium includes a paper that implements a 'cheat' utility, which allows any non-privileged user to run a program, e.g. 'cheat 99% program', thereby ensuring that the program gets 99% of the CPU cycles regardless of any other applications in the system, and in some cases (like Linux) in a way that keeps the program invisible to CPU monitoring tools (like 'top'). The utility uses only standard interfaces and can be trivially implemented by any non-privileged beginner programmer. Recent efforts to improve support for multimedia applications make systems more susceptible to the attack. All prevalent operating systems but Mac OS X are vulnerable, though according to this kerneltrap story, the new CFS Linux scheduler attempts to address the problems raised by the paper."
  • by Bios_Hakr ( 68586 ) <xptical@@@gmail...com> on Wednesday July 11, 2007 @10:38AM (#19825147)
    I run several websites off of a single host. If I need to log in to do maintenance during peak hours, I'm slowed by Apache and MySQL. This would be a nice utility if it were wrapped into sudo.
  • Google-cache article (Score:3, Informative)

    by Anonymous Coward on Wednesday July 11, 2007 @10:38AM (#19825153)
    For those harboring poisonous grudges against PDFs, the Googlerised HTML version is here [72.14.235.104].
    • by brunascle ( 994197 ) on Wednesday July 11, 2007 @11:08AM (#19825447)
      And for those who don't have the time to read the paper...

      It works by avoiding running during the exact moment of a clock tick (which is the moment when per-process CPU usage is checked). Starting to run immediately after a clock tick is (apparently) easy, but stopping before the next tick is harder. The paper suggests using a get_cycles-style assembly instruction to count how many CPU cycles there are per clock tick, and using that number to gauge when the next clock tick will occur by counting how many cycles have elapsed.
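
      A minimal sketch of that calibration, in C (illustrative, not the paper's actual code; it assumes an x86 TSC and a tick-based kernel where a zero-length nanosleep blocks until the next tick):

      #include <stdint.h>
      #include <stdio.h>
      #include <time.h>

      /* Read the x86 time-stamp counter, i.e. the "get_cycles" idea above. */
      static inline uint64_t rdtsc(void)
      {
          uint32_t lo, hi;
          __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
          return ((uint64_t)hi << 32) | lo;
      }

      /* Estimate cycles per clock tick: each zero-length sleep wakes us
         just after a tick, so the TSC delta between two consecutive
         wake-ups spans exactly one tick. */
      static uint64_t cycles_per_tick(void)
      {
          struct timespec zero = { 0, 0 };
          uint64_t start;

          nanosleep(&zero, NULL);   /* wake just after a tick */
          start = rdtsc();
          nanosleep(&zero, NULL);   /* wake just after the next one */
          return rdtsc() - start;
      }

      int main(void)
      {
          printf("~%llu cycles per tick\n",
                 (unsigned long long)cycles_per_tick());
          return 0;
      }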
    • Re: (Score:3, Insightful)

      For those harboring poisonous grudges against PDFs...
      Speaking of userland processes using 99% cpu...
  • gnome (Score:3, Funny)

    by dattaway ( 3088 ) on Wednesday July 11, 2007 @10:40AM (#19825175) Homepage Journal
    The gnome desktop for years has been hiding processes that h0rk the cpu.
  • What the?! (Score:5, Funny)

    by Rik Sweeney ( 471717 ) on Wednesday July 11, 2007 @10:41AM (#19825183) Homepage
    Using up 99% of the CPU's easy!

    #include <stdio.h>

    int main(int argc, char *argv[])
    {
          while (1) {}

          return 0;
    }
    • But YOU have the privilege to eat all of your system resources. The point of the article is that an unprivileged user can while-lock your system and your OS will have no idea.
    • Re: (Score:3, Informative)

      by AKAImBatman ( 238306 ) *
      This is a bit different. It's a way to convince the OS to give you more time slices than you'd normally be allocated. E.g., if you ran that program of yours twice at the same priority level, both instances should get ~50% of the CPU time. If one of the instances implemented this priority-boosting scheme, however, it would get to hog all the CPU time while your other spinning program starved.
  • Old news (Score:4, Informative)

    by Edward Kmett ( 123105 ) on Wednesday July 11, 2007 @10:44AM (#19825221) Homepage
    Not quite sure what justifies a paper out of this.

    If you check the linux kernel mailing list for Vassili Karpov, you should find test cases that demonstrate this behavior and tools for monitoring actual CPU usage for a variety of platforms, though I notice no mention of any of that in the paper.

    See http://www.boblycat.org/~malc/apc/ [boblycat.org] for the tool and 'invisible CPU hog' test case.
  • ok (Score:3, Interesting)

    by nomadic ( 141991 ) <nomadicworld@gm[ ].com ['ail' in gap]> on Wednesday July 11, 2007 @10:44AM (#19825223) Homepage
    Back in my day we called it renice.

    Yes, I'm kidding. Please don't post a long reply explaining how renice differs from this cheat thing. It isn't necessary.
    • Back in my day we called it renice. Yes, I'm kidding. Please don't post a long reply explaining how renice differs from this cheat thing. It isn't necessary.
      My good sir, you take all of the fun out of trolling slashdot while at work! Now I have no excuse to avoid working on the dbase (Access and VBA, ugh). Jerk.
  • I seem to recall usenet discussions about this circa the time of !uucp!newsglop!..... It seemed the Unix scheduler would let certain IO operations hog the CPU. And if you somehow installed your app as an IO driver or IO completion routine, then your app could hog the CPU. Similarly, since day one of Windows sound cards, you could set your app to realtime_priority and everything else would suffer. Not exactly smokin' hot off the press.
    • by phasm42 ( 588479 )
      That's not what the paper talks about. The vulnerability is that the scheduler gathers statistics (used to make scheduling decisions) by checking who is running at every clock tick. By running only between clock ticks and never running at the time of a clock tick, your process can use a lot of CPU without the scheduler knowing.
    • Re: (Score:3, Informative)

      by vtcodger ( 957785 )
      ***I seem to recall usenet discussions about this circa the time of !uucp!newsglop!..... Not exactly smokin' hot off the press.***

      Not exactly. This is a technique that will, in principle, work with any scheduler that prioritizes tasks on the basis of time ticks previously used by the task. That turns out to be most of them. The technique does not require being an I/O driver or other special task, or having unusual user privileges.

      So yes, it IS news.

  • This year's Usenix security symposium includes a paper that implements a "cheat" utility, which allows any non-privileged user to run his/her program, e.g., like so 'cheat 99% program' thereby insuring that the programs would get 99% of the CPU cycles, regardless of the presence of any other applications in the system, and in some cases (like Linux), in a way that keeps the program invisible from CPU monitoring tools (like 'top').

    Next up, a virus which senses bad grammar and punishes you by using 99% of

  • I've even gone as far as compiling a minimal Linux distribution for one of my test machines so my CPU-intensive application can squeeze out every last drop of performance possible. Beyond the normal renice -20

    Curious how this works.

    • Re: (Score:3, Informative)

      by cnettel ( 836611 )
      It works by sleeping at the right point in time. You really chop up the timeslices and decrease overall efficiency (more context switches), so it's only good if you want to steal cycles you're not really allowed to have.
  • by ivan_w ( 1115485 ) on Wednesday July 11, 2007 @11:05AM (#19825423) Homepage
    I wasn't aware the schedulers for those systems were so deficient!

    In my day (yes, I'm an old fart), schedulers had basic principles:

    - Voluntary yielding still got you accounted for the time you spent running.
    - You could stay in the interactive queue for only a certain amount of time. After a few seconds you were either bumped to the non-interactive queue if you were running (with longer time slices but lower priority), or removed from the scheduler list for good if the time spent there was idle. They had a special 'idle but interactive' (not eligible for dispatching) queue for that.
    - Scheduling a new task started a new time slice.

    That particular scheduler even had a 3-queue system, so that if you got accidentally bumped into the non-interactive queue, or if your process was semi-interactive, you had a better chance of regaining interactive status. And they had a 'really' non-interactive queue for those CPU-hogging processes.

    Of course, this requires the hardware to have a precise timing feature (something with granularity finer than the scheduling time slice, ideally on the order of instruction execution time). And this scheduler wasn't using time sampling and time quanta, but something more like the OS X on-demand timer paradigm.

    --Ivan
  • How It Works (Score:5, Informative)

    by Shimmer ( 3036 ) on Wednesday July 11, 2007 @11:08AM (#19825453) Journal
    The cheat program hogs the CPU by using it when the host OS isn't looking. As a result, it avoids the scrutiny of the OS's scheduler and is actually given a priority boost by some schedulers because of its good behavior.

    This is accomplished by going to sleep just before each OS clock tick. The timeline looks like this:
    1. Hardware is set to generate a "tick" event every N milliseconds.
    2. Tick event occurs, which is handled by the OS.
    3. OS notes which process is currently running on the CPU and bills it for this tick.
    4. OS wakes up cheating process, which is currently sleeping, and allows it to run.
    5. Cheating process runs for M (< N) milliseconds, then requests to go to sleep for 0 milliseconds. This causes the cheating process to sleep until just after the next tick.
    6. Repeat from step 2 above.
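
    Roughly, in C (a minimal sketch of that loop; the 2,000,000 cycles-per-tick figure is an assumption standing in for a real calibration, and the trick relies on a tick-based kernel where a zero-length sleep blocks until the next tick):

    #include <stdint.h>
    #include <time.h>

    static inline uint64_t rdtsc(void)
    {
        uint32_t lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    int main(void)
    {
        struct timespec zero = { 0, 0 };
        /* ASSUMED: ~2 GHz CPU at HZ=1000, so ~2M cycles per tick.
           A real cheater would measure this at startup. */
        const uint64_t cycles_per_tick = 2000000;
        const uint64_t budget = cycles_per_tick * 9 / 10;  /* use ~90% */

        for (;;) {
            uint64_t start = rdtsc();
            while (rdtsc() - start < budget)
                ;                 /* step 5: burn cycles between ticks */
            /* Zero-length sleep: we block until just after the next
               tick, so the tick never catches us on the CPU and
               steps 2-4 bill someone else. */
            nanosleep(&zero, NULL);
        }
        return 0;
    }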
    • Re: (Score:2, Funny)

      by Anonymous Coward
      Like in Superman 3.
    • Wait a sec, doesn't the OS know when it does a task-switch and do the timing and billing right then?

      Doing the billing on a clock tick sounds like a recipe for failure.

      • I agree. Using spacetime counters would prevent the problem. But to do that properly and with decent accuracy would require 64-bit counters at nanosecond resolution.
    • Red Light Green Light.
      Your goal is to run as quickly as you can towards me. When I turn to face you and say "Red Light", you must stop moving; if I catch you moving, I make you start over from the beginning. When I'm not looking and say "Green Light", you can move again.

      In this case, the goal is to cover the greatest total distance instead of just reaching my position; so we could adapt it from running to eating: The "winner" is the one who eats the most and the losers end up hungry.
  • by Thagg ( 9904 ) <thadbeier@gmail.com> on Wednesday July 11, 2007 @11:10AM (#19825485) Journal
    We had a user who insisted on abusing the "nice" command, to run his jobs at a higher priority. Pleading and cajoling didn't work, so we decided to get creative.

    We changed nice so that whenever this particular user ran it, it lowered his priority by exactly as much as he was attempting to raise it.

    He stopped coming to work soon after that. I suppose he had the last laugh though -- NYIT continued to pay him for another six months.

    Thad
    • What system is this that allows "nice" to raise priority for users other than root?

      And, you do realize that "nice" with a positive argument lowers priority.
      • What system is this that allows "nice" to raise priority for users other than root?

        I don't know the answer, but there are a LOT of UNIX-like operating systems out there, and contrary to popular belief, they don't all work the same.

  • Does it work on Solaris? If so, I can run my sparse distributed memory simulator on the comp sci dept's main server without waiting hours to get results!
  • All prevalent operating systems but Mac OS X are vulnerable
    How does this reflect on the BSDs? (FreeBSD for being the closest relative, and OpenBSD for its goal of trying "to be the #1 most secure operating system")
    • (reply to self after RTFA)

      What 'saved' the Mac OS was its different use of timing triggers. "All" other OSes use one common, steadily ticking clock as a dealer of time slots. This allows the cheat to "skip to the start of the line (queue)" every time it's had its turn.

      OTOH, the Mac uses a stack of alarms set to specific points in the future, polled in order as they occur. So the difference on Mac OS is that there's no skipping the queue; it's rather "there is no queue, we'll call you when it's your turn."
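
      A conceptual sketch of the two schemes in C (illustrative stubs only; the function names here are made up, not kernel APIs):

      #include <stdint.h>
      #include <stdio.h>

      /* Made-up stubs standing in for kernel internals. */
      static void bill_current_process(void) { puts("bill whoever runs now"); }
      static void set_hardware_timer(uint64_t t)
      {
          printf("timer -> %llu\n", (unsigned long long)t);
      }

      /* Periodic tick: the same interrupt fires every N ms, so a process
         that always sleeps across the boundary is never caught running. */
      static void periodic_tick_handler(void)
      {
          bill_current_process();
      }

      /* One-shot alarms (the Mac OS X style described above): an ordered
         queue of future events; the timer is programmed for the earliest
         one, so there is no fixed boundary to hide behind. */
      struct alarm { uint64_t deadline; struct alarm *next; };

      static void arm_next(const struct alarm *q)
      {
          if (q)
              set_hardware_timer(q->deadline);
      }

      int main(void)
      {
          struct alarm a = { 1234, NULL };
          periodic_tick_handler();
          arm_next(&a);
          return 0;
      }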
  • by Wyzard ( 110714 ) on Wednesday July 11, 2007 @11:17AM (#19825553) Homepage

    According to the paper, the reason Mac OS X is not vulnerable is that it uses one-shot timers scheduled for exactly when the next event needs to occur, rather than periodic "ticks" with a fixed interval between them. The "tickless idle" feature introduced in Linux 2.6.21 (currently only on x86, I believe) takes the same approach, and very possibly makes Linux immune too.

    (Ironically, immediately after discussing OS X's ticklessness, the paper mentions that "the Linux 2.6.16 kernel source tree contains 8,997 occurrences of the tick frequency HZ macro, spanning 3,199 files", to illustrate how difficult it is to take a tick-based kernel and make it tickless. But those kernel hackers went and did it anyway.)

    The tickless feature isn't yet implemented on all architectures that Linux supports, though. I think AMD64 support for it is supposed to come in 2.6.23, along with the new CFS scheduler.

  • by iabervon ( 1971 ) on Wednesday July 11, 2007 @11:26AM (#19825705) Homepage Journal
    They took too long to publish this. Linux 2.6.21 (released in April) added support for using one-shot timers instead of a periodic tick, so it avoids the problem like OS X does. In addition to resolving this issue, tickless is important for saving power (because the processor can stay in a low-power state for long enough to get substantial benefits compared to the power cost of starting and stopping) and for virtual hosting (where the combined load of the guest OS scheduler ticks is significant on a system with a large number of idle guests). As a side effect, while the accounting didn't change at that point, the pattern a task has to use to fool the accounting became impossible to guess.

    The CFS additionally removes the interactivity boost in favor of giving interactive tasks no extra time but rather just quick access to their available time, which is what they really benefit from.
  • The crux of the problem is that the OS uses statistical sampling to account for CPU usage by user processes. Since the sampling occurs at regular intervals, it can be avoided by a cheating program. I can see two possible defenses against this:
    1. Modify the sampling mechanism so that it occurs at irregular intervals. This makes it difficult (but probably not impossible) for the cheater to avoid the sampler. (Apparently, the Mac OS uses this technique, although not for security reasons.)
    2. Modify the accounting alg
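
    A toy model of defense 1 in C (a user-space sketch assuming a 1 ms tick; not real scheduler code):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define TICK_US 1000   /* assumed 1 ms tick period */

    /* Defense 1: sample at a random offset inside each tick period, so
       a cheater cannot predict when it will be "seen" on the CPU. */
    static uint32_t next_sample_offset(void)
    {
        return (uint32_t)(rand() % TICK_US);
    }

    int main(void)
    {
        srand(42);
        for (int i = 0; i < 5; i++)
            printf("tick %d: sample at +%u us\n", i, next_sample_offset());
        return 0;
    }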
    • Re: (Score:2, Interesting)

      by Tacvek ( 948259 )
      The second one is obviously the better one. I think this is basically what the CFS does. (The following is my understanding; it may be wrong.) It figures out what amount of time each process should have, based on the number of processes, and tracks how much time each process is owed (in the case of 5 processes, each deserves 1/5 of the total processor time). It subtracts the time used on each scheduler event (clock tick or voluntary yield). Each clock tick the scheduler transfers control to t
  • by Aaron Isotton ( 958761 ) on Wednesday July 11, 2007 @11:47AM (#19826011)
    The paper is quite long, so here's a summary (take this with a grain of salt; anyone who wants accurate information should still RTFP):

    Most OSes (Linux, Solaris, Windows, but not Mac OS X) are tick-based. This means the kernel is called periodically from a hardware timer (this is the "HZ" value you set in the Linux kernel). Some of them (Linux) simply check which process is running at each tick and compute statistics based on that ("sample-based statistics"). This means that the process running when the tick happens is billed for the entire tick period.

    Since ticks are relatively long (typically 1-10 ms on Linux), more than one process may run during a tick period. In other words, this approach leads to inaccuracies in process billing. If all programs "play by the rules", though, it works quite well on average.
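
    A toy model of that billing in C (illustrative, not kernel source): whoever is caught on the CPU at the tick pays for the whole tick.

    #include <stdio.h>

    #define NPROCS 3

    static unsigned billed_ticks[NPROCS];

    /* Sample-based accounting: the process running when the tick
       interrupt fires is charged for the ENTIRE tick, even if it ran
       for only a sliver of it. */
    static void on_tick(int running_pid)
    {
        billed_ticks[running_pid]++;
    }

    int main(void)
    {
        /* Suppose pid 2 cheats: it uses most of every tick but sleeps
           across each boundary, so pid 0 or 1 gets caught instead. */
        int caught[] = { 0, 1, 0, 1, 0, 1 };
        for (int t = 0; t < 6; t++)
            on_tick(caught[t]);
        for (int p = 0; p < NPROCS; p++)
            printf("pid %d billed %u ticks\n", p, billed_ticks[p]);
        return 0;
    }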

    Next thing: the classic schedulers typically maintain some sort of "priority" value for each process, which decreases whenever the process is running and increases when it's not. This means that a process runs for some time, its priority decreases, and then another process (which hasn't been running for some time) takes over.

    You can exploit this by always sleeping when a tick happens and running only in between ticks. This makes the kernel think that your process is never running and give it a high priority. So, when your process wakes up just after a tick, it will have a higher priority than most other processes and be given the CPU. If it goes to sleep again just before the next tick, its priority will not be decreased. Your process will (almost) always run when it wants to, and the kernel will think it's (almost) never running and keep its priority high. You win!

    Another aspect is that modern kernels (at least Linux and Windows) distinguish between "interactive" (e.g. media players) and "non-interactive" processes. They do so by looking at how often a process goes to sleep voluntarily. An interactive program (such as a media player) will have many voluntary sleeps (e.g. in between displaying frames), while a non-interactive program (e.g. a compiler or some number-crunching program) will likely never go to sleep voluntarily. The scheduler gives interactive programs an additional priority boost.

    Since the cheating programs go to sleep very often (at every tick) the kernel thinks they're "very interactive", which makes the situation worse.
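
    A toy model of that heuristic in C (the formula is made up for illustration; real kernels use more elaborate sleep-average bookkeeping):

    #include <stdio.h>

    #define MAX_BONUS 10

    /* The more often a process sleeps voluntarily, the bigger its
       priority bonus, which is exactly what the sleep-every-tick
       cheater maximizes. */
    static int interactivity_bonus(int voluntary_sleeps, int total_slices)
    {
        if (total_slices == 0)
            return 0;
        return MAX_BONUS * voluntary_sleeps / total_slices;
    }

    int main(void)
    {
        printf("compiler (never sleeps):     bonus %d\n", interactivity_bonus(0, 100));
        printf("media player:                bonus %d\n", interactivity_bonus(30, 100));
        printf("cheater (sleeps every tick): bonus %d\n", interactivity_bonus(100, 100));
        return 0;
    }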

    Some of the analyzed OSes, even if tick-based, keep accurate per-process statistics in the kernel but still use sample-based statistics for scheduling decisions. So the kernel sees that a process is taking more CPU than it should, but it keeps on scheduling it anyway.

    Mac OS X is not affected because it has a tickless kernel (i.e. no periodic interrupts). Because of that, sample-based statistics don't work and it has to use accurate statistics, which makes it unaffected by the bug.

    This bug can be exploited to (at least)

    - get more CPU than you're supposed to
    - hinder other programs in their normal work
    - hide malicious programs (such as rootkits) which do work in the background

    Here's a table with the OSes (the darned Slashdot "lameness filter" ate the original formatting):

    OS            Process statistics   Scheduler decisions   Interactive/non-interactive   Affected
    Linux         sample               sample                yes                           yes
    Solaris       accurate             sample                ?                             yes
    FreeBSD 4BSD  ?                    sample                no?                           yes
    FreeBSD ULE   ?                    sample                yes                           yes
    Windows       accurate             sample                yes                           yes
    Mac OS X      accurate             accurate              not needed?                   no

    I guess that Mac OS X doesn't need an interactive/non-interactive distinction because of its different (tickless) approach. I assume that interactive applications can (implicitly or explicitly) be recognized as such in a different way. Does anyone have more information on that?

    How does tickless Linux compare? What abo
  • by redelm ( 54142 ) on Wednesday July 11, 2007 @11:53AM (#19826093) Homepage
    Yield()ing just before the timer tick is a neat trick to grab cycles, but what use are the cycles? This might have been interesting on time-share machines 20 years ago, but now cycles are in gross surplus on most machines, and processes are carefully controlled on loaded machines. Until this piggy can be remotely deployed, it isn't much of a hazard.

    A very simple patch is to issue RDTSC instructions at process restart and at blocking syscalls to count the cycles actually used. That way the extensive tick code doesn't need to be modified.
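
    Something along those lines, sketched in C (a user-space model of the bookkeeping; a real kernel would read the TSC in its context-switch and syscall paths):

    #include <stdint.h>
    #include <stdio.h>

    static inline uint64_t rdtsc(void)
    {
        uint32_t lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    static uint64_t t_in;

    /* Charge a process for exactly the cycles it consumed between being
       scheduled in and blocking or being switched out, instead of
       sampling at tick boundaries. */
    static void sched_in(void) { t_in = rdtsc(); }
    static uint64_t sched_out(void) { return rdtsc() - t_in; }

    int main(void)
    {
        volatile unsigned long x = 0;

        sched_in();
        for (unsigned long i = 0; i < 1000000; i++)
            x += i;                      /* stand-in for real work */
        printf("consumed %llu cycles\n", (unsigned long long)sched_out());
        return 0;
    }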

    • The point of the paper is that you could have some malware using 99% of your CPU and it wouldn't even show up in top.
      • by redelm ( 54142 )
        I think it shows in `top` as sleeping. What malware needs cycles? Mostly they want ports (esp 25 SMTP outbound) or perhaps disk (searching). Protect the resources that need protecting!

  • Way back in the '90s (Score:2, Interesting)

    by kithrup ( 778358 )

    Chris Torek gave a presentation at USENIX about how a constant quantum could result in a process having its CPU usage go unaccounted.

    His solution was to use a randomized quantum: not unique per process, but randomized each time the kernel starts running a process. That gave you better accounting of CPU time (statistics, doncha know :)), but also made this kind of attack much, much harder.

    I'm somewhat disappointed that I did not see Chris and Steven's paper referenced in this one. (I believe that the t

  • Why is this new? (Score:3, Insightful)

    by Quixadhal ( 45024 ) on Wednesday July 11, 2007 @04:35PM (#19829923) Homepage Journal
    Nothing new here.

    I remember seeing this done on the VAX/VMS mainframe back in 1987. In that environment, it simply meant that you kept track of your timeslice and voluntarily gave it up before the scheduler took it away from you. That meant you got put at the top of the run queue, and unless someone else was doing the same thing, you were the next program to run. Voila... 99% CPU for you!

    Of course, ordinary users were given a limited amount of CPU time (as well as connect time, disk space, etc), so for the ordinary student, this just meant they used it up in a day or two instead of having a whole month. But then again, for class accounts, they could usually beg for more.

    Under Unix variants, one could do the same by implementing CPU quotas at the user level. I've seen network packet quotas, and I'm sure someone out there has done CPU quotas along the same lines.
