
Hope For Fixing Longstanding Linux I/O Wait Bug 180

Posted by samzenpus
from the patch-it-up dept.
DaGoodBoy writes "There has been a long-standing performance bug in Linux since 2.6.18 that has been responsible for lagging interactivity and poor system performance across all architectures. It has been notoriously difficult to qualify and isolate, but in the last few days someone has finally produced a repeatable test case! Turns out the problem may not even be disk-related, since the test case triggers the bug just by transferring data between two processes or between two threads. The test results are very revealing. The developer ran regressions all the way back to version 2.6.15 that demonstrate this bug has more than doubled the time to run the test in 2.6.28. The many, many people working on improving the desktop performance of Linux will be very happy to see this bug die. I know that I, personally, will find a way to send the guy who found this test case his beverage of choice in thanks. Please spread the word and bring some attention to this issue so we can get it fixed!"
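Since the reported trigger is pure process-to-process data transfer rather than disk access, a rough shell approximation of that kind of load (an illustrative sketch only, not the actual test case attached to the bug report) might look like this:

```shell
# Sustained process-to-process transfer through a pipe, standing in
# for the kind of load the bug report describes. Sizes are arbitrary.
dd if=/dev/zero bs=1M count=256 2>/dev/null | cat > /dev/null &
XFER=$!

# While the transfer runs, time a trivial task; on an affected kernel,
# small interactive operations like this are what reportedly stall.
START=$(date +%s)
sleep 1
END=$(date +%s)
ELAPSED=$((END - START))
echo "trivial task took ${ELAPSED}s during the transfer"

wait "$XFER"
```

On a healthy system the trivial task finishes in about a second; the claim in the bug report is that on affected kernels it can take far longer while the transfer runs.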
This discussion has been archived. No new comments can be posted.

  • Dang!! (Score:5, Funny)

    by camperdave (969942) on Thursday January 15, 2009 @01:03AM (#26461647) Journal
    Dang! I was going for First Post, but my machine was stuck in some weird I/O wait state.
    • Re: (Score:3, Funny)

      Damn futex_wait states!
    • Re: (Score:2, Funny)

      by Aphoxema (1088507)

      Oh, god, I can't read Slashdot commentary and drink fluids at the same time. I never know when something is really going to be funny, and I just found out what happens when I stumble across something hilarious while chugging a bottle of water.

  • by whoever57 (658626) on Thursday January 15, 2009 @01:08AM (#26461697) Journal
    bugzilla.kernel.org?
  • KTorrent (Score:2, Interesting)

    by Anonymous Coward

    I'm not sure if this is related, but has anyone else noticed KTorrent can really bog your system down without showing any excessive resource usage in KSysGuard? For all I know, it may be passing information between one thread and another, and it's disk I/O intensive.

    • Re: (Score:2, Informative)

      There was a bug in KTorrent that caused an infinite loop when UDP trackers were present in a torrent file; maybe check whether you have the latest version.

    • ktorrent has many confusing bugs in it. I was having trouble running it in OpenBSD-current at some point because the OBSD developers fixed something (probably gcc) that uncovered a bug in ktorrent. A huge memory leak and strange statistics led me to the solution: time was stuck at zero because of a line chock-full of type casts. Still, it has more elusive problems. It would abruptly crash or just go blank, occurring anywhere from 1-24 hours. How do you debug something like that?
    • ktorrent isn't great with resources, but I find nice & ionice can stop it slowing down desktop performance. I often wonder why the currently active program isn't given a nice boost, though, so I don't need to remember to tell the background programs (torrent, email, IRC, etc.)
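For reference, the nice & ionice combination mentioned above might look like this sketch. `ionice` ships with util-linux; its idle class (`-c 3`) only gets disk time when nothing else wants it, and `nice -n 19` is the lowest CPU priority:

```shell
# Run a command at the lowest CPU priority and, where ionice exists,
# at idle I/O priority so it yields the disk to interactive programs.
run_polite() {
    if command -v ionice >/dev/null 2>&1; then
        nice -n 19 ionice -c 3 "$@"
    else
        nice -n 19 "$@"
    fi
}

# Example: start any background job (torrent client, backup) politely.
OUT=$(run_polite echo "started politely")
echo "$OUT"
```

Substitute the real program (ktorrent, rsync, etc.) for the `echo` placeholder.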

  • by akpoff (683177) on Thursday January 15, 2009 @01:09AM (#26461713) Homepage

    Right. I had to get up in the morning at ten o'clock at night, half an hour before I went to bed, eat a lump of cold poison, work twenty-nine hours a day down mill, and pay mill owner for permission to come to work, and when we got home, our Dad would kill us, and dance about on our graves singing "Hallelujah." --Monty Python: Four Yorkshiremen [davidpbrown.co.uk]

    Been waiting all of 2 years and change for your precious bug fix, 'ave you? You almost had my eyes tearing up there I tell ya: 25 Year Old BSD Bug [slashdot.org].

  • Desktop??? (Score:5, Insightful)

    by corychristison (951993) on Thursday January 15, 2009 @01:12AM (#26461735)

    I'm not sure about anybody else here, but I was surprised to see that they mentioned that this will benefit 'Desktop' users.

    I think that when it comes to the performance spectrum, Servers would be where this fix is the most needed. Admittedly if you are running a solid server, you should know to use older gen hardware and software that has been proven to be stable. However, some of this 'shiny new' tech coming out is appealing.

    How about the Seagate 1500GB drive hang error? To my understanding it has been fixed on Windows, but the problem still persists in Linux. Could this potentially make a difference? I've been looking to build myself a nice NAS and those 1500GB drives are _cheap_. I can pick one up for about $160. I remember not too long ago that could only get me 80GB.

    • Re: (Score:1, Interesting)

      by Anonymous Coward

      I think that when it comes to the performance spectrum, Servers would be where this fix is the most needed.

      Nope, read the bug. Throughput remains ok, it's the interactivity that suffers.

      This is one of those bugs that no Linux developer will admit to until they reckon they have a fix for it. Then we're supposed to be happy, even though people have been complaining about it for years. Oh well, beggars can't be choosers.

      I've been ionice'ing my backups and a few other tasks because of this issue, so it'll be n

      • Re: (Score:3, Interesting)

        by Compholio (770966)
        I'm actually pretty sure that I've spotted the results of this in "everyday" use. I've noticed that every once in a long while my hard-drive activity kicks up (it's happened when I'm just scrolling on an already-loaded web page and I'm using absolutely zero swap) and literally everything stops responding for a good 5 seconds. My guess would be that the slocate or "tracker" program spawns off on recently added and removed files, but it's not something I've put a lot of effort into figuring out.
      • by sveard (1076275) *

        I've been ionice'ing my backups

        Ionizing? That can't be good for magnetic storage!

      • by gbjbaanb (229885)

        This is one of those bugs that no Linux developer will admit to until they reckon they have a fix for it.

        I suppose we see a lot of this in the OSS world, it reminds me of the Firefox "not a memory leak" bug that only became a bug once it had been fixed.

        It's just developers' pride showing its lesser aspect.

    • Re:Desktop??? (Score:4, Informative)

      by Anonymous Coward on Thursday January 15, 2009 @01:28AM (#26461853)

      I believe the 1.5TB Seagate Linux hang has been fixed. We're using a lot of them (hundreds) where I work on Ubuntu Hardy servers and haven't had hangs.

    • by jd (1658)

      I remember when that would buy you 60 megabytes! (Hell, I remember when ONE meg drives cost eight times that.)

      If you're running a solid server, you know that mechanical devices are (a) slow, and (b) most under strain when doing anything useful, so you tend to avoid using them when at all possible. Servers should do as much as possible via a RAM-based cache -or- use a RAM disk for data that copies to the hard drive only when necessary.

      (So long as RAM is battery-backed, even if the machine crashes or the powe
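The RAM-first pattern described above can be sketched with tmpfs. A dedicated mount needs root (`mount -t tmpfs -o size=512m tmpfs /mnt/ramdata`), but most Linux distros already mount a tmpfs at /dev/shm, so the flush-to-disk-only-when-necessary idea looks roughly like this (all paths are examples):

```shell
# Keep hot data on a RAM-backed filesystem, flushing to persistent
# storage only when necessary. /dev/shm is tmpfs on most Linux
# systems; fall back to /tmp just so the sketch runs anywhere.
if [ -d /dev/shm ] && [ -w /dev/shm ]; then
    RAMDIR=/dev/shm/hotdata.$$
else
    RAMDIR=${TMPDIR:-/tmp}/hotdata.$$        # not RAM-backed, demo only
fi
DISKDIR=${TMPDIR:-/tmp}/persistent-copy.$$   # stand-in for a real disk path

mkdir -p "$RAMDIR" "$DISKDIR"
echo "session state" > "$RAMDIR/state.txt"   # fast, RAM-only write

# "When necessary" (e.g. from cron or at shutdown): copy back to disk.
cp "$RAMDIR/state.txt" "$DISKDIR/state.txt"
```

As the comment notes, this only survives a crash if the RAM copy can be reconstructed or the flush interval is acceptable as a loss window.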

      • Re: (Score:3, Interesting)

        by adolf (21054)

        Disk-to-disk operations would then bypass the kernel and asynchronous I/O would consume no primary resources. This was fashionable on some systems (most notably drives that used the IEEE 488 bus) in the 70s and was done to some degree with SCSI, but there's really no excuse for not providing such a capability on any modern drive.

        I bought that line, hook, line, and sinker, in the late '90s with a bunch of IBM 9ES ultra-wide SCSI disks and a good controller.

        It never was clear to me that, at any time, Linux was

        • by Nevyn (5505) *

          It never was clear to me that, at any time, Linux was actually telling the drives to copy data directly from one disk to any other without the kernel in the middle.

          IIRC it was proposed on lkml; however, it would still need to use the SCSI bus, which is where the majority of the time is spent anyway. Also, nothing else had tried to do that, so everyone was worried that it'd be turned on and would have weird failure cases (which would be very bad).

          • by adolf (21054)

            Wait. So. You mean: Nobody has ever done direct disk-to-disk SCSI transfers in a commodity OS[1]? I can't say I'm surprised, but I am a little offended[2].

            [1]: I'm sure that, somewhere, there has been at least one embedded or special-built system which accomplished this. This, obviously, doesn't count.

            [2]: I bought the big, fast SCSI disks because I needed big, and fast. But it would've been tres cool if copying would've been more efficient. Not that it ever much mattered, as you imply, but the con

      • by Nutria (679911)

        Servers should do as much as possible via a RAM-based cache

        Right. RAM is C-H-E-A-P

        use a RAM disk for data that copies to the hard drive only when necessary.

        Wrong. It assumes you know more about a dynamic system than the kernel does.

        • Re: (Score:3, Insightful)

          by jd (1658)

          The cost of RAM is not that great, compared to the cost of a high-end motherboard on a good server, and is absolutely insignificant compared to even a single hour of downtime in any kind of datacentre. If you want genuine 5N's reliability or better (and you can go a lot better than that), you want as little strain on mechanical components as you can get. There's little point in, say, using Carrier-Grade Linux if the practical lifetime of the hard drive due to usage means your hardware cannot maintain a comp

      • by NormalVisual (565491) on Thursday January 15, 2009 @03:10AM (#26462511)
        not HAL-9000 intelligence, which would be bad for data anyway

        HAL-9K intelligence doesn't pose any problems to the data - it's the *operators* that need to be concerned, especially when giving the system instructions that could potentially conflict with each other.
        • by powerlord (28156)

          Ah yes ... I hear the HAL-9K unit is especially prone to operator overload errors.

          It's also prone to occasional operator overboard errors.

      • It's not quite the same, but there are caching SCSI (and now probably SATA) controllers that take RAM modules. I used to have a Vesa Local Bus (VLB) SCSI 2 caching controller. The system had 32 megabytes of RAM and would take up to 64, while I had 16 MB in the disk controller and it'd take up to 32 MB. I gave (well, lent, but it failed about 6 years later and I hadn't asked for it back yet) that controller to a friend for a household file server he built out of old parts. He had the full 32MB of cache RAM i

    • Re:Desktop??? (Score:5, Informative)

      by cowbutt (21077) on Thursday January 15, 2009 @04:08AM (#26462825) Journal

      How about the Seagate 1500GB drive hang error? To my understanding Windows has been fixed, but the problem still persists in Linux.

      The ST31500341AS requires a firmware update from Seagate to something newer than revision SD19 (more info [kernel.org]). In the meantime, if you're using a drive which hasn't been updated to fixed firmware, there's a blacklist in the current development kernel [ozlabs.org] to disable NCQ on affected models as a workaround.
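Besides the kernel blacklist, NCQ can be switched off by hand on a per-drive basis by dropping the device's queue depth to 1. A sketch ("sda" is an example device name; the write needs root, so this only inspects and prints the command):

```shell
# Inspect a drive's NCQ queue depth; depth 1 effectively disables NCQ.
# "sda" is an example device name - adjust for the affected drive.
DEPTH_FILE=/sys/block/sda/device/queue_depth
if [ -r "$DEPTH_FILE" ]; then
    echo "current queue depth: $(cat "$DEPTH_FILE")"
else
    echo "no such device on this machine"
fi

# The actual workaround, to be run as root on an affected drive:
MSG="workaround: echo 1 > $DEPTH_FILE (as root)"
echo "$MSG"
```

This is a runtime-only change; it reverts on reboot unless scripted at boot.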

    • Re: (Score:3, Interesting)

      by BlackCreek (1004083)

      I'm not sure about anybody else here, but I was surprised to see that they mentioned that this will benefit 'Desktop' users.

      They mentioned it because it does hit the desktop: https://bugs.launchpad.net/ubuntu/+source/linux-source-2.6.22/+bug/131094 [launchpad.net]

    • There is this strange delusion that somehow Linux will become an excellent desktop platform for everyone, while most of the development doesn't go in that direction, so as not to offend the people who want it for a server.

      It makes a good server.
      A great Development workstation.
      great for Appliance applications
      good for high end embedded stuff.

      But as a desktop/laptop system for the average Joe, I doubt it will ever make it. It is just too geeky and quirky (in terms of UI, not stability) for most people to use.
      It

      • by Xabraxas (654195)

        It will work for the stereotypical Grandma who needs to click on Firefox to get to the internet and on OpenOffice Writer to write a letter. It will work fine for the high-end "Power User", but there is a gap in there for the average user, who wants to play games, mess with their photos, make presentations, listen to music, sync with their other devices, scan documents,

        I don't think any of those things are difficult to do with the exception of syncing devices. This is changing though. As far as "messing" with photo

      • The kernel folks aren't concerned so much with the desktop, because from the kernel space the desktop usually needs about the same things as a server. High speed, low latency kernel calls are good. The desktop is mostly about applications and user interfaces.

    • by CAIMLAS (41445)

      That Seagate 1.5Tb drive problem has, according to a friend of mine who foolishly jumped at buying a large handful of said disks for his data storage needs, been thankfully fixed via a firmware update. If I recall correctly.

      That said, I've run into similar problems in "solid, last generation" hardware from vendors. And of course, there are the 10+ year old bugs in Windows which have been largely worked around/with to the point where we forget they're pretty irritating/serious bugs and not just the way thing

    • by Sentry21 (8183)

      I can pick one up for about $160. I remember not too long ago that could only get me 80GB.

      Pfft, I remember not too long ago that could only get me 16% of a 20M drive.

  • Heh, well I just hit the little link and then hit the link at the top to go back to the main topic... then sent an e-mail to /.
    • by Mordocai (1353301)

      Heh, well I just hit the little link and then hit the link at the top to go back to the main topic... then sent an e-mail to /.

      Yeah, messed up there... meant to put "little comments link"

  • by Anonymous Coward on Thursday January 15, 2009 @01:18AM (#26461771)

    I'm sure kernel.org appreciates these links. Now instead of fixing the bug they're putting out fires in the data center...great job slashdot.

  • by Al Al Cool J (234559) on Thursday January 15, 2009 @01:22AM (#26461811)

    If this get resolved is there any chance the fix could get ported to Windows? I just had my Dad's XP laptop completely freeze after I plugged in a bog-basic USB thumbdrive. The desktop sprang to life only after I unplugged it. I wish some of the AC Windows fanboys who were hassling me here last week were around to see it. "Ready for the desktop" my ass.

    • Re: (Score:2, Insightful)

      by troll8901 (1397145)

      And I'm going to hassle you again.

      (Oops, forgot to check the AC option!)

      Never mind, carry on ...

      (I also have problems with U3 flash drives. I had to use basic flash drives - thus missing out on all the app portability features.)

      So THAT's why we don't have Year of the Linux Desktop! It has performance problems ... just like Vista has performance problems!

    • by hairyfeet (841228)
    Did you plug it in while it was booting? If so, there is your problem. Windows doesn't like you plugging in thumbdrives while booting, especially with certain chipsets. I have found this problem affects the Realtek and VIA chipsets more than most. If not, try removing the USB drivers from Device Manager and hitting refresh; this will allow Windows to reinstall the USB port drivers, which can sometimes fix this bug.
    • by Argon (6783)

      "Ready for the desktop" my ass:

      Hope the rest of you gets ready too :-).

  • by Harik (4023) <Harik@chaos.ao.net> on Thursday January 15, 2009 @01:50AM (#26462019)

    wow, not just a bad summary, an utterly worthless summary. Here's the relevant discussion from LKML. Yes, this is all of it.

    Andrew Morton:
    In http://bugzilla.kernel.org/show_bug.cgi?id=12309 [kernel.org] the reporters have
    identified what appears to be a sched-related performance regression.
    A fairly long-term one - post-2.6.18, perhaps.

    Testcase code has been added today. Could someone please take a look
    sometime?

    Peter Zijlstra:
    There appear to be two different bug reports in there. One about iowait,
    and one I'm not quite sure what it is about.

    The second thing shows some numbers and a test case, but I fail to see
    what the problem is with it.

    This somewhat deflates the excitement evident in the OP. I mean, I know what he's talking about - these apparently random 1-2 second FREEZES while working - but if the guys on LKML aren't talking about it, it's probably not really being worked on.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      The fscking freezes are in HAL. They have been driving me nuts for more than a year. In my case, the solution is to unplug the CDROM drive.

    • Re: (Score:2, Funny)

      by haifastudent (1267488)

      This somewhat deflates the excitement evident in the OP. I mean, I know what he's talking about - these apparently random 1-2 second FREEZES while working - but if the guys on LKML aren't talking about it, it's probably not really being worked on.

      I know, it looks like someone's pet bug made the cover of /. today. For the record, here is my pet bug: https://launchpad.net/ubuntu/+bug/1 [launchpad.net]

    • by bjourne (1034822) on Thursday January 15, 2009 @08:09AM (#26464039) Homepage Journal

      If you haven't used Linux regularly within the last two years, you probably have not noticed that the system has gotten significantly slower with more recent releases. The probable symptom was discussed here [slashdot.org]. Many Ubuntu users, including me, have noticed that the latency of desktop operations got significantly larger around the time Gutsy was released, which coincides with the Completely Fair Scheduler and kernel upgrade from 2.6.18.

      Since it is most likely a latency issue, the problem is extremely hard to diagnose. Alt-tabbing between programs seems a little slower; keyboard input might lag somewhat. You can't measure desktop latency easily.

      • by kwabbles (259554)

        CFS was introduced in 2.6.23, not 2.6.18. CFQ was introduced in 2.6.18.

        • by bjourne (1034822)
          Which is what I said. :) Feisty had the 2.6.18 kernel and was quite responsive, so CFQ is in the clear. Gutsy featured 2.6.23 with CFS and was much slower which means it is a possible suspect.
          • by kwabbles (259554) on Thursday January 15, 2009 @04:46PM (#26473043)

            "Many Ubuntu users, including me, have noticed that the latency of desktop operations got significantly larger around the time Gutsy was released, which coincides with the Completely Fair Scheduler and kernel upgrade from 2.6.18."

            Uhh... I didn't see anything in there about Completely Fair Queuing - you just mentioned the Completely Fair Scheduler, then kernel 2.6.18.

            "Feisty had the 2.6.18 kernel and was quite responsive, so CFQ is in the clear. Gutsy featured 2.6.23 with CFS and was much slower which means it is a possible suspect."

            This performance bug has been reported since 2.6.18.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      It's very easy to trigger: just unrar an ISO from a torrent. Regardless of CPU cores, copious amounts of RAM, and no other real system activity, your desktop experience will grind to a miserable halt until the archive process has completed. Renicing makes very little difference. Linux has had this problem for years, certainly more than two. Memory suggests it came along with SATA.
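A hedged sketch of that scenario - sustained write I/O from one job while you try to do something small - without needing a torrent or unrar (the file sizes here are deliberately modest examples):

```shell
# Background writer standing in for the unrar/copy job.
WORK=${TMPDIR:-/tmp}/iowait-demo.$$
mkdir -p "$WORK"
dd if=/dev/zero of="$WORK/big.bin" bs=1M count=128 conv=fsync 2>/dev/null &
WRITER=$!

# Foreground "interactive" task: a tiny write that, on an affected
# kernel, is the kind of operation that reportedly stalls.
head -c 4096 /dev/zero > "$WORK/small.bin"
SMALL_SIZE=$(wc -c < "$WORK/small.bin" | tr -d ' ')
echo "small task finished (wrote $SMALL_SIZE bytes) while writer ran"

wait "$WRITER"
rm -rf "$WORK"
```

Scale `count=` up (and point `WORK` at a real disk rather than tmpfs) to provoke a noticeable stall on an affected machine.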

      • by CAIMLAS (41445) on Thursday January 15, 2009 @03:07PM (#26471021) Homepage

        Yep, this is a pretty big problem - an easily reproducible one - and it's been around for a really long time. I don't remember exactly when it came about, but I moved from Debian Sid to Ubuntu 7.x about 8 months ago. I didn't have any problem under Debian, and I'm uncertain whether the 7.x Ubuntus had the problem, but I certainly noticed it in the 8.x releases.

        I do recall a somewhat gradual progression of desktop performance decreases, though, going all the way back to the later 2.0 kernels. Back then, the schedulers would all allow an at-the-time relatively slow machine to run a fairly bloaty window manager (like E16) responsively while untarring an archive and running a kernel build at the same time - provided there was 100+MB or so of RAM for the process, of course. Even so, if you were to dip into swap, the UI would remain pretty responsive. Not anymore.

        The way things sit now, the Linux I/O scheduler results in desktop performance similar to Windows XP during I/O ops. That is completely unacceptable.

        Part of me thinks this is due to a server-centric focus in development (being as the people doing kernel dev largely work for corporations who want server kernels), but I'm not really in the know. If that's the case, we really need to pull one of the old desktop schedulers out of retirement and use that instead of what we've got now, at least for the desktop, and maintain two different-focus schedulers within the kernel instead of just having a couple generally-suited schedulers.

  • I second this (Score:5, Interesting)

    by waslap (1453217) on Thursday January 15, 2009 @04:34AM (#26462933)
    I am overjoyed that my suspicions have finally been vindicated. I've been working 10+ hours a day on Linux for the last 13 years, and you tend to get in tune with your environment (I can still recite my DOS bootup tune on my XT even though I haven't worked on it for 20 years :-). Some time ago, after installing a new flavour of Linux, I immediately started complaining to fellow workers that something had gone wrong in the kernel, but it was not annoying enough to really do something about it; you start living with it.

    It manifests sometimes when I compile - my system simply locks up for 20-30 seconds, which is something I never experienced before. I'd say it happens once out of every 50 compiles of the same program with gcc. During such occurrences I can't access anything on my desktop, which annoys me because I typically switch to another kterm session to prepare to run the build whilst compiling (to keep up the productivity and all that).

    I have also seen strange ratios of I/O to CPU wait in 'top' nowadays, but can probably ascribe that to CPUs that just became ridiculously fast and the way top calculates its scores. Nevertheless, I've mumbled over and lambasted I/O wait in Linux ever since a very specific time in the past, and even though I haven't noted the exact date, I'm sure it's related to this. Anyway, I found this intriguing enough to create a Slashdot account after years to share my joy that the bug's days are hopefully numbered.
  • Problem is Real (Score:5, Informative)

    by Anonymous Coward on Thursday January 15, 2009 @04:50AM (#26463001)

    For what it is worth, the problem is real.

    We have experienced massive negative effects with our MySQL server; downgrading to an earlier Linux kernel solves the problem. This has been very difficult to debug, as we never guessed that the OS would be a factor... we figured it had to be something we were doing. Only by chance did we try another distro/kernel, and found that everything starts working fine when you downgrade.

  • by Builder (103701) on Thursday January 15, 2009 @07:09AM (#26463721)

    ...when you insist on doing development in the 'stable' kernel tree and expect vendors to stabilise it.

    Genius!

    • by CAIMLAS (41445)

      Exactly!

      The current crop of problems observable in the Linux kernel started roughly around the time when the development policy changed. We went from "kernel is stable and works very well for a known subset of things, and builds consistently" to "kernel is stable for some things and works decently for most things, with pretty much everything working to some extent, and barely ever builds consistently (at least from one subversion to another)."

  • by rs232 (849320) on Thursday January 15, 2009 @10:59AM (#26465561)

    I think way too many people are blaming their issues on this bug. Some of them may be valid, but others probably have something misconfigured, or maybe it only affects certain hardware. I don't experience this bug. My interactivity does not suffer when I do anything I/O intensive.
    • Re: (Score:2, Interesting)

      by Heather D (1279828)

      I am getting it. This is on Ubuntu running the 2.6.20-generic kernel that came from the distro. My backups (~19GB) are responsive but I am currently running Ben Gamari's suggested method to reproduce it and it appears to be showing up. I get 'small' freezes of ~1-3 seconds when entering text as well as larger freezes of ~5-15 seconds upon maximizing a minimized program.

      It only seems to cause a problem for maximizing minimized programs when it happens at the same time as you maximize the window. It doesn't s

      • by Xabraxas (654195)
        I've done the test and it does not happen for me. Somehow I got modded troll just because everyone and their mom is blaming their issues on this bug without even reporting test results. They are just assuming that any performance issue they are having is related to this bug. I'm not claiming this bug doesn't exist, but people are jumping to conclusions without even knowing the details of their issues.
  • The CFQ I/O scheduler was made the default in 2.6.18 and had been available since 2.6.13. I wonder if that has something to do with it. I'm going to test it out on my home machines later today and have a look-see.

    Supposedly it can be disabled and the AS scheduler used instead, either by changing it at runtime in /sys/block/hda/queue/scheduler or with the "elevator=as" boot option.
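Checking and switching the scheduler as described above looks roughly like this (hda/sda are example device names; writing needs root, and the anticipatory scheduler was dropped from later kernels, so treat "as" as a 2.6-era option):

```shell
# Show the available I/O schedulers for a device; the active one is
# bracketed, e.g. "noop anticipatory deadline [cfq]". "sda" is an
# example - older IDE systems used "hda" as in the comment above.
FOUND=no
for DEV in sda hda; do
    F=/sys/block/$DEV/queue/scheduler
    if [ -r "$F" ]; then
        echo "$DEV: $(cat "$F")"
        FOUND=yes
    fi
done
[ "$FOUND" = yes ] || echo "no sda/hda block device on this machine"

# Runtime switch (as root), or boot with the kernel parameter elevator=as:
CMD="echo as > /sys/block/sda/queue/scheduler"
echo "runtime switch (as root): $CMD"
```

The runtime change applies per device and reverts on reboot; the boot parameter sets the default for all devices.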
