Data Storage

Raid 0: Blessing or hype?

Yoeri Lauwers writes "Tweakers.net investigates the matter a bit more closely and decides that AnandTech and StorageReview should think twice before they shout that 'RAID 0 is useless on the desktop.' Tweakers.net's tests illustrate the contrary."
This discussion has been archived. No new comments can be posted.

  • I use RAID 0... (Score:3, Informative)

    by remin8 ( 791979 ) on Sunday August 08, 2004 @09:26AM (#9912930)
    ... for simplicity. It is nice to have one "large" drive (in Windows) instead of spreading all of my files across smaller drives. Useless, it is not! Is it really very practical? I don't think so. I haven't had a disk fail yet, but when it does, I will be glad I have backups!
    • Re:I use RAID 0... (Score:2, Insightful)

      by Anonymous Coward
      You wouldn't need to use RAID for this. JBOD would be enough.
    • Re:I use RAID 0... (Score:5, Insightful)

      by isorox ( 205688 ) on Sunday August 08, 2004 @09:29AM (#9912943) Homepage Journal
      Sure, lose one drive and you lose everything. There are better ways to store everything on one "drive letter".
      • Re:I use RAID 0... (Score:3, Interesting)

        by boaworm ( 180781 )
        Yep, totally agree. Do you remember the LaCie 1.6TB FireWire drive [lacie.com] released a few weeks back? It's nothing but four IDE drives in a RAID configuration, and it's not a redundant RAID...

        I really wonder what the expected lifetime of such a device is. Sure, you can replace the broken drive, but you should probably replace it with the same model, and if one has gone down... how long 'til the next one goes?

        It's probably great for things like offsite backups: you run it an hour here, an hour...
    • Re:I use RAID 0... (Score:4, Informative)

      by fostware ( 551290 ) on Sunday August 08, 2004 @09:30AM (#9912948) Homepage
      Have you tried mount points in Windows? It's in Disk Management: right-click a partition and choose "Change Drive Letter and Paths..." - although it has to be an empty partition when you do this... It's just like linking drives to mount points in *nix.
      • Re:I use RAID 0... (Score:3, Informative)

        by Dog-Cow ( 21281 )
        You can link a non-empty partition. You can even link it to a non-empty directory, just like in Unix, and, just like in Unix, it will hide the usual contents of said directory.
      • Re:I use RAID 0... (Score:3, Interesting)

        by shokk ( 187512 )
        For enterprise-wide capability, combine the drive letter mappings with Active Directory DFS. It gives you an automounter-type capability, centrally managed across the domain tree.
      • Right. Dismount the drive every time you need to defrag it properly. I'd suggest that junctions/reparse points are more akin to *nix mounting, but those too have problems, not the least of which is the lack of integration and tools.
    • If that's all you want, you should stick with JBOD. It won't be as fast, but if a drive goes, you have a better chance of getting your data back.
    • That's fine if you don't care about your data. If one drive goes out, all of your data is totally gone, end of story.
      • Most people use a single drive on the desktop. How many people actually use RAID-1 or RAID-5 on the desktop?

        Backups are essential for desktop machines. With current storage technology, they don't appear to be going away anytime soon. Might as well get used to it.

    • Re:I use RAID 0... (Score:2, Informative)

      by jonom ( 109588 )
      I just set my system up with dual 80 gig Barracuda SATA drives in RAID 0 a couple of months ago. My new motherboard had SATA RAID built in.

      I've never seen Windows move this fast.

      Boot time is cut in half, the whole system is more responsive.

    • Makes you wonder why Linux and other Unices have everything under one "/"... the convenience factor is amazing :)

      With NFS, CD-ROMs, USB cards and hard disks in /mnt/, life is a lot easier.

      Imagine this:

      bash$ ln -sf /dev/sda1 /dev/camera
      bash$ mkdir -p /mnt/camera
      bash$ mount /dev/camera /mnt/camera

      One "/" to root them all, eh?
  • by SQL Error ( 16383 ) on Sunday August 08, 2004 @09:26AM (#9912933)
    I'm sure that even here on Slashdot there are some people who aren't running huge multi-threaded database applications on their desktop machines, and for them, RAID-0 probably isn't going to help much.

    But for the majority of us normal people who are running huge multi-threaded database applications on their desktop machines, RAID-0 is much nicer than having to manually allocate all of your database extents across your disks. Of course, RAID-10 would be better, but that would involve spending money...
    • by magarity ( 164372 ) on Sunday August 08, 2004 @09:36AM (#9912979)
      But for the majority of us normal people who are running huge multi-threaded database applications on their desktop machines

      Sorry, most slashdotters are NOT using Longhorn yet.
    • by Tassach ( 137772 )
      While the parent said it in a joking manner, he's right: RAID-0 is not appropriate for general desktop use. You use it in applications which are disk-bound. The classic examples of this are databases and video editing. If you're not disk-bound, the risks and disadvantages of RAID-0 seriously outweigh the small performance boost you see.

      A disk-bound application is one where the application's performance is directly proportional to disk speed. E.g., a disk-bound app's performance will improve by 10% if you increase your disk throughput by 10%.

  • by Anonymous Coward on Sunday August 08, 2004 @09:27AM (#9912936)
    I don't care what tests people have done or what benchmarks they're spouting off, RAID 0 works.

    I used to have a system which used relatively cheap 5400 RPM drives in a RAID 0 array. There was quite a noticeable difference compared to not using RAID 0. With 2 or 4 drives the system was damn fast, even though the drives were individually slow.

    I don't even read these articles. I know it makes a difference.
  • Desktop performance. (Score:3, Interesting)

    by Zorilla ( 791636 ) on Sunday August 08, 2004 @09:34AM (#9912963)
    My computer is over three years old (P4 1.7 GHz, upgraded from 128 MB of RAM to 384) and I've found that the slowest technological advancement seems to be hard drive throughput. This really shows in the fact that games like Doom 3, Far Cry, and Painkiller are all perfectly playable on my computer, but the latter two take an unbearably long time to load. When I build my next computer, RAID 0 is one of the things I will be looking at, because I absolutely hate waiting more than 5 seconds for a game to load.

    (Yes, I'm aware that having only 384 MB of RAM is slowing load times via virtual memory swapping as well)
    • Try getting more RAM. Map load times in Battlefield 1942 that were barely bearable with 256MB were suddenly cut almost in half by doubling the available memory.
  • by cwm9 ( 167296 ) on Sunday August 08, 2004 @09:34AM (#9912964)
    A common misconception is that striping beyond 2 drives is "worthless." That simply isn't true: remember that the inside of the drive, close to the spindle, has a transfer rate that is nearly half what it is on the outside cylinders. By striping 4 drives together, about half the bandwidth is wasted near the FRONT of the drive, but near the tail, it's almost all being used. The effect is that the drive feels uniformly quick no matter what part of the drive you are reading from!

    I personally jumped from a single drive to a 4-drive SATA raid-0 system, composed of 120GB drives from two different manufacturers.

    The system screams.

    I can't tell you how nice it is to have my computer boot in half the time... how your system feels like you always wished it would feel. You can add all the memory you want, all the processing power you want, but if you can't feed the computer, it's all pointless.

    The only thing I wish for now is a faster and/or wider bus that would let me take advantage of all the currently unused bandwidth available from the four drives.
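
    A quick back-of-the-envelope model of that zoned-bandwidth argument (a sketch only; the per-zone transfer rates and the classic 133 MB/s 32-bit/33 MHz PCI ceiling are illustrative assumptions, not measurements):

      BUS_LIMIT_MBPS = 133.0  # assumed shared-bus ceiling (classic PCI)

      def stripe_throughput(n_drives, per_drive_mbps):
          # Aggregate read rate of an n-drive RAID 0, capped by the bus.
          return min(BUS_LIMIT_MBPS, n_drives * per_drive_mbps)

      for zone, rate in (("outer", 60.0), ("inner", 30.0)):
          print(zone, stripe_throughput(1, rate), "MB/s single,",
                stripe_throughput(4, rate), "MB/s striped")
      # outer 60.0 MB/s single, 133.0 MB/s striped (bus caps the fast zone)
      # inner 30.0 MB/s single, 120.0 MB/s striped (slow zone stays near bus
      # speed, which is why the array feels uniformly quick across the disk)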
    • by Slack3r78 ( 596506 ) on Sunday August 08, 2004 @10:00AM (#9913071) Homepage
      Here's the whole thing - I *have* tried it. If your workload involves lots of long, sequential reads, it's a great thing. I've personally got 2 machines running drives in RAID 0 as they get used for working with files in the 1.5-2GB range. It makes a difference here.

      The whole point of SR and AT's articles, however, is that for most desktop systems, RAID 0 is pretty much a bad idea. You'll see marginal improvement on more random data sets, but you've spent four times as much, and, more importantly in my mind, your probability of failure has increased from P to P^4.

      So really, I can see some applications where RAID 0 can be useful - I fit one of them. But for most desktop systems, it's not worth the cost. For systems with more than 2 drives anyway, it seems like a patently Bad Idea(TM). You really should've gone with RAID 5 - you'd still have striping, but you wouldn't risk losing everything to a single faulty drive.
      • by Tlosk ( 761023 ) on Sunday August 08, 2004 @11:12AM (#9913442)
        Shouldn't that be 1-(1-p)^4?

        p^4 would give you a decreased failure probability.

        So say there is a 1% chance of failure over 3 years for a given drive. Using the first formula, putting 4 such drives in RAID 0 would increase the chance of at least one drive failing (and consequently all) to 3.94%.
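
        A quick sanity check of that formula (a sketch; the 1% per-drive figure is the parent's assumption):

          # Chance that at least one drive in an n-drive RAID 0 fails,
          # assuming independent per-drive failure probability p.
          def array_failure(p, n):
              return 1 - (1 - p) ** n

          p = 0.01  # parent's assumed 1% failure rate over 3 years
          for n in (1, 2, 4):
              print(n, round(100 * array_failure(p, n), 2), "%")
          # 1 -> 1.0 %, 2 -> 1.99 %, 4 -> 3.94 %  (roughly n*p for small p)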
      • by Gopal.V ( 532678 ) on Sunday August 08, 2004 @12:52PM (#9913902) Homepage Journal
        > but you've spent four times as much, and, more importantly in my mind, your probability of failure has increased from P to P^4.

        The probability actually went from P to P^0.25

        p*p*p*p is LESS THAN p for probability terms (0 < p < 1.0)

        You calculated the chances of ALL 4 failing together. But RAID-0 has a problem with even one failing, which is the 4th root of P, which is obviously higher.

        Anyway, RAID-0 makes sense if you're doing stuff like video editing on the desktop... I've been putting some stuff on synced non-swappable RAM disks - makes a hell of a difference for proper apps that mmap the file instead of reading it into core.

  • by ergo98 ( 9391 ) on Sunday August 08, 2004 @09:34AM (#9912966) Homepage Journal
    A common theme, revisited several times, in the article is that the other conclusions were wrong because they used low-load testing.

    "A safe conclusion would be that a Business Winstone 2004-benchmark alone is not a good starting point when testing RAID 0 performance. On the contrary: to have some reliable tests, we will need to put heavy loads on the array."

    In essence, if my understanding is correct, they're saying that the value of a RAID 0 setup shows under constant extreme loads, not the loads created by business applications or games. Isn't this entirely the point of the articles in question - that given the sporadic, generally light load of even power users, RAID 0 is not really that beneficial (as random access plays even more of a part than gross throughput)?

    Even under perceived heavy I/O loads, the reality is often that the hard disk is under-used - I occasionally compress videos from miniDV to DVD, and my CPU would need a four- or five-fold increase in speed to even begin to put pressure on the single 7200 RPM hard disk.

    • When a game loads it accesses the HD a lot. It depends a bit on the game, of course, and on the amount of memory, but the bigger games out there spend a few seconds loading like mad.

      The faster the HD, the faster it loads. Most "desktop" business applications don't really load that much; even a piece of bloatware like Office loads pretty fast. At least the reading-from-HD bit.

      Saying RAID 0 is of no use is like saying a 7200 RPM drive is of no use on a desktop. Or a large cache is of no use. Or SCSI is of no use. Ju...

      • RTFA.
        Most of the time games spend loading is not disk-bound but CPU-bound: decompressing pak files, initializing BSP trees, etc.
        Every modern disk can read 50MB/s. The largest Quake 3 level is 27 MB.
  • Methodology (Score:5, Insightful)

    by Jeremy Erwin ( 2054 ) on Sunday August 08, 2004 @09:35AM (#9912967) Journal
    Tweakers.net concludes:

    And it's not just our benchmark results that support this view: the majority of Tweakers.net readers who at one time or another tried striping, feel that the overall responsiveness of their computer improved when employing RAID 0.


    Of course they do. After all, they've spent extra money and time pimping out their rigs.
    • Re:Methodology (Score:5, Insightful)

      by Slack3r78 ( 596506 ) on Sunday August 08, 2004 @09:49AM (#9913033) Homepage
      Yeah, I did an absolute double take when I got to that part. They spend an entire article bashing two of the most methodical sites out there on methodology, and then try to use a completely unscientific poll as backing evidence for their claim? Let alone a poll that's naturally pre-biased toward a particular conclusion. It really puts the validity of the rest of the article into question. If that's acceptable evidence, what other shoddy methods are acceptable to them?

      If you've spent the extra money on RAID 0, you're going to believe there's a difference going in. Hell, I've done it myself - I have 2 machines with RAID 0 setups, but that's because they're commonly used for working with multi-gig files in Photoshop - i.e., I actually need the strong sequential speed.

      For normal desktop setups, I'd absolutely agree with AT and SR on this one. Unless you're doing massive amounts of large sequential reads/writes, you're just not going to see a difference in speed worth the cost of another drive and the major increase in potential failure and data loss. Remember, by adding that second drive, your chance of failure goes up *exponentially* which is something a lot of hardcore "tweakers" forget.
      • Re:Methodology (Score:3, Informative)

        by Cryogenes ( 324121 )

        your chance of failure goes up *exponentially*

        Now, that is not true. If d is the chance of failure in a given time interval for a single disk then the chance of failure in the same time interval for a two-disk RAID-0 is 2d - d^2. For small d, this is roughly equal to 2d (or, more generally, nd for n drives). Thus, the chance of failure goes up (at most) linearly.
  • Interesting article. But, I am not quite sure that they understand the rationale many people have for not using striping on their desktop.

    1) Does it matter if you cut .01 seconds off the time it takes to write out the document you just wrote?

    2) Does it matter if you have a disk failure and lose all your data on all partitions in the stripe? Everyone at home makes daily backups.

    • It doesn't matter too much if you lose all your data when "all your data" means a few dozen megabytes of save-games. If you're using the machine for real work, however...
    • by SmallFurryCreature ( 593017 ) on Sunday August 08, 2004 @11:02AM (#9913380) Journal
      Do you really think that people who overclock their CPUs, have overclocked graphics cards and water cooling, care all that much about reliability?

      Well, I've got a very simple solution to that, one that the overclockers I care about all use. It is called a small server with real RAID to store all the "real work" they've got.

      The game machine is the game machine, and it doesn't need to have a long life as it won't be around longer than a year anyway.

      RAID 0 fits in the "getting 1% extra fps" scene. It does not fit in the office scene.

      AnandTech and a whole lot of /.ers just don't seem to get that to some people every bit of extra speed is worth it. They would review a Ferrari as a lesser car than a Ford Focus, since a Ferrari costs more and who needs the speed?

      Does speed matter? Oh yeah. Does reliability? Hell no, this ain't a server. The only thing I could lose is a few hours reinstalling Windows and my games. I do that often enough anyway whenever a new piece of hardware arrives.

  • *shudder* (Score:5, Insightful)

    by LordLucless ( 582312 ) on Sunday August 08, 2004 @09:39AM (#9912993)
    Just the thought of using RAID-0 makes me shiver. The only people who should use this are people who keep good backups, and like using them. The speed gains are of little use for individuals, and for the professionals or corporations that might actually want the speed-up, the chances of data-loss are too high.

    That's not to say there isn't a purpose for RAID-0 - it teaches people how useful backups are. The hard way.
    • Or for people who replace the contents of the drive fast enough that losing everything wouldn't matter. I've, uh, 'heard about' these magical FTP servers where having >2TB of disk space doesn't make failure matter, because on a gigabit connection it will have all the current releases back as soon as it's brought up again, and old releases stop being useful for trading after a week.
    • the chances of data-loss are too high.

      Why? Yes, you double the chance of data loss, but if that makes it 'too high', the chance of data loss was pretty high anyway. If you buy somewhat decent disks, your chance of getting 2 disks that will run flawlessly for years is extremely high. But yes, you should not use RAID 0 with disks that have a 10% failure rate - but you shouldn't be using those disks anyway...
  • My RAID-0 drive opens huge files nearly twice as fast. That's useful to me.

    E.g., a 488 MB wave file of the Velvet Underground's first album opened in 28 seconds on my D: drive, but in only 16 seconds on my RAID-0 partition. All the drives are the same, i.e., Maxtor 80GB 7200 RPM drives.

    Premiere works a lot faster too.

    The only problem is that I have to be extra anal about backing it up. But any incentive to get me to back up my stuff is a good thing, as far as I'm concerned.
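
    Taking the parent's figures at face value, the implied sequential read rates work out like this (a quick sketch, nothing more):

      # Implied throughput from the parent's 488 MB file benchmark.
      size_mb = 488.0
      for label, seconds in (("single drive", 28), ("RAID-0", 16)):
          print(label, round(size_mb / seconds, 1), "MB/s")
      # single drive: 17.4 MB/s, RAID-0: 30.5 MB/s -- about a 1.75x
      # speedup, close to (but short of) the ideal 2x for a 2-drive stripe.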
  • I won't use RAID-0 on my desktop unless I have a short-term need for extra performance. Desktop hard drives are just too unreliable to lose ALL of your data when you lose one of the striped drives.

    In the two computers I have at my house, I've lost 4 IDE hard drives in the last 6 months! Maybe RAID-1, but even then I'd prefer a backup solution instead of a real-time data redundancy solution. (It's hard to restore a file that you *accidentally* deleted from a RAID-based solution.)

    Until SCSI gets cheaper or IDE gets more reliable, neither of which I see happening any time soon...
    • Until SCSI gets cheaper or IDE gets more reliable, neither of which I see happening any time soon

      It's not the interface that makes IDE drives less reliable; it's just that manufacturers want to keep server/workstation drives out of desktop machines, for good reason - the 10/15kRPM drives need to be cooled, and as soon as people start to put them in desktop machines, they're going to get a lot of warranty returns, thereby lowering their profits further and removing any advantage that they had.

      There are tw...

      • It's not the interface that makes IDE drives less reliable

        Maybe not, but I've noticed that SCSI drives ARE more reliable than IDE drives. And personally, I think it's because people who buy SCSI demand reliability, so the manufacturers provide it. The tradeoff is that you don't have 250GB SCSI drives. Meanwhile, customers for IDE drives demand size, because they're downloading songs and movies and whatever. The tradeoff is reliability.

        Personally, I'd just like a nice reliable 40GB drive at the $...

  • RAID-0 is stupid. (Score:5, Informative)

    by slamb ( 119285 ) * on Sunday August 08, 2004 @09:49AM (#9913031) Homepage
    Here's why no one in their right mind uses RAID-0 on data that they care about:

    Unlike other RAID-levels, RAID 0 does not offer protection against drive failure in any way, so it's not considered 'true' RAID by some (the 'R' in RAID stands for 'redundant', which does not apply to RAID-0).

    When you have multiple hard drives, it's more likely that one will fail than if you just have one - for the obvious statistical reasons, plus the heat problems in many systems.

    In a non-RAID setup with multiple hard drives, when one fails, you lose whatever was on that drive.

    With RAID-n (for non-zero n), you lose nothing. You say "oh well", put in a spare drive, and send the old one back for replacement. (In the other order if you're cheap.) The array rebuilds itself. Without even shutting down the machine, if you have the hot-swappable drive cages.

    With RAID-0, you lose everything on all of your hard drives.

    RAID-0 is considerably less reliable than a single hard drive.

    • With RAID-n (for non-zero n), you lose nothing. You say "oh well", put in a spare drive, and send the old one back for replacement...Without even shutting down the machine, if you have the hot-swappable drive cages

      The really high-budget people here have a hot-spare setup.
    • The same argument applies for a single disk drive. No one in their right mind uses a single disk drive on data that they care about.

      Multiple disk drives increase the chances of disk-related data loss, but failure of a cooling fan does, too. It is incorrect to assume that a two-drive RAID 0 is twice as likely to result in data loss as one drive, since you need to consider the entire system and the environment it is in.

      Now, is RAID-0 considerably less reliable than a single hard drive? Depends on how y...
    • by isorox ( 205688 )
      One possible use of RAID-0 is in video editing, assuming your original source is on tape (Beta, DVCAM, whatever). You capture to RAID-0, and keep your EDL on a separate drive or even the network. Worst case, you have to recapture your video.

      The important thing to remember: RAID (1, 3, 5, whatever) is not a substitute for backups. rm -rf will work on a RAID volume perfectly well, as will a lightning strike or thieves.

    • However, by adding one more hard drive you can be secure. This drive stores parity. Let's say that if a given bit on drives A and B is the same, that bit is zero on the parity drive; if they are different, it's a one. If drive A or B kills itself, the parity drive can be combined with the remaining drive to recreate the missing drive. If the parity drive goes kaput, you just make a new one.

      Now I don't know if the parity actually works on the bit, byte, word, or sector level, but this is the easiest way to...
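
      That same-or-different rule is just XOR, and it's easy to demonstrate (a minimal sketch; real implementations XOR whole blocks, and RAID 5 rotates the parity across all the drives rather than keeping a dedicated parity drive):

        # Parity is the XOR of the data drives; any one lost member
        # is the XOR of everything that survives.
        drive_a = bytes([0b1100, 0b0101])
        drive_b = bytes([0b1010, 0b0011])
        parity  = bytes(a ^ b for a, b in zip(drive_a, drive_b))

        # Drive A dies: rebuild it from drive B and the parity.
        rebuilt = bytes(p ^ b for p, b in zip(parity, drive_b))
        assert rebuilt == drive_a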
  • by XavierItzmann ( 687234 ) on Sunday August 08, 2004 @09:56AM (#9913051)
    Since 2002, I have been using the SIIG RAID 0 card http://www.siig.com/product.asp?pid=424 [siig.com] on a 1999 Sawtooth G4 with 0.48TB of internal storage. Hardware-wise, this is an OEM Acard card, also available from Sonnet and Miglia.

    No disk failures to date --- I back up weekly with Apple's Backup 2.0.

    Here are some benchmarks that compare software RAID 0 performance (included free with OS X) vs. hardware RAID 0: http://www.xlr8yourmac.com/OSX/OSX_RAIDvsIDE_Card_RAID.html [xlr8yourmac.com]

  • For the "feel" of a machine, latency or "response time" is the most important factor. When the user requests an action, it is the time between the request and the machine's response that counts. For instance, the almost 2x speedup of booting XP means a 2x decrease in a very annoying latency, and it makes the system feel much faster even if nothing else changes. The numerous *small* latencies in a system also count -- don't you hate it when you click a menu and you have to wait a full two seconds before it p...
  • They're used quite frequently in video editing - specifically as scratch disks. Great performance, and no immediate need to back up either frequently or extensively. This is a great use for RAID 0. In this case, the OS isn't on the RAID 0 partition, so a drive failure isn't too much of a headache to solve.

    A lot of people seem to be hung up on the 'if one drive fails, you lose everything' problem. Well, take 2 scenarios: 2x80GB drives and 1x160GB drive. Regardless of choice, a single drive failure will mean...
    • by koali ( 175176 ) on Sunday August 08, 2004 @10:24AM (#9913187)
      Nope, reliability goes down.

      Let's suppose that both the 80GB and the 160GB drives have a 10% probability of failing within a month.

      Now, with the 1x160GB you have a 10% chance of a failure this month, obviously. What's the probability for the pair of drives?

      Well, since each one won't fail 90% of the time, the probability of both not failing is 81% (0.9*0.9). The rest, 19%, is the probability that one or both fail. Therefore, instead of a 10% failure rate, you get 19%... nearly twice as much!
      • It doesn't work like that in the real world, though. First, the odds of a disk failure are minuscule compared to your numbers. Second, the odds of disk failure are proportional to heat, and the odds of a fan failure are one or two orders of magnitude greater than a disk failure. If your setup is dependent on a fan keeping your disks cool enough, then the odds of a disk failure with two disks are probably no greater than with one disk. The bad news is that those odds are about 1 in 1. In other words, reliabili...
        • As the probability of failure becomes smaller and smaller, the probability of a failure among two drives gets closer and closer to exactly double. Even if your failure probability were 0.01% for one drive, the failure probability for two drives would be 0.019999%.

          The fans and other components make it more complicated, but still make RAID-0 often a lot messier. Suppose a cooling fan does die: it might not instantly kill the drive, but it will shoot the probability of failure way up. So...

      • "Nope, reliability goes down."

        Dead right, but I didn't mention reliability, just performance and cost of replacement. Specifically, performance is what the guys who use the system need. As it's functioning as a scratch disk, nobody particularly cares about reliability; it just means that you can be accessing video data at the same time as spooling stuff onto the drive from a DV camera, which saves a lot of time. If a drive does go down, a new one gets slotted in, striped, and the source vide...
  • RAID 0 is a start. (Score:4, Insightful)

    by argent ( 18001 ) <peter@slashdot . ... t a r o nga.com> on Sunday August 08, 2004 @10:26AM (#9913196) Homepage Journal
    So long as the operating system can take advantage of it, every spindle you add to your system will add performance. Windows does make it harder to take full advantage of multiple spindles, because you can't easily distribute disks to different parts of your file system to cut down on seeking, but using RAID 0 will help some.

    Ideally, you should bring hot spots on the disk closer together, which is what filesystem optimization tools do, and have one disk for each "hot spot" on your system. %systemroot%, the swap partition, your system temporary files directory, your applications, and your profile could each be given a separate disk, so that the disk head that's sitting there writing your cached files doesn't get hauled off to the other end of the disk to read a plugin from %systemroot% or write an old dirty block to the swapfile. Old-timers will remember dedicated swap disks and swap partitions on every drive, fast dedicated /tmp disks, and other system tuning you could do on even medium-sized "big iron". I've done similar things on my FreeBSD home desktop and been quite pleased with the results, though IDE's limitations make it a lot harder to get a big win out of it than SCSI did.

    With enough drives and an OS that's aware of the physical layout, you should be able to get the same kind of performance improvement from RAID 0 on Windows. Hardware RAID, of course, won't help much with the seeking problem because the OS doesn't know it's got two heads to do seek optimization on. Software RAID, if Windows is smart about seek optimization, should give you a superlinear speedup for many workloads.
    • The thing with RAID is that it exists underneath the filesystem. That means the filesystem doesn't know it exists and will not optimize layout because of it. It's possible to add this through "hints" or out-of-band config info but that's not the same as the real thing.

      Fact is that RAID is the best you can do when you can't plug into the filesystem. Once you can, RAID becomes the thing you DON'T want to do, for just the argument you make. If the filesystem knew it had multiple disk drives to lay a single...
      • The thing with RAID is that it exists underneath the filesystem. That means the filesystem doesn't know it exists and will not optimize layout because of it.

        The filesystem doesn't automatically know about it, no, but in a software RAID environment the filesystem can find out about it and take advantage of it. In a hardware RAID environment it can't.

        There are reasons not to build the RAID completely into the filesystem. It reduces flexibility, since you wouldn't be able to use RAIDed and non-RAIDed file...
  • by TheToasterBoy ( 547235 ) on Sunday August 08, 2004 @10:38AM (#9913244) Homepage
    A friend who works with NAS/SAN systems jokingly told me that a hard drive exists in only 2 states:

    1) Failed

    2) About to fail

    Tb.
  • poor article (Score:4, Insightful)

    by dfghjk ( 711126 ) on Sunday August 08, 2004 @10:42AM (#9913263)
    The author produces a lot of words but shows remarkably poor insight. Examples: his failure to distinguish between sequential-access arrays and parallel-IO arrays in the introduction, the poor showing of the RAID 5 tests (conveniently avoiding writes in those tests), the confusion between RAID techniques and caching, and the identification of PCI as the performance limiter in the Promise controller.

    The fact is that the article readily admits that desktop workloads show poor average IOps (under 1.5) and modest average IO size (23K). Those numbers prove that there is little opportunity to accelerate performance either with parallel access or random access designs. The first tests show clearly that the IO sizes in question leave little opportunity for large transfer gains while the lack of decent command queue depths rules out good load balance with larger stripe sizes. Interestingly, the author didn't provide the stripe size for that test. It's easy to deduce from the chart but it demonstrates his limited grasp of the subject matter.

    Regarding the tests dispelling the myth of poor RAID 5 performance: hardly! Poor RAID 5 performance is no myth. First off, the RAID 5 configuration was trounced by lesser RAID 0 IDE drives. Second, the benchmarks consistently avoided writes, notably small writes, where RAID 5 massively fails, and used a large writeback cache to further hide write performance and to make the configuration shine in small read tests. If you are going to sing the praises of RAID 5 for data protection, you should probably mention the data-integrity disaster that writeback caches introduce. If I were offering the RAID 5 config myself, I would feel like I had just gotten my ass kicked.

    Ultimately this article is nothing other than a rant by someone who disagrees with others' contention that RAID 0 is of limited benefit. He justifies his position by saying that performance matters when "performance matters" - that is, specifically when you create disk-intensive loads, you can see a benefit. Well, no shit. When you create large command queue depths through multiple disk-intensive processes, you will benefit. Again, no shit. Boot times can get shaved a little. Big deal. Beyond that, he doesn't know what he's talking about. There's a big difference between RAID 0 being theoretically capable of superior performance and it being a performance value to a desktop user. This is a subjective matter, and he fails to make his case. Just how often does he or any other "power user" actually benefit from these unusual workloads, and is that often enough to justify the costs?
  • From the article:

    Both AnandTech's and Storage Review's results of the IPEAK are largely contradictory to Tweakers.net's benchmarks

    So then it's two against one here? And we should believe the minority?
    • by TheLink ( 130905 ) on Sunday August 08, 2004 @11:12AM (#9913443) Journal
      IF you have a decent RAID controller, RAID1 is faster than RAID0 for reads (not writes). This is because with RAID1 the data isn't striped - the same data is on all the drives - so the system can read from the most convenient drive (lower latency), and do read interleaving on top of that. Whereas with RAID0, the system has to wait for the drive holding the stripe with the desired data.

      So RAID 0 is OK if you are sequentially reading/writing large blocks (large relative to the stripe size). But it's not so good for small random reads or writes - which could be the case in some desktop situations.

      For decent performance and reliability go RAID1+0, instead of RAID5 (which seems popular amongst many of the obviously ignorant here). RAID5 sucks for writes. RAID5 is only if you want _lots_ more capacity with some redundancy and write performance isn't important.

      As far as I see, disk speed is a bigger issue than disk capacity. Capacity has increased faster than drive speeds have.
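
      A toy model of the dispatch difference described above (a sketch only; the 64KB stripe size and queue-based mirror selection are illustrative assumptions - real controllers use varying policies):

        # RAID 1 can send a read to whichever mirror is least busy;
        # RAID 0 must use the one drive that holds that stripe.
        STRIPE = 64 * 1024  # assumed stripe size

        def raid0_drive(offset, n_drives):
            return (offset // STRIPE) % n_drives

        def raid1_drive(queue_depths):
            # pick the mirror with the shortest request queue
            return min(range(len(queue_depths)), key=lambda i: queue_depths[i])

        queues = [4, 0]        # drive 0 has a deep queue, drive 1 is idle
        offset = 2 * STRIPE    # this stripe happens to live on drive 0
        print("RAID 0 must read from drive", raid0_drive(offset, 2))  # 0 (busy)
        print("RAID 1 can read from drive", raid1_drive(queues))      # 1 (idle)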
      • Please don't (Score:3, Informative)

        by dfghjk ( 711126 )
        This is incorrect. RAID 1 would be faster than RAID 0 for read workloads only where there was (1) sufficient command queue depth, and (2) a catastrophic imbalance in the workload that prevented the RAID 0 array from utilizing its disks. Since the second case never happens (except in improper configurations), RAID 0 will outperform RAID 1 with identical numbers of disks. RAID 1 can have more than two disks (it requires an even number), although some foolishly believe that striping in RAID 1 makes it RAID 10 or 1+0...
  • Article Summary (Score:5, Insightful)

    by Jay L ( 74152 ) <jay+slash @ j ay.fm> on Sunday August 08, 2004 @10:50AM (#9913319) Homepage
    1. Anandtech and StorageReview benchmarked RAID 0 and found that, for desktop applications, RAID 0 is slightly slower than a single drive, because the things that RAID 0 is good at are not the things that desktops need.

    2. So we changed the benchmarks to really need the things that RAID 0 is good at.

    3. And now, RAID 0 improves things!

    4. Therefore, the benchmarks in #1 were wrong.

    Summary of the summary:

    I'm looking for my keys under this lamppost because the light is better here.
  • by jellomizer ( 103300 ) * on Sunday August 08, 2004 @11:07AM (#9913410)
    You people with all your performance in mind seem to forget the time it takes to restore a lost drive. And sometimes the information may be impossible to get back. Hey look, I can save 1 minute transferring a gigabyte of information. The next month... man, I spent 3 days putting all my stuff back onto my drive after it crashed. RAID 0 is useless even with the speed increase. If you are doing anything important, you may want to use the higher RAID levels, so you get the performance and the redundancy. Yeah, the drives will cost more, but it is worth the investment.

    On a different note, I really wish that laptops and desktops came with dual hard drives standard, with hardware RAID 1 installed. Especially laptops.
  • My buddy found out the hard way that RAID 0 is not recoverable when he lost a drive last week. Mirror your OS drives; if you want to use RAID 0, use it only for things you can afford to lose.
  • by Yaztromo ( 655250 ) on Sunday August 08, 2004 @11:32AM (#9913532) Homepage Journal

    There are a number of desktop applications that can benefit from RAID-0, but you have to be smart about it.

    RAID-0 is perfect for booting an OS and loading large applications. It's also excellent for swap space, and initializing your JVM.

    It's less well suited, however, for small files and anything important (like documents).

    Thus, my strategy would be to use a RAID-0 array for my OS, JVM, applications, and swap space, and a non-RAID drive for application data. A good way to achieve this on Linux would be to format the single non-RAID drive and mount it as /home, and install everything else onto the RAID array.

    Seems like a good strategy for a desktop system to me. Add in some backup for the single disk mounted as /home, and if anything goes wrong with the RAID, your important data is completely protected.

    Yaz.

  • by Gailin ( 138488 ) on Sunday August 08, 2004 @11:43AM (#9913586) Homepage
    Everyone keeps mentioning the lack of fault tolerance in RAID 0. Personally, I don't know anyone who runs a RAID 0 configuration on drives that contain data that would be considered important.

    Personally, I'm a hardcore gamer, and I run RAID 0 on two WD SATA 36GB Raptors. These drives are used for my system drive and where I install my apps. Anything that is important is shoved off to a set of big, slow IDE drives running in a RAID 1 configuration.

    So MTBF really doesn't matter to me: when one of the drives fails, it takes me a grand total of 18 minutes to reinstall Windows XP (I timed it), add in another hour for driver configuration and updates, and I'm back to where I was before the drive failed.

    RAID 0 can work out just fine, as long as you realize its limitations and store your data accordingly.

    Gailin
  • by EulerX07 ( 314098 ) on Sunday August 08, 2004 @12:23PM (#9913790)
    If you read AnandTech's article, you'll see that the test only covered the fastest drive available, the 10K RPM WD Raptor. The price per GB (in Canadian dollars) for this drive where I live is $4.61/GB, compared to $0.86/GB for a WD Caviar drive.

    What I was looking for was a 0+1 array, striped and mirrored, using inexpensive drives. I'm one of those old-fashioned people who never switched to saying "independent".

    So Anand shows that if you take the fastest drive available, you don't gain much by striping it. But what about the average 7200 RPM drive - is there a performance increase? Does it get close to a single Raptor?

    What would you think about using a Raptor as your main drive, where applications reside, and a mirrored array of inexpensive 200GB drives to store your various collections of files? Would that be a better choice?
