Sun Unveils RAID-Less Storage Appliance

pisadinho writes "eWEEK's Chris Preimesberger explains how Sun Microsystems has completely discarded RAID volume management in its new Amber Road storage boxes, released today. Because they use the Zettabyte File System (ZFS), the Amber Road boxes eliminate RAID arrays, RAID controllers and volume-management software — meaning they're very fast and easy to use."
  • by jmorris42 ( 1458 ) * <jmorris&beau,org> on Monday November 10, 2008 @06:33PM (#25712535)

    > Correct me if I'm wrong, but doesn't charging enterprise prices for simplified hardware
    > that relies on commodity software solutions, kind of defeat the point?

    Yeah, that is amazing. You could put in a pair of 1U servers with RAID 1 on each for a fraction of that price tag. Use any of a number of ways to make the two units cluster, including OpenSolaris, and you get everything they are selling except the pretty front end for about half the sticker price. Go SCSI/SAS on all of the drives in 2U machines if you want to spend about what they are charging and still come out with a redundant cluster.

  • by thanasakis ( 225405 ) on Monday November 10, 2008 @06:47PM (#25712717)

    Considering that they've purchased MySQL, StorageTek and Cluster File Systems (of Lustre fame), developed ZFS, implemented CIFS in OpenSolaris from scratch (not Samba-based), participated in NFSv4 and built the Thumper, these machines hardly come as a surprise.

    For the last two years, almost all of their moves have been aimed at one goal: entering the storage market from a non-conventional angle. They want to do it unconventionally because they know that storage, more than anything else, is becoming the commodity, and today's toys won't cut it. Plus, at this point, all the mainstream storage vendors have difficulty tapping the low end. They may be able to sell their expensive products to clients with deep pockets, but for small businesses it's a different story. Not to mention that they are unwilling to reinvent themselves. OTOH, with all these inventions Sun may be trying to do what it did with workstations when it started in the 80s: start at the low end and move up. It remains to be seen whether they can pull it off.

  • by Archangel Michael ( 180766 ) on Monday November 10, 2008 @07:24PM (#25713167) Journal

    Okay, perhaps not with SSDs.

    I built 1.5 TB RAID systems over a year ago for $1,200 each. FULL price, including swappable drives, gigabit Ethernet and a plentiful 2 GB of RAM to cache the system. They are FAST and reliable. They worked the FIRST TIME and have worked perfectly for 1.5 years. Drives fail; we have hot-swap spares available. While not quite 2 TB, they are also 1.5 years old now, and I'll be replacing them in another 1.5 years with bigger-drive systems.

    And these will become spares and lower-priority systems when I update them with newer stuff in a year or two.

    Expensive technology for the sake of all that other stuff you listed is just silly. It is exactly why Sun doesn't get it, and why some pointy-haired boss buys the BS.

    Production-quality storage means that it works for the time needed. I've actually had WORSE reliability from name-brand "server-quality" stuff. We've got HP ProLiant servers in production, and at least THREE from three different lots have failed due to motherboard failures. While they do send out a tech to replace the motherboard, it is really, really annoying to have to tell people that the server is down because the motherboard failed. And all the great diagnostic tools HP puts on those servers neither predicted nor fixed the errors.

    If you can build it for half as much, then you can easily have two on hand in case one dies.

  • With the same level of assurance that the solution will operate first time, every time?

    Sure.

    With the same level of confidence that Some Vendor will bend over backwards to fix it if it doesn't work?

    Heck, I'll even throw in the same vendor!

    Will your solution be as well tested and engineered?

    Even better. It will have had the same testing and engineering, PLUS a pre-existing history of operating in the marketplace.

    I give you, the Sun Fire X4500 Server [sun.com]:

    12TB (48x250GB) - $23,995.00
    24TB (48 x 500GB) - $34,995.00
    48TB (48 x 1TB) - $61,995.00

    Let us compare with Sun's new line, shall we?

    11.5 TB (46 x 250GB) - $34,995.00
    22.5 TB (45 x 500GB) - $71,995.00
    44.0 TB (44 x 1TB) - $117,995.00

    So... twice the price for the same storage? (Rough per-TB arithmetic below.) To steal a line from a very famous "programmer":

    Brillant
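
    A quick way to sanity-check the "twice the price" claim is to divide the list prices quoted above by the raw capacities. A minimal Python sketch, using only the figures in this comment (not independently verified):

        # Rough $/TB from the list prices quoted in this comment (not independently verified).
        x4500 = {12: 23995, 24: 34995, 48: 61995}               # Sun Fire X4500: raw TB -> list price (USD)
        new_line = {11.5: 34995, 22.5: 71995, 44.0: 117995}     # new Amber Road line: raw TB -> list price (USD)

        def dollars_per_tb(configs):
            # Price per raw terabyte, rounded to whole dollars
            return {tb: round(price / tb) for tb, price in configs.items()}

        print("X4500     :", dollars_per_tb(x4500))
        print("Amber Road:", dollars_per_tb(new_line))

    By that rough math the X4500 works out to roughly $1,300-$2,000 per raw TB and the new line to roughly $2,700-$3,200, so "about twice the price" is in the right ballpark at the larger configurations.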

  • by Architect_sasyr ( 938685 ) on Monday November 10, 2008 @07:50PM (#25713503)

    We've got HP Proliant Servers in production

    Some things you should keep to yourself, no matter how badly they operate ;)

    This is a real case of quality, support and "bling" factor. To use a (bad) car analogy: there is no need to buy a Mercedes when you can own a Nissan for half the price with exactly the same features (it may even be more powerful in some cases). However, anyone can drive a Nissan (or can afford to), so there is a certain bling factor to driving the Mercedes. Just like there is a hell of a "bling" factor to owning Sun equipment as opposed to the "hack job" we can all put together. Personally, I would prefer to spend twice as much and know that it's no longer my problem, even if it crashes, but that's just the opinion of one network admin.

    Completely offhand: I've never had a motherboard fail in any server, IBM- or Dell-based.

  • by Anonymous Coward on Monday November 10, 2008 @08:41PM (#25714111)

    You're not driving 48 SATA drives on one bus. The x4500 has 6 disk controllers each driving 8 disks. There appear to be 3 PCI buses, each running 2 controllers. I'm not deep into the magic of storage, but I can at least putty into one of our x4500s and take a look at what's going on before I start talking.
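
    For what it's worth, a tiny back-of-the-envelope sketch of the layout described above (assumed from this comment, not verified on hardware), just to show that no single bus carries all 48 drives:

        # Disk layout as described above: 6 controllers x 8 disks, 2 controllers per PCI bus.
        controllers = 6
        disks_per_controller = 8
        controllers_per_bus = 2      # 3 PCI buses x 2 controllers each

        total_disks = controllers * disks_per_controller             # 48
        disks_per_bus = controllers_per_bus * disks_per_controller   # 16
        print(f"{total_disks} disks total, {disks_per_bus} per PCI bus, "
              f"{disks_per_controller} per controller")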

  • by segedunum ( 883035 ) on Monday November 10, 2008 @08:43PM (#25714153)

    It's not like you can just grab 3 1TB SATA drives, throw them into RAID-5 and say that you've got 2TB of production ready storage. Well, you can, but you'd be an idiot.

    That's exactly what Google and many others do, and they spend their money, significantly less than this, on managing that storage effectively. It works. When it comes down to it, you can have all the exorbitantly expensive and brilliant 'enterprise ready' tools you want, but the bottom line is that you need redundancy - and that's pretty much it.

    Your "home brew" solution will not meet any of the objectives Sun are achieving with this product.

    Sun say they are targeting small businesses, and they have already lost with this poor showing. They have advanced no further than when they stiffed all the Cobalt Qube customers and withdrew the product; those customers then went out and bought Windows SBS servers ;-). If you think people are going to jack in what they have now for this, then you need a stiff drink.

    Your spindle count will suck, so concurrent access will be slow.

    Ahhh, shit. I'm heartbroken. What I'd like to know is how a small business will handle a behemoth like that, how it will fund the electricity for all those drives, and who will manage it all. I expect that will be an ongoing cost paid to Sun support ;-).

    Keep to building crappy 3 or 4 disk RAID-5 systems using extremely large drives for storing your music, movies and pr0n on, but don't ever ever ever ever think about using those in any situation where your financial livelihood depends on that data.

    I have news for you: people have been doing it for years, and it is exactly why Sun's business has been going down the toilet to commodity servers, Linux and Windows, especially among small businesses, for the past ten years.

    Sun need to stop pretending that they can package up some commodity shit with some features that very, very few need (and that are way outside their target market) and label it 'enterprise ready', which they think justifies an exorbitant price tag and ongoing support fees. They lost with this strategy on x86 and Solaris, where they tried to protect SPARC; they lost with the exodus from SPARC after the dot-com boom; and they will keep on losing.

  • oh ok... (Score:4, Interesting)

    by phaetonic ( 621542 ) on Monday November 10, 2008 @08:57PM (#25714297)
    Fortune 500 companies typically standardize on hardware, so people who say they can buy this from here, that from there, and one more thing from eBay are being ridiculous.

    Also, to those who say small businesses can't afford this: it really is an option. Some places like open-source hodgepodges of hardware, and some do not, because their small business generates enough money that investing in enterprise-class hardware, with gold four-hour response from a solid company with a history of UNIX experience and Solaris integration, is worth it.

    Also, said Fortune 500 companies get massive discounts, as what you're seeing is retail price.
  • by myifolder ( 1155809 ) on Monday November 10, 2008 @10:35PM (#25715231)
    Dell sucks. I have had more problems with Dell than with HP and would never buy another Dell product. HP all the way. I do agree that Sun products are very well built, and so are IBM's. But I will put it to you like this: everything breaks, nothing is 100%, and everyone likes different things. That is why Dell is still in business today, and the same goes for HP and IBM. One thing is for sure: this Sun line is way too much for me.
  • Re:DL180/185 (Score:5, Interesting)

    by level4 ( 1002199 ) on Monday November 10, 2008 @11:38PM (#25715791)

    Guh. Sorry. I'm tired, and re-reading my comment, the English is well-formed but the concepts are jumbled nonsense. Let me try again, by your leave...

    Yes, it's unavoidable to rebuild when you lose a disk, and there will be a performance hit unless you go for full-on 100% redundancy, which not many companies can afford to do with a lot of data.

    ZFS offers a number of benefits in a drive-failure-triggered rebuild, though, in that it basically knows where the live data is and only bothers with that. A hardware controller has no idea what is data and what is blank space, so it just redoes everything. In theory, assuming the same rebuild throughput in MB/s, a ZFS resilver of a half-full array should take half the time of a traditional controller's rebuild (a toy calculation at the end of this comment puts rough numbers on that).

    It is also much more intelligent about *what* it rebuilds, starting at the top and descending the FS tree, marking data as known-good along the way. This means that if a second drive fails halfway through the resilver, instead of a catastrophic failure you still have the data verified up to the point of failure.

    I can't remember where I read that; maybe here: http://blogs.sun.com/bonwick/entry/smokin_mirrors [sun.com]

    But I didn't even want to talk about drive-failure rebuilding; what I actually wanted to say is that ZFS is, in theory, less likely to get itself into an inconsistent state in the case of power fluctuations, controller RAM failures, drive failures with pending writes, that kind of thing. That's the kind of rebuild I meant - after some kind of catastrophic failure. I should probably have said "integrity checking", though.

    By design, ZFS never holds critical data only in memory, and so, at least in theory, it should always be consistent on disk. Basically, it shouldn't need an fsck. That is a giant advantage to me, if it turns out to be as good in reality as it sounds on paper. Of course, that also has a lot to do with the capabilities of the FS proper, but removing the evil, evil HW controllers from the picture can only be a plus.

    I don't know why, but RAID controllers are the most unreliable pieces of hardware I have ever known, besides the drives themselves (but at least they are consistent and expected to fail). Get a few of them together and something WILL go wrong, more often than not in a horrible and unexpected way. When some RAM goes bad in a HW RAID controller you are in for a whole lot of subtle, silent-error-prone fun. Anything that gets the HW controllers out of the picture is a win for me.

    And don't even mention the batteries in HW RAID controllers. They are the wrong solution to the power-failure problem, especially since it's always after a failure that a disk will decide it's had enough of spinning and would just like to sit still for a while, thank you very much. Drive failure with pending writes! Exactly the words every administrator wants to hear. Almost as good as power failure with pending writes. Combine the two (highly likely!) for maximum LULZ. OK, this is turning into a rant; I'd better stop.

    Anyway, thanks for the corrections. My original comment (and probably this one) came across as a confused mess upon re-reading .. sorry .. will sleep now : )
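
    To put a rough number on the "half the time" point a few paragraphs up, here is a minimal Python sketch. The 100 MB/s rebuild rate, the 1 TB disk size and the 50% utilisation are assumed, illustrative figures, not measurements; real resilver times depend on pool layout and workload:

        # Illustrative comparison: whole-device rebuild vs. data-aware resilver.
        # All numbers below are assumptions for the sake of the example.
        REBUILD_MB_PER_S = 100        # assumed sustained rebuild rate
        DISK_BYTES = 1 * 1024**4      # 1 TB disk
        USED_FRACTION = 0.5           # pool is half full

        def hours(bytes_to_copy, mb_per_s=REBUILD_MB_PER_S):
            # Time to copy the given number of bytes at the assumed rate, in hours
            return bytes_to_copy / (mb_per_s * 1024**2) / 3600

        hw_rebuild = hours(DISK_BYTES)                    # copies every block, used or not
        zfs_resilver = hours(DISK_BYTES * USED_FRACTION)  # copies only allocated data

        print(f"Whole-device rebuild: {hw_rebuild:.1f} h")
        print(f"Data-aware resilver : {zfs_resilver:.1f} h")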

  • by jcnnghm ( 538570 ) on Tuesday November 11, 2008 @12:34AM (#25716233)

    Small businesses are, by definition, businesses that make under $25M/year. I can imagine small businesses being in the market for inexpensive, high-throughput SANs.

  • by dosguru ( 218210 ) on Tuesday November 11, 2008 @01:57AM (#25716701)

    This must have been what my Sun sales guy was talking about at lunch a month ago, when he described a product very similar to this but couldn't even give me the full details. I wanted to buy about a PB of them then and there, but we'll see what my other vendors come back with as counters. The new HDS arrays are moving from FC to SAS and have excellent virtualization on them. I've been pushing for 10K SATA2 drives to really shake up enterprise-level storage. I'd love to have these mixed into the ~6PB of disk I help design and plan for.

    IOPS and response time are the keys in storage, not the actual cost or the specific technology. Just like the space shuttle's main computers, it doesn't matter if they cost a lot or are old. They have to be right 100% of the time and do what they need to do in the time allowed.

  • Re:Looks great.. but (Score:3, Interesting)

    by this great guy ( 922511 ) on Tuesday November 11, 2008 @06:19AM (#25718051)

    Due to deliberate licensing issues we won't have native ZFS in Linux any time soon.

    It's funny how this viewpoint is always the one promoted on Slashdot. One could argue that the Linux GPL is the problem: FreeBSD and Mac OS X had no problem integrating ZFS into their code precisely because the ZFS license (CDDL) allowed it.
