IBM IT Technology

Cell-based Server Blade Demonstrated

slashflood writes "Only a few clients in a hotel room near Los Angeles had the chance to see the first Cell based server blade running Linux 2.6.11. 'We demonstrated the prototype to show that Cell continues to mature. The product is expected to have several times higher performance compared to conventional servers,' said an IBM engineer."
This discussion has been archived. No new comments can be posted.

  • Great (Score:4, Funny)

    by pHatidic ( 163975 ) on Friday May 27, 2005 @12:17AM (#12651853)
    Now we'll have to put up with people's web servers ringing in movie theaters.
  • huh? (Score:3, Funny)

    by TimeForGuinness ( 701731 ) on Friday May 27, 2005 @12:19AM (#12651860) Journal
    Only a few clients in a hotel room near Los Angeles had the chance to see the first Cell based server blade running Linux 2.6.11.

    sounds like a drug deal going down.
  • ...all across the country so they could all develop software autonomously.

    They could call the program "Cellular Automata."

    • by nietsch ( 112711 ) on Friday May 27, 2005 @04:15AM (#12652849) Homepage Journal
      Though it is very nice to see that IBM ported Linux this quickly, I think they cut some corners. The Cell has a central PowerPC core and 8 (or more) accessory processing units. The processing power lies in these APUs, not in the central PowerPC core. The APUs are also very specialised, so you will not only have to allow access to the Cell from the OS (and manage those units), but you will also have to write the userland programs that take advantage of the APUs' strong points.
      That applies to every program that wants to use the APUs, so the chance that this happens overnight/soon is pretty slim. Heck, they might even need to rewrite the benchmark programs for it.

      Because they have not released any real benchmarks and only talk about theoretical numbers, I think they have not finished the porting fully (or have very disappointing benchmark numbers).

      Giving early access to LUGs would be nice for the street cred, but it will not speed the development of the mostly proprietary code that needs to run on it. Giving it to Gimp/Blender/other developers might work, if it comes with a crash course in Cell programming.
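The two-sided programming model the parent describes, explicitly staging data into each unit's small local store and running a hand-written kernel there, looks roughly like this (a Python sketch of the model only, not real Cell code; the 256 KB local-store figure is IBM's published number, everything else is illustrative):

```python
# Sketch of the Cell offload model: the host core cannot just call a
# function; it must copy ("DMA") a chunk of data into an SPE's small
# local store, run a hand-written kernel there, and copy results back.

LOCAL_STORE = 256 * 1024                   # bytes of local store per SPE
FLOATS_PER_CHUNK = LOCAL_STORE // 4 // 2   # half for input, half for output

def spe_kernel(local_in):
    # The kernel only ever sees its local copy; no direct main-memory access.
    return [x * x for x in local_in]

def run_on_spes(data, n_spes=8):
    out = []
    chunks = [data[i:i + FLOATS_PER_CHUNK]
              for i in range(0, len(data), FLOATS_PER_CHUNK)]
    for i, chunk in enumerate(chunks):
        spe = i % n_spes          # round-robin the work across the 8 SPEs
        local = list(chunk)       # "DMA in": main memory -> local store
        result = spe_kernel(local)
        out.extend(result)        # "DMA out": local store -> main memory
    return out

print(run_on_spes([1.0, 2.0, 3.0]))  # [1.0, 4.0, 9.0]
```

The point of the sketch is the shape of the work: every application that wants the APUs has to be split into host-side staging code and per-chunk kernels like this.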
  • I don't get it (Score:5, Insightful)

    by Anonymous Coward on Friday May 27, 2005 @12:19AM (#12651863)
    The Cell is just a PPC with 8 little miniprocessors tacked on. The miniprocessors have explicit control over and direct access to the contents of their own cache, but can only access data in awkward ways; and are super-optimized for vector/SIMD instructions and floating point operations, but are not so good at algorithmic or complex flow operations.

    The Cell's bonus processors are absolutely great for DSP and multimedia apps, such as the workloads the PS3 is built for.

    But, they are going to be at a strict disadvantage in data retrieval and pushing operations-- which is, incidentally, exactly what most servers, such as a file, web or database server, need to be best at!

    What kind of servers *ARE* these??
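For what it's worth, the parent's SIMD point can be made concrete: a vector unit applies one operation across several lanes in lockstep, which helps code shaped like the first function below and does nothing for code shaped like the second (illustrative Python, not Cell intrinsics):

```python
# SIMD-friendly work: the same arithmetic applied to every lane of a vector.
def simd_add4(a, b):
    # Conceptually one instruction; all four lanes add in lockstep.
    return [a[i] + b[i] for i in range(4)]

# SIMD-hostile work: pointer chasing with data-dependent branches, which
# is what file/web/database servers spend most of their time doing.
def list_find(head, key):
    while head is not None and head["key"] != key:
        head = head["next"]  # each step waits on the previous load
    return head

chain = {"key": "a", "next": {"key": "b", "next": None}}
print(simd_add4([1, 2, 3, 4], [10, 20, 30, 40]))  # [11, 22, 33, 44]
print(list_find(chain, "b")["key"])               # b
```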
    • Architecturally they're meant to be the business at shifting jobs to other processors, not just those on the same die.

      Makes you wonder why Sony wanted these for standalone consumer devices, but the on-the-fly clustering opportunities should be very attractive to a lot of IT shops.
      • I seem to recall Sony suggesting that in the future, all of your "standalone consumer devices" will be networked via ip over powerloop or something. The idea being that the Cell in your kettle and fridge can kick processing power to the PS3 in the living room.
    • Re:I don't get it (Score:5, Insightful)

      by chill ( 34294 ) on Friday May 27, 2005 @12:29AM (#12651912) Journal
      Unless you buy into the whole "utility computing" paradigm, like IBM does. In that case, servers are going to be doing more than just handing up files and indexing databases.

      Using a two-tier or three-tier approach to client/server architecture, with something like a full-duplex GigE connection to fat, diskless clients and you have some real potential.

      A fat client (512+ MB RAM, 1 Cell processor) that can use the backend for the heavier-lifting tasks would be a fantastic setup for a lot of businesses.

      -Charles

      • Re:I don't get it (Score:4, Insightful)

        by YU Nicks NE Way ( 129084 ) on Friday May 27, 2005 @12:41AM (#12651966)
        Sorry, but it really wouldn't. Think about it.

        If you're running a rendering farm, then the Cell is a great tool. (And, if you think about it, a game console is essentially a low end rendering farm.) If you're running a word processor, however, SIMD instructions are useless. If you're performing a standard query against a database, SIMD instructions are useless. If you're sending an electronic mail, hey, guess what? SIMD instructions are useless.

        I think that IBM Microelectronics is trying to Cell their new processor in the hopes of Celling their bosses on the (dubious) proposition that they can recoup the losses they've seen on their contract with Sony. They've packaged up the right buzzwords, and they're creating a lot of fog. I sort of doubt that it will work.
      • Using a two-tier or three-tier approach to client/server architecture, with something like a full-duplex GigE connection to fat, diskless clients and you have some real potential.

        Maybe it's late, but am I the only one who thought he was saying that IBM had "fat, dickless, clients"?

    • "What kind of servers *ARE* these??"

      I think Pixar may want one or two.
    • I think people who crunch numbers are getting woodies just now...
    • You're probably right-- Dell's not worried, but Cray, NEC, and Sun might be.
      • What are any of those companies worried about? The best parallel platform at the moment is owned by IBM as well. It's called Blue Gene.

        If I were any of those companies, I'd be a lot more worried about that.
        • Cost. Blue Gene costs a fortune.

          The PS3 using the Cell is going to drive the price of the base components down. While IBM blades may stay a bit pricey, you can bet there will be others at a fraction of the cost of big iron.
          • Ok. I know that Sun makes entry level machines that do this sort of business, but Cray? I thought that supercomputing was sort of their only business.
            • True. However, last I heard they were almost out of business. A last minute gov't contract saved them from getting dissolved; class action lawsuit about the stock; share price at a 52-week low; etc.

              I also can't remember who owns them now... ...

              Ah, bought by SGI. Spun to separate business unit. Sold to Tera. Renamed back to Cray.

              Rumors were they got the gov't contract because too much "supercomputing" was turning towards clusters and the gov't wanted to make sure at least one HPC company survived in th
          • Be careful. Do you know the difference between the words "cost" and "price"?
    • TCP offload, stream encryption, and compression are all things you could run on a cell SPE. Someone else mentioned building a database kernel to run on SPEs for simple queries. Sure, it's still more suited for numerical computation, but a lot of servers do some crunching on the side.
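A sketch of that idea, with a thread pool standing in for the SPEs and zlib standing in for a hand-tuned compression kernel (Python, purely illustrative):

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

# Eight workers stand in for the Cell's eight SPEs; each one gets a whole
# response body to compress while the "main core" keeps taking requests.
pool = ThreadPoolExecutor(max_workers=8)

def serve(bodies):
    # Fan the per-response compression jobs out, then gather the results.
    futures = [pool.submit(zlib.compress, body) for body in bodies]
    return [f.result() for f in futures]

bodies = [b"hello world" * 100, b"slashdot" * 500]
compressed = serve(bodies)
# Round-trips correctly, and is far smaller than the originals.
print(all(zlib.decompress(c) == b for c, b in zip(compressed, bodies)))  # True
```

Compression is a good fit for the model precisely because each buffer is an independent, self-contained batch of number crunching.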
    • but are not so good at algorithmic or complex flow operations .... they are going to be at a strict disadvantage in data retrieval and pushing operations

      Data retrieval? You have to remember that there's a 25.6 GByte/s bandwidth to main mem! And they each have 256k of single-cycle access memory (i.e. take your 32k L1 and shove it).

      It's not like the SPEs are only capable of SIMD vector operations... they are real CPUs with the memory limitation and single-thread limitation. It's just that they do have
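Those two numbers combine into a useful back-of-envelope figure: at 25.6 GB/s, streaming an entire 256 KB local store in from main memory takes about 10 microseconds, roughly 30,000 cycles at TFA's 3 GHz, so there is plenty of slack to double-buffer (arithmetic only, no claims about real-world latency):

```python
# Back-of-envelope: how long to stream one full SPE local store in from
# main memory at the quoted bandwidth?
bandwidth = 25.6e9        # bytes/second to main memory (quoted figure)
local_store = 256 * 1024  # bytes of local store per SPE
clock = 3.0e9             # Hz; "If operated at 3 GHz", per TFA

refill_seconds = local_store / bandwidth
refill_cycles = refill_seconds * clock

print(round(refill_seconds * 1e6, 2), "microseconds")  # 10.24 microseconds
print(round(refill_cycles), "cycles")                  # 30720 cycles
```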

    • Re:I don't get it (Score:3, Interesting)

      by Andy_R ( 114137 )
      "What kind of servers *ARE* these??"

      Cheap ones.... at least as far as IBM is concerned. A large chunk of the money customers are paying is no longer going to be poured straight into the bank account of their competitor, Intel. They can either make huge mark-ups, or more likely bump up the specs to amazing levels to add to the buzz around Cell.

      IBM's long-term strategy always has been to position the Power architecture as the successor to x86, so this is a logical move, after their success in ensuring that
  • These things work fine alone, but when connected together they really shine. Built-in clustering hardware interfaces make this a nerd's wet dream.

    Putting them together into a rackable case looks to be very cool and finally putting a nail in the Windows coffin will be a delicious treat for IBM (the Cell ain't x86).

    I can't wait to get my hands on my PS3 and see what I can do.

    In the meantime, I just wish IBM had Cell samples available for a reasonable price. I just can't afford one for hacking yet!
  • by Anonymous Coward on Friday May 27, 2005 @12:21AM (#12651876)
    "We demonstrated the prototype to show that Cell continues to mature..."

    ...I thought Gohan killed Cell?
  • by guyfromindia ( 812078 ) on Friday May 27, 2005 @12:22AM (#12651879) Homepage
    Guess it is time to invest in Sony and IBM! This technology really looks promising, especially when you read this article --> http://www.blachford.info/computer/Cells/Cell5.html [blachford.info]
    The first Cell-based desktop computer will be the fastest desktop computer in the industry by a very large margin. Even high-end multi-core x86s will not get close. Companies who produce microprocessors or DSPs are going to have a very hard time fighting the power a Cell will deliver. We have never seen a leap in performance like this before and I don't expect we'll ever see one again. It'll send shock-waves through the entire industry and we'll see big changes as a result.
    • It is exciting...but (not to sound trollish)...I'll believe it when I see it.
    • by soricine ( 576909 ) on Friday May 27, 2005 @12:47AM (#12652000)

      After you've read Blatchford's write-up, read this for a reality check:

      http://arstechnica.com/news.ars/post/20050124-4551.html [arstechnica.com]

      It uses such terms as 'hogwash' and 'wild-eyed and completely unsubstantiated claims'. Ouch.

    • by ad0gg ( 594412 )
      Cell is a normal PPC chip with 8 SIMDs [ibm.com]. Last time I checked, MMX didn't revolutionize PC chips, and MMX [x86.org] is SIMD instructions for Pentiums. Cell is nothing new; they just call the SIMD units a "Synergistic Processor Unit". Gotta love marketing speak.
      • Re:Uhhh (Score:3, Insightful)

        by gorim ( 700913 )
        The equivalent for Mac / PPC - altivec, velocity engine, or vmx (whatever you want to call it) certainly revolutionized that platform.

        The fact that on the x86 platform there was little revolution, or one little seen, may be more a reflection of the platform itself.

        Honestly, people who can't see the value of making true and powerful use of SIMD are missing the boat. That is what the future is all about.

        You look at your cellphone, mp3 player, mp4 codecs, digital tvs and radios, it is SIMD that makes all th
        • Re:Uhhh (Score:3, Informative)

          by TheRaven64 ( 641858 )
          The reason SIMD did not really revolutionise anything on x86 is that MMX sucked, and sucked badly. It added extra registers, which could not actually be used 90% of the time (just adding half a dozen GPRs to the x86 ISA would have given more of a speed boost than MMX in many cases, as x86-64 has shown). It required a context switch (very expensive on x86) to use, and it could not be used at the same time as the regular FPU (as I recall). Oh, and I seem to remember that it only worked with very small vect
      • That's because MMX was so badly designed. Overloading it on the floating point registers was just plain brain dead.

        Go write some VMX code and compare it to MMX - especially if you have any streaming/number crunching to do.

      • The only reason MMX instructions exist is because at the time (about 1996), there were some large companies with initiatives to build powerful multimedia chips for PC add-on cards.

        MMX was introduced by Intel to kill off those initiatives.

    • Keep your money in Sony. Chip fab doesn't make a lot of IBM's money, IIRC. The PS3, OTOH, is a huge part of Sony's future.
  • by Lehk228 ( 705449 ) on Friday May 27, 2005 @12:23AM (#12651888) Journal
    Wasn't the benefit with Cell supposed to be that the programmable DSPs worked somewhat like pixel shaders, except useful for all kinds of complex serial data, so that operations on serial data could be massively improved? That does not seem to me like it would be a major help in a server, unless it is running a specialized app that just happens to sit on a server for data access, rather than using the Cell to speed up web servers etc.
    • by NovaX ( 37364 ) on Friday May 27, 2005 @01:10AM (#12652120)
      I think it's because web servers have thread pools, so a Cell processor could handle many of these light-weight threads simultaneously. This makes it perfect for a blade server.

      Sun's Niagara is aimed at this market, where the work comes in great quantity rather than as huge number crunching. This could mean searching, web page serving, and streaming media. So if you need to handle thousands of requests, this type of processor is ideal. Of course, we won't truly know until one of these massively multicore beasts is out in the wild and can be tested in a realistic scenario.
    • A straight webserver, I doubt, would be helped much, but I bet a database or something similar doing lots of sorting/searching could probably be greatly improved by the design of the Cell architecture. They tend to deal a lot with organizing and comparing serialized data.
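The sort/search idea maps naturally onto chunk-sized local stores: sort runs that each fit in one local store, then merge the sorted runs on the host. A Python sketch of that pattern (the 1024-element chunk size is arbitrary and illustrative):

```python
import heapq

# Sketch of a Cell-style sort: each chunk is small enough to be sorted
# entirely inside one SPE's local store, then the host does a k-way
# merge of the sorted runs.
CHUNK = 1024

def chunked_sort(data):
    runs = [sorted(data[i:i + CHUNK]) for i in range(0, len(data), CHUNK)]
    return list(heapq.merge(*runs))  # k-way merge of the sorted runs

data = [(i * 37) % 1000 for i in range(5000)]
print(chunked_sort(data) == sorted(data))  # True
```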
  • by wyldeone ( 785673 ) on Friday May 27, 2005 @12:27AM (#12651904) Homepage Journal
    "If operated at 3 GHz, Cell's theoretical performance reaches about 200 GFLOPS, which works out to about 400 GFLOPS per board"

    From TFA. Interesting, considering that they're claiming the PS3 will run 5-10 times faster than this.
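TFA's numbers are easy to reconstruct from the commonly published architecture figures, and the per-board number implies two Cells per blade (treat this as a sanity check; the SPE count and SIMD width below are the usual published assumptions, not from TFA itself):

```python
# Reconstructing TFA's "about 200 GFLOPS" from the architecture.
spes = 8           # synergistic processing elements per Cell
lanes = 4          # 4-wide single-precision SIMD
flops_per_fma = 2  # a fused multiply-add counts as two flops
clock = 3.0e9      # "If operated at 3 GHz", per TFA

per_cell = spes * lanes * flops_per_fma * clock
print(per_cell / 1e9, "GFLOPS per Cell")       # 192.0, i.e. "about 200"
print(2 * per_cell / 1e9, "GFLOPS per board")  # two Cells per board
```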
  • I'm just curious (Score:4, Interesting)

    by SixDimensionalArray ( 604334 ) on Friday May 27, 2005 @12:27AM (#12651906)
    Ok, so the way I see it, we have invented a lot of ways to increase our MIPS and our processing power.. something along the lines of this->

    1) Single CPU
    2) Multiple CPU
    3) Multiple Machines in a grid with single CPUs
    4) Multiple Machines in a grid with multiple CPUs
    5) Multiple grids with many machines
    6) Multiple cores in a single CPU
    7) Multiple cores in multiple CPUs
    7) Multiple cores in multiple CPUs in a grid
    8) ..what next?

    We also went from 8-bit to 16-bit to 32-bit and now to 64-bit and beyond. 64-bit words... nice! Of course, more parallelism means more threads for more simultaneous processes, and a 64-bit word is twice as wide as a 32-bit one, but what next?

    It's truly mind boggling, and it's a great time to be in IS/IT!

    What I want to know is, how much further? How can we increase the multiples more? For example, what happened to quantum processing and multiple states for a bit instead of 0 and 1? When can I count my bits 0, 1 and .5? Any supercomputer geeks care to postulate?

    • by BiAthlon ( 91360 )
      Ok, I've got five mod points and you make me post something to this story instead of mod.

      What would you do with a 'bit' that was "pretty close to 1" or "just a bit over 0"? You no longer have any exact state of data which every language I've ever used has depended on.

      I like my 1's and 0's just fine thanks ;)
      • Let's say you had a computer that stored decimal digits. The number of digits necessary to hold the value 9 is 1, compared to 4 bits (1001); the number of digits needed to store the value 256 is 3, compared to 9 bits (100000000)... so on and so forth. If a digit (or a trit, or a hexit, or whatever) could be implemented in hardware in the same way as bits, we would drastically improve storage capacity.

        There's also absolutely no reason that a digit can't be treated as boolean... if it's zero, it's false
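The parent's digit counts check out, and the general rule is just a change of logarithm base (a quick check, using integer arithmetic to avoid float rounding):

```python
import math

# How many symbols does it take to write n (n >= 1) in a given base?
def digits(n, base):
    count = 0
    while n:
        n //= base
        count += 1
    return count

print(digits(9, 10), digits(9, 2))      # 1 4   (9 -> "9" vs "1001")
print(digits(256, 10), digits(256, 2))  # 3 9   (256 -> "256" vs "100000000")
# Information per symbol grows as log2(base): a decimal digit carries
# about 3.32 bits, a trit about 1.58.
print(round(math.log2(10), 2), round(math.log2(3), 2))  # 3.32 1.58
```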

      • Here's one possible example (from a patent):

        "Trinary signal apparatus and method

        Extended trinary signal apparatus includes window comparator logic having first and second inputs for first and second trinary input signals, wherein each the trinary input signal can be a high, low or mid state, and an output for outputting signals dependent on the states of the first and second trinary input signals. A switch, which is connected to one of the first and second inputs, can be selectively activated in one phase
    • 7) Multiple cores in multiple CPUs
      7) Multiple cores in multiple CPUs in a grid
      8) ..what next?


      Profit!!!!!!!
      Actually, you have two sevens, 8) should read: 9) Profit!!!!!!!
    • Re:I'm just curious (Score:2, Interesting)

      by birge ( 866103 )
      What I want to know is, how much further? How can we increase the multiples more? For example, what happened to quantum processing and multiple states for a bit instead of 0 and 1? When can I count my bits 0, 1 and .5? Any supercomputer geeks care to postulate?

      Don't worry about quantum computing. It's only going to help the NSA, as there are only a limited number of algorithms for which it will be worth it, namely factoring large numbers. The power requirements are going to be huge, and by the time they figure ou

  • OS X on Cell? (Score:3, Interesting)

    by Mr. Flibble ( 12943 ) on Friday May 27, 2005 @12:33AM (#12651934) Homepage
    Imagine OS X on cell... with the collusion between Apple and IBM, and OS X running on open hardware... This could be the killer OS that supplants windows.

    Linux won't do it (not in the desktop arena; it does kick ass in the server arena, though) but OS X very well could.

    That would be something to see, and I would bet that much software that was OS X capable on Cell would ALSO be Linux capable (perhaps a recompile by the vendor? Maybe native... not certain here.)

    Would be nice to have a stable easy to use OS as the dominant platform. Of course, the irony would be that if this did become the case, then I suppose that Apple would eventually become as lazy and as dominant as Microsoft.

    *sigh*

    Still, nice to dream!
    • I feel this may find itself in an Apple appliance. Maybe an advanced multimedia device that features an embedded version of OS X and iLife and can connect to an HDTV. In two years, the desktop may be replaced with application-specific devices and laptops. Only professional systems may remain.
    • Re:OS X on Cell? (Score:5, Insightful)

      by nokiator ( 781573 ) on Friday May 27, 2005 @01:16AM (#12652142) Journal
      There are several contexts which can lead to Cell processor being used in future Apple platforms:
      • As a media co-processor in next generation PowerMacs, and potentially even high end iMacs, similar to "AV" badged Macs from a few years back. Cell can work as a pretty good general purpose media co-processor to offload video encode/decode operations from the main processor(s). Even the current high end dual processor PowerMacs are being challenged when decoding HD H.264. A co-processor that can enable real-time H.264 encoding would make a big impact on the user perception.
      • As a physics modelling co-processor for Macs to accelerate animation and games. This is really what the Cell processor is designed for in the first place, and there is likely to be plenty of libraries/engines written for PS3. This will go a long way to eliminate the existing perception that Macs are inferior game machines. The same capabilities can be used by professionals for 3D animation work.
      • As the core of a home media center that can encode/decode/store/stream video/audio. If the Cell can fit the thermal and cost constraints of a game console, it would also be a good fit for a next generation media center.
      • Re:OS X on Cell? (Score:5, Interesting)

        by gsfprez ( 27403 ) on Friday May 27, 2005 @01:53AM (#12652271)
        anyone here who's ever worked on Final Cut, After Effects, Motion, Logic, Shake, or Maya, or any of hundreds of applications that often require seconds to HOURS of rendering, can imagine Cell processors in their Macs, you collective morons who say stupid things like "why do you need a DSP processor to make file serving go faster"?

        Some of us in content creation could use a little help here...

        That some people are stumped by the utility of Cell processors in Apple-built blade servers taking the place of Xserves, or by what purpose 1-4 Cells would serve in the average Joe's Mac, makes me really doubt the usefulness of our public schools.

        Power users being able to add 15 animation effects with translucency and kinetics in Motion to a video with 8 layers of HD video, and then watch it copy straight to a DVD-R automatically, without rendering time, makes us wet with anticipation...

        and wondering when the hell hard drives are going to be able to catch up.

        I would easily give up my right nut for an Apple-based blade server now that the Pro apps are starting to use Xgrid for co-processing the heavy-lifting portions of our work. My 2-hour DVD projects still take well over an hour to render 2-pass MPEG2 on a high-end DP G5. The Cell could do multiples faster than realtime.

        But the average Joe? Why does Safari need an 8-core DSP? It doesn't, dumbass. But that's not all people do with their Macs...

        That iDVD render? What render? The lag is now your DVD burner - 100%.

        Encoding settings for your iPod, vs. encoding settings for your files. iTunes could EASILY convert - on the fly - Apple Lossless encoded files to some kind of smaller, lossy codec as it filled your iPod. No waiting except for your iPod's slow-ass hard drive.

        All this while the Cell still uses a basic G5 at its core... so, no, Word isn't going to get any punch, but if Cell processors are as cheap as G5s, then what the hell is the issue here?

        I'm damn ready for radical leaps in DSP... I'm fscking sick and tired of watching progress bars, DAMNIT! And if the Cell can do everything IBM says it can, then hell yeah, bring it on.

        Server guys - try to think beyond your damn file services.
    • I'll take Linux on Cell over OS X on anything any day.

      Apple is already as arrogant and obnoxious as Microsoft. For example, despite the fact that OS X in its current form would not exist without the efforts of the Open Source community, Apple is still actively working behind the scenes [macworld.co.uk] in Europe to destroy the ability of the open source community to work with their proprietary formats [news.com.au].

  • ! Graphics only (Score:5, Insightful)

    by theid0 ( 813603 ) on Friday May 27, 2005 @12:34AM (#12651939)
    I've been trying to ignore everybody's outspoken assumptions about the Cell being a graphics chip which can't do general processing for a desktop computer. The fact is that it's really a multi-core chip with loads of vector processing capacity. It might not be as fast on a single-threaded task, but the software world is going to adapt quickly to this type of setup because it's where the hardware is going. No semiconductor lab can (cost-)effectively compete in a megahertz race anymore, so more power = more transistors (more cores).

    Server programs are ahead of the curve at this point because they've had multiple CPUs in abundance for a long time. However, even today it doesn't make sense for games like Doom III to avoid taking advantage of this hardware where possible (for instance, G4/G5 systems have had dual processors for YEARS, but id won't use them properly). For Pete's sake, calculate audio on one processor and AI on the other...
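The "audio on one processor, AI on the other" split is coarse-grained parallelism of the easiest kind: independent per-frame subsystems farmed out to separate threads. A minimal sketch (the subsystem functions are made-up placeholders, not any engine's real API):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-frame subsystems. They are independent of each other,
# so they can run on separate cores while the render thread does its job.
def mix_audio(frame):
    return f"audio:{frame}"

def update_ai(frame):
    return f"ai:{frame}"

def run_frame(pool, frame):
    # Kick both subsystems off in parallel, then join before the next frame.
    audio = pool.submit(mix_audio, frame)
    ai = pool.submit(update_ai, frame)
    return audio.result(), ai.result()

with ThreadPoolExecutor(max_workers=2) as pool:
    print(run_frame(pool, 1))  # ('audio:1', 'ai:1')
```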
    • My guess as to why id does not support this is manpower vs. usage. IIRC, you could use dual procs with Quake 3 only on NT/2k/XP... This is likely because Carmack realized that the most common userbase for dual-proc gaming would be users on a Windows machine. With significantly fewer users on Mac and Linux, and even fewer of THOSE running dual procs, it probably did not make much sense for Carmack to change the code on those OSes to add dual-processor capability.

      Carmack is one smart cookie. I suspect t
    • Actually, as far as I know, Doom3 is a multithreaded engine. I have an opteron-based system, running 64-bit linux (but a 32-bit linux binary, as id won't release a 64-bit one). I went from 1 to 2 processors and saw a 25% increase in framerate. AND keep in mind that I run at a resolution (1280x1024x32 with all effects enabled) where graphics tend to be more GPU bound (in my case a stock GeForce 6800) than CPU bound.

      Anyway, the cell will hopefully drive forward the adoption of multithreaded game engines (for
  • by AntiFreeze ( 31247 ) * <antifreeze42.gmail@com> on Friday May 27, 2005 @12:35AM (#12651941) Homepage Journal

    This won't go anywhere if IBM doesn't clean up its blade management console.

    I've been doing extensive research on blade servers recently for my company, and when it comes down to it, IBM's centralized management for blade servers is hands down the worst in the industry. RLX used to be the best, but they're out of the business now. HP was #2, now they're the leader. Egenera is doing some really cool things, but their setup is just way too expensive (almost 5 times the price of the other leading blade systems).

    So, even if these cell blades were to be the coolest thing ever, if IBM doesn't make an investment into improving their management software, no one's going to buy these things unless they already have a large investment in IBM hardware or are just downright masochistic.

    Basically, what it comes down to is: someone needs to buy the RLX software, which is on the market now. If I were IBM, I'd buy it and retool it for IBM blades. What I'm scared of is Dell buying the RLX software. Dell blades suck, but with the RLX console even I would consider buying Dell blades; that RLX management software is just that good.

    In short, if I were IBM, I'd buy RLX in a second, and catapult myself to being the industry leader in blade servers.

    • I'll one-up your bid: IBM should either 1) put up a million-dollar bounty for open source blade management software, or 2) buy RLX and open it to us!

      It's more likely that the first will happen, as IBM has had problems enough acquiring others' software and using it for any purpose (take a look at the SCO case). And with their investment so deep in Linux right now, I'd say this is just a bit over the horizon for Big Blue.
    • IBM has a vested interest in making things difficult and complicated for its customers. After all, it makes its money from support.
  • I am curious to see how this will work out, especially since the Apple+Intel article came out in the Wall Street Journal.

    (Think Secret's take: http://www.thinksecret.com/news/0505itunes49.html [thinksecret.com])

    I think this is a better indication for Apple's future processors, as opposed to the Intel rumours.
  • Deep thought... (Score:3, Interesting)

    by kernel_dan ( 850552 ) <slashdevslashtty ... m minus math_god> on Friday May 27, 2005 @12:42AM (#12651968)
    If IBM has ported the Linux kernel to the Cell processor, does that mean that they have to release the source code, as a derivative work under the GPL, if they ever sell a Cell blade with Linux?
  • x86 emulation? (Score:4, Interesting)

    by Timbotronic ( 717458 ) on Friday May 27, 2005 @01:00AM (#12652058)
    Just wondering: could one or more of the supplementary cores be used for translating x86 instructions to RISC (and back) for the Cell's main processor? I'm not really familiar with the Cell's architecture, but it'd be interesting to see what companies like VMware could do if this were the case.
  • Low enough heat... (Score:3, Interesting)

    by Anonymous Coward on Friday May 27, 2005 @01:13AM (#12652128)
    Well,

    If the Cell has low enough heat to be fitted in a blade, perhaps a future version could be cooled well enough to find its way into a PowerBook?

    Would *that* shut up the "Apple has to switch to Intel to have faster cooler laptop chips!!! or they're D000000Med!!!!! " crowd? Maybe? Perhaps?


    • Have you ever seen... or even listened to... the kind of fans IBM has in their BladeCenter?

      Two large drum fans - the entire thing sounds like a jet engine when it powers up. It takes a while before it settles down to a much, much slower speed, but I suppose that surplus cooling capacity could prove handy for shipping a blade with high heat-dissipation needs.
  • Heat sinks (Score:2, Interesting)

    by CBob ( 722532 )
    You'd think that with all the time & $ invested, they'd at least show 'em off with active cooling a bit more advanced than the BIG sink/BIG fan combo.

    An alpha teaser I wonder, or a bit of intended misdirection?
  • I wonder (Score:2, Interesting)

    by bersl2 ( 689221 )
    Was it really an engineer who said these things?

    If so, did he say them of his own accord, or was he instructed to say certain things? And even if that is so, it is still refreshing to hear somebody besides a marketing or management bot speak to the press.
  • show us the numbers (Score:4, Interesting)

    by cahiha ( 873942 ) on Friday May 27, 2005 @03:27AM (#12652605)
    With Cell, IBM keeps talking about "theoretical GFLOPS". I don't care about theoretical numbers. What I care about is how fast the thing runs when I run normal code compiled with a normal compiler and (possibly hand-optimized) numerical libraries.

    So, what kind of SPECfp numbers does the thing get? What kind of BLAS performance does it get?

    They have 2.6.11 running on it, so compiling the benchmarks should be trivial. If they haven't published anything yet (I haven't seen it), we have to believe that the numbers are less than impressive.

    (Another company used to make inflated claims about the performance of their processors by computing theoretical maximums for a few SIMD instructions, unachievable in most real code. When people actually did some real benchmarks and published them against the wishes of the company, they found that their processor was no faster MHz for MHz than Pentium on real code with real compilers.)
    • The Cell is basically a vector processor, so it's not going to be really fast for compiling or things like that. IBM just took the opportunity of the PS3 to develop the perfect processor for supercomputing (whose tasks are often matrix-based). Servers with Cell? No advantage, or very little. Games? Becomes interesting, but that's all. Supercomputing? Here you are.

      I have to say, this Cell is really a great marketing coup! Everyone is speaking of this processor, even in the biggest newspapers of the mainstre
    • by argent ( 18001 )
      What I care about is how fast the thing runs when I run normal code compiled with a normal compiler and (possibly hand-optimized) numerical libraries.

      It'll run them exactly as fast as any other PPC 970 core. As far as I can see from the information that's been released so far, to use the coprocessors at all you'll need to redesign your application around an asymmetric coarse-grained parallel processing model, with explicit memory management to feed data to the shared RAM the SPUs have access to.
  • IBM sucks! (Score:4, Funny)

    by IntergalacticWalrus ( 720648 ) on Friday May 27, 2005 @04:49AM (#12652975)
    They are ruining our "Yeah, but can it run Linux?" jokes by going right ahead and using it in their first demos!
    • by iapetus ( 24050 )
      Even worse - with so many SPEs, they're even making "Imagine a Beowulf cluster..." comments redundant. And because it has two Cells, two sets of RAM, etc., they've made snide remarks about dupes redundant too.

      The only consolation is that with this new processor, Dragonball Z 'Perfect Cell' jokes will never get old.

      Oh, wait...
  • by FellowConspirator ( 882908 ) on Friday May 27, 2005 @06:50AM (#12653400)
    I don't know why people pan these things as servers. Are people not aware that there's more to contemporary computing than HTTP daemons and database transactions?

    I work in the biotech industry, and we use compute farms and grids for all sorts of computationally intensive tasks: biopolymer sequence alignments, docking simulations, protein modeling, high-throughput 3D mass spectral analysis, etc.

    A server with cell-blades and some minor tweaks to our software would generate a tremendous "bang-for-the-buck".
