IBM to Drop Itanium 181

Hack Jandy writes "Xbitlabs is reporting that IBM chose not to pursue Itanium in their next-generation server lineup because of the "market acceptance issues" of the platform. They will still continue with new revisions of Xeon servers, however. With IBM's investments in Power, I can't help but think the writing was already on the wall. The article also hints that IBM might start using Power in their high end server products."
This discussion has been archived. No new comments can be posted.

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Sunday February 27, 2005 @03:02PM (#11795372) Homepage Journal
    No one outside of certain specialized environments that demand loads of floating point is interested in itanic. Flush the damn thing already and work on EM64T, will you intel?

    WTF does "The article also hints that IBM might start using Power in their high end server products" mean anyway? The processor is called POWER, and IBM already uses it in their high-end server products, like the ones that used to be called RS/6000. As for Power, well, show me a transistor that works without it.

    • AMD64 (Score:4, Insightful)

      by bstadil ( 7110 ) on Sunday February 27, 2005 @03:20PM (#11795484) Homepage
      work on EM64T, will you intel?

      baloney, call it by its proper name AMD64

      • If I'm addressing anyone but intel, I will. I don't think intel will listen to me if I rub it in their face :)
      • Re:AMD64 (Score:5, Interesting)

        by Anarke_Incarnate ( 733529 ) on Sunday February 27, 2005 @04:16PM (#11795866)
        AMD64 is the real deal. EM64T is a kludge that is mostly compatible with AMD64. AMD64 has better performance and better handling of >4GB RAM.
    • by IGnatius T Foobar ( 4328 ) on Sunday February 27, 2005 @04:01PM (#11795772) Homepage Journal
      IBM already uses it in their high-end server products, like the ones that used to be called RS/6000.

      Actually, that hardly does it justice. pSeries (formerly RS/6000), yes, but also iSeries (formerly AS/400) is now POWER. The new OpenPower [ibm.com] line of systems from IBM can run AIX, i5/OS (formerly OS/400), and Linux. In fact, it can run them simultaneously thanks to IBM's really good server partitioning technology (you can partition down to 1/10 of a CPU!).

      I'm currently doing some development work on one of these boxes (running Linux on POWER) and let me tell you, it just smokes. Runs circles around Itanium, even before you start parallelizing (which is usually the case, since you're always going to have a dual-core chip, maybe even several of them).

      IBM has absolutely no reason to continue supporting Itanium. It doesn't buy them anything. Itanic is an architecture nobody wants. If Intel hadn't sunk so much R&D into it while still being able to live off the revenue from their 32-bit processors (and now, their AMD64 clones), Itanium would have been shelved a year ago.
    • The processor is called POWER, and IBM already uses it in their high-end server products, like the ones that used to be called RS/6000.

      I think what they're talking about is using them in mainframes (zSeries) [com.com], which currently use a different processor than the iSeries (AS/400) or the pSeries (RS/6000). Apparently they're going to converge the hardware of their server lines as much as possible, and differentiate them mostly through the OS.

      Makes sense, if they can leverage the same technologies across all
  • From the summary: (Score:5, Informative)

    by Noose For A Neck ( 610324 ) on Sunday February 27, 2005 @03:03PM (#11795376)
    "The article also hints that IBM might start using Power in their high end server products."

    What? IBM already uses POWER in its high-end server products. What do you think they develop it for, anyway?

    • Re:From the summary: (Score:4, Informative)

      by superpulpsicle ( 533373 ) on Sunday February 27, 2005 @03:08PM (#11795419)
      http://www.llnl.gov/computing/tutorials/ibm_sp/

      Here's a link to the history of IBM's POWER processors. This is one of the best sites out there IMHO, and it still seems mighty confusing.

      IBM never had a good history of marketing their processors the way Intel and AMD do. They fight competition with raw numbers.

      • Re:From the summary: (Score:5, Interesting)

        by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Sunday February 27, 2005 @03:13PM (#11795441) Homepage Journal
        IBM doesn't market IBM products. They market IBM. You buy into IBM, you do everything their way, and everything will work. These days, they add "...and you will get the best possible performance" to that, too. Nothing out there has the performance of the latest-generation POWER processors. As IBM has been busy proving by building supercomputer after supercomputer, it scales pretty well too :) Granted, highly parallel opteron processors are pretty slick, but given a level playing field I know what I'd pick. Intel introduced itanium too late and at too high a cost to make enough inroads before opteron started to take off, and now itanic has no chance to proliferate enough to ever become inexpensive. IBM has been putting work specifically into making their cores as modular as possible, so they can easily turn them into other versions, ever since the PowerPC 601, which is why new PowerPC cores so closely follow the release of new POWER cores.
    • Re:From the summary: (Score:4, Informative)

      by Henriok ( 6762 ) on Sunday February 27, 2005 @07:14PM (#11797471)
      What? IBM already uses POWER in its high-end server products. What do you think they develop it for, anyway?

      No they don't!
      The pSeries and iSeries aren't considered "high end" by IBM; they are considered low-end and midrange servers. The high end is the zSeries, and those don't use POWER/PowerPC processors just yet. Word has it that the future POWER6 processor will converge the three server lines on one processor platform. The eClipz project is tied very closely to this. "e" as in eServer, "l" as in Linux, "i" as in iSeries, "p" as in pSeries and "z" as in zSeries will... eclipse the Sun.
      • by macshit ( 157376 ) * <snogglethorpe@NOsPAM.gmail.com> on Sunday February 27, 2005 @07:50PM (#11797762) Homepage
        The eClipz project is tied very closely to this. "e" as in eServer, "l" as in Linux, "i" as in iSeries, "p" as in pSeries and "z" as in zSeries will... eclipse the Sun.

        I knew IBM had good research people, but I never realized their acronym technology was that far advanced! My god, with an acronym like that ... their competitors might as well just give up now and save themselves the embarrassment.
  • I'll miss it (Score:5, Insightful)

    by m50d ( 797211 ) on Sunday February 27, 2005 @03:05PM (#11795392) Homepage Journal
    Say what you like, but Itanium was a nice architecture. The compiler is the proper place for the optimisations, the processor should be left to do the actual processing. It's still the most efficient way I know to do raytracing or anything multimedia-y, and I predict there will still be a market for them for some time.

    On second thought, maybe they'll start appearing cheaply on ebay. That'd be nice.

    • Re:I'll miss it (Score:5, Insightful)

      by Schweg ( 730121 ) on Sunday February 27, 2005 @03:13PM (#11795445)
      With Itanium, Intel attempted to tackle a set of issues that isn't new, and other companies have tried (and failed at) before. It's hard to shift almost all of the burden of optimization, because it's a case of "early optimization" (the root of all evil). Optimization by the processor at run-time allows one to deal with data-dependent issues, and base decisions on statistics gathered by modern processors (such as branch history, caching behavior, etc). Intel made a good try at it, but ended up making a very power-hungry processor that exposed a lot of complexity to the programmer, and whose advantages compared to other processors on the market were not very clear.
      • Re:I'll miss it (Score:3, Insightful)

        by johannesg ( 664142 )
        because it's a case of "early optimization" (the root of all evil)

        While you may very well be right on this issue, it is taking this quote very much out of context. Early optimization in software is bad because it tends to reduce maintainability and wastes effort on code that is likely not performance critical anyway. In contrast, the need for maintainability in compiler-generated assembly is questionable, and it doesn't really matter if the compiler spends some extra time optimizing every last statement

        • Sorry for ranting, but I can't stand it when people take these kinds of "common wisdoms" and then display a complete lack of understanding of the actual issues behind them...

          Only if you view the issue in the most naive sense. There are a number of reasons to avoid early optimization, and code maintainability is only one of them. The issue I was referring to is attempting to optimize before you know what conditions to optimize for. Early assumptions (e.g. a general case, "what kind of sort should I use

    • Re:I'll miss it (Score:3, Insightful)

      by Waffle Iron ( 339739 )
      The compiler is the proper place for the optimisations, the processor should be left to do the actual processing.

      On the contrary, the compiler has no insight into the actual run-time behavior of the current dataset, and compiler development can lag updates in CPU features by many years.

      Nobody knows better how to optimize for the exact version of the CPU that a program is being run on than the CPU itself.

      • Compiler development can lag the CPU, but it shouldn't. The compiler knows how to optimize for the CPU better than the CPU does, because the compiler is dedicated to and designed for optimizing, whereas CPUs were originally meant to occasionally find time to, you know, actually do the processing. Assessing run-time behaviour is overrated. The CPU is deterministic; anything you can do at run-time you can do at compile-time - just add a detection routine if you really need to, and most of the time you won't. Certainl
    • Maybe in an academic, theoretical sense it's the "proper" place for optimization, but as far as I can tell, the lack of good compilers is one of the major misfeatures that ended up stalling adoption until better, more compatible architectures like x86-64 came out.
      • x86-64 is not better, it's more kludgy and horrible than even x86. There are at least 3 levels of emulation going on there; it might seem nicer from a high-level perspective, but working with it is really horrible. Yes, the lack of a good compiler for a while is what killed Itanium, but it really didn't deserve to die, especially now that icc is working, and working well.
    • Re:I'll miss it (Score:2, Interesting)

      by e-r00 ( 559774 )
      I'll miss it too. It was the very best CPU for our computationally-intensive applications. It usually won against P4s and Athlons with 2 times higher clock speeds... We'll miss you, Itanium :(
    • It was a helluva space heater - even if you didn't need it for computin'.
    • I'd like to see Microsoft's .NET framework really take off. If we get to the point where Windows and all the apps for it are JIT compiled, I think different architectures would have a much better chance of succeeding. I wonder where Itanium would be today if you could run your normal OS (and let's be honest, Windows is the normal OS for most people) and apps on it when it was introduced a few years ago...
      • Re:I'll miss it (Score:1, Interesting)

        by Anonymous Coward
        Java has had that for ages, and does some very neat CPU-specific optimisations.

        I don't know about the Java performance on Itanium in particular though.

      • Open source would solve the same problem too. Once a kernel such as Linux, a libc, and a compiler are ported, most of the userspace apps are quite easy to compile and run, especially since 64-bit open source OSes have been around for years, so a lot of code is 64-bit clean nowadays. Just look at how quickly Linux supported Itanium and AMD64.
    • by extra the woos ( 601736 ) on Sunday February 27, 2005 @04:54PM (#11796127)
      If Intel had released them at a low price and with affordable desktop motherboards, it would have been a different story. If an average geek could build the latest Itanium system for $200 more than the latest Athlon system, people would buy it because it's something different, it performs well, and they want to mess with the architecture. It should have been marketed like the P-Pro: too much for the average desktop user, but if you want one you can afford it and you can build it/buy it!
      • Precisely. Intel wanted the midrange market of $5-10K servers so it could keep it all to itself... rather than evolving the community as before. Itanium came out compatible with NOTHING... the 32-bit support was "hacked" on only in the second version of the chip.

        Intel's real problem was relying on Microsoft as the sole support for the chip. They pretty much blew off Linux as an also-ran until MS screwed them over with an overpriced and undersupported version... funny how that happens!

    • Others have already corrected this comment but I feel I have to correct it too, because the notion is widespread and quite wrong.

      Often the compiler simply CANNOT KNOW much about what is going to happen at run time. You should not expect an "omniscient compiler" to be developed any more than you should expect "omniscient weather forecasting" to be developed. In the absence of good information about the future, you have to be able to dynamically adapt to changing conditions. JIT compilers can do this to some
      • The compiler can know quite a lot. A compiler can be updated and modified much more easily than a hardware optimiser. And a ridiculous proportion of the typical x86 chip is devoted to optimisation rather than actual processing.
    • Re:I'll miss it (Score:3, Insightful)

      by dvdeug ( 5033 )
      The compiler is the proper place for the optimisations, the processor should be left to do the actual processing.

      The theory behind the highly-touted JIT optimizations for the JVM is that it's often better to optimize at run-time, when you know the data, than at compile time, when you don't. And compilers don't usually have even the minimal knowledge the programmers have about which branches will be taken.

      Intel's iAPX432 should have warned it about depending too much on the compiler. The iAPX432, the repla
      • Intel's iAPX432 should have warned it about depending too much on the compiler. The iAPX432, the replacement for the 80286, was an intrepid chip of unique design that was sunk, in part, by a lack of compilers that could create competent code for it. The benchmark compiler would always use the 700-cycle procedure call instead of the cheaper, more specialized procedure calls available, for example.

        Actually the iAPX432 predates the 8086 ... like Itanium it was an attempt by Intel to make a quantum leap. The

      • There is some benefit to JIT, but a lot of what is being claimed simply isn't true. I have yet to see any real performance increase over an ordinary interpreter except in some specialised benchmarks, and the slowness of java apps belies the claims of faster performance than compiled code. Yes, Itanium was perhaps too dependent on the compiler. But there are good modern compilers, and I think if it had been launched today it would work. I don't blame intel for overestimating compiler ability, I would have do
  • Cell ? (Score:5, Insightful)

    by polyp2000 ( 444682 ) on Sunday February 27, 2005 @03:07PM (#11795406) Homepage Journal
    Let's face it: when Cell formally arrives, there's going to be little point in ploughing resources into something that's effectively headed for obsolescence.
    • Re:Cell ? (Score:2, Informative)

      by Harry Balls ( 799916 )
      Cell is very fast, but only has single precision floating point, i.e. it will not qualify for scientific applications, which demand double precision.

      Cell is going to be great for gaming and rendering and such, but you won't see scientific applications running on it any time soon.

      • Cell is very fast, but only has single precision floating point, i.e. it will not qualify for scientific applications

        Take a look here [slashdot.org].

        I also remember reading a while ago on slashdot that IBM is intending to use it on high end Unix workstations.
        [Although I can't remember if it mentioned scientific or rendering applications.]
        Tried searching for that post ... to no avail.
      • This is no flamebait. Single-precision floating point performance is on the order of 250 GFLOPS. However, double-precision is less than 10 GFLOPS.

        Cell will be good for graphics and gaming. It won't be useful for most scientific apps because of the double-precision issue. It won't be used on the desktop because of backwards compatibility and programmability issues.

    • The Cell will be heavily used in game consoles, that's about it. Don't buy into the IBM marketing machine.

      Sorry, but having 8 SPE "coprocessors" or whatever you call them is not for general-purpose computing. Programmers can hardly deal with multithreading on a symmetric multiprocessor. What makes you think that programming a heterogeneous machine will be easier?
      • From what I understand, programming the Amiga was a lot easier than programming the PC at the time, and the Amiga was essentially a "heterogeneous machine", as you describe.
        • Well, the processor was a much cleaner design, no segmented memory etc., and most of the time you were programming the CPU. You also could guarantee the hardware and OS (or you could totally bypass the OS), whereas on an x86 machine you could have any number of incompatible sound/graphics cards and incompatible methods of accessing memory above 640KB.
  • Getting leaner, IBM? (Score:5, Interesting)

    by osewa77 ( 603622 ) <naijasms@NOspaM.gmail.com> on Sunday February 27, 2005 @03:10PM (#11795424) Homepage
    First they drop a PC line that was not making them money. Then they drop a server line that's clearly not the future of that space. I think they're making some right decisions here. If the POWER platform succeeds, as it more likely will with resources focused on it, and it is accepted as a viable alternative to the PC platform, the ensuing competition would probably be good for all of us.
    • Even as a POWER supporter, I find it hard to say it will be accepted as a viable alternative to the PC platform. POWER has existed since 1994 and it's failed to make a huge dent in x86, even though it has always been much faster.
      • POWER (or at least PowerPC) has a good chance to dominate the midrange server market as that market increasingly accepts Linux. Linux has a tendency to make the most of whatever hardware it's on (if not right away, then eventually) and IBM has already demonstrated a willingness to put engineers on GPL products where it will benefit IBM. This is an amazing reinvention of self for a company whose licenses used to state that any software developed on their system became their property.

        The reason POWER never

        • by Anonymous Coward on Sunday February 27, 2005 @03:52PM (#11795707)
          No, the reason that Power(PC) never made a dent in x86 is that IBM promised everyone that it would scale better and it simply did not. Furthermore, IBM themselves quashed cheap PowerPC workstations due to internal politics surrounding OS/2, never provided good chipsets to third parties, etc.

          Hey "Blame Microsoft For Everything" is fun, but IBM never seriously attempted to position PowerPC in the mainstream x86 market.
    • POWER is not going to be accepted as a viable alternative to the PC platform. It is as likely as Debian being accepted by the general business world as a viable alternative to Windows.
    • I doubt POWER will ever go further in market share on the desktop than it is already because nobody can get a POWER ATX motherboard on the open market. There are no available off-the-shelf chipsets for POWER, no appreciable demand for POWERs outside of Apple.

      When they make me President of IBM, the first thing I'll demand is inexpensive chipsets and reference boards (with free design licensing) for the Taiwan motherboard makers.

      • I doubt POWER will ever go further in market share on the desktop than it is already because nobody can get a POWER ATX motherboard on the open market. There are no available off-the-shelf chipsets for POWER, no appreciable demand for POWERs outside of Apple.

        Businesses buy systems and solutions, not Power ATX motherboards with chrome-plated heatsinks and positronic cache boosters.

        When they make me President of IBM, the first thing I'll demand is inexpensive chipsets and reference boards (with free de

  • by Anonymous Coward
    IBM will be using the new POWER-based CELL CPUs in their new servers. Two of my friends are already working on the new architecture but unfortunately can't talk about any of the details. Both IBM and Sony will be using CELL CPUs in virtually all of their new products, from DVD players to supercomputers. Anyone want to make a bet with me?
    • If next-generation media formats demand high floating-point performance, then using CELL makes perfect sense. Otherwise, not. Personally I want to see a VAIO with a Cell running Linux, that would really make my day if it wouldn't cost a hojillion dollars. Maybe if it were also a Playstation 3 it would actually sell.
      • If the Cell is all it's cracked up to be, a couple of them on an AGP video board are going to trash the host CPU performance-wise. Will we see a huge twist of irony in which PCs are just reduced to a shell that supports the SMP Linux system hosted on the "graphics" card?

        It's not so far-fetched. Linux runs on Cells, they're designed for good graphics, they'll be widely available, and if all else fails there's Cygwin to handle X output on a standard graphics card for die-hard Windows bunnies.

        Vik :v)
    • by pmonje ( 588285 ) on Sunday February 27, 2005 @08:48PM (#11798159)
      OK, I'm not supposed to talk about this, but I have two friends who work for the Illuminati and all the new mind control chips are going to be based on the CELL. Unfortunately they couldn't give me too many details because of the NDAs that their reptilian overlords made them sign. Anyone wanna bet me?

      Seriously, Slashdot needs a "-1, Talking Out Your Ass" rating.
    • "Both IBM and Sony will be using CELL CPUs in virtually all of their new products, from DVD players to supercomputers."

      Doubtful. DVD players rarely have much of a processor at all - usually a low-end CPU combined with a custom DSP. All decoding is done in hardware, so there's no need for high FP performance like the CELL offers.
      • If the Cell is mass-produced it may become cheaper than a custom DSP; it would also allow DVD players to be upgraded more easily in the future to handle new formats, though companies nowadays don't want you to upgrade your existing hardware; they would prefer you to throw it away and replace it.
  • IBM's High end (Score:5, Insightful)

    by MagnusDredd ( 160488 ) on Sunday February 27, 2005 @03:30PM (#11795540)
    The article has some strange ideas about what constitutes a high-end server. I'd imagine an IBM P595 [ibm.com], which supports up to 64 processors, would be high end... IBM seems to think so too. But then again, what do they know about high end? I mean, they are only #2 in the high-end server market (over $1,000,000 per server), and #1 in the mid-range server market (between $100,000 and $1,000,000 per server).
  • Not accurate (Score:5, Informative)

    by Anonymous Coward on Sunday February 27, 2005 @03:33PM (#11795553)
    TFA, and even more so the summary and headline, are not fully accurate according to what I've heard.

    What IBM has decided not to do is support the Montecito IA64 chips. Apparently Intel initially approached IBM about licensing the X3 technology for a chipset to support Montecito. IBM agreed, shut down their own program to develop a chipset, and redeployed the resources. Intel came back a few weeks later, said they had changed their mind, and asked whether IBM would build an X3 chipset for Montecito; but by this point Intel had also announced that the next post-Montecito Itanium chip would be plug-compatible with Xeon. Hence the market opportunity for Montecito is about 18 months, so it's not worth IBM's effort to build a chipset for only that window.

    IBM has therefore decided to continue to sell the existing x455 servers through this year, skip Montecito and support Itanium again with X3 when it becomes plug compatible with Xeon. That means that for about a year they will have no server that will support Itanium.

    Two years is a long time in this business so who knows if anyone outside of the HP/UX install base will care about Itanium by then but IBM does have a plan for continued IA64 support if current trends continue.

    This is not good news for Itanium but it's also not a complete cancellation.

  • by SpamMonkey ( 850104 ) on Sunday February 27, 2005 @03:42PM (#11795618)
    OK, so we all know the various CPU names and who makes them, etc., but do we actually know how they compare? The team I work on and I have total ownership of 7 SAP application servers and 1 database server; total RAM in the DB server is 48GB, and the app servers have between 4 and 12GB each (normal compared to batch processing). They're all running on IBM P630s to P670s. What does that mean? I have NO idea, except that they are able to comfortably deal with 1200 active users at any given time.

    Now, if someone can tell me that Itanium will give us better performance for the money, we'll look into it; if it's Xeon then it's Xeon (pah, but you get the idea). What I fail to see is why it's important what hardware is being used as long as it does the job it needs to do!

    Thanks.
    • I think the winner goes to the processor with the coolest-sounding name. Of course, "Itanium" sounds like someone at Intel misspelled titanium, and "Xeon" sounds like the chip contains a mixture of xenon and neon gases. Maybe it does. At least "Pentium" vaguely suggested "fiveness" (as opposed to "fourness" in the 80486) but Intel's latest naming schemes could just as easily apply to automobiles or copy machines.
      • In Italy the word "Itanium" sounds very weird, but the word "Centrino" takes the trophy, because a "centrino" in Italian is a little handmade silk doily. You can find one on a round table behind a flower pot. I think that in the future AMD and Intel will be very attracted to biblical or Greek names like "Thanatos", "Strategos", "Chronos" and so on.
    • What I fail to see is why it's important what hardware is being used as long as it does the job it needs to do!

      It depends on how broadly you define "needs to do". If you can get the "job done" in terms of performance (i.e. concurrent users) yet get it for significantly less money on different hardware, and no software/UI changes for the end users, would you consider it? Especially if the other hardware had a longer life expectancy?

      Life expectancy and cost are factors included in "gets the job done" IMO.
  • Gee, reminds me... (Score:1, Informative)

    by Anonymous Coward
    ...of the IAPX-432 debacle. (And if you don't know what that is, google it. It was the Itanic LONG before the name Itanic was cool.)
  • They already use POWER in their iSeries and pSeries servers, which are the highest-end single servers on the market.

    As far as the decision goes...I think the Itanium wasn't a profitable platform for IBM in the first place, which made it easy to scapegoat marketshare. :-)
  • by Kymermosst ( 33885 ) on Sunday February 27, 2005 @05:11PM (#11796265) Journal
    Proof that the best way to accelerate an Itanium is at 9.8 m/s^2.

    (That's 32 ft/s^2 in ye olde units.)
  • When you think about it, the PowerPC-based Cell processor is much more advanced than the Itanium. The processor architecture is very scalable, and I believe it will not only be used in the PlayStation.

    Check out this blurb I found on the net: Cell provides a breakthrough solution by adopting flexible parallel and distributed computing architecture consisting of independent, multi-core floating point processors for rich media processing. With the capability to support multiple operating systems, Cell can perform both PC/W
  • Visionary (Score:3, Insightful)

    by SunFan ( 845761 ) on Sunday February 27, 2005 @05:57PM (#11796721)

    Whoever said that the ISAs would condense down to only x86, PowerPC, and SPARC appears to have been correct. Alpha is gone, mostly. MIPS is gone in the desktop/server space, mostly. Itanium kinda came and went, it appears. PA-RISC is still popular... but HP wanted Itanium.
    • Alpha is popular too, and still outselling Itanium, even though HP is jacking up the prices on Alpha hardware to try to discourage sales..

      • "Alpha is popular too, and still outselling itanium..."

        Irony, thy name is Hewlett Packard. PA-RISC is clearly out-selling Itanium, and if Alpha is too...LOL
  • I want my next server to be IBM and the sticker on it to say "Powered by Power".
  • Why Itanium Failed (Score:3, Insightful)

    by bani ( 467531 ) on Sunday February 27, 2005 @09:07PM (#11798299)
    what happened with itanium is intel made a number of huge gambles on technology.

    in order for itanium to be successful, every single one of them had to pan out.

    what happened is virtually none of them panned out.

    intel blew their load on a high risk gamble, and lost. they still can't quite come to grips with the fact and are still sinking billions of dollars into a doomed architecture -- despite the fact that just about every original itanium partner has already given up on it (err.. "jumped ship", hence the itanic joke)

    intel has been beating on itanium for nearly a decade and it still hasn't lived up to a single design goal.

    and before the itanium defenders go "no, itanium was only ever intended for rackmount servers", that is 100% contrary to intel's own marketing literature [intel.com] which states that "workstation" is one of the target markets of the itanium.
    • But was Itanic really a total failure?

      No - not really. The mere existence of Itanic caused MIPS, PA-Risc and Alpha to simply roll over and fold before the competition even got underway. Intel has just essentially destroyed three competing architectures.

      So now that MIPS, PA-RISC and Alpha are dead, those vendors were all supposed to move to Xeon.

      But poor old Intel didn't see Opteron coming.
  • With IBM's investments in Power...

    Aw, heck, so now my Black Lotuses and Moxes are even more expensive? I mean, diversifying a portfolio's good and all, but collectible card games might be stretching it a bit far....

    Jouster
  • If Itanium dies, what do we have left for a 64-bit chip? POWER5 and Opteron? SPARC still really lags, and while they're going down the crazy multi-core/hyper-threaded route, I'm not sure it will make up for the horsepower gap.

    HP is the only major manufacturer left of the Itanium line - their Superdome Integrity SMP boxes are wonderfully fast machines. After using both a Superdome and an IBM pSeries SMP (though it was POWER4-based), I have to say I'm supremely impressed with both. But now, POWER5 is
