
Dual Core Intel Processors Sooner Than Expected 257

Posted by CowboyNeal
from the rolling-out-like-autobots dept.
Hack Jandy writes "AnandTech reports that Intel's Smithfield processors are going to get here sooner than they originally predicted; most likely within the next few months. Apparently, the Intel roadmaps reveal that the launch dates for next generation desktop chipsets, 2MB L2 Prescotts and Dual Core Smithfield processors (operating at 3.2GHz per core) are almost upon us - way ahead of the original Q4'05 roadmap estimates. Hopefully, that means Intel will actually start shipping the new technology instead of waiting four months after the announcement for retail products."
This discussion has been archived. No new comments can be posted.


  • Bleh... (Score:5, Interesting)

    by GreyWolf3000 (468618) on Saturday January 29, 2005 @01:35PM (#11514120) Journal
    I want to see dual-core Pentium-Ms.

    At the rate that power consumption and heat dissipation are increasing on these chips, I consider Pentium-Ms to be the only processor worth using.

  • Great news (Score:5, Funny)

    by Junior J. Junior III (192702) on Saturday January 29, 2005 @01:35PM (#11514122) Homepage
    This means I can shut my furnace off this winter, instead of waiting until the end of 05.
  • by agraupe (769778) on Saturday January 29, 2005 @01:36PM (#11514128) Journal
    As I see it, the smart step to take would be to start with consumer-level 64-bit chips, make them as fast as they can be, and then move on to dual-core. The only way dual-core could be better at this point is if it is given to the server market, where 64-bit Intel processors already exist.
    • by StevenHenderson (806391) <{moc.liamg} {ta} {nosrednehevets}> on Saturday January 29, 2005 @01:38PM (#11514140)
      But will they be 64-bit?

      Sure. 2 cores x 32 bits/core = 64 bits. Duh.

    • You mean the Itanic (Itanium) chip? That one never lived up to its promise: way overpriced, too hot, and with problems. I tried to order an HP box with that chip as a server for a company I worked with, as they wanted cutting-edge performance. The HP sales guy said he wouldn't take it if they gave it away! Perhaps things are better now; that was about a year ago, an eon in the processor chip market, and the chip was brand new.
    • But that would be following AMD's footsteps. I think what they're trying to do here is one-up the Athlon. "Oh you've got 64bit? Well *we've* got dual-core, so ha!"

      Admittedly, I don't know what the roadmap is for AMD dual core chips, so maybe Intel's just trying to keep up by pretending the whole 64-bit thing never happened.
    • But will they be 64-bit?

      Yes. The link to the article started at the 3rd page, skipping the part about Intel's move to 64-bit on the desktop (even Celerons) in Q2 2005. From the 2nd page [anandtech.com] of TFA:

      Single Core Desktop

      Now for desktop processing; we have good news and better news. The good news is almost all desktop Prescotts (including the Celerons) will get a 64-bit makeover real soon. Unfortunately, you'll still need to buy a new processor but the roadmaps indicate there will be virtually no price premium

  • Just to be clear... (Score:3, Interesting)

    by Lisandro (799651) on Saturday January 29, 2005 @01:41PM (#11514157)
    ...these are the 130 watts power-hungry dual P4s mentioned in a previous article right? No thankie.
    • Ever since AMD became the definitive number 1 in the market, I seriously lost interest in Intel stuff.

      • by Lisandro (799651) on Saturday January 29, 2005 @02:47PM (#11514584)
        I wouldn't write Intel off that quickly, but yes, AMD offerings are much more interesting from every conceivable point of view: performance, price and power consumption. You can get yourself a dual AMD Athlon64 system for the price of a single DC Intel Smithfield. It will run cooler as well, and most likely perform better.

        I don't know what's up with Intel lately. They're giving too much away in the x86 market to AMD, and they can make good processors (P-M, for example).
        • I know what you mean... I remember when the processor wars were actually FUN to read about and follow. All of the exciting new innovations coming out of the Intel camp, AMD copying them onto their chips along with their own technologies, all while ramping their clock speeds the best they could. It seems like as soon as the second revision of the Pentium 4 hit, innovation went the way of the dodo and all of a sudden BOTH companies were just in a constant clock speed rush.

          Don't get me wrong, there h
      • Ever since AMD became the definitive number 1 in the market, I seriously lost interest in Intel stuff.

        Except that if Intel gets its dual core out first, they'll be number 1 again.

        BTW, Intel still owns 80% of the market, no matter how nice Opterons can be. AMD can hardly be called "number 1". They'll quickly become number 1 if they release the dual core stuff faster, though.

        Remember, AMD was the first to get a 1 GHz CPU, and that didn't give them the market.
        • AMD wasn't first to market with a 1GHz CPU. It was a virtual dead heat: AMD announced a couple of days earlier, but only after Intel had set its announce date. Any difference is irrelevant, and PC makers did not use AMD processors much at that time.
  • Office use? (Score:4, Interesting)

    by spectrokid (660550) on Saturday January 29, 2005 @01:49PM (#11514215) Homepage
    I can see how this is good for gamers, but normal office use? The biggest waiting time I have on my Centrino is the network. (In a big company, with a network by Siemens, it can take 15 seconds between O and a complete list of network drives. Go figure.) Servers will opt for the 64-bit thingies, and your secretary doesn't need one; are gamers a big enough market share to make money on this shit?
    • Re:Office use? (Score:3, Interesting)

      by EpsCylonB (307640)
      I can see how this is good for gamers, but normal office use?

      I'm not even sure gamers will notice the difference at the moment; how many games are multithreaded these days? I am sure some games do take advantage of it if it's there, but only to a small degree. The vast majority of games today are designed to be played on single-CPU computers (this includes the current consoles).

      Of course, both the PS3 and Xbox Next make use of parallelism, so in another year or two almost all games will probably run better
      • Re:Office use? (Score:3, Insightful)

        by bersl2 (689221)
        You do know that when playing a game, the game is not the only executing process on the system. The graphics and sound subsystems are both heavily used and take cycles away from the game.

        Sure, the speed-up isn't nearly as large, but having a spare core sure would prevent many slowdowns.
        • Yeah, especially if you use an affinity tool and assign your game its very own CPU... That means the only slowdowns will come from drivers blocking one another, or random processes spawning on your windows box :)
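For readers curious what an "affinity tool" actually does: a minimal sketch in modern Python, using the Linux-only os.sched_setaffinity call (the core index 0 is arbitrary; a GUI tool does the same thing to an already-running game's pid):

```python
import os

# Pin the calling process (pid 0 means "this process") to core 0.
# A game pinned like this gets a whole core to itself; everything
# else the kernel schedules onto the remaining cores.
os.sched_setaffinity(0, {0})

# The kernel will now run this process only on core 0.
print(os.sched_getaffinity(0))  # -> {0}
```

Windows exposes the same idea through Task Manager's "Set Affinity" menu or the SetProcessAffinityMask API.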
      • I think this is something that can benefit gamers. If you use an NVIDIA 6800 GT, the card's performance is severely bottlenecked by just about any CPU around today, and that goes double for SLI dual-card setups. Faster processors will definitely aid high-end graphics cards. Presently, graphics cards are leaving CPUs in the dust, and CPUs need to catch up.
        • Re:Office use? (Score:3, Insightful)

          by EpsCylonB (307640)
          You're right that GPUs are ahead of CPUs, but games need to be written to take full advantage of dual cores/CPUs. At the moment they are not.
      • I see the advantage being that you can run the server and client for a multiplayer game on the same machine without introducing a lot of lag. Saves you a machine at a LAN party, I suppose.
    • There are more uses for computers besides games and spreadsheets. It's not hard to imagine, just try and look beyond your own navel.

      Me, I'd really want it for quicker compiles. Think quick 3d rendering of models, etc.
    • I can see how this is good for gamers, but normal office use?

      You're gonna need these to run Longhorn... ;) They're planning for the future...

    • I can see how this is good for gamers, but normal office use?

      I can't see today's processors being much of a challenge to 'normal office use'. Modern processors are exceedingly fast -- throw RAM at 'em and I bet they're good for a long while to come.

      There will always be a market for faster CPUs. Someone needs 'em. But I think processors are beginning to outstrip consumer needs by a long shot.

      Think of all the people who probably never really use more than a few percent of their CPU power -- and we'r

    • Actually, it's not good for gamers. It's good for office/multitasking environments. Gamers typically have one thread, and they need to run it as fast as they can. They want the single 4 or 5GHz processor. The advanced office user running many applications at once (or someone like me, running VS 2003, AutoCAD, SQL*Plus and Firefox) would benefit from multiple cores and a slightly lower processing speed.
  • Programs (Score:5, Insightful)

    by Bios_Hakr (68586) <xptical.gmail@com> on Saturday January 29, 2005 @01:53PM (#11514239) Homepage
    I ran dual P3s for a while last year. While I loved the responsiveness of the system, I hated the lack of programs available to take advantage of SMP.

    How is this year going to be different?

    Even if you *could* get SMP aware versions of your software, would it be worth it? Lots of problems are harder to solve when you add SMP to the mix.

    Gamers will be put off by the fact that games can't take advantage of SMP.

    Home users will be put off by the fact that their $500 Dell surfs the world-wide e-mail just fine.

    Business users may take advantage of this in servers, but there's only so much cooling and power you can provide to a 1U server.

    So, how is dual core going to ever be anything bigger than Itanium, Xeon, or any of the other technologies that fail to meet customer expectations?
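The parent's point that "lots of problems are harder to solve when you add SMP" usually comes down to shared state. A toy illustration in Python (hypothetical, not from the thread) of the classic lost-update hazard and the lock that prevents it:

```python
import threading

def add_many(counter, lock, n):
    """Increment a shared one-element counter n times, safely."""
    for _ in range(n):
        with lock:           # without the lock, two cores can interleave the
            counter[0] += 1  # read-modify-write and silently lose updates

counter = [0]
lock = threading.Lock()
workers = [threading.Thread(target=add_many, args=(counter, lock, 100_000))
           for _ in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(counter[0])  # 200000 - but only because every writer takes the lock
```

Single-threaded code never has to maintain that "every writer locks" invariant; that extra discipline (and the deadlocks and races when you get it wrong) is exactly the cost the parent is describing.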
    • Re:Programs (Score:4, Informative)

      by Unknown Lamer (78415) <`gro.remalnwonknu' `ta' `notnilc'> on Saturday January 29, 2005 @02:09PM (#11514341) Homepage Journal

      ffmpeg/libavcodec takes advantage of SMP now, so I can encode videos almost twice as fast as before. Quake III kind of uses it, but not enough to be noticeable.

      I also run more than one program at a time so the entire system is faster.

      Two dual-core processors would rock hard (when my AthlonMP 2800+ system stops being usable I'm going to get dual dual-core Opterons, or PPC64s if they exist).

      • Re:Programs (Score:2, Informative)

        by X43B (577258)
        "or PPC64s if they exist"

        Are you serious? The POWER4 and POWER5 IBM workstation/server chips have been out for years. If you want a consumer-friendly version, try the 64-bit dual G5s from Apple. The G5s are not available dual-core yet, but the IBM ones have been for quite some time already.
        • I want dual dual-core PPC64s. I know that single core ones are available now and have been available for a while.

          And I'm broke right now so this is a few years out so they should exist at a more reasonable price.

    • At the moment we have kind of reached a plateau where everyday office desktop use rarely stretches the fastest CPUs.

      The CPUs in question are being developed for the areas of computing where more power is needed, primarily servers, games, and media work (video in particular). These are areas where people are willing to throw in multithreading if it increases performance, despite the complexity it also brings.

      We might not see these cpus in desktops any time soon, it depends on how proccessor intensi
      • Re:Programs (Score:3, Insightful)

        by Tim C (15259)
        We might not see these cpus in desktops any time soon, it depends on how proccessor intensive longhorn is.

        No - you said it yourself, an area where extra CPU power is useful is video work. More and more people are getting digital camcorders, and want to transfer the movies to PC to email to friends, burn to DVD, or whatever. Ordinary people are going to ask for PCs that are "good at video".

        Further, if the CPU manufacturers move exclusively to dual-core procs, where are the OEMs going to get single-core on
    • "Even if you *could* get SMP aware versions of your software, would it be worth it? Lots of problems are harder to solve when you add SMP to the mix."

      For the time being, dual-core chips will be primarily for people that stand to benefit. Video, graphics, compiling, servers that have multiple processes or threads, etc.

      "Gamers will be put off by the fact that games can't take advantage of SMP."

      Games have been updated to take advantage of hyperthreading. Not only does that allow some games to benefit immedi
    • Well, you don't need multi-threaded apps to benefit from smp.

      I've been running dual p3's for a while in Linux, and it's nice being able to compile, check slashdot, stream music, etc. without a problem.

    • Consider that 5 years ago, the amount of software capable of running SMP was even smaller. As more and more systems become SMP or SMT, more applications are written to take advantage of this.

      Also, consider that when one multitasks, the loads are split between processors. Also, these new chips aren't even marketed towards consumers just yet. Instead, they will be going into the server market and the high-end workstation market. These markets are usually the first to receive any major changes to the way
    • by joss (1346)
      You get a lot of value even using non-multithreaded apps; e.g. make -j2 almost doubles compile speed on largish projects on a dual-processor system, even though the compiler is not multithreaded.

      Until multiprocessor systems are more widespread, it's barely worth the effort. Writing multithreaded apps is a royal pain, and the development tools don't help either. For instance, std::string in VC6 is not thread-safe - you don't even find these things out until trying to do multithreaded stuff.
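The make -j2 point generalizes: you don't need threads if you can run independent single-threaded processes side by side. A sketch in Python, imagining each child process as one compiler invocation (sys.executable is just a convenient single-threaded program to launch):

```python
import subprocess
import sys

# Launch two independent single-threaded jobs at once, as `make -j2` does;
# on an SMP box the kernel schedules each child onto its own CPU, so the
# pair finishes in roughly the time of one.
job = [sys.executable, "-c", "print(sum(range(100_000)))"]

children = [subprocess.Popen(job, stdout=subprocess.PIPE) for _ in range(2)]
results = [int(c.communicate()[0]) for c in children]
print(results)  # [4999950000, 4999950000]
```

This is why a dual box feels fast even with entirely single-threaded software: the parallelism lives in the scheduler, not the apps.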
    • Well, I think the idea was to introduce HT to get app developers thinking about it, the hope being that by the time dual-core comes out, applications will be (nearly) ready to take advantage. There's no reason games, or really any app, can't take advantage of SMP; they just don't generally get written for it.
    • Gamers will be put off by the fact that games can't take advantage of SMP.

      I believe Id took advantage of dual G4 CPUs in Quake 3. From what I recall one CPU was assigned to AI and audio while the other handled graphics and such.

      As AI gets more realistic, it needs more cycles. I think dual core CPUs have their place.

    • I ran dual P3s for a while last year. While I loved the responsiveness of the system, I hated the lack of programs avalible to take advantage of SMP.

      If you multitask at all, you're "taking advantage of SMP".

      So, how is dual core going to ever be anything bigger than Itanium, Xeon, or any of the other technologies that fail to meet customer expectations?

      Because dual core setups are a *lot* cheaper than dual CPU setups. So you'll get most of the benefits (and performance) of an SMP machine, without the f

  • by jackalope (99754) on Saturday January 29, 2005 @01:54PM (#11514242)
    I find it interesting that Intel has code named these chips using the same name as one of the world's largest pork processors, Smithfield Foods.

    I expect that these chips will be large power hungry pigs.

    • And for those who are going to ask: But does it run Linux? The answer is: Linux == communism; Systems run by large powerhungry pigs [online-literature.com] == communism --> Smithfield will be Linux only. Or maybe it will come bundled with GNU Hurd and Duke Nukem Forever.

      (And for large powerhungry pigs with modpoints: This comment is meant as an attempt at humour)
  • Let me get this straight..... Intel(or most any other company).... getting something out earlier than expected ... I felt a great disturbance in the force. The earth has stopped moving.
    • Let me get this straight..... Intel(or most any other company).... getting something out earlier than expected ... I felt a great disturbance in the force. The earth has stopped moving.


      Guess Duke Nuke'Em Forever must be due soon. :-P

    • Well, they haven't actually shipped it yet...
  • It's about time. (Score:2, Interesting)

    by tu_holmes (744001)
    I would have thought dual-core chips would already be available from Intel by now.

    People complain a lot about Sun Microsystems, but the Dual Core in Sun's SPARC IV has been out since last April or May I believe.

    Doesn't AMD already have dual core cpu's shipping as well? IBM is working on a dual core G5 as well aren't they?

    Heck, is this even news?

    Shouldn't we be talking about 4 core cpus that are already working in development labs around the world. Sun and IBM both have those... I would bet money t
  • by fuzzy12345 (745891) on Saturday January 29, 2005 @02:20PM (#11514413)
    Hopefully, that means Intel will actually start shipping the new technology instead of waiting four months after the announcement for retail products.

    Want to change Intel's behaviour? Don't give them any press when they announce "real soon now" stuff, only when they actually ship. But if /. (and other media) print every press release, the press releases will keep coming.

    • Reminds me of the race to 1 GHz [com.com] in March 2000. After Intel "moved up" (rushed) the release of their 1 GHz Pentium III, AMD trumped Intel by moving up the release of their 1 GHz Athlon to beat Intel by 2 days. Of course, it took both companies more than 3 months to ship their 1 GHz chips in volume.

      Will AMD respond by moving up the "release" (in very limited quantities) of their dual-core CPUs? Will the race to dual-core cause Intel to release a chip that's not ready, like the 1.13 GHz Pentium III [com.com]?

  • my epiphany... (Score:5, Insightful)

    by ltwally (313043) on Saturday January 29, 2005 @02:27PM (#11514453) Homepage Journal
    Has anyone stopped to look at modern software while thinking about Dual-Core?

    Both Intel and AMD have decided upon dual-core as the future of desktop computing. There will be no more massive MHz increases... instead the focus is now on parallel computing.... But, seriously, how many CPU-intensive applications outside of the server arena take advantage of SMP?

    As someone who has run dual-CPU workstations for years, I can personally attest to the fact that 99% of CPU-heavy tasks do not make use of SMP.

    Think about it... That copy of Doom3 or Half-Life 2 that you just bought, that runs like shit on even top-of-the-line hardware, isn't going to run any better on Dual-Core, because these games are not designed to run multiple threads simultaneously. Neither do most archival programs (WinAce, WinRar, WinZip, SevenZip, etc etc). Nor do many of your encoding tools (though FlaskMPEG and GoGo-No-Coda are noteworthy exceptions).

    As a geek, I can attest that the *nix arena isn't much better. Just because the source is open and available does NOT mean that the author(s) ever considered coding CPU intensive tasks for multiple processors. And "porting" tasks from single threaded to multiple threads is NOT a simple task. This is one of the reasons that there are Computer Science degrees -- writing good SMP code isn't something you learn at technical schools (or even half the full Universities out there).

    Don't get me wrong... as someone who has run SMP boxes for the past 10 years, I'm really excited about Dual-Core. But don't expect it to be worth a whole lot for the immediate future... as no one outside the server arena really codes for SMP.
    • Re:my epiphany... (Score:3, Insightful)

      by bconway (63464) *
      I don't know what decade you're living in, but no modern game runs single-thread, single-process. Try opening up top or task manager. They all take advantage of SMP or HyperThreading to some degree, and the added responsiveness is priceless.
      • Re:my epiphany... (Score:3, Informative)

        by Tim Browse (9263)
        The core work of most modern games is done on a single thread. A lot of games use a thread to read input, but that's about it.

        Do you know of any examples of games (other than, I believe, Quake 3) that use threads to actually divide real work, as opposed to a minor scheduling convenience?
      • Re:my epiphany... (Score:4, Insightful)

        by ltwally (313043) on Saturday January 29, 2005 @09:26PM (#11516992) Homepage Journal
        As already stated by another reply: just because a game is running multiple threads does not mean those extra threads are doing CPU intensive work.

        Somebody mod this guy down, he's talking out of his ass, and does not deserve an "Insightful" mod.

        Sorry if that sounds harsh, but he really doesn't know what he's talking about. He should try running a dual-cpu box before he makes comments on the state of software and SMP.
    • CPUs are fast enough for most home user type purposes now. The only major exceptions besides gaming (important) are 3D graphics for video, and video encoding. Video processing and 3d graphics are already aggressively multithreaded applications in most instances. Having multiple CPUs means users will be able to watch a dvd and unzip a file at the same time without either one slowing down, so from that standpoint, it's worth it today. If multiprocessor systems become common, so will multithreaded applications
    • by Bill Dimm (463823) on Saturday January 29, 2005 @02:47PM (#11514579) Homepage
      As someone who has ran dual-cpu workstations for years, I can personally attest to the fact that 99% of CPU heavy tasks do not make use of SMP.

      CPU-heavy tasks aren't the target. Intel and AMD have picked up on a very important trend in computing that you are overlooking. While one core runs your word processor, web browser, spreadsheet, etc., the other core handles the 100 spyware programs that are running on your computer. Sure, a few years ago one core would have been enough, but not for the modern Windows user.
      • Well, shit... I knew there was a reason that one of my processors was always running at 100%. Damn you Windows!

        But, seriously... There really isn't a reason to code applications like Microsoft Word for SMP... but why the hell aren't game companies future-proofing their games?

        I mean, take Half-Life 2 for example... If sales of the original HL are any indication, they should assume HL2 will be selling in volume for the next couple of years, at least, and yet the game does not take advantage of SMP.

        Too
    • iTunes encodes faster on a dual-processor Mac. Photoshop is faster. FCP is faster. Compressor is faster. Most of those apps I use every day, and they are DEFINITELY outside of the server arena.

      As far as games go, they certainly CAN take advantage of multiprocessor machines. Giants: Citizen Kabuto for OS X speeds up around 80% on a dual-processor Mac. The game wasn't even designed with SMP in mind on the PC side. If they can do it, why can't future games?

      It doesn't take much imagination to think of even more e
    • I can personally attest to the fact that 99% of CPU heavy tasks do not make use of SMP

      Would you ever want to run more than one of these at the same time? Or, for that matter, run any application at the same time as one of these CPU-intensive apps? If so, you could still see a real-time benefit.

      There is also continuing research in automatic parallelization, so even your legacy single threaded apps can take advantage of some of that extra cpu. For the most part the speedups attained this way are quite modest, but
    • Take a look at what happened on Apple's side of the fence in the last couple of years. The G4 was lagging in speed. So they started offering out-of-the-box SMP offerings. This brought SMP into the hands of any mere mortal who could afford one (i.e. no special home-built config. You want the fastest mac? get a PowerMac)

      Now, software developers had no choice but to build their apps multithreaded if they wanted to keep their clientele coming. But even more interesting is that the OS became more and more SMP-a
    • The decision to go towards dual core is IMO more because making single core go much faster is really hard right now. So assuming no advances in semis in the near future which change that reality, app developers are just going to have to learn to code SMP if they want good performance. It kinda sucks, and I'm sure if Intel could produce an efficient 6GHz processor they'd go that way instead, but SMP appears to be the reality going forward.
    • I believe more programs are going to be SMP-capable. I am certain games can improve in using SMP; it's just that there weren't enough SMP systems around to make it worth developing for them.

      Not a whole lot of programs really need a fast CPU, but it helps to have a fast CPU and more than one of them if you run a lot of little programs.
    • The software developers don't really have a choice in the matter. Parallel computing is where it's at for at least several years, until the next breakthrough in process technology comes about. Everyone is doing it, from Sun (Niagara), to IBM (POWER and Cell), to Intel and AMD. Unless software developers want the performance of their programs to remain unchanged for several years (and hence have Doom IV looking pretty much the same as Doom III), they'll just have to adapt.
    • "Think about it... That copy of Doom3 or Half-Life 2 that you just bought, that runs like shit on even top-of-the-line hardware, isn't going to run any better on Dual-Core, because these games are not designed to run multiple threads simultaneously."

      Actually, they are. Since Quake III, Id has been doing SMP support. Both HL2 and Doom3 have support for SMP out-of-the-box.
      • I can feel myself becoming stupider, merely for replying to you... but you need educating, sonny-jim (or Jameth, or whatever your name really is).

        Neither Doom3 nor HL2 is actually written for SMP. They are NOT SMP-aware. They do NOT scale to multiple processors. Trust me on this, I'd know - I run dual-CPU boxes (and I'm not talking about that pseudo-SMP "hyperthreading")

        As to Quake III Arena... yes, it is SMP aware. But was it written well? Oh HELL NO. It rarely sees anything above a 25% performance bo
    • Re:my epiphany... (Score:5, Insightful)

      by Rob_Bryerton (606093) on Saturday January 29, 2005 @08:18PM (#11516655) Homepage
      Frankly, I'm bewildered at the responses here resisting the change to SMP. I've never understood the focus on pure MHz as opposed to parallelism and MHz. Anyone on an SMP box who is multitasking sees the benefits of SMP immediately. You can work with a completely responsive system even when you have a compute-intensive, non-SMP-aware process hogging a CPU. This is not the case with single-CPU systems.

      What we have here is simply the fact that, as always, software is years behind the hardware it runs on. This is a classic chicken-and-egg situation: "There's no SMP software, so why buy a dual?" vs. "Nobody has SMP hardware, so why write SMP-aware apps?"

      Thankfully, there are many SMP-aware apps available, not even getting to the fact that with single-threaded apps on SMP you can for example encode video and do other CPU-intensive tasks simultaneously and at their "native" speeds.

      Games are probably the worst example to use for touting SMP benefits because they are written with the single-CPU mindset. This is a software shortcoming, yet many posters see it as a flaw of SMP? Silly. If you're using games as an SMP detraction, then you're not the target for SMP until the software is written to take advantage of it. Again, this is a software shortcoming, not a hardware flaw.

      Then we have the "well, office-type users have no need for SMP" argument. That may be true, but so is the fact that office use does not require >1GHz CPUs, yet offices are filled with >1GHz machines. The nature of the "CPU business" is such that your products must constantly improve, or you will soon become irrelevant. You can only make CPUs run so fast in the physical world, so after you've wrung all the easy MHz gains out of a process, what's the next "easy" gain? Parallelism. We don't expect Intel, AMD, et al. to just say "Well, that's it, we can make them no faster", do we? Heck no. Instead of more MHz, we now have more cores. The software will follow, and in the meantime the hardware is usable now.

      The fact of the matter is this: there are real, physical limitations to the manufacture of ever higher speed CPU's. We're going to hit the brick wall shortly using current processes, so the next logical step is to parallelize the CPU. If you can't make 'em faster, then you divide and conquer.

      As someone who runs a few SMP systems, I, for one, welcome our dual-core overlords. So I can run dual-core? Heck no, that's for the gamers and office-workers ;). I'll settle for no less than dual dual-cores, getting more accomplished in a shorter frame of time with little to no effort on my part.

      This will lower the barrier of entry for SMP use for the masses. After they are dragged, kicking and screaming to SMP, people will notice a smoother, more productive computing environment. Also, us dual-CPU folk can now move up to quad cores with relatively little additional expense. As SMP moves into the mainstream, the software will follow. Any programmer worth his salt knows that it is trivial to parallelize many compute intensive tasks such as media encoding/manipulation, imaging, rendering etc. Now that the hardware is (almost) here, the apps will follow.

      I am sincerely interested in hearing any response to these points I've made.
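One response to the parent's last point: the "trivial to parallelize" claim is easy to make concrete for chunkable work like encoding. A hedged Python sketch; encode_chunk is a made-up stand-in kernel, and multiprocessing stands in for whatever worker mechanism a real encoder would use:

```python
from multiprocessing import Pool

def encode_chunk(samples):
    """Stand-in for a compute-heavy kernel, e.g. encoding one video slice."""
    return [s * s for s in samples]

if __name__ == "__main__":
    data = list(range(8))
    halves = [data[:4], data[4:]]               # divide the frames...
    with Pool(processes=2) as pool:
        parts = pool.map(encode_chunk, halves)  # ...conquer on two cores
    print(parts[0] + parts[1])  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Because the chunks are independent, the split neither changes the result nor requires locks, which is exactly why media work parallelizes so readily compared to, say, game logic.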
      • Well, firstly, where in the world do you get the idea that I'm resisting the move to SMP... seeing as how I've already stated that I've been running dual-cpu boxes for years.

        Secondly, Dual-Core is a good idea, and I'm happy about it (as I've already stated...)

        ... HOWEVER, this does not change the validity of my point: all that software that is out there today does not actually make use of SMP boxes. Sure, that will change... in time. But for now, you'll see all these people buying Dual-Core machines
  • by MrBandersnatch (544818) on Saturday January 29, 2005 @02:31PM (#11514480)
    I'm a software developer and REALLY hate the movement towards dual-cores. While dual-cores will be great for some things (I tend to write everything using threads where it's easy to leverage performance), there are many apps that need pure, raw processing power - many of which I have no control over or no source access to, or where the cost of re-writing legacy apps to be multi-threaded is too high - and this means it's going to take far longer for that power to be available.

    It's a bad move IMO on AMD's and Intel's part. Personally, rather than head to dual cores, I'll be looking more and more at how to get the maximum (i.e. overclock) out of the higher-rated single-core processors - and this is from someone who normally upgrades every 12-18 months.

    That said, if the dual-cores overclock well, my stance may change....
    • by JamieF (16832) on Saturday January 29, 2005 @08:55PM (#11516865) Homepage
      >I'm a software developer and REALLY hate the movement towards dual cores.

      Tough. Chip makers are up against a technology barrier right now, and clock speed increases in the CPU don't make RAM or disk or interconnect faster anyway. How about just putting a 4MB cache on-die? That wouldn't require a massive clock speed increase but it would speed things up. I'm not an EE but I'm just pointing out that there are many, many things that have been left in the dust by Moore's law that could catch up and make quite a difference. Does your computer have 4+GB of DDR memory? ATA-133 drives with 8MB cache? PCI-X? A 64-bit CPU and an OS that knows how to use it fully? In what other ways are CPUs waiting on everything else, that could be improved to make things run faster overall?

      Learn to parallelize your code where possible. Optimize your existing code. Software optimizations yield stunning improvements compared to incremental clock speed bumps anyway, and (unlike hardware) affect every installation of your app.

      >It's a bad move IMO on AMD's and Intel's part

      OK genius, what's the alternative? No improvements in processors for years, until somebody makes a breakthrough that enables 4+ GHz processors? What happens when they hit the next roadblock?

      Hardware has been so far ahead of software for so long that we've become accustomed to solving bloat with "just buy a new computer". It wouldn't kill us to spend a little time profiling code. The economics have been (in many cases) such that it just made more sense to throw money at new hardware. If that no longer makes sense, throw money at software optimizations for a little while. It doesn't exclusively mean that we have to force every algorithm to operate in parallel. It could be as simple as releasing fat binaries of apps that are compiled to target recent CPUs (no more shipping 386-optimized code to every customer), or *gasp* writing more efficient code in the first place.
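      The "spend a little time profiling" advice can be sketched with Python's standard profiler (a hypothetical example of my own; `slow_concat` and `fast_concat` are made-up names illustrating the kind of algorithmic fix the post has in mind):

```python
import cProfile
import io
import pstats

def slow_concat(n):
    # Naive O(n^2) string building: a classic case of software bloat
    # that a faster clock merely papers over.
    s = ""
    for i in range(n):
        s += str(i)
    return s

def fast_concat(n):
    # Same result built with join: an algorithmic fix that beats any
    # incremental clock bump and ships to every install of the app.
    return "".join(str(i) for i in range(n))

assert slow_concat(1000) == fast_concat(1000)

# Profile to find the hot spot before reaching for new hardware.
profiler = cProfile.Profile()
profiler.runcall(slow_concat, 10000)
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(3)
```

      Profiling first, then fixing the one hot loop, is exactly the "throw money at software optimizations" trade the post describes.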

  • Funny thought (Score:2, Interesting)

    by Anonymous Coward
    I was just reading that most people here don't like the idea of multi-core processors because their games like Quake won't run any better.

    Lately I have been doing a lot of work distributing software to the internal network and RARing files. I would like the option of just RARing and not having my system turn to mud. Having one core running flat out while still giving me a chance to do work is a great idea! Besides, I'm sure a better balance of all those 50 processes on my Windows box would be nice.
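    That "compress without turning the system to mud" scenario is just a background worker process, which a second core can absorb. A minimal sketch in Python (my own hypothetical example, with zlib standing in for RAR, which has no stdlib binding):

```python
import multiprocessing
import zlib

def compress_job(payload, result_queue):
    # CPU-heavy compression runs in its own process, so on a
    # dual-core box it can peg one core while the other stays
    # free for interactive work.
    result_queue.put(zlib.compress(payload, 9))

if __name__ == "__main__":
    payload = b"slashdot " * 100000
    queue = multiprocessing.Queue()
    worker = multiprocessing.Process(target=compress_job, args=(payload, queue))
    worker.start()

    # The foreground process is free to keep doing other work here.
    compressed = queue.get()
    worker.join()

    assert zlib.decompress(compressed) == payload
```

    The OS scheduler does the balancing: with two cores, the worker and the interactive foreground no longer fight over one.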
  • Considering their low power usage, I am surprised that Transmeta has not pushed into multi-cores. Of course, I did notice a PowerPC with >700 MIPS using <1 watt of power.
  • Picture This (Score:5, Interesting)

    by MOMOCROME (207697) <momocrome@[ ]il.com ['gma' in gap]> on Saturday January 29, 2005 @04:32PM (#11515177)
    Today's CPUs are, in the final analysis, little different from the 386 launched in 1985. Notable exceptions are details like feature size and operating frequency. Other significant differences are in the pipelining logic, crufted-on instruction sets (MMX, anyone?) that are rarely called into action, cache, and pinouts.

    Now, take a step back and imagine what a classic 386 would look like on a .09-micron process... consider that the 386 had 275,000 transistors, compared to the P4's 42 million. You could fit around 150 386s in the space (on the die) of a single P4.

    Now, of course there are many advances to consider over the 386, but fundamentally, that processor logic is capable of handling 99% of 32 bit computing tasks. They may have done so slowly, but there you are.

    My thinking is, they could take some of this old logic, buff it up a little to accommodate some modern techniques, and carve it all into a single die. Imagine a CPU with 64 simple processors, 4MB of cache, and some controlling logic, running at 3-5 GHz. All this in the space, and at the (manufacturing) cost, of a single P4.

    This chip could be used in clusters like nobody's business. An array of 128 of these processors could simultaneously handle 8,192 active threads.

    What use would it be? Off the top of my head, this would be perfect for real-time monitoring, transaction processing, switching and so forth. There would also be serious advantages in the desktop space as compilers and kernels were built to adapt to the new distribution of resources. Image processing could be handled using the same techniques SLI cards use to split tasks up over two or more video cards, and any other large body of data could be similarly broken up. Compilers would be designed to break a program up not into a paltry 2 or 3 threads, but into dozens. Speed and responsiveness would skyrocket, while fab costs and board speeds remained stable.

    This might be the logical outcome of the current drift towards multiple CPUs per die, and it could also unite and surpass the schools of CISC vs RISC, as strategies from both would benefit the endeavor.
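    The SLI-style "carve a large body of data into strips" idea sketches easily (a hypothetical Python illustration of the decomposition only; real gains on the imagined 64-core part would need one OS-level thread or process per core, and `brighten_strip` is an invented stand-in for real image work):

```python
from concurrent.futures import ThreadPoolExecutor

def brighten_strip(strip):
    # Each horizontal strip of rows is independent, so strips can be
    # handed to as many simple cores as the hardware offers: the same
    # divide-and-reassemble trick SLI uses across video cards.
    return [[min(px + 32, 255) for px in row] for row in strip]

def brighten_image(image, n_strips):
    # Carve the image into n_strips bands, process them concurrently
    # (one task per band), then stitch the results back in order.
    step = max(1, len(image) // n_strips)
    strips = [image[i:i + step] for i in range(0, len(image), step)]
    with ThreadPoolExecutor(max_workers=n_strips) as pool:
        done = pool.map(brighten_strip, strips)
    return [row for strip in done for row in strip]

image = [[(x + y) % 256 for x in range(8)] for y in range(8)]
assert brighten_image(image, 4) == brighten_strip(image)
```

    Nothing about the per-strip work changes as the strip count grows from 2 to 64, which is what would let a compiler or kernel scale the split to whatever core count the die provides.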
    • This chip could be used in clusters like nobody's business. An array of 128 of these processors could simultaneously handle 8,192 active threads.
      Sounds like the INMOS Transputer [wikipedia.org]. Except that was a machine built to do what you suggest.
    • ...they want their Cell Architecture back.


      Also note that at 12 MHz, 128 386s could together deliver an amazing 1.5 GHz of clock in total, not counting all the overhead of moving memory between 128 different processors that weren't even designed for dual-processing.

    • Actually, all modern x86 processors are radically different from 386s. I am not sure at exactly what point (PII or PIII), but the internal workings of Intel and equivalent AMD processors switched from being x86s to being fast RISC processors with x86 command interpreters, since the CISC x86 design did not transition well into more modern processor design. As such, having an actual x86 processor in a modern computer could cause some problems. Not to mention that almost no applications are multithread
  • zerg (Score:3, Informative)

    by Lord Omlette (124579) on Saturday January 29, 2005 @07:57PM (#11516486) Homepage
    RAWR, no discussion of dual-core CPUs is complete w/out a mention of Herb Sutter's The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software [www.gotw.ca]!
