
Intel Reveals Next-Gen CPUs

Posted by Zonk
from the low-power-means-more-flavour dept.
EconolineCrush writes "Intel has revealed its next generation CPU architecture at the Intel Developer Forum. The new architecture will be shared by 'Conroe' desktop, 'Merom' mobile, and 'Woodcrest' server processors, all of which were demoed by Intel CEO Paul Otellini. Rather than chasing clock speeds, Intel is focusing on lowering power consumption with its new architecture. Otellini claimed that Conroe will offer five times the performance per watt of the company's current desktop chips. He also ran the entire keynote presentation on a Merom laptop, and demoed Conroe on a system running Linux."
  • Power concerns (Score:5, Insightful)

    by bigwavejas (678602) * on Tuesday August 23, 2005 @02:28PM (#13381612) Journal
    With laptop sales "surging" and technology growing exponentially, isn't it time to look at the batteries? You hear a lot about faster video cards, CPUs, and memory, but almost nothing about next-gen batteries. Battery technology hasn't really evolved at the same rate as other computer components, has it? I personally feel the bottleneck resides in the batteries, and for the industry to progress as a whole, it's going to have to look at all aspects.
    • Re:Power concerns (Score:5, Insightful)

      by Epistax (544591) <epistax@gmail.LIONcom minus cat> on Tuesday August 23, 2005 @02:32PM (#13381651) Journal
      I agree; however, I believe at least 50% of our battery life extension will come from developing ways to use less stored energy instead of storing more.
      • Given finite R&D resources, improving performance of the power consumer (rather than the power producer) seems to be a more direct way of paving the way for longer-lasting portables. That nips the problem in the bud. It's like the three R's: reduce, reuse, recycle. Before trying to increase (power) supply, one should try to reduce (power) demand.
    • by FLAGGR (800770) on Tuesday August 23, 2005 @02:33PM (#13381668)
      A better battery doesn't get any more polygons out of Quake 4.
    • by ShaniaTwain (197446) on Tuesday August 23, 2005 @02:34PM (#13381681) Homepage
      We now have batteries powered by urine! [theregister.co.uk]

      Who hasn't wanted to pee on their new laptop? Marks your territory and provides hours of power!

      what else could you want?
    • Re:Power concerns (Score:2, Interesting)

      by Anonymous Coward
      That's been a common thread in mobile technology for a long time. It's a lot more difficult to optimise the same chemicals to store more energy in a smaller container than it has been to build smaller and smaller computing components.

      As a matter of fact - reading around a little bit will show that basically mobile device design is driven around the battery. We could go much smaller, much faster, and generally far niftier with our devices if we didn't have to strap a car battery to it.

      • Re:Power concerns (Score:3, Interesting)

        by wulfhound (614369)
        As an aside, there is an argument that, for reasons of safety, you only want to go so far with power density. A fully charged Li-Ion battery already packs a pretty large amount of chemical energy into a small space -- laptops catching fire is fortunately a rare occurrence, but not a pleasant one. Go too far with chemical energy density, and essentially everybody is carrying a potential bomb around.
        • Re:Power concerns (Score:5, Informative)

          by timster (32400) on Tuesday August 23, 2005 @04:07PM (#13382591)
          Sorry, but this argument doesn't hold a lot of water. Fat, for instance, has an energy density of 38 kilojoules per gram, whereas lithium-ion has a density of 0.72 kilojoules per gram. Fat, while flammable, is far less dangerous than lithium-ion.

          Lots of materials have a high energy density and are still very safe and stable. The problem, of course, is that extracting electrical energy from them is not incredibly easy to do. However, we should not say that high energy density is inherently unsafe.
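A quick back-of-the-envelope using the energy densities quoted above (the figures are the parent's, in kilojoules per gram):

```python
# Compare the energy densities quoted in the parent post (kJ per gram).
fat_kj_per_g = 38.0      # fat, per the parent post
li_ion_kj_per_g = 0.72   # lithium-ion, per the parent post

ratio = fat_kj_per_g / li_ion_kj_per_g
print(f"Fat stores about {ratio:.0f}x more energy per gram than Li-ion")
```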
          • Re:Power concerns (Score:3, Insightful)

            by slittle (4150)
            E=mc^2

            Everything is energy. The thing is, you need to be able to get the energy out of it quickly and easily. As far as releases of energy go, you don't get much faster and easier than a bomb.
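For scale, here is E = mc^2 worked out for a single gram of matter -- a rough illustration of the point that the energy is there; getting it out quickly *and* safely is the hard part:

```python
# E = m * c^2 for one gram of matter.
c = 299_792_458   # speed of light, m/s
m = 0.001         # one gram, in kg

E = m * c**2      # joules; roughly 9e13 J, about 25 GWh
print(f"{E:.2e} J")
```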
    • Re:Power concerns (Score:5, Informative)

      by LWATCDR (28044) on Tuesday August 23, 2005 @02:37PM (#13381731) Homepage Journal
      Because batteries are more mature than electronics.
      Honestly, there just is not that much room for improvement unless someone makes a huge breakthrough.
      If you think about the requirements for a battery, they are pretty harsh:
      1. Relatively non-toxic.
      2. Relatively non-explosive.
      3. Last a long time.
      4. Cheap.
      • Re:Power concerns (Score:5, Informative)

        by drgonzo59 (747139) on Tuesday August 23, 2005 @02:58PM (#13381969)
        Good point. #3, "Last a long time," is usually equivalent to "stores a lot of energy," and it mostly conflicts with #1 and #2. Whenever you have anything that produces and stores large amounts of energy, you are bound to have toxicity, explosive potential, and other harmful effects.

        For example, it has long been known that you can have very long-lasting nuclear batteries using betavoltaics (couple a beta radiation source to a p-n junction and you have your battery), but would you put it on your lap, that is the question.

        Either it has so much shielding that it is too heavy, or it is nice and light and will make you grow another set of legs (or something else down there...).

        But I remember that there was an article about someone developing such a battery here [sciencedaily.com], I think.

        • Re:Power concerns (Score:3, Informative)

          by LWATCDR (28044)
          Actually, I was talking about durability, not power density.
          There are many one-shot batteries with pretty good energy density, but that would be wasteful.
          The Navy used silver-zinc batteries for a high-performance test submarine.

          "For example, it has long been known that you can have very long-lasting nuclear batteries using betavoltaics (couple a beta radiation source to a p-n junction and you have your battery), but would you put it on your lap, that is the question.

          Either it has so much shielding t
          • by spun (1352) *
            Either it has so much shielding that it is too heavy, or it is nice and light and will make you grow another set of legs (or something else down there...). "

            Maybe you could grow a tentacle down there and go on to have a great career in hentai.
        • Nuclear batteries (Score:5, Informative)

          by jeti (105266) on Tuesday August 23, 2005 @05:34PM (#13383392) Homepage
          For example, it has long been known that you can have very long-lasting nuclear batteries using betavoltaics (couple a beta radiation source to a p-n junction and you have your battery), but would you put it on your lap, that is the question.


          Considering that plutonium beta cell batteries were used in pacemakers, I wouldn't be too worried about that. I think the shielding could be lightweight enough.
          But getting rid of used batteries could be a real problem.

      • Re:Power concerns (Score:3, Insightful)

        by Epistax (544591)
        Take #3 and divide it by #4, then multiply it by 100 if it's rechargeable. That's your new #3.
      • Most laptops only use ~30W of power, hopefully less as time goes on... This makes portable solar cells an option.

        I'm not really sure about wind power... but an interesting idea would be to make a "3-in-1" alternative laptop power pack.

        You could have a 30W solar pack, and two small windmill things (maybe with detachable fins for easy carrying). The fans could double as hydro generators if you stick them in a river.

        You know... for all those times you're next to a river with your laptop (and there is no wind).
      • Re:Power concerns (Score:3, Insightful)

        by stienman (51024)
        2. Relatively non-explosive.

        As any battery manufacturer will tell you, batteries do not explode. They may, however, "vent with flame."

        -Adam
    • Re:Power concerns (Score:5, Insightful)

      by realmolo (574068) on Tuesday August 23, 2005 @02:45PM (#13381824)
      It's hard to improve batteries. You might as well ask, "How come we don't have gasoline that gives us 100 miles per gallon in an average vehicle?"

      Because there are physical limits to how much energy you can store in given materials. You can't "design around" these limits. All you can do is try and come up with better materials/better combinations of materials. And we've already tried every combination that is practical.

      Which is why fuel-cell powered notebooks are interesting. But who knows if those will ever actually get produced.
    • Different Physics (Score:5, Interesting)

      by sterno (16320) on Tuesday August 23, 2005 @02:48PM (#13381857) Homepage
      The problem is that the physics for how to increase the number of transistors on a chunk of silicon is very well understood and the physics of how to make better batteries is not.

      To double the number of transistors on a processor is primarily a matter of lithography, that is, etching smaller and smaller lines into an existing wafer. Same materials, more or less, and same technique, more or less. With batteries, it's far more hit and miss.

      The technology and fabrication process to make a lead-acid battery is vastly different from NiCd. NiMH is somewhat similar to NiCd, but Lithium Ion is rather different and requires a lot more technology to make it work. Then you've got fuel cells as a possibility, and they're vastly different from anything I just described.

      There's a lot of effort being put into battery research because everybody understands what a fundamental limitation it is to everybody's dreams of pervasive wireless. It's rather ironic to describe these internet coffee shops as having "wireless" when you still have to have AC power to do anything. The problem is that battery research does not have the clear and obvious path that CPUs have had.

      I expect that fuel cells will eventually be the way to go. Still, there's a certain inconvenience in them. If I want to charge my laptop batteries, I just plug in my laptop. If I've got a fuel cell, do I have to buy numerous cells? Do I have to fill them up with methanol? It doesn't seem like there's a panacea for portable power (and other p words) anytime soon.
    • Re:Power concerns (Score:3, Interesting)

      by mnmn (145599)
      If current laptops could run for 20+ hours on current batteries, this would be a non-issue. It would be even less of an issue when laptops can run off solar power.

      OTOH, should batteries change, you have a whole lot of electrical/chemical issues that come with high amperage, including temperatures high enough to fry your lap. Of course there's a huge demand for high-power batteries in the industry. But batteries have changed little and will change little (NiCd was invented in 1899), while Moore's law
    • Re:Power concerns (Score:3, Informative)

      by freidog (706941)
      We already have done that.
      My old Celeron 600 notebook came with a 38 watt-hour battery. Most newer notebooks come with 50-65 WHr batteries, and I think you can order batteries as large as 80 WHr with some notebooks.

      And really, power consumption, at least for the mobile CPUs right now, isn't all that much higher than it was back in the P3 days. Mobile P3s needed anywhere from about 10-20 W; the Pentium M's use 7.5 W (600 MHz idle) to 24 W (a few 533 FSB parts are 27 W).
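The figures above imply a rough runtime: battery watt-hours divided by average system draw. A sketch, where the 15 W whole-system draw is an assumption, not a number from the post:

```python
# Estimated runtime = battery capacity (WHr) / average system draw (W).
def runtime_hours(battery_whr, avg_draw_w):
    return battery_whr / avg_draw_w

# 65 WHr pack, assumed 15 W average whole-system draw (hypothetical).
print(round(runtime_hours(65, 15), 1))  # ~4.3 hours
```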
    • by Freexe (717562) <serrkr@tznvy.pbz> on Tuesday August 23, 2005 @03:32PM (#13382246) Homepage
      Better batteries providing more power will only produce even more heat.

      I welcome cooler CPU and hard drives, not only does it help extend the lifespan but also helps keep my sperm count up!

  • Good (Score:5, Interesting)

    by alecks (473298) on Tuesday August 23, 2005 @02:28PM (#13381613) Homepage
    Rather than chasing clock speeds, Intel is focusing on lowering power consumption with its new architecture.
    Exactly what we've all been waiting for. Is Intel Good(tm) now?
    • Re:Good (Score:5, Insightful)

      by GamblerZG (866389) on Tuesday August 23, 2005 @02:35PM (#13381697)
      Is Intel Good(tm) now?
      No, they just reached the limits of silicon technology. Increasing performance any further would require either designing "smarter" (rather than faster) processors or using multiple cores.

      Anyway, the trend is good indeed. Finally, people will start thinking about performance on the level of software.
    • Re:Good (Score:4, Funny)

      by davmoo (63521) on Tuesday August 23, 2005 @02:37PM (#13381732)
      You must be new here, and obviously do not know the rules. Let me help you.

      AMD is always good, no matter what they do.
      Intel is always bad, no matter what they do.
      Apple is always good, no matter what they do.
      Microsoft is always bad, no matter what they do.
      Steve Jobs is always right and the sun shines out his rectum, even when he's wrong.
      Bill Gates is wrong and is the spawn of the Devil, even when he's right.

      These rules apply even in cases where one entity does something, and then the other entity does the exact same thing two weeks later.

      And finally, my reply and any like it will always be modded -1 'troll' because the majority of readers here do not want to admit they are this biased.
    • Re:Good (Score:3, Informative)

      by Alsee (515537)
      Is Intel Good(tm) now?

      No.

      The new line of chips are LaGrande Compliant. [xbitlabs.com] LaGrande is Intel's CPU embedded implementation [intel.com] of the Trusted Computing Group's [trustedcom...ggroup.org] Trusted Platform Module.

      So what does that mean?

      All of the new CPUs have ID numbers again. Remember the Pentium 3 ID numbers that created so much outrage and backlash? Well, they are back with a vengeance.

      The new CPUs will hold crypto keys, and they are specifically designed to keep the keys (and encrypted files) secure against the owner. They are specifically
  • Now we know... (Score:3, Insightful)

    by wvitXpert (769356) on Tuesday August 23, 2005 @02:30PM (#13381628)
    So this is what Steve was talking about.
    • I feel good about the choice that Steve made but I don't think he capitalized on the announcement.

      Here's hoping that the new architecture is not just a M$, Linux thing.

      I'd really like to have a low-power multi-core 64 bit chip blazing away in my next iMac.
  • woot! (Score:5, Funny)

    by Anonymous Coward on Tuesday August 23, 2005 @02:30PM (#13381632)
    Awesome. Now I'll be able to run 4 times as many CPUs with my 1000w PSU.
  • Places (Score:5, Interesting)

    by kevin_conaway (585204) on Tuesday August 23, 2005 @02:32PM (#13381653) Homepage
    Ok, Conroe [sjra.net] appears to be a lake in Texas, Merom [webshots.com] is a bluff near the Wabash river in Indiana...where/what was the inspiration for Woodcrest?
  • by iggymanz (596061) on Tuesday August 23, 2005 @02:32PM (#13381654)
    So instead of clock speed how about execution speed of standard benchmarks on a reference machine? Or would that show how much they suck per dollar next to AMD?
    • the APwC - accumulated performance-per-watt cost.
      (performance/watt) / cost

      I think that's more relevant.

      The best processor would be one that offers the highest performance-per-watt at the lowest price. I have a feeling that the AMD-64s currently hold that crown.

      Since dual cores are quite common these days, we need a measure that can scale based on the number of processors used to achieve the performance numbers.

      So whether it takes 50 transmeta processors or 2 AMD 64s or x Intel processors, at the e
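The proposed metric could be sketched like this; the chip names and numbers below are made up purely for illustration:

```python
# APwC sketch: (performance / watt) / cost. Higher is better.
def apwc(performance, watts, price_usd):
    return (performance / watts) / price_usd

# Hypothetical chips: (benchmark score, TDP in watts, price in dollars).
chips = {
    "Chip A": (2000, 90, 300),
    "Chip B": (1500, 35, 250),
}
for name, (score, tdp, price) in chips.items():
    print(name, round(apwc(score, tdp, price), 4))
```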
  • They've taken a little cooler and stopped chasing their speed dragon to make a more solid, well-organized, and efficient architecture. Once they've established this 'way point' of stability, then they can get back on the zip zoom bus. I'd like to stand in on the silicon vista, if I were tiny, and see how much less litter they've got hooked up down there. Copper plate thatches, cat scratches, now Intel has the cool down rock and roll.
    • by MOBE2001 (263700) on Tuesday August 23, 2005 @03:13PM (#13382097) Homepage Journal
      In my opinion, Intel and the rest of the big processor vendors are running out of ideas. They can only come up with so many incremental improvements before they bore the market to death. So what comes next?

      I suggest that they start working on the biggest problem facing the computer industry today: unreliable software. It's costing us billions of dollars and even human lives. Consider that the basic architecture of the processor has not changed in more than 150 years, ever since a guy named Babbage and his girlfriend Ada built their mechanical computer around the "table of instructions". All processor architectures have been based on and optimized for the algorithm ever since.

      A truly innovative architecture would abandon the algorithm and embrace a non-algorithmic, signal-based synchronous software model. It would not only revolutionize the computer industry, it would solve its nastiest problem: software unreliability.

      But can we really expect the big guys (Intel, AMD, IBM, etc...) to be truly innovative at this stage of the game? Their approach is evolutionary, not revolutionary and they are doing just fine as it is. They have no great incentive to change. Hopefully, a bright upstart will get the message and make a killing while the behemoths are busy fighting each other for market share. They won't know what hit them until it's too late. The message is simple: There is a solution to the software reliability crisis. The disadvantage is that it will require a radical change in both processor architecture and software construction methodology. The advantage is too good to ignore: 100% software reliability! Guaranteed!

      This is the stuff that revolutions and great companies are made of. After a century and a half, I think it's time for a change. He who has an ear (and the venture capital) let him hear!
  • Actually... (Score:5, Informative)

    by EconolineCrush (659729) on Tuesday August 23, 2005 @02:33PM (#13381665)
    This post originally linked The Tech Report's coverage [techreport.com]. Not sure why the mod changed the link.

    TR also has additional details [techreport.com] on the architecture itself.

    • Not sure why the mod changed the link.

      More pretty pictures. This is slashdot, I don't want to have to read anything (:
  • Does it run Lin--, err, Mac OS X?
  • instruction set? (Score:5, Interesting)

    by John_Sauter (595980) <John_Sauter@systemeyescomputerstore.com> on Tuesday August 23, 2005 @02:35PM (#13381694) Homepage
    Does anybody know what instruction set these three new processors implement? The article states that these are 64-bit CPUs, but doesn't say whether they feature the AMD64 or the Itanium instruction set.
            John Sauter (J_Sauter@Empire.Net)
    • Re:instruction set? (Score:5, Informative)

      by Anonymous Coward on Tuesday August 23, 2005 @02:53PM (#13381913)
      ... it clearly states that it combines the 64-bit support and FSB from the P4's NetBurst. M$ already told Intel to fcuk off when it came to Itanium 64-bit. Hence the EM64T they have now, which is compatible with AMD's implementation.

      "combining the lessons learned from the Pentium 4's NetBurst and Pentium M's Banias architectures. To put it bluntly, the next-generation microprocessor architecture borrows the FSB and 64-bit capabilities of NetBurst and combines it with the power saving features of the Pentium M platform."
  • by Harbinjer (260165)
    So it's the revamped P3 once again. I'm glad they optimized it for power instead of marketing, but will it scale to higher clock speeds? Will it be able to reach 3 GHz in the next 2 years?
  • by loose_cannon_gamer (857933) on Tuesday August 23, 2005 @02:35PM (#13381699)
    Don't get me wrong, I don't care for my house being heated by computer heat the way it is now by my small LAN. But...

    Fundamentally, most markets of any age undergo specialization, niches form, and those most fitted to the niches, do best. But having a unified architecture between server / laptop / desktop flies in the face of that; it either claims there is no niche market anywhere, or that there is a "killer chip" which fits all niches better than anything else.

    Now, I can guess what Intel would choose of those options, but is there something about the chip industry that makes it immune to this specialization idea? What am I missing?

    • Very high development, entry, and "subscription" costs (basically getting people to use your software model, which is hard) make shared commonality a more desirable utility than specialization in some cases. In this case you get the best of both worlds: the per-task specialization of the niche factor, while still keeping the enormous economies of scale and the ability to leverage non-niche resources. Software is malleable and adaptable enough that not everything has to be coded for one particular niche to be ef
  • 0.5W (Score:3, Insightful)

    by blamanj (253811) on Tuesday August 23, 2005 @02:36PM (#13381716)
    The reduction in power will enable a new class of devices to be created at the 0.5W marker - the Handtop.

    Also known as the video iPod, perhaps?
  • by BikeRacer (810473) on Tuesday August 23, 2005 @02:36PM (#13381726)
    The screenshots make it look like Intel isn't including HT with this next gen core. Is that because it's likely the pipeline is shorter? I thought it would be uber-cool to have a dual-core CPU with HT for some awesome synthetic 4-core action. But, I guess the real question is: Should I care about HT anymore?
  • by Vengeance (46019) on Tuesday August 23, 2005 @02:37PM (#13381743)
    It's been YEARS since Transmeta began preaching performance/watt, and it looks like right now, when Transmeta has some big contracts (with Sony, Microsoft, Fujitsu, etc) beginning to pay off, Intel finally figures it out.

    Of course, Transmeta's already GOT the technology to cut leakage by tremendous amounts... Given that they are no longer a direct competitor of Intel's, it would make some sense if Intel simply licensed Transmeta's LongRun2 tech. But what do I know? I'm always foolishly choosing the better technology instead of the better marketing.

    • Of course, Transmeta's already GOT the technology to cut leakage by tremendous amounts... Given that they are no longer a direct competitor of Intel's

      Yeah, they have it. Their approach is a bit like this:

      0) Preach, preach, preach about performance/watt
      1) IPO
      2) Deliver low power
      3) Performance sucks a big donkey (i.e. fail to deliver)
      4) Fail to hit your market and go out of business (i.e. no longer a direct competitor of Intel)

      ...it would make some sense if Intel simply licensed Transmeta's LongRun2 tech.

      Lo
  • by SiliconEntity (448450) on Tuesday August 23, 2005 @02:41PM (#13381779)
    So much for Moore's Law. So much for the supposedly inexorable march of technology. So much for that nonsense about increasing CPU performance, you all didn't really want 4 GHz anyway, did you?

    People have been predicting the demise of Moore's Law for years. It's funny that it's happened and nobody seems to notice.
    • by Aadain2001 (684036) on Tuesday August 23, 2005 @03:36PM (#13382291) Journal
      Intel engineers came out years ago and stated that they would be hitting the physical wall by 2010, if not sooner. And this isn't the 'we don't know how to get light any smaller' wall, it's the 'the gate is an atom thick' wall. Once you get that small, that's it; you can't get smaller using atoms. You'd have to go to subatomic particles, which is a completely different ballgame.

      And if anything, the battle between AMD and Intel should have taught everyone here on Slashdot that faster clock speed does not mean faster performance. There are MANY factors in architecture design that will improve or decrease overall performance. Sure, you can have a 4 GHz CPU, but if its cycles per instruction (CPI) is 100 while a 2 GHz CPU has a CPI of 20, the 2 GHz CPU will actually be FASTER than the 4 GHz chip! Intel knows this, AMD knows this, and everyone who does serious computer design work knows this. Intel chose the wrong path with NetBurst and they have known it for years. But you can't turn around one day, snap your fingers, and switch to another architecture company-wide. It takes time, hard work, and a lot of people, which is why we are only seeing this change now and not back in 2002 like they would have wanted.

      I'm happy with this change and I think playing with the architecture to get better CPI and instructions per cycle (IPC) is a better way to go than just cranking up the clock speed.
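The CPI example above, worked through: execution time is instructions times CPI divided by clock rate, so the lower-clocked chip with the much better CPI finishes first.

```python
# Execution time = instruction count * cycles-per-instruction / clock (Hz).
def exec_seconds(instructions, cpi, clock_hz):
    return instructions * cpi / clock_hz

n = 1_000_000_000                 # one billion instructions
print(exec_seconds(n, 100, 4e9))  # 4 GHz, CPI 100 -> 25.0 s
print(exec_seconds(n, 20, 2e9))   # 2 GHz, CPI 20  -> 10.0 s
```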

    • Moore's observation is that the number of transistors per m^2 doubles about every 18 months. This is still happening. What has stopped at least temporarily is the use of this increase in transistor count as a means of increasing CPU throughput. A variety of factors including the fact that RAM access can't keep up, power consumption increases as the cube of clock speed and that increasing pipeline depth to enable higher clock speeds has been taken as far as is practical means that CPU execution speed is curr
    • by pla (258480) on Tuesday August 23, 2005 @03:41PM (#13382329) Journal
      So much for that nonsense about increasing CPU performance

      First the OB-peeve: Moore's Law has nothing to do with clock speed or relative performance, only that the number of transistors per unit of area will double every X months (where X lies between 12 and 18, depending on which "version" of his law you use).

      Okay, that taken care of... :)

      AMD and Intel hit a barrier "harder" than the mere doubling of transistors... They reached a point where running a PC noticeably increases the electric bill (a typical single-core P4 costs around $1.50 per month to run 24/7 in the Northeastern US, just for the CPU, not counting the graphics card, monitor, hair dryer, or whatever other power-sucking toys you might have attached); and relatedly, that high density of power consumption requires getting rid of a proportional amount of heat.

      By dropping the energy requirements to a fifth, you can consequently have five times as many cores for the same heat-dissipating capacity. If each of those pushes a mere half the numerical performance of the single power-hungry core, you still get a net gain of 1.5 units of processing per unit of area.
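Working backwards from the $1.50/month figure above, and assuming roughly $0.12/kWh (the rate is a guess, not a number from the post), the implied average CPU draw comes out fairly modest:

```python
# Implied average draw from a $1.50/month electricity cost.
cost_per_month = 1.50   # dollars, per the post above
rate = 0.12             # dollars per kWh (assumed, not from the post)

kwh_per_month = cost_per_month / rate          # 12.5 kWh
avg_watts = kwh_per_month * 1000 / (24 * 30)   # over a 30-day month
print(round(avg_watts, 1))  # ~17.4 W average
```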
    • People are well aware of the scaling limits and have been for years.

      There is a fundamental physical limit that puts a cap on the amount of heat that can be removed from a solid per unit time.

      We are fundamentally power limited. Moore's law says the transistor density increases exponentially, but we can't switch those transistors faster because the chip gets too hot and we can't remove that heat fast enough - FUNDAMENTALLY due to the laws of physics.

      So there's a tradeoff. Either put more transistors on the
  • by ndansmith (582590) on Tuesday August 23, 2005 @02:43PM (#13381806)
    I am glad to see that Intel is addressing power consumption with the server chip Woodcrest. After all, desktops and laptops are small potatoes compared to servers when it comes to power usage. For corporations with large server implementations, I could see this saving a lot of power (=$). Good move for Intel; lower power bills are good leverage for new technology purchases -- many of us used that same argument to upgrade from CRTs to LCDs. It is nice to finally have something to be excited about from Intel again.
  • Power Consumption (Score:4, Insightful)

    by Botia (855350) on Tuesday August 23, 2005 @02:46PM (#13381838)
    This is something Intel needs to do to stay in the CPU market. Their NetBurst architecture has allowed AMD to capture the hearts of the enthusiasts, as AMD's is a better processor. (Note: the mass market has many other factors besides which processor is best in determining sales.)

    While I currently favor AMD's processors, the Pentium M is a magnificent piece of hardware. With Intel basing their future processors on the Pentium M, they are going to give AMD a run for their money. This will force AMD to drop their prices to a more reasonable level.

    The one thing Intel is doing that IMHO is wrong is changing the definition of performance from clock speed to performance/watt. This tells us nothing of the performance of the processor or the power required to run it. Instead we should have two basic measurements for all processors: performance and power consumption. Most people are able to do simple calculations such as division on their own or with a calculator. There is no need to hide the actual performance from the end users.
  • by Kaa (21510) on Tuesday August 23, 2005 @02:47PM (#13381850) Homepage
    Instead of Anand's pictures of PowerPoint slides, here's some actual info from TechReport:

    "IDF -- On the heels of Intel's announcement of a single, common CPU architecture intended to drive its mobile, desktop, and server platforms, the company has divulged additional details of that microarchitecture. This dual-core CPU design will, as we've reported, support an array of Intel technologies, including 64-bit EM64T compatibility, virtualization, enhanced security, and active management capabilities. Intel says the new chips will deliver big improvements in performance per watt, especially compared to its Netburst-based offerings.

    At 14 stages, the main pipeline will be a little bit longer than current Pentium M processors. The cores will be a wider, more parallel design capable of issuing, executing, and retiring four instructions at once. (Current x86 processors are generally three-issue.) The CPU will, of course, feature out-of-order instruction execution and will also have deeper buffers than current Intel processors. These design changes should give the new architecture significantly more performance per clock, and somewhat consequently, higher performance per watt.

    Unlike Intel's current dual-core CPU designs, which don't really share resources or communicate with one another except over the front-side bus, this new design looks to be a much more intentionally multicore design. The on-die L2 cache will be shared between the two cores, and Intel says the relative bandwidth per core will be higher than its current chips. L2 cache size is widely scalable to different sizes for different products. The L1 caches will remain separate and tied to a specific core, but the CPU will be able to transfer data directly from one core's L1 cache to another. Naturally, these CPUs will thus have two cores on a single die.

    The first implementation of the architecture will not include Hyper-Threading, but Intel (somewhat cryptically) says to expect additional threads over time. I don't believe that means HT capability will be built into silicon but not initially made active, because Intel expressly cited transistor budget as a reason for excluding HT.

    On the memory front, the new architecture is slated to have the ever-present "improved pre-fetch" of data into cache, and it will also include what Intel calls "memory disambiguation." That sounds an awful lot like a NUMA arrangement similar to what's found on AMD's Opteron, but I don't believe it is. This feature seems to be related to a speculative load capability instead.

    The server version of the new Intel architecture, code-named Woodcrest, will feature two cores. Intel is also talking about Whitefield, which has as much as twice the L2 cache of Woodcrest and four execution cores.

    The company has decided against assigning a codename to this new, common processor microarchitecture, curiously enough. As we've noted, the first CPUs based on this design will be available in the second half of 2006 and built using Intel's 65nm fabrication process."
    • by hacker (14635) <hacker@gnu-designs.com> on Tuesday August 23, 2005 @03:32PM (#13382247)
      The on-die L2 cache will be shared between the two cores, and Intel says the relative bandwidth per core will be higher than its current chips. L2 cache size is widely scalable to different sizes for different products. The L1 caches will remain separate and tied to a specific core, but the CPU will be able to transfer data directly from one core's L1 cache to another.

      So in other words, they haven't learned at all [daemonology.net], it seems. With the major security flaws [eweek.com] in Hyperthreading (including the flaws in the L1/L2 cache design), I'm not surprised they've pulled it from the chips for now.

      When things don't work and you can't fix them, pull it out. Microsoft should take a tip here and start pulling out the insecure parts of their OS. Oh wait, that might leave a blank drive instead.

      • Why would you say that? The way Hyperthreading was designed was to use all of the hardware possible, as long as possible. To do this, you need a deep pipeline so that each operation is broken into many small steps, and you also need lots, and lots, and lots of extra hardware in terms of ALUs, FPUs, and Load/Store units. This adds up to a huge silicon investment, and it's simply not there.

        Secondly, these new cores are not Netburst cores, so Hyperthreading would have to be redesigned from the ground up t
  • by MosesJones (55544) on Tuesday August 23, 2005 @02:53PM (#13381912) Homepage

    With everyone chasing multi-core rather than clock rate, this isn't really a surprise. If you want to run 4 cores on one die you clearly need to reduce the power consumption of each of those cores over what is done today.

    It clearly helps with laptops, which of course will be multi-core themselves in a year or so.

    What an odd day it will be when I start ordering either a "2-way" or "4-way" laptop.
  • Bigger than IE? (Score:5, Interesting)

    by otis wildflower (4889) on Tuesday August 23, 2005 @02:54PM (#13381924) Homepage
    I have to wonder if Intel basically ditching the last 5 years of CPU development in favor of their Israeli skunkworks ranks at or above the famous Microsoft IE U-turn?

    I mean, Intel sold millions and spent billions on Netbu(r|)st, and hit the wall well short of the 5+ GHz figures bandied about back in the day. This is basically ctrl-alt-del on a large part of their roadmap, though I'm sure they'll still be selling 'traditional' P4s for a while.
  • by pla (258480) on Tuesday August 23, 2005 @03:04PM (#13382020) Journal
    Intel plans to release these in Q2 2006. They will use a 65nm process, support dual cores, and get 5x the per-watt performance of the Prescott EE.

    AMD has dual core chips available now, that get 3-5x the per-watt performance of Intel's Prescott EE line (depending on how they define certain things - Idle? Mean power/load? Peak realistic-but-not-theoretical? TDP?).

    And AMD only uses 90nm at the moment, and will have two 65nm fabs up by the end of this year - Which will give them another nice boost in terms of per-watt performance.


    I love the idea of a truly "new" CPU line entering the arena, but this smells an awful lot like more of Intel playing catch-up, and in a way they won't win.

    Unless the Pentium-M line has, for whatever reason, reached a hard wall for performance, Intel would have done better to expand it to multi core - Perhaps jump right to 4 cores just to bypass the whole "catch up with dual" criticism - And dropped the price to undercut AMD (at least per-core). But this? Well, it has potential, but unless Intel has decided to seriously underhype a major announcement, I won't lose any sleep worrying that I just upgraded three machines to readiness for AMD's X2 line (can't afford the damn things yet, so currently just running Winchester 3000s, but all just a chip-swap away from going to X2).
    • Unless the Pentium-M line has, for whatever reason, reached a hard wall for performance, Intel would have done better to expand it to multi core
      I suspect that is exactly what they are doing, with a new label slapped on to suggest something really new and exciting.
      Considering the per-watt performance of the current Pentium M versus the AMD64 (both at 90nm), the Pentium M seems slightly superior. So Intel may actually take the lead there.
      In absolute performance, however, the AMDs are currently superior. Unle
      • I don't see the Pentium M's as hitting any performance wall at all. In fact, if anything, I see them hitting a Watt wall, and being told by the senior execs that they won't release a Pentium M chip that puts out more than 30 Watts, period. Something tells me this is even the reason we haven't seen them in desktops.

        As for performance per watt, the Pentium M is better than you give it credit for. 27 Watts is hard for anything in the desktop world to compare to; the AMD64's are all up in the 50W range (ma
  • by spooon (447071) on Tuesday August 23, 2005 @03:17PM (#13382135)
    Maybe I'm missing something, but I don't understand how performance per watt is useful as *the* statistic for comparing processors. Granted, clock speeds aren't the law of the land, but at least they gave you some idea of how processors stack up against each other. The lines have become fuzzier recently, but I can know with a reasonable amount of certainty that a 3 GHz P4 will kick the living daylights out of a 1 MHz CPU.

    Performance per watt tells a different story. While performance return per unit power consumed may tell how efficient a processor is, it doesn't tell me how good a processor is at doing what I want it to -- crunch numbers, really fast.

    Performance per watt is a ratio, so the rating can increase when performance increases or power consumption decreases. Therefore, a solar calculator with a 5 MHz processor and (I'm making this up) 0.1 watt power consumption would have a 50 MHz/watt rating, and a 3 GHz CPU with a 100 watt consumption would have a rating of 30 MHz/watt. So, now Intel sells both these processors and advertises their performance/watt ratings. When someone goes to buy a new computer, they're surprised to find that the 50 MHz/watt computer is actually slower/worse/crappier than the 30 MHz/watt one.

    A rock has infinite performance per power usage. It performs one instruction using no power.
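    The ratio arithmetic above can be sketched directly; the calculator and desktop figures are the made-up examples from this post, not real chip specs:

    ```python
    # Performance per watt is just a ratio, so a slow-but-frugal chip can
    # "beat" a fast one. Figures are the made-up examples from the post above.

    def perf_per_watt(mhz, watts):
        """Throughput per unit power, in MHz per watt."""
        return mhz / watts

    # 5 MHz solar-calculator CPU at 0.1 W vs. 3 GHz desktop CPU at 100 W:
    print(round(perf_per_watt(5, 0.1)))     # 50 MHz/W -- the "more efficient" chip
    print(round(perf_per_watt(3000, 100)))  # 30 MHz/W -- yet 600x the absolute speed
    ```

    The metric rewards the calculator even though it is 600 times slower in absolute terms, which is exactly the buyer surprise described above.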
    • by photon317 (208409) on Tuesday August 23, 2005 @03:37PM (#13382293)
      While it's not a perfect metric, it is very useful for some very important target markets. Some companies crunch numbers continuously for profit. They have datacenters filled with thousands upon thousands of Opterons or Xeons or what-have-you. The battles they are fighting (in terms of maximizing their profits) are all about power/heat density (how many GFlops can I cram into X square feet of datacenter space and still be able to supply the proper power and cooling), and performance per watt (for every $100,000 I spend on electric bills running this datacenter, how many calculations can I complete?).
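      That datacenter math can be sketched roughly as follows; the fleet size, wattages, and electricity price are illustrative assumptions, not figures from the post:

      ```python
      # A rough sketch of the datacenter arithmetic the parent describes.
      # Fleet size, wattage, and price per kWh are assumptions; the point is
      # that for a fixed amount of work, watts scale the power bill linearly.

      def annual_power_cost(chips, watts_each, usd_per_kwh=0.10):
          """Yearly electricity cost (USD) for a fleet running flat-out."""
          kilowatts = chips * watts_each / 1000
          hours_per_year = 24 * 365
          return kilowatts * hours_per_year * usd_per_kwh

      # Same work from 5,000 CPUs: better performance per watt is pure savings.
      print(round(annual_power_cost(5000, 100)))  # 438000 -- 100 W chips
      print(round(annual_power_cost(5000, 60)))   # 262800 -- 60 W chips
      ```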
  • by illumina+us (615188) on Tuesday August 23, 2005 @03:18PM (#13382137) Homepage
    Silly me... when I read new architecture I said to myself: "Finally! We move on from x86. We have advanced beyond 20 year old technology."

    Sadly, I was mistaken.
    • by stienman (51024) <adavis.ubasics@com> on Tuesday August 23, 2005 @03:52PM (#13382446) Homepage Journal
      Finally! We move on from x86. We have advanced beyond 20 year old technology.

      That's a bit like saying, "Finally! We move on from English. We have advanced beyond centuries old technology."

      The X86 is just a language. No recent processor actually uses it raw. There may be some inefficiencies in the language itself, but the most significant have been reduced by extensions and smart compilers which avoid those constructs. The remaining inefficiencies are worth the backwards compatibility, but they are minimal anyway.

      A lot of people keep complaining about this "ancient" instruction set, but the reality is that it doesn't matter at this point. Even low-level drivers are being written in C due to fast processors and infinite storage space.

      Yeah, sure, it would be nice to move to another instruction set, but previous efforts have failed. Intel's 64-bit chip requires a monstrously complex compiler, but it's wicked fast/efficient. But the P4 has surpassed it with its "inefficient, outdated, and clunky" instruction set.

      There's so much momentum on the X86 caravan that to develop something else and surpass the caravan is a herculean task. Currently it is more effective to improve the architecture that runs X86 than it is to make a new instruction set and try to improve the architecture at the same time. (which is required since just changing the instruction set won't advance the performance enough to compete with the X86 that comes out when you're ready to release)

      -Adam
  • by Sebastopol (189276) on Tuesday August 23, 2005 @03:52PM (#13382445) Homepage
    For everyone who keeps restating the mistake that Moore's law deals with PERFORMANCE, please educate yourselves:

    "Moore's law is the empirical observation that at our rate of technological development, the complexity of an integrated circuit, with respect to minimum component cost will double in about 18 months."

    http://en.wikipedia.org/wiki/Moore%27s_Law [wikipedia.org]

    How about that: NOTHING about performance.
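    What the observation does predict can be sketched as simple doubling arithmetic; the starting count and time horizon below are arbitrary illustrations, not historical data:

    ```python
    # A minimal sketch of what the quoted observation actually claims:
    # component count (not performance) doubles roughly every 18 months.

    def moores_transistors(initial_count, months, doubling_months=18):
        """Projected transistor count after `months` at Moore's-law pace."""
        return initial_count * 2 ** (months / doubling_months)

    print(moores_transistors(1_000_000, 18))  # 2000000.0 -- doubled
    print(moores_transistors(1_000_000, 36))  # 4000000.0 -- quadrupled
    ```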

  • by Theovon (109752) on Tuesday August 23, 2005 @04:35PM (#13382836)
    Intel's original idea was to find a way to more aggressively pipeline their CPU design, allowing for higher clock rates. Increasing the number of pipeline stages allows you to reduce the number of transistors between stages, reducing propagation delay and increasing maximum clock rate.

    In a vacuum, this makes sense. If the instruction reorderer and/or compiler are smart enough, you can keep that pipeline full and take advantage of that higher clock rate. Indeed, there have been examples of carefully-crafted code that ran very well on this architecture.

    Unfortunately, real software is quite different from the ideal sort of thing that runs well on the P4. Too many hazards (branches and instruction dependencies) limited how full you could keep the pipeline. The CPU would execute instructions out of order, but there's only so smart you can make it. And not all branch hazards can be fixed by a branch predictor.

    Intel's hyperpipelined design was a relative failure. Sure, they could clock it 50% faster than an AMD, but that's what it took to make up for the increased pipeline stalls. Performance-wise, it was a wash. In other respects, it was a loss, because the processors required more power, more expensive cooling, and more expensive fabrication.
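    The "wash" can be sketched with a toy throughput model; the clock speeds, pipeline depths, and branch statistics below are illustrative assumptions, not Intel's numbers:

    ```python
    # A toy model (my own illustration, not from the post) of why a deeper
    # pipeline's clock-speed gain can be a performance wash: each mispredicted
    # branch flushes roughly the whole pipeline, so deeper pipelines pay a
    # proportionally bigger flush penalty.

    def effective_mips(clock_mhz, pipeline_depth, branch_freq=0.2, mispredict_rate=0.1):
        """Instructions/sec under a simple 1-IPC-plus-flush-cost model."""
        cycles_per_instruction = 1 + branch_freq * mispredict_rate * pipeline_depth
        return clock_mhz / cycles_per_instruction

    short = effective_mips(2000, 10)  # shorter pipeline, lower clock
    deep = effective_mips(3000, 31)   # ~50% higher clock, 3x the depth

    print(round(short))  # 1667
    print(round(deep))   # 1852 -- only ~11% faster despite the 50% clock edge
    ```

    Under these assumed stall rates, a 50% clock advantage shrinks to about 11% delivered throughput, before counting the extra power the higher clock costs.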

    After a while, Intel came up with a way to make use of that wasted bandwidth. Why not fill those pipeline bubbles with another, independent execution stream? HyperThreading was born. Not altogether a bad idea. In many cases, it allowed up to 30% better over-all performance for multi-threaded apps, and giving you another CPU core (virtual or not) is always a good way to reduce latency.

    In a last-ditch attempt to try to break the MHz barrier, Intel came out with the Prescott core. They lengthened the pipeline from an excessive 20 stages to an absurd 31 stages (not including the x86-to-RISC translator before the trace cache). To make up for the additional hazards, Intel had to develop even more aggressive branch prediction and use larger reorder buffers. Unfortunately, this too turned out to be a performance wash, with an associated increase in power requirements.

    At the same time, notebook computers started to overtake desktops in popularity. Low power became MUCH more important than high performance. The P4 really could not compete in this space, so Intel tasked an Israeli team with developing a whole new architecture. To make a long story short, they basically reverted back to the P3 architecture (a relatively short pipeline), but added on all of the P4's advancements in reordering and branch prediction.

    Think about that. Intel had made some mistakes, but they were GOOD mistakes. In order to work around the deficiencies in their P4 design, they had to develop some very impressive and advanced ways of keeping that pipeline full. Of course, any pipeline is going to have hazards, so imagine applying that technology to a much shorter pipeline. The result was impressive. While the slower clock speed of Banias/Centrino was noticeable under SOME circumstances (as it is with AMD processors), the majority of the time, the performance was excellent, even at a lower clock rate and lower power requirement.

    The development of the P4 was a technical failure, but it was also a valuable phase in Intel's life. These lessons learned are going to be the basis for Intel's future success in efficient CPUs. Finally, I think Intel will be able to compete with AMD, even WITHOUT dubious deals with resellers designed to lock AMD out of the market.
  • Apple's switch (Score:4, Interesting)

    by theolein (316044) on Tuesday August 23, 2005 @05:17PM (#13383210) Journal
    These new processors are the reason Apple is switching to x86. They're coming out in the 2nd half of 2006, just when Apple said its first x86 machines would be released, and they offer improved "performance per watt", i.e. the exact same terms Jobs used when he announced the switch. My guess is that Apple will also be wanting the 0.5 W handtop CPUs for its Video iPod and that there will be some video-enabled version of Airport Express to go along with it.
  • by fbg111 (529550) on Tuesday August 23, 2005 @06:42PM (#13384071)
    The company has decided against assigning a codename to this new, common processor microarchitecture, curiously enough.

    Wow, could it be that the engineers are back in charge at Intel? Palace coup? You know if the marketing people were still in charge, they'd have blue freaks miming the new codename all over the place. Dare I hope that it might become cool again for geeks to like Intel...
