
Intel Reveals Next-Gen CPUs

EconolineCrush writes "Intel has revealed its next generation CPU architecture at the Intel Developer Forum. The new architecture will be shared by 'Conroe' desktop, 'Merom' mobile, and 'Woodcrest' server processors, all of which were demoed by Intel CEO Paul Otellini. Rather than chasing clock speeds, Intel is focusing on lowering power consumption with its new architecture. Otellini claimed that Conroe will offer five times the performance per watt of the company's current desktop chips. He also ran the entire keynote presentation on a Merom laptop, and demoed Conroe on a system running Linux."
  • Actually... (Score:5, Informative)

    by EconolineCrush ( 659729 ) on Tuesday August 23, 2005 @02:33PM (#13381665)
    This post originally linked The Tech Report's coverage [techreport.com]. Not sure why the mod changed the link.

    TR also has additional details [techreport.com] on the architecture itself.

  • by Anonymous Coward on Tuesday August 23, 2005 @02:36PM (#13381722)
    Somehow I don't think you RTFA.

    Thanks to the death of NetBurst, Conroe will feature a 5x increase in performance per watt. Here's to the death of the power-hungry Intel processor.

    and

    Woodcrest and Merom will both improve performance per watt by a factor of 3 over their predecessors.

    They're improving the processor as opposed to the batteries...

    On electrical cost savings alone, PC users will save $1 billion per year for every 100M computers.

    Pretty amazing. Although I'd like to see real #s to back up that claim.
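
    A quick back-of-envelope check of that claim (the electricity price and duty cycle below are assumptions for illustration, not Intel's figures):

    # Rough check of the "$1 billion per year per 100M computers" claim.
    # Assumptions (not Intel's): $0.10/kWh electricity, machines on 8 h/day.
    computers      = 100_000_000
    price_per_kwh  = 0.10              # USD, assumed average rate
    hours_per_year = 8 * 365           # assumed duty cycle

    target_savings = 1_000_000_000     # USD/year claimed
    kwh_saved   = target_savings / price_per_kwh                   # total kWh/year
    watts_saved = kwh_saved * 1000 / (computers * hours_per_year)
    print(f"Implied savings per machine: ~{watts_saved:.0f} W")    # ~34 W

    So under those assumptions the claim works out to roughly 34 W saved per machine, a plausible figure if the new parts shave a few tens of watts off a NetBurst-class desktop chip.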
  • Re:Power concerns (Score:5, Informative)

    by LWATCDR ( 28044 ) on Tuesday August 23, 2005 @02:37PM (#13381731) Homepage Journal
    Because batteries are more mature than electronics.
    Honestly, there just is not that much room for improvement unless someone makes a huge breakthrough.
    If you think about the requirements for a battery, they are pretty harsh:
    1. Relatively non-toxic.
    2. Relatively non-explosive.
    3. Lasts a long time.
    4. Cheap.
  • by FadedTimes ( 581715 ) on Tuesday August 23, 2005 @02:43PM (#13381799)
    Higher clock speeds aren't the only way to get more performance.

    At 14 stages, the main pipeline will be a little bit longer than current Pentium M processors. The cores will be a wider, more parallel design capable of issuing, executing, and retiring four instructions at once. (Current x86 processors are generally three-issue.) The CPU will, of course, feature out-of-order instruction execution and will also have deeper buffers than current Intel processors. These design changes should give the new architecture significantly more performance per clock, and somewhat consequently, higher performance per watt.
  • by Kaa ( 21510 ) on Tuesday August 23, 2005 @02:47PM (#13381850) Homepage
    Instead of Anand's pictures of PowerPoint slides, here's some actual info from TechReport:

    "IDF -- On the heels of Intel's announcement of a single, common CPU architecture intended to drive its mobile, desktop, and server platforms, the company has divulged additional details of that microarchitecture. This dual-core CPU design will, as we've reported, support an array of Intel technologies, including 64-bit EM64T compatibility, virtualization, enhanced security, and active management capabilities. Intel says the new chips will deliver big improvements in performance per watt, especially compared to its Netburst-based offerings.

    At 14 stages, the main pipeline will be a little bit longer than current Pentium M processors. The cores will be a wider, more parallel design capable of issuing, executing, and retiring four instructions at once. (Current x86 processors are generally three-issue.) The CPU will, of course, feature out-of-order instruction execution and will also have deeper buffers than current Intel processors. These design changes should give the new architecture significantly more performance per clock, and somewhat consequently, higher performance per watt.

    Unlike Intel's current dual-core CPU designs, which don't really share resources or communicate with one another except over the front-side bus, this new design looks to be a much more intentionally multicore design. The on-die L2 cache will be shared between the two cores, and Intel says the relative bandwidth per core will be higher than its current chips. L2 cache size is widely scalable to different sizes for different products. The L1 caches will remain separate and tied to a specific core, but the CPU will be able to transfer data directly from one core's L1 cache to another. Naturally, these CPUs will thus have two cores on a single die.

    The first implementation of the architecture will not include Hyper-Threading, but Intel (somewhat cryptically) says to expect additional threads over time. I don't believe that means HT capability will be built into silicon but not initially made active, because Intel expressly cited transistor budget as a reason for excluding HT.

    On the memory front, the new architecture is slated to have the ever-present "improved pre-fetch" of data into cache, and it will also include what Intel calls "memory disambiguation." That sounds an awful lot like a NUMA arrangement similar to what's found on AMD's Opteron, but I don't believe it is. This feature seems to be related to a speculative load capability instead.

    The server version of the new Intel architecture, code-named Woodcrest, will feature two cores. Intel is also talking about Whitefield, which has as much as twice the L2 cache of Woodcrest and four execution cores.

    The company has decided against assigning a codename to this new, common processor microarchitecture, curiously enough. As we've noted, the first CPUs based on this design will be available in the second half of 2006 and built using Intel's 65nm fabrication process. "
  • by Anonymous Coward on Tuesday August 23, 2005 @02:47PM (#13381852)
    Moore's Law does not make a statement about performance. It makes a statement about the number of transistors in a certain area.

    http://www.webopedia.com/TERM/M/Moores_Law.html [webopedia.com]

  • Re:Power concerns (Score:3, Informative)

    by Anonymous Coward on Tuesday August 23, 2005 @02:50PM (#13381879)

    Nope: a current Intel CPU draws 100+ watts, while a hard drive is more like 15 watts.
  • Re:instruction set? (Score:5, Informative)

    by Anonymous Coward on Tuesday August 23, 2005 @02:53PM (#13381913)
    ... it clearly states that it combines the 64-bit and NetBurst features from the P4. M$ already told Intel to fcuk off when it came to Itanium 64-bit. Hence the EM64T they have now, which is compatible with AMD's implementation.

    "combining the lessons learned from the Pentium 4's NetBurst and Pentium M's Banias architectures. To put it bluntly, the next-generation microprocessor architecture borrows the FSB and 64-bit capabilities of NetBurst and combines it with the power saving features of the Pentium M platform."
  • Re:Power concerns (Score:3, Informative)

    by freidog ( 706941 ) on Tuesday August 23, 2005 @02:57PM (#13381959)
    We have already done that.
    My old Celeron 600 notebook came with a 38 watt-hour battery. Most newer notebooks come with 50-65WHr batteries, and I think you can order batteries as large as 80WHr with some notebooks.

    And really, power consumption, at least for mobile CPUs right now, isn't all that much higher than it was back in the P3 days. Mobile P3s needed anywhere from about 10-20W; the Pentium Ms use 7.5W (600MHz, idle) to 24W (a few 533 FSB parts are 27W).
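
    To put those numbers in runtime terms, a rough battery-life estimate (the CPU and battery figures are from the post above; the rest-of-system draw is an assumption):

    # Rough battery life: hours = battery watt-hours / average system draw.
    battery_wh = 50       # typical newer notebook battery (from the post)
    platform_w = 10.0     # assumed screen/disk/chipset draw, not from the post

    for label, cpu_w in [("idle", 7.5), ("load", 24.0)]:   # Pentium M figures above
        hours = battery_wh / (cpu_w + platform_w)
        print(f"{label}: ~{hours:.1f} h")    # ~2.9 h near idle, ~1.5 h at full load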
  • Re:Power concerns (Score:5, Informative)

    by drgonzo59 ( 747139 ) on Tuesday August 23, 2005 @02:58PM (#13381969)
    Good point. #3, "lasts a long time," usually means "stores a lot of energy," and that mostly conflicts with #1 and #2. Whenever you have anything that produces and stores large amounts of energy, you are bound to have toxicity, explosive potential, and other harmful effects.

    For example, it has long been known that you can make very long-lasting nuclear batteries using betavoltaics (couple a source of beta radiation with a p-n junction and you have your battery), but would you put one on your lap? That is the question.

    Either it has so much shielding that it is too heavy, or it is nice and light and will make you grow another set of legs (or something else down there...).

    I remember there was an article about someone developing such a battery; here [sciencedaily.com] is the link, I think.

  • Re:Places (Score:3, Informative)

    by jasonmicron ( 807603 ) on Tuesday August 23, 2005 @03:03PM (#13382017)
    Conroe is a city, which encompasses said lake. It is roughly 30-45 minutes north of Houston in Montgomery County. Their outlet center sucks.

    PS -- the houses in Harbor Town off of Seven Coves are nice.
  • by pla ( 258480 ) on Tuesday August 23, 2005 @03:04PM (#13382020) Journal
    Intel plans to release these in the second half of 2006. They will use a 65nm process, support dual cores, and get 5x the per-watt performance of the Prescott EE.

    AMD has dual core chips available now, that get 3-5x the per-watt performance of Intel's Prescott EE line (depending on how they define certain things - Idle? Mean power/load? Peak realistic-but-not-theoretical? TDP?).

    And AMD only uses 90nm at the moment, and will have two 65nm fabs up by the end of this year - Which will give them another nice boost in terms of per-watt performance.


    I love the idea of a truly "new" CPU line entering the arena, but this smells an awful lot like more of Intel playing catch-up, and in a way they won't win.

    Unless the Pentium-M line has, for whatever reason, hit a hard performance wall, Intel would have done better to expand it to multiple cores - perhaps jumping right to 4 cores just to bypass the whole "catch up with dual" criticism - and dropping the price to undercut AMD (at least per core). But this? Well, it has potential, but unless Intel has decided to seriously underhype a major announcement, I won't lose any sleep worrying that I just upgraded three machines to readiness for AMD's X2 line (can't afford the damn things yet, so currently just running Winchester 3000s, but all just a chip-swap away from going to X2).
  • by niskel ( 805204 ) on Tuesday August 23, 2005 @03:09PM (#13382061)
    Umm, that's doubling the gate density every 18 months, not performance.
  • Re:Places (Score:3, Informative)

    by DistantShadow ( 901883 ) on Tuesday August 23, 2005 @03:24PM (#13382192)
    Lake Conroe, Oregon...Intel's largest campus is in Oregon.

    Merom, Israel...Intel does much R&D work in Israel.

    ...I'm a bit confused about Woodcrest, though...

    -ds
  • Mod parent up (Score:3, Informative)

    by SpinyNorman ( 33776 ) on Tuesday August 23, 2005 @03:24PM (#13382195)
    Intel's process size continues to shrink (down to 45nm now), regardless of what they're choosing to do with those transistors or how fast they clock them.

    Moore's not done yet.

  • by hacker ( 14635 ) <hacker@gnu-designs.com> on Tuesday August 23, 2005 @03:32PM (#13382247)
    The on-die L2 cache will be shared between the two cores, and Intel says the relative bandwidth per core will be higher than its current chips. L2 cache size is widely scalable to different sizes for different products. The L1 caches will remain separate and tied to a specific core, but the CPU will be able to transfer data directly from one core's L1 cache to another.

    So in other words, they haven't learned at all [daemonology.net], it seems. With the major security flaws [eweek.com] in Hyperthreading (including the flaws in the L1/L2 cache design), I'm not surprised they've pulled it from the chips for now.

    When things don't work and you can't fix them, pull it out. Microsoft should take a tip here and start pulling out the insecure parts of their OS. Oh wait, that might leave a blank drive instead.

  • Re:Power concerns (Score:3, Informative)

    by LWATCDR ( 28044 ) on Tuesday August 23, 2005 @03:34PM (#13382274) Homepage Journal
    Actually, I was talking about durability, not power density.
    There are many one-shot batteries that have a pretty good energy density, but that would be wasteful.
    The Navy used silver-zinc batteries for a high-performance test submarine.

    "For example it has been long known that you can have very long lasting nuclear batteries using betavoltaics (couple of a source of beta radiation and a p-n junction and you have your battery), but would you put it on your lap that is the question.

    Either it has so much shielding that it is too heavy, or it is nice and light and will make you grow another set of legs (or something else down there...). "

    Actually you can shield beta with tin foil. The problem is that a lot of good beta emitters are also good gamma emitters. Then you have to add in disposal and other problems.
  • by Lew Pitcher ( 68631 ) on Tuesday August 23, 2005 @03:34PM (#13382279) Homepage
    Specifically, see this story in today's Financial Post [canada.com] for details.

    To quote the story:

    "Research In Motion Ltd.'s stock shot up 6% yesterday on speculation the BlackBerry maker will announce a licensing deal with Intel Corp. today that will allow the computer chip giant to use technology found in RIM's popular e-mail device.

    Intel has apparently agreed to use RIM's battery-saving technology in a new generation of chips based on a nascent wireless tech standard called WiMax and RIM may also start using Intel chips in the BlackBerry, published reports indicated."
  • Re:Places (Score:4, Informative)

    by Paul Slocum ( 598127 ) on Tuesday August 23, 2005 @03:41PM (#13382335) Homepage Journal
    where/what was the inspiration for Woodcrest?

    A new upscale housing community starting in the low 300's. You'll find one in pretty much every suburban area in North America, and they're all exactly the same.
  • by william_w_bush ( 817571 ) on Tuesday August 23, 2005 @03:45PM (#13382373)
    Very high development, entry, and "subscription" costs (basically getting people to use your software model, which is hard) make shared commonality more desirable than specialization in some cases. In this case you get the best of both worlds: the per-task specialization of the niche factor, while still keeping the enormous economies of scale and the ability to leverage non-niche resources. Software is malleable and adaptable enough that not everything has to be coded for one particular niche to be efficient, at least not yet.

    And those three fields aren't really that different: servers need slightly better I/O, laptops are all about power, and desktops have been "good enough" for most everybody for years. Now that even desktops need better power efficiency, everything is moving towards the laptop side of the spectrum, where it will balance out again. Call this the "revenge of NetBurst" effect.
  • by Sebastopol ( 189276 ) on Tuesday August 23, 2005 @03:52PM (#13382445) Homepage
    For everyone who keeps restating the mistake that Moore's law deals with PERFORMANCE, please educate yourselves:

    "Moore's law is the empirical observation that at our rate of technological development, the complexity of an integrated circuit, with respect to minimum component cost will double in about 18 months."

    http://en.wikipedia.org/wiki/Moore%27s_Law [wikipedia.org]

    How bout that, NOTHING about performance.
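
    For reference, here is what that observation actually predicts - component count, not speed (the starting count and doubling period are illustrative assumptions):

    # Moore's law as stated: component count doubles roughly every 18 months.
    transistors_now     = 100_000_000   # assumed starting point, not a real chip
    months_per_doubling = 18

    for years in (0, 2, 4, 6):
        count = transistors_now * 2 ** (years * 12 / months_per_doubling)
        print(f"+{years} yr: ~{count / 1e6:.0f}M transistors")
    # Nothing in the calculation says anything about clock speed or performance.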

  • Re:Power concerns (Score:5, Informative)

    by timster ( 32400 ) on Tuesday August 23, 2005 @04:07PM (#13382591)
    Sorry, but this argument doesn't hold a lot of water. Fat, for instance, has an energy density of 38 kilojoules per gram, whereas lithium-ion has a density of 0.72 kilojoules per gram. Fat, while flammable, is far less dangerous than lithium-ion.

    Lots of materials have a high energy density and are still very safe and stable. The problem, of course, is that extracting electrical energy from them is not incredibly easy to do. However, we should not say that high energy density is inherently unsafe.
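
    Using the figures above, the gap is easy to quantify (the pack mass below is an assumption for illustration):

    # Energy density comparison, kJ per gram, figures from the post above.
    fat_kj_per_g    = 38.0
    li_ion_kj_per_g = 0.72
    print(f"fat stores ~{fat_kj_per_g / li_ion_kj_per_g:.0f}x more energy per gram")

    pack_grams = 300                      # assumed battery-pack mass
    for name, kj_g in [("Li-ion", li_ion_kj_per_g), ("fat", fat_kj_per_g)]:
        wh = kj_g * pack_grams / 3.6      # 1 Wh = 3.6 kJ
        print(f"{name}: ~{wh:.0f} Wh")    # ~60 Wh vs ~3200 Wh

    In other words, chemistry leaves plenty of headroom; the hard part is getting the energy back out as electricity safely and quickly.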
  • by akuma(x86) ( 224898 ) on Tuesday August 23, 2005 @04:18PM (#13382697)
    Give me a break.

    Transmeta was an overhyped technology. YOU are the one that has bought into THEIR marketing. They were supposed to be a performance leader when they got their VC money. When they failed at that, they "transformed" their goals into a power efficiency story.

    Ultra low voltage Pentium-M's deliver far higher performance per watt than any Transmeta part.

    Guess what - most x86 chips do "code morphing" but in hardware. It was foolish to think that a software solution could be more efficient.
  • by ciroknight ( 601098 ) on Tuesday August 23, 2005 @04:30PM (#13382799)
    Why would you say that? The way Hyperthreading was designed was to use as much of the hardware as possible, for as long as possible. To do this, you need a deep pipeline so that each stage has time to break up the operation, and you also need lots and lots of extra hardware in terms of ALUs, FPUs, and load/store units. That adds up to a huge silicon investment, and it's simply not there.

    Secondly, these new cores are not NetBurst cores, so Hyperthreading would have to be redesigned from the ground up to work with the P6-derived cores.

    Thirdly, they've gone ahead with dual-core chips. Why do you need two simulated cores if you have two physical ones? Hyperthreading was a good idea, but it was just a stopgap until dual cores arrived, and honestly, with very, very few pieces of software optimized for dual cores, there's not a lot of enthusiasm at Intel to go that route.

    It's really not about "learning a lesson," as the articles you're pushing on us suggest. It's about moving to where the customers are. Right now, the customers want long battery life and highly mobile computers. My guess is their server market really bottomed out when IBM came trucking through it again, this time with a chip that can really deliver what it promised. I'd wager a further guess that some bargaining went on between IBM and Intel, not only over Apple but over staying out of each other's market segments. AMD is the real victor here, though, operating on the horizon of both and getting things done by marketing toward the geek.
  • by Anonymous Coward on Tuesday August 23, 2005 @04:31PM (#13382813)
    Actually, no.
    Intel's EM64T is, for all intents and purposes, exactly the same as AMD's x86-64 instruction set. That means the 64-bit chip can boot a 32-bit copy of Windows and run 32-bit apps.
  • by Theovon ( 109752 ) on Tuesday August 23, 2005 @04:35PM (#13382836)
    Intel's original idea was to find a way to more aggressively pipeline their CPU design, allowing for higher clock rates. Increasing the number of pipeline stages allows you to reduce the number of transistors between stages, reducing propagation delay and increasing maximum clock rate.
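
    A toy version of that tradeoff, with made-up numbers (the total logic delay and per-stage latch overhead are assumptions): splitting a fixed amount of logic across more stages shortens the critical path per stage, so the clock can rise - up to the point where latch overhead dominates.

    # Toy model: fixed total logic delay split across N stages, plus a fixed
    # register/latch overhead per stage. All numbers are illustrative only.
    total_logic_ns = 10.0    # assumed total propagation delay of the work
    latch_ns       = 0.15    # assumed per-stage register overhead

    for stages in (10, 14, 20, 31):
        cycle_ns = total_logic_ns / stages + latch_ns
        print(f"{stages:2d} stages -> ~{1.0 / cycle_ns:.2f} GHz max clock")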

    In a vacuum, this makes sense. If the instruction reorderer and/or compiler are smart enough, you can keep that pipeline full and take advantage of the higher clock rate. Indeed, there have been examples of carefully crafted code that ran very well on this architecture.

    Unfortunately, real software is quite different from the ideal sort of thing that runs well on the P4. Too many hazards (branches and instruction dependencies) limited how full you could keep the pipeline. The CPU would execute instructions out of order, but there's only so smart you can make it, and not all branch hazards can be fixed by a branch predictor.

    Intel's hyperpipelined design was a relative failure. Sure, they could clock it 50% faster than an AMD, but that's what it took to make up for the increased pipeline stalls. Performance-wise, it was a wash. In other respects, it was a loss, because the processors required more power, more expensive cooling, and more expensive fabrication.

    After a while, Intel came up with a way to make use of that wasted bandwidth: why not fill those pipeline bubbles with another, independent execution stream? HyperThreading was born. Not altogether a bad idea. In many cases, it allowed up to 30% better overall performance for multi-threaded apps, and giving you another CPU core (virtual or not) is always a good way to reduce latency.
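
    A crude way to see where a figure like that 30% comes from (both percentages below are assumptions chosen to land near it, not measurements): if one thread leaves a fraction of issue slots empty, a second thread can claim some of them.

    # Crude SMT model: thread A keeps only part of the issue slots busy,
    # and thread B fills some fraction of the leftover bubbles. Assumed numbers.
    slots_used_by_a   = 0.60   # assumed: one thread fills 60% of slots
    bubble_fill_ratio = 0.45   # assumed: second thread claims 45% of the rest

    combined = slots_used_by_a + (1 - slots_used_by_a) * bubble_fill_ratio
    gain = (combined / slots_used_by_a - 1) * 100
    print(f"throughput gain: ~{gain:.0f}%")   # ~30%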

    In a last-ditch attempt to break the MHz barrier, Intel came out with the Prescott core. They lengthened the pipeline from an excessive 20 stages to an absurd 31 stages (not including the x86-to-RISC translation ahead of the trace cache). To make up for the additional hazards, Intel had to develop even more aggressive branch prediction and use larger reorder buffers. Unfortunately, this too turned out to be a performance wash, with an associated increase in power requirements.

    At the same time, notebook computers started to overtake desktops in popularity. Low power became MUCH more important than high performance. The P4 really could not compete in this space, so Intel tasked an Israeli team with developing a whole new architecture. To make a long story short, they basically reverted to the P3 architecture (a relatively short pipeline) but added on all of the P4's advancements in reordering and branch prediction.

    Think about that. Intel had made some mistakes, but they were GOOD mistakes. In order to work around the deficiencies in their P4 design, they had to develop some very impressive and advanced ways of keeping that pipeline full. Of course, any pipeline is going to have hazards, so imagine applying that technology to a much shorter pipeline. The result was impressive. While the lower clock speed of Banias/Centrino was noticeable under SOME circumstances (as it is with AMD processors), the majority of the time the performance was excellent, even at a lower clock rate and a lower power requirement.

    The development of the P4 was a technical failure, but it was also a valuable phase in Intel's life. These lessons learned are going to be the basis for Intel's future success in efficient CPUs. Finally, I think Intel will be able to compete with AMD, even WITHOUT dubious deals with resellers designed to lock AMD out of the market.
  • by akuma(x86) ( 224898 ) on Tuesday August 23, 2005 @05:14PM (#13383181)
    People are well aware of the scaling limits and have been for years.

    There is a fundamental physical limit that puts a cap on the amount of heat that can be removed from a solid per unit time.

    We are fundamentally power limited. Moore's law says the transistor density increases exponentially, but we can't switch those transistors faster because the chip gets too hot and we can't remove that heat fast enough - FUNDAMENTALLY due to the laws of physics.

    So there's a tradeoff. Either put more transistors on the chip and reduce speed, or put fewer transistors on the chip and increase speed.

    This is very different from the past, when we had the luxury of BOTH increased transistor speed and increased density. The total power was not yet high enough to cause a problem.
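
    The tradeoff falls out of the usual dynamic-power relation, roughly P ~ N x C x V^2 x f: at a fixed power (heat) budget, more switching transistors means a lower frequency, and vice versa. A sketch with made-up constants:

    # Fixed power budget shared between transistor count and clock frequency.
    # The per-switch energy term is an assumed illustrative constant.
    power_budget_w    = 100.0
    energy_per_switch = 1.0e-15   # assumed effective C*V^2 (incl. activity), joules

    for n_million in (50, 100, 200):
        f_hz = power_budget_w / (n_million * 1e6 * energy_per_switch)
        print(f"{n_million}M switching transistors -> ~{f_hz / 1e9:.1f} GHz budget")
    # 50M -> 2.0 GHz, 100M -> 1.0 GHz, 200M -> 0.5 GHz at the same 100 W.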

    For those interested in the details, I refer them to the following paper:

    http://www.intel.com/research/documents/Bourianoff-Proc-IEEE-Limits.pdf [intel.com]
  • Nuclear batteries (Score:5, Informative)

    by jeti ( 105266 ) on Tuesday August 23, 2005 @05:34PM (#13383392)
    For example, it has long been known that you can make very long-lasting nuclear batteries using betavoltaics (couple a source of beta radiation with a p-n junction and you have your battery), but would you put one on your lap? That is the question.


    Considering that plutonium beta cell batteries were used in pacemakers, I wouldn't be too worried about that. I think the shielding could be lightweight enough.
    But getting rid of used batteries could be a real problem.

  • Re:Power concerns (Score:3, Informative)

    by eluusive ( 642298 ) on Tuesday August 23, 2005 @05:47PM (#13383540)
    Idle power of the entire system is approximately 120 watts. Max power consumption is 406 watts.

    See:
    http://docs.info.apple.com/article.html?artnum=86783 [apple.com]
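
    Turning those figures into running cost, under assumed usage (8 hours a day at $0.10/kWh - both assumptions, not from the Apple doc):

    # Annual electricity use and cost from the power figures above.
    hours_per_day = 8        # assumed
    price_per_kwh = 0.10     # USD, assumed

    for label, watts in [("idle", 120), ("max", 406)]:
        kwh_year = watts * hours_per_day * 365 / 1000
        print(f"{label}: ~{kwh_year:.0f} kWh/yr -> ~${kwh_year * price_per_kwh:.0f}/yr")
    # idle: ~350 kWh/yr (~$35); max: ~1186 kWh/yr (~$119)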
  • Re:Now we know... (Score:3, Informative)

    by wvitXpert ( 769356 ) on Tuesday August 23, 2005 @07:20PM (#13384397)
    "Intel processors provide more performance per watt than PowerPC processors do, said Jobs. "When we look at future roadmaps, mid-2006 and beyond, we see PowerPC gives us 15 units of performance per watt, but Intel's roadmap gives us 70. And so this tells us what we have to do," he explained."

    "Starting next year, we will introduce Macs with Intel processors," said Jobs. "This time next year, we plan to ship Macs with Intel processors. In two years, our plan is that the transition will be mostly complete, and will be complete by end of 2007."
    That sounds perfectly in line with today's Intel announcement.

    From http://www.macworld.com/news/2005/06/06/liveupdate /index.php [macworld.com]
  • Re:Good (Score:3, Informative)

    by Alsee ( 515537 ) on Wednesday August 24, 2005 @12:37AM (#13386591) Homepage
    Is Intel Good(tm) now?

    No.

    The new line of chips is LaGrande compliant. [xbitlabs.com] LaGrande is Intel's CPU-embedded implementation [intel.com] of the Trusted Computing Group's [trustedcom...ggroup.org] Trusted Platform Module.

    So what does that mean?

    All of the new CPUs have ID numbers again. Remember the Pentium 3 ID numbers that created so much outrage and backlash? Well, they are back with a vengeance.

    The new CPUs will hold crypto keys, and they are specifically designed to keep the keys (and encrypted files) secure against the owner. They are specifically boobytrapped to self-destruct if you try to read out your own keys. IBM currently uses a separate non-CPU Trusted Computing chip, and they explicitly advertise the self-destruct aspect in their Man in Black [ibm.com] ThinkPad TV commercial.

    It can also act as a little spy inside your computer - this is called Remote Attestation - a spy that watches all of the software you run and sends a spy report to other people over the internet. You are denied any control over this spy report. The only control you have is to turn the system off completely, and if you turn it off then you get locked out of your own files and it becomes impossible to run or install Trust-using software. In five to ten years, under Trusted Network Connect, [trustedcom...ggroup.org] you could even be denied an internet connection unless you activate the system, send this spy report, and run an approved, unmodified operating system with approved, unmodified software.

    It is basically a DRM enforcer CPU, but far far worse.

    -
