Intel Reveals Next-Gen CPUs 515
EconolineCrush writes "Intel has revealed its next generation CPU architecture at the Intel Developer Forum. The new architecture will be shared by 'Conroe' desktop, 'Merom' mobile, and 'Woodcrest' server processors, all of which were demoed by Intel CEO Paul Otellini. Rather than chasing clock speeds, Intel is focusing on lowering power consumption with its new architecture. Otellini claimed that Conroe will offer five times the performance per watt of the company's current desktop chips. He also ran the entire keynote presentation on a Merom laptop, and demoed Conroe on a system running Linux."
More details: (Score:5, Informative)
http://theinquirer.net/?article=25623 [theinquirer.net]
http://www.hexus.net/content/reviews/review.php?d
http://www.hexus.net/content/reviews/review_print
http://www.hexus.net/content/reviews/review_print
http://www.tomshardware.com/hardnews/20050823_133
Actually... (Score:5, Informative)
TR also has additional details [techreport.com] on the architecture itself.
Addressed in the article... (Score:2, Informative)
Thanks to the death of NetBurst, Conroe will feature a 5x increase in performance per watt. Here's to the death of the power-hungry Intel processor.
and
Woodcrest and Merom will both improve performance per watt by a factor of 3 over their predecessors.
They're improving the processor as opposed to the batteries...
On electrical cost savings alone, PC users will save $1 billion per year for every 100M computers.
Pretty amazing. Although I'd like to see real #s to back up that claim.
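Here's a quick back-of-envelope check on the arithmetic behind that claim (the electricity price and 24/7 duty cycle are my assumptions, not Intel's figures):

```python
# Sanity-checking "$1 billion/year per 100M computers".
# Electricity price and round-the-clock operation are assumptions.
ELECTRICITY_PRICE = 0.10   # dollars per kWh (assumed)
HOURS_PER_YEAR = 24 * 365

savings_per_pc = 1e9 / 100e6                      # $10 per computer per year
kwh_saved = savings_per_pc / ELECTRICITY_PRICE    # 100 kWh per year
watts_saved = kwh_saved * 1000 / HOURS_PER_YEAR   # average continuous watts

print(f"${savings_per_pc:.0f}/yr -> {kwh_saved:.0f} kWh/yr -> {watts_saved:.1f} W saved")
```

So the claim works out to shaving an average of about 11 W per machine, which at least isn't crazy next to the 100+ W NetBurst parts mentioned elsewhere in the thread.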
Re:Power concerns (Score:5, Informative)
Honestly, there just is not that much room for improvement unless someone makes a huge breakthrough.
If you think about the requirements for a battery, they are pretty harsh:
1. Relatively non-toxic.
2. Relatively non-explosive.
3. Lasts a long time.
4. Cheap.
Re:What about performance (Score:2, Informative)
At 14 stages, the main pipeline will be a little bit longer than current Pentium M processors. The cores will be a wider, more parallel design capable of issuing, executing, and retiring four instructions at once. (Current x86 processors are generally three-issue.) The CPU will, of course, feature out-of-order instruction execution and will also have deeper buffers than current Intel processors. These design changes should give the new architecture significantly more performance per clock, and somewhat consequently, higher performance per watt.
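As a rough illustration of what the wider issue width buys (the 50% slot-fill efficiency is a made-up placeholder, not an Intel figure):

```python
# Peak retire rate = issue width x clock; real code only fills a fraction
# of the issue slots. The efficiency value here is purely illustrative.
def throughput(issue_width, clock_ghz, efficiency=0.5):
    return issue_width * clock_ghz * 1e9 * efficiency  # instructions/second

three_wide = throughput(3, 3.0)  # today's three-issue x86 cores
four_wide = throughput(4, 3.0)   # the new four-issue design, same clock

print(f"4-wide vs 3-wide at equal clock: {four_wide / three_wide:.2f}x")
```

That's the whole point of "more performance per clock": the same clock (and roughly the same power) can retire up to a third more instructions.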
From TechReport with actually useful info (Score:5, Informative)
"IDF -- On the heels of Intel's announcement of a single, common CPU architecture intended to drive its mobile, desktop, and server platforms, the company has divulged additional details of that microarchitecture. This dual-core CPU design will, as we've reported, support an array of Intel technologies, including 64-bit EM64T compatibility, virtualization, enhanced security, and active management capabilities. Intel says the new chips will deliver big improvements in performance per watt, especially compared to its Netburst-based offerings.
At 14 stages, the main pipeline will be a little bit longer than current Pentium M processors. The cores will be a wider, more parallel design capable of issuing, executing, and retiring four instructions at once. (Current x86 processors are generally three-issue.) The CPU will, of course, feature out-of-order instruction execution and will also have deeper buffers than current Intel processors. These design changes should give the new architecture significantly more performance per clock, and somewhat consequently, higher performance per watt.
Unlike Intel's current dual-core CPU designs, which don't really share resources or communicate with one another except over the front-side bus, this new design looks to be a much more intentionally multicore design. The on-die L2 cache will be shared between the two cores, and Intel says the relative bandwidth per core will be higher than its current chips. L2 cache size is widely scalable to different sizes for different products. The L1 caches will remain separate and tied to a specific core, but the CPU will be able to transfer data directly from one core's L1 cache to another. Naturally, these CPUs will thus have two cores on a single die.
The first implementation of the architecture will not include Hyper-Threading, but Intel (somewhat cryptically) says to expect additional threads over time. I don't believe that means HT capability will be built into silicon but not initially made active, because Intel expressly cited transistor budget as a reason for excluding HT.
On the memory front, the new architecture is slated to have the ever-present "improved pre-fetch" of data into cache, and it will also include what Intel calls "memory disambiguation." That sounds an awful lot like a NUMA arrangement similar to what's found on AMD's Opteron, but I don't believe it is. This feature seems to be related to a speculative load capability instead.
The server version of the new Intel architecture, code-named Woodcrest, will feature two cores. Intel is also talking about Whitefield, which has as much as twice the L2 cache of Woodcrest and four execution cores.
The company has decided against assigning a codename to this new, common processor microarchitecture, curiously enough. As we've noted, the first CPUs based on this design will be available in the second half of 2006 and built using Intel's 65nm fabrication process. "
Re:So much for Moore's Law (Score:2, Informative)
http://www.webopedia.com/TERM/M/Moores_Law.html [webopedia.com]
Re:Power concerns (Score:3, Informative)
Nope: a current Intel CPU draws 100+ watts; a hard drive is more like 15 watts.
Re:instruction set? (Score:5, Informative)
"combining the lessons learned from the Pentium 4's NetBurst and Pentium M's Banias architectures. To put it bluntly, the next-generation microprocessor architecture borrows the FSB and 64-bit capabilities of NetBurst and combines it with the power saving features of the Pentium M platform."
Re:Power concerns (Score:3, Informative)
My old Celeron 600 notebook came with a 38 watt-hour battery. Most newer notebooks come with 50-65 WHr batteries, and I think you can order batteries as large as 80 WHr with some notebooks.
And really, power consumption, at least for the mobile CPUs right now, isn't all that much higher than it was back in the P3 days. Mobile P3s needed anywhere from about 10-20 W; the Pentium Ms use 7.5 W (600 MHz idle) to 24 W (a few 533 FSB parts are 27 W).
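Plugging those figures into a quick battery-life estimate (the ~10 W for screen, disk, and chipset is an assumed overhead, not a measured one):

```python
# Rough battery-life estimate from the watt-hour and CPU-watt figures above.
# The ~10 W for screen, disk, and chipset is an assumed overhead.
def battery_hours(capacity_whr, cpu_watts, rest_of_system_watts=10.0):
    return capacity_whr / (cpu_watts + rest_of_system_watts)

near_idle = battery_hours(50, 7.5)   # Pentium M near idle
loaded = battery_hours(50, 24.0)     # Pentium M under full load

print(f"near idle: {near_idle:.1f} h, loaded: {loaded:.1f} h")
```

Which matches everyday experience: two to three hours on a 50 WHr pack, and a better CPU only helps so much once the rest of the system dominates the draw.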
Re:Power concerns (Score:5, Informative)
For example, it has long been known that you can make very long-lasting nuclear batteries using betavoltaics (couple a source of beta radiation to a p-n junction and you have your battery), but would you put one on your lap? That is the question.
Either it has so much shielding that it is too heavy, or it is nice and light and will make you grow another set of legs (or something else down there...).
But I remember there was an article about someone developing such a battery - here [sciencedaily.com] is the link, I think.
Re:Places (Score:3, Informative)
PS -- the houses in Harbor Town off of Seven Coves are nice.
Not to sound too much like an AMD fanboy, but... (Score:4, Informative)
AMD has dual-core chips available now that get 3-5x the per-watt performance of Intel's Prescott EE line (depending on how they define certain things - idle? mean power under load? peak realistic-but-not-theoretical? TDP?).
And AMD only uses 90nm at the moment, and will have two 65nm fabs up by the end of this year - Which will give them another nice boost in terms of per-watt performance.
I love the idea of a truly "new" CPU line entering the arena, but this smells an awful lot like more of Intel playing catch-up, and in a way they won't win.
Unless the Pentium M line has, for whatever reason, hit a hard wall on performance, Intel would have done better to expand it to multi-core - perhaps jump right to 4 cores just to bypass the whole "catch up with dual" criticism - and drop the price to undercut AMD (at least per-core). But this? Well, it has potential, but unless Intel has decided to seriously under-hype a major announcement, I won't lose any sleep worrying that I just upgraded three machines to readiness for AMD's X2 line (can't afford the damn things yet, so currently just running Winchester 3000s, but all just a chip-swap away from going to X2).
Re:So in other words? (Score:2, Informative)
Re:Places (Score:3, Informative)
Merom, Israel...Intel does much R&D work in Israel.
-ds
Mod parent up (Score:3, Informative)
Moore's not done yet.
Re:From TechReport with actually useful info (Score:4, Informative)
So in other words, they haven't learned at all [daemonology.net], it seems. With the major security flaws [eweek.com] in Hyperthreading (including the flaws in the L1/L2 cache design), I'm not surprised they've pulled it from the chips for now.
When things don't work and you can't fix them, pull it out. Microsoft should take a tip here and start pulling out the insecure parts of their OS. Oh wait, that might leave a blank drive instead.
Re:Power concerns (Score:3, Informative)
There are many one-shot batteries that have a pretty good energy density, but that would be wasteful.
The Navy used Silver/zinc batteries for a high performance test submarine.
"For example, it has long been known that you can make very long-lasting nuclear batteries using betavoltaics (couple a source of beta radiation to a p-n junction and you have your battery), but would you put one on your lap? That is the question.
Either it has so much shielding that it is too heavy, or it is nice and light and will make you grow another set of legs (or something else down there...). "
Actually, you can shield beta radiation with tin foil. The problem is that a lot of good beta emitters are also good gamma emitters. Then you have to factor in disposal and other problems.
Re:Transmeta was there first (Score:2, Informative)
To quote the story:
Re:Places (Score:4, Informative)
A new upscale housing community starting in the low 300's. You'll find one in pretty much every suburban area in North America, and they're all exactly the same.
Re:Is this the right direction? (Score:3, Informative)
And those 3 fields aren't really that different: servers need slightly better I/O, laptops are all about power, and desktops have been "good enough" for most everybody for years. Now that even desktops need better power efficiency, everything is moving toward the laptop side of the spectrum, where it will balance out again. Call this the "revenge of NetBurst" effect.
Moore's Law != performance (Score:3, Informative)
"Moore's law is the empirical observation that at our rate of technological development, the complexity of an integrated circuit, with respect to minimum component cost will double in about 18 months."
http://en.wikipedia.org/wiki/Moore%27s_Law [wikipedia.org]
How bout that, NOTHING about performance.
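Easy to see if you just write the doubling down (18-month period per the definition quoted above):

```python
# Moore's law: transistor count doubles roughly every 18 months.
# It says nothing about clock speed or delivered performance.
def transistors(initial, months, doubling_period=18):
    return initial * 2 ** (months / doubling_period)

growth = transistors(1, 72) / transistors(1, 0)  # six years of doubling
print(f"{growth:.0f}x more transistors in 6 years")
```

You can spend those extra transistors on more cores, bigger caches, or wider issue - none of which requires the clock to go up at all.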
Re:Power concerns (Score:5, Informative)
Lots of materials have a high energy density and are still very safe and stable. The problem, of course, is that extracting electrical energy from them is not incredibly easy to do. However, we should not say that high energy density is inherently unsafe.
Re:Transmeta was there first (Score:2, Informative)
Transmeta was an overhyped technology. YOU are the one that has bought into THEIR marketing. They were supposed to be a performance leader when they got their VC money. When they failed at that, they "transformed" their goals into a power efficiency story.
Ultra low voltage Pentium-M's deliver far higher performance per watt than any Transmeta part.
Guess what - most x86 chips do "code morphing" but in hardware. It was foolish to think that a software solution could be more efficient.
Re:From TechReport with actually useful info (Score:3, Informative)
Secondly, these new cores are not NetBurst cores, so Hyperthreading would have to be redesigned from the ground up to work with these P6-derived cores.
Thirdly, they've gone ahead with dual-core chips. Why do you need two simulated cores if you have two physical ones? Hyperthreading was a good idea, but it was just a hold-off for dual cores, and honestly, with very, very few pieces of software optimized for running on multiple cores, there's not a lot of enthusiasm from Intel to go that route.
It's really not about "learning a lesson," as you're pushing your articles on us. It's about moving to where the customers are. Right now, the customers are in long-battery-life, highly mobile computers. My guess is their server market really bottomed out when IBM came trucking through it again, this time with a chip that can really deliver what it promised. I'd wager another guess that some bargaining went on between IBM and Intel, not only over Apple, but over staying out of each other's market segments. AMD is the real victor here, though, since it operates outside the horizon of both, and marketing toward the geek gets things done.
Re:OS X on 64-bit only then (Score:1, Informative)
Intel's EM64T is, for all intents and purposes, the exact same as AMD's x86-64 instruction set. That means the 64-bit chip can boot a 32-bit copy of Windows and run 32-bit apps.
Short history of the P4: We saw this coming. (Score:5, Informative)
In a vacuum, this makes sense. If the instruction reorderer and/or compiler are smart enough, you can keep that pipeline full and take advantage of that higher clock rate. Indeed, there have been examples of carefully-crafted code that ran very well on this architecture.
Unfortunately, real software is quite different from the ideal sort of thing that runs well on the P4. Too many hazards (branches and instruction dependencies) limited how full you could keep the pipeline. The CPU would execute instructions out of order, but there's only so smart you can make it. And not all branch hazards can be fixed by a branch predictor.
Intel's hyperpipelined design was a relative failure. Sure, they could clock it 50% faster than an AMD, but that's what it took to make up for the increased pipeline stalls. Performance-wise, it was a wash. In other respects, it was a loss, because the processors required more power, more expensive cooling, and more expensive fabrication.
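A toy model makes the "wash" easy to see. The branch frequency, misprediction rate, and full-flush penalty below are assumed illustrative values, not measured P4 or Athlon figures:

```python
# Toy pipeline model: a misprediction flushes roughly the whole pipeline,
# so deeper pipelines pay a bigger per-miss penalty in cycles.
# All rates here are assumed for illustration, not measured figures.
def effective_mips(clock_ghz, pipeline_stages, branch_freq=0.2, mispredict=0.1):
    cpi = 1 + branch_freq * mispredict * pipeline_stages  # cycles/instruction
    return clock_ghz * 1000 / cpi  # millions of instructions per second

deep = effective_mips(3.0, 20)   # P4-style: 50% faster clock, longer pipeline
short = effective_mips(2.0, 14)  # shorter pipeline, slower clock

# The 1.5x clock advantage shrinks to ~1.37x delivered throughput here; add
# cache misses (fixed in nanoseconds, not cycles) and it shrinks further.
print(f"deep: {deep:.0f} MIPS, short: {short:.0f} MIPS, ratio {deep/short:.2f}x")
```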
After a while, Intel came up with a way to make use of that wasted bandwidth. Why not fill those pipeline bubbles with another, independent execution stream? HyperThreading was born. Not altogether a bad idea. In many cases, it allowed up to 30% better overall performance for multi-threaded apps, and giving you another CPU core (virtual or not) is always a good way to reduce latency.
In a last-ditch attempt to break the MHz barrier, Intel came out with the Prescott core. They lengthened the pipeline from an excessive 20 stages to an absurd 31 stages (not including the x86-to-RISC translator before the trace cache). To make up for the additional hazards, Intel had to develop even more aggressive branch prediction and use larger reorder buffers. Unfortunately, this too turned out to be a performance wash, with an associated increase in power requirements.
At the same time, notebook computers started to overtake desktops in popularity. Low power became MUCH more important than high performance. The P4 really could not compete in this space, so Intel hired an Israeli team to develop a whole new architecture. To make a long story short, they basically reverted back to the P3 architecture (a relatively short pipeline), but added on all of the P4's advancements in reordering and branch prediction.
Think about that. Intel had made some mistakes, but they were GOOD mistakes. In order to work around the deficiencies in their P4 design, they had to develop some very impressive and advanced ways of keeping that pipeline full. Of course, any pipeline is going to have hazards, so imagine applying that technology to a much shorter pipeline. The result was impressive. While the slower clock speed of Banias/Centrino was noticeable under SOME circumstances (as it is with AMD processors), the majority of the time the performance was excellent, even at a lower clock rate and lower power requirement.
The development of the P4 was a technical failure, but it was also a valuable phase in Intel's life. These lessons learned are going to be the basis for Intel's future success in efficient CPUs. Finally, I think Intel will be able to compete with AMD, even WITHOUT dubious deals with resellers designed to lock AMD out of the market.
Re:So much for Moore's Law (Score:3, Informative)
There is a fundamental physical limit that puts a cap on the amount of heat that can be removed from a solid per unit time.
We are fundamentally power limited. Moore's law says the transistor density increases exponentially, but we can't switch those transistors faster because the chip gets too hot and we can't remove that heat fast enough - FUNDAMENTALLY due to the laws of physics.
So there's a tradeoff. Either put more transistors on the chip and reduce speed, or put fewer transistors on the chip and increase speed.
This is very different from the past, when we had the luxury of BOTH increased transistor speed and increased density. The total power was not yet high enough to cause a problem.
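The tradeoff falls out of the standard dynamic-power relation, P = C * V^2 * f. The capacitance and voltage numbers below are illustrative only, not taken from any datasheet:

```python
# Dynamic CMOS power: P = C * V^2 * f. Higher frequency usually requires
# higher voltage too, so heat grows much faster than clock speed.
# Capacitance and voltage values here are illustrative only.
def dynamic_power(capacitance_nf, voltage, freq_ghz):
    return capacitance_nf * 1e-9 * voltage ** 2 * freq_ghz * 1e9  # watts

one_fast = dynamic_power(30, 1.3, 3.0)      # one core pushed for clock speed
two_slow = 2 * dynamic_power(30, 1.1, 2.0)  # two cores, lower clock and voltage

# Two slower cores deliver more aggregate cycles (4 GHz vs 3 GHz) for less heat.
print(f"one fast core: {one_fast:.1f} W, two slower cores: {two_slow:.1f} W")
```

That's why "more transistors, lower clock" wins once you hit the heat-removal ceiling: the V^2 term punishes the fast single core twice over.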
For those interested in the details, I refer them to the following paper:
http://www.intel.com/research/documents/Bourianof
Nuclear batteries (Score:5, Informative)
Considering that plutonium beta cell batteries were used in pacemakers, I wouldn't be too worried about that. I think the shielding could be lightweight enough.
But getting rid of used batteries could be a real problem.
Re:Power concerns (Score:3, Informative)
See:
http://docs.info.apple.com/article.html?artnum=86
Re:Now we know... (Score:3, Informative)
From http://www.macworld.com/news/2005/06/06/liveupdat
Re:Good (Score:3, Informative)
No.
The new line of chips is LaGrande compliant. [xbitlabs.com] LaGrande is Intel's CPU-embedded implementation [intel.com] of the Trusted Computing Group's [trustedcom...ggroup.org] Trusted Platform Module.
So what does that mean?
All of the new CPUs have ID numbers again. Remember the Pentium 3 ID numbers that created so much outrage and backlash? Well, they are back with a vengeance.
The new CPUs will hold crypto keys, and they are specifically designed to keep the keys (and encrypted files) secure against the owner. They are specifically boobytrapped to self-destruct if you try to read out your own keys. IBM currently uses a separate non-CPU Trusted Computing chip, and they explicitly advertise the self-destruct aspect in their Man in Black [ibm.com] Thinkpad TV commercial.
It can also act as a little spy inside your computer - this is called Remote Attestation - a spy that watches all of the software you run and sends a spy report to other people over the internet. You are denied any control over this spy report. The only control you have is to turn the system off completely, and if you turn it off you get locked out of your own files and it is impossible to run or install Trust-using software. In five to ten years, under Trusted Network Connect, [trustedcom...ggroup.org] you could even be denied an internet connection unless you activate the system, send this spy report, and run an approved unmodified operating system and approved unmodified software.
It is basically a DRM enforcer CPU, but far far worse.
-