Intel Reveals Next-Gen CPUs
EconolineCrush writes "Intel has revealed its next generation CPU architecture at the Intel Developer Forum. The new architecture will be shared by 'Conroe' desktop, 'Merom' mobile, and 'Woodcrest' server processors, all of which were demoed by Intel CEO Paul Otellini. Rather than chasing clock speeds, Intel is focusing on lowering power consumption with its new architecture. Otellini claimed that Conroe will offer five times the performance per watt of the company's current desktop chips. He also ran the entire keynote presentation on a Merom laptop, and demoed Conroe on a system running Linux."
Power concerns (Score:5, Insightful)
Now we know... (Score:3, Insightful)
Re:Power concerns (Score:5, Insightful)
Comment removed (Score:2, Insightful)
YES... (Score:1, Insightful)
But this is VERY GOOD. It sounds like a return to real competition.
Re:Good (Score:5, Insightful)
No, they just reached the limits of silicon technology. Increasing performance any further would require either designing a "smarter" (rather than faster) processor or using multiple cores.
Anyway, the trend is good indeed. Finally, people will start thinking about performance on the level of software.
0.5W (Score:3, Insightful)
Also known as the video iPod, perhaps?
Transmeta was there first (Score:5, Insightful)
Of course, Transmeta's already GOT the technology to cut leakage by tremendous amounts... Given that they are no longer a direct competitor of Intel's, it would make some sense if Intel simply licensed Transmeta's LongRun2 tech. But what do I know? I'm always foolishly choosing the better technology instead of the better marketing.
So much for Moore's Law (Score:5, Insightful)
People have been predicting the demise of Moore's Law for years. It's funny that it's happened and nobody seems to notice.
Re:Power concerns (Score:1, Insightful)
Batteries are also a fairly basic way of storing power. Technologies like fuel cells may go some way toward solving the problem, and nuclear batteries would be wonderful, but those will never be sold to the public.
For now, though, Intel have to do more with what they've got. So instead of throwing more power at an inefficient CPU and using brute force, they have to rethink, which is good all round: less energy = less waste = less cost = better use of resources.
power saving servers (Score:5, Insightful)
Re:Good (Score:4, Insightful)
You cannot rely on anything you read here.
Re:Power concerns (Score:5, Insightful)
Because there are physical limits to how much energy you can store in given materials. You can't "design around" these limits. All you can do is try to come up with better materials, or better combinations of materials. And we've already tried every combination that is practical.
Which is why fuel-cell powered notebooks are interesting. But who knows if those will ever actually get produced.
Re:Power concerns (Score:3, Insightful)
Power Consumption (Score:4, Insightful)
While I currently favor AMD's processors, the Pentium M is a magnificent piece of hardware. With Intel basing their future processors on the Pentium M, they are going to give AMD a run for their money. This will force AMD to drop their prices to a more reasonable level.
The one thing Intel is doing that IMHO is wrong is changing the definition of performance from clock speed to performance/watt. A ratio alone tells us nothing about the absolute performance of the processor or the power required to run it. Instead we should have two basic measurements for all processors: performance and power consumption. Most people are able to do simple calculations such as division on their own or with a calculator. There is no need to hide the actual performance from the end users.
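The division argument above is easy to make concrete. A minimal sketch in Python, with made-up benchmark scores and power figures (neither chip name nor number is real data): publishing the two raw measurements lets anyone derive performance/watt, while publishing only the ratio hides both.

```python
# Hypothetical benchmark scores (points) and power draws (watts).
# These numbers are illustrative, not measured chip data.
cpus = {
    "chip_a": {"score": 1500, "watts": 95},  # faster, power-hungry
    "chip_b": {"score": 1200, "watts": 31},  # slower, frugal
}

for name, c in cpus.items():
    # Performance/watt is just one division away from the raw figures.
    perf_per_watt = c["score"] / c["watts"]
    print(f"{name}: {c['score']} pts at {c['watts']} W "
          f"-> {perf_per_watt:.1f} pts/W")
```

With these numbers chip_a wins on absolute performance while chip_b wins on performance/watt, which is exactly why a single combined metric can't substitute for the two raw ones.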
Re:Good (Score:2, Insightful)
Lower power = more cores... (Score:3, Insightful)
With everyone chasing multi-core rather than clock rate, this isn't really a surprise. If you want to run 4 cores on one die, you clearly need to reduce the power consumption of each of those cores below what is done today.
It clearly helps with laptops, which of course will be multi-core themselves in a year or so.
What an odd day it will be when I start ordering either a "2-way" or "4-way" laptop.
Re:Good (Score:2, Insightful)
Linux is always good, no matter what.
Solaris always sucks, no matter what.
Sun is always evil, no matter what.
Subliminal messages (Score:2, Insightful)
I don't think that the choice of desktop background for this aluminum-looking notebook [anandtech.com] is coincidental.
May this be a hint of a "5 W Sub-Laptop" in Apple's future?
Re:Yes, that's nice, but... (Score:3, Insightful)
Re:Now we know... (Score:5, Insightful)
Re:Power concerns (Score:3, Insightful)
Something other than x86 (Score:3, Insightful)
Sadly, I was mistaken.
Re:Not to sound too much like an AMD fanboy, but.. (Score:3, Insightful)
I suspect that is exactly what they are doing, with a new label slapped on to suggest something really new and exciting.
Considering the per-watt performance of the current Pentium M versus the AMD64 (both at 90nm), the Pentium M seems slightly superior. So Intel may actually take the lead there.
In absolute performance, however, the AMDs are currently superior. Unless this changes, AMD CPUs will remain the choice for maximum performance, while a "sensible" office desktop may be best equipped with the new Intels.
Granny Smith (Score:2, Insightful)
*sigh*
Neko
Re:Power concerns (Score:3, Insightful)
As any battery manufacturer will tell you, batteries do not explode. They may, however, "vent with flame."
-Adam
Re:So much for Moore's Law (Score:5, Insightful)
And if anything, the battle between AMD and Intel should have taught everyone here on Slashdot that a faster clock does not mean faster performance. There are MANY factors in architecture design that will improve or degrade overall performance. Sure, you can have a 4GHz CPU, but if its cycles per instruction (CPI) is 100 while a 2GHz CPU has a CPI of 20, the 2GHz CPU will actually be FASTER than the 4GHz chip! Intel knows this, AMD knows this, and everyone who does serious computer design work knows this. Intel chose the wrong path with Netburst and they have known it for years. But you can't turn around one day, snap your fingers, and switch to another architecture company-wide. It takes time, hard work, and a lot of people, which is why we are only seeing this change now and not back in 2002 like they would have wanted.
I'm happy with this change and I think playing with the architecture to get better CPI and instructions per cycle (IPC) is a better way to go than just cranking up the clock speed.
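The CPI arithmetic in the comment above checks out. A quick sketch (the CPI figures are the hypothetical ones quoted in the comment, not measured chip data):

```python
# Effective instruction throughput is clock rate divided by
# cycles per instruction (CPI): fewer cycles per instruction
# means more instructions retired per second.

def instructions_per_second(clock_hz: float, cpi: float) -> float:
    """Effective instruction throughput for a given clock and CPI."""
    return clock_hz / cpi

high_clock = instructions_per_second(4e9, 100)  # 4 GHz chip, CPI 100
low_clock = instructions_per_second(2e9, 20)    # 2 GHz chip, CPI 20

# The "slower" 2 GHz chip retires 2.5x as many instructions per second.
print(f"4 GHz / CPI 100: {high_clock:.2e} instr/s")
print(f"2 GHz / CPI 20:  {low_clock:.2e} instr/s")
print(f"ratio: {low_clock / high_clock:.1f}x")
```

Which is the whole point: halving CPI buys you as much as doubling the clock, usually at far lower power.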
Re:Transmeta was there first (Score:3, Insightful)
Of course, Transmeta's already GOT the technology to cut leakage by tremendous amounts... Given that they are no longer a direct competitor of Intel's
Yeah, they have it. Their approach is a bit like this:
0) Preach, preach, preach about performance/watt
1) IPO
2) Deliver low power
3) Performance sucks a big donkey (i.e. fail to deliver)
4) Fail to hit your market and go out of business (i.e. no longer a direct competitor of Intel)
LongRun2? You mean pay money for something that does not exist, is not proven, and is already a generation behind the competition? That would be suicide.
But what do I know?
Precisely. Have a seat please. The adults are talking.
Re:Something other than x86 (Score:5, Insightful)
That's a bit like saying, "Finally! We move on from English. We have advanced beyond centuries-old technology."
The X86 is just a language. No recent processor actually uses it raw. There may be some inefficiencies in the language itself, but the most significant have been reduced by extensions and smart compilers which avoid those constructs. The remaining inefficiencies are worth the backwards compatibility, but they are minimal anyway.
A lot of people keep complaining about this "ancient" instruction set, but the reality is that it doesn't matter at this point. Even low-level drivers are being written in C due to fast processors and infinite storage space.
Yeah, sure, it would be nice to move to another instruction set, but previous efforts have failed. Intel's 64 bit chip requires a monstrously complex compiler, but it's wicked fast/efficient. Yet the P4 has surpassed it with its "inefficient, outdated, and clunky" instruction set.
There's so much momentum behind the X86 caravan that developing something else and surpassing the caravan is a herculean task. Currently it is more effective to improve the architecture that runs X86 than it is to make a new instruction set and try to improve the architecture at the same time (which is required, since just changing the instruction set won't advance performance enough to compete with whatever X86 chip comes out by the time you're ready to release).
-Adam
Re:Now that Apple has joined the Intel bandwagon . (Score:5, Insightful)
different problem solving approach (Score:3, Insightful)
Re:You forgot to mention (Score:2, Insightful)
Where's the marchitecture? (Score:3, Insightful)
Wow, could it be that the engineers are back in charge at Intel? Palace coup? You know if the marketing people were still in charge, they'd have blue freaks miming the new codename all over the place. Dare I hope that it might become cool again for geeks to like Intel...
Re:Power concerns (Score:3, Insightful)
Tim
Re:Power concerns (Score:3, Insightful)
Everything is energy. The thing is, you need to be able to get the energy out of it quickly and easily. As far as releases of energy go, you don't get much faster and easier than a bomb.
Re:So What's Next After Multi-Cores and Low Power? (Score:1, Insightful)
The reason chips have so few errors is because chip problems are *incredibly* expensive to fix (have to rip them off the circuit boards and solder on new ones), so chip companies take a *lot* of care to eliminate errors before shipping, or they go out of business. Software companies know they can let users find errors and send out a patch later for next to nothing, so they don't spend as much on QA. Plus software consumers have low expectations for quality, and high expectations for release schedules - they want the program now, even if it's not quite done.
Software for high-reliability systems like flight controls gets much more QA effort, so relatively few people die from software bugs, despite the fact that airplanes, nuclear power plants, missiles and other life-critical systems have been running on software for decades. Fatal errors occur in every design discipline, from software to civil engineering. Ever seen the video of that bridge that collapsed because it started resonating in a light breeze? Remember the car whose gas tank would rupture and incinerate the passengers in a rear end collision? Tally up the deaths from faulty software, compare it to other engineering disciplines (counting only design errors, not physical defects). I bet the software number is comparable to all the others, or lower.
Sorry to be the one to tell you this, but after checking out your website, it looks like you've wasted a lot of your life on a crackpot theory. Sucks to be you!