The Economics of Chips With Many Cores

meanonymous writes "HPCWire reports that a unique marketing model for 'manycore' processors is being proposed by University of Illinois at Urbana-Champaign researchers. The current economic model has customers purchasing systems containing processors that meet the average or worst-case computation needs of their applications. The researchers contend that the increasing number of cores complicates the matching of performance needs and applications and makes the cost of buying idle computing power increasingly prohibitive. They speculate that the customer will typically require fewer cores than are physically on the chip, but may want to use more of them in certain instances. They suggest that chips be developed in a manner that allows users to pay only for the computing power they need rather than the peak computing power that is physically present. By incorporating small pieces of logic into the processor, the vendor can enable and disable individual cores, and they offer five models that allow dynamic adjustment of the chip's available processing power."
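The enable/disable scheme the researchers describe can be sketched in miniature. Everything below - the class name, the billing-by-core-hours model, and the rate - is an illustrative assumption, not something taken from the paper:

```python
# Hypothetical sketch of the "pay only for what you use" idea: the
# vendor ships a chip with all cores physically present, and licensing
# logic enables only the cores the customer has paid for.

class PayPerUseChip:
    def __init__(self, physical_cores, base_cores):
        self.physical_cores = physical_cores   # cores fabricated on the die
        self.enabled = base_cores              # cores the customer bought outright
        self.billed_core_hours = 0.0

    def burst(self, extra_cores, hours, rate_per_core_hour):
        """Temporarily enable extra cores and bill for the usage."""
        if self.enabled + extra_cores > self.physical_cores:
            raise ValueError("cannot enable more cores than physically exist")
        self.billed_core_hours += extra_cores * hours
        return extra_cores * hours * rate_per_core_hour

chip = PayPerUseChip(physical_cores=100, base_cores=10)
cost = chip.burst(extra_cores=40, hours=2, rate_per_core_hour=0.05)
# 40 extra cores for 2 hours at $0.05/core-hour = $4.00
```

The key property is the one the summary emphasizes: the customer pays for cores actually enabled, while the physical core count only sets an upper bound.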
This discussion has been archived. No new comments can be posted.

  • How is this new? (Score:4, Informative)

    by lintux ( 125434 ) <slashdot&wilmer,gaast,net> on Tuesday January 15, 2008 @04:03AM (#22047860) Homepage
    IIRC this is done in mainframes for *ages* already...
    • Not only that, but the spare "cores" can become CPUs or IO channels. Also, with parallel sysplex, you can shunt work between boxes on the fly. This means you can steal your test or other non-essential systems for mission-critical work.

      I don't know whether this is possible with zLinux partitions, as a lot of the moving about of stuff is very much a z/OS function, i.e. done by the OS, not the hardware or virtualisation.
    • by Ed Avis ( 5917 ) <ed@membled.com> on Tuesday January 15, 2008 @04:22AM (#22047970) Homepage
      In mainframes you have pretty much a single vendor (IBM). Even in the days of Amdahl and Hitachi, once you were committed to a single vendor they had a lot of market power over you. So the vendor can set its own price, and squeeze as much money out of each customer as possible by making variable prices that relate to your ability and willingness to pay, rather than to the cost of manufacturing the equipment.

      In a competitive market where 100-core processors cost $100 to produce, a company selling 50-core crippled ones for $101 and 100-core processors for $200 would quickly be pushed out of business by a company making the 100-core processors for $100 and selling them, uncrippled, for $101. I expect the Intel-AMD duopoly leaves Intel some scope to cripple its processors to maintain price differentials (arguably they already do that by selling chips clocked at a lower rate than they are capable of). But they couldn't indulge in this game too much because customers would buy AMD instead (unless AMD agreed to also cripple its multicore chips in the same way, which would probably be illegal collusion).

      Compare software where you have arbitrary limits on the number of seats, incoming connections, or even the maximum file size that can be handled. It costs the vendor nothing more to compile the program with MAX_SEATS = 100 instead of 10, but they charge more for the 'enterprise' version because they can. But only for programs that don't have effective competition willing to give the customer what he wants. Certainly any attempt to apply this kind of crippling to Linux has failed in the market because you can easily change to a different vendor (see Caldera).
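Ed's MAX_SEATS point can be made concrete with a toy sketch (the constant and function are hypothetical; the point is that the two builds differ only in a compiled-in number):

```python
# Toy illustration of an artificial seat limit: the 'standard' and
# 'enterprise' builds are identical except for this one constant.
MAX_SEATS = 10  # hypothetically, the enterprise build ships with 100

def admit(active_seats):
    """Admit a new user only while under the compiled-in limit."""
    return active_seats < MAX_SEATS
```

Compiling with MAX_SEATS = 100 costs the vendor nothing extra, which is exactly why the price difference reflects willingness to pay rather than cost.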
      • In a competitive market where 100-core processors cost $100 to produce, a company selling 50-core crippled ones for $101 and 100-core processors for $200 would quickly be pushed out of business by a company making the 100-core processors for $100 and selling them, uncrippled, for $101.

        And when your software is licensed per processor at (let's say) $100 per cpu, your extra, unwanted, 50 processors quickly become a burden. I'd be willing to pay more for a crippled processor if it saved me money elsewhere, a

          • Per-processor licenses are unaffected by the number of cores; a processor and a core are two separate beasts. No companies charge per core.
          • Re: (Score:3, Informative)

            by afidel ( 530433 )
            Bullshit - the biggest-cost vendor who licenses per CPU actually licenses per core: Oracle! On Windows it's one license per 2 cores; everywhere else it's .75 per core, except Sun T1/T2 where it's .25 per core.
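Taking the per-core factors exactly as afidel quotes them (2008-era figures from the comment, not verified against current Oracle policy), the arithmetic works out like this:

```python
import math

# Per-core license factors as stated in the comment above.
CORE_FACTOR = {"windows": 0.5, "sun_t1_t2": 0.25, "other": 0.75}

def oracle_licenses(cores, platform):
    # Licenses come in whole units, so round up.
    return math.ceil(cores * CORE_FACTOR[platform])

oracle_licenses(8, "windows")    # 4 licenses
oracle_licenses(8, "sun_t1_t2")  # 2 licenses
oracle_licenses(8, "other")      # 6 licenses
```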
          • Re: (Score:3, Informative)

            by MBGMorden ( 803437 )
            Your statement is *USUALLY* right, but not universally right.

            IBM for example when licensing some stuff (namely Lotus Domino): they go by performance units.

            A single-core x86 CPU would be 100 units per core. Dual-core CPUs would be 50 units per core (notice that they work out to the same). Quad-cores, however, are also 50 units per core, so while single- and dual-core chips cost the same, quad-cores end up costing twice as much in license fees.

            They even have some architectures where it changes to different va
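Using the per-core unit figures as quoted in the comment (x86 only; the table below is an assumption for illustration), the doubling works out as follows:

```python
# IBM 'performance unit' arithmetic from the comment: units per core
# depend on how many cores the chip has.
UNITS_PER_CORE = {1: 100, 2: 50, 4: 50}

def license_units(cores_per_chip):
    return cores_per_chip * UNITS_PER_CORE[cores_per_chip]

license_units(1)  # 100 units
license_units(2)  # 100 units - same as single-core
license_units(4)  # 200 units - twice the license fee
```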
      • I take it that the idea would be this: For $100 chip you sell at $200, you get some extra money to subsidize $100 chips that you sell for $100 in order to maintain market share.

        If there is performance parity along the entire product line of two processor competitors, like there had been until the Core2 era, that doesn't stop crippling. You don't need collusion - both companies could have parallel reasons to offer tiered prices for differently-crippled variants.

        But here's what I think is interesting: If

        • by Gerzel ( 240421 )
          Didn't someone predict that we'd only ever need 128 MB of RAM and that more RAM would be superfluous for most consumers?

          While in theory technology might outpace demand, and I think it may very well happen someday, in practice this is something that I'll believe when I see it.

          Right now there are a lot of flashy games out there. Users may want to run many more applications at once (or, more likely, turn on M$'s poorly executed eyecandy and not notice their computers slowing down).

          I don't think this is something
          • Re: (Score:3, Insightful)

            by Firethorn ( 177587 )
            Didn't someone predict that we'd only ever need 128 MB of RAM and that more RAM would be superfluous for most consumers?


            I believe that you're thinking about the quote commonly attributed to Bill Gates '640k will be enough for anyone'.

            Right now there are a lot of flashy games out there. Users may want to run many more applications at once (or, more likely, turn on M$'s poorly executed eyecandy and not notice their computers slowing down).

            Until 3D acceleration is so good that you can't tell it from real life, all
      • ..I expect the Intel-AMD duopoly leaves Intel some scope to cripple its processors to maintain price differentials...

        Interestingly, of late, it is AMD that is trying to create product differentiation by crippling their processors, or at least by selling processors with one core switched off. They're trying to do this by selling "tri-core" processors based on their Barcelona/Phenom cores, which are nothing much more than an actual quad-core with a core turned off, either deliberately or because it is defective. They probably want to position this as a mid-range offering, to make it more competitive with Intel's relatively cheaper q

        • Re: (Score:3, Insightful)

          by Lonewolf666 ( 259450 )

          Interestingly, of late, it is AMD that is trying to create product differentiation by crippling their processors, or at least by selling processors with one core switched off. They're trying to do this by selling "tri-core" processors based on their Barcelona/Phenom cores, which are nothing much more than an actual quad-core with a core turned off, either deliberately or because it is defective. They probably want to position this as a mid-range offering, to make it more competitive with Intel's relatively cheaper qua

          • Re: (Score:3, Interesting)

            by Firethorn ( 177587 )
            On top of that, there may be some crippling of intact quad-cores if there is more demand for the cheap "tri-cores".

            It'd probably be more profitable to up the price of the tri-core a notch*. A couple of bucks would reduce the demand for the tri-core, as some people decide to settle for a dual-core instead and some decide that the now smaller difference between a tri-core and a quad-core makes it worth it to buy a quad-core.

            IE:
            Quad: $100, Tri $75, Dual $50 - not enough triples to meet demand
            Quad $100, Tri $77,
      • by ElDuque ( 267493 ) <adw5&lehigh,edu> on Tuesday January 15, 2008 @07:50AM (#22048904)
        That's a common misconception there - prices in a competitive market are based on the consumer's willingness to pay, and nothing else. The cost of manufacturing equipment would only come into play in a monopoly situation, where the seller is able to "set" prices (because she will be sure to set them higher than her per-unit production costs.)

        This is the same misconception people often apply to baseball player salaries - they do NOT drive ticket prices. Baseball ticket prices are set at the highest level the market will bear - a price that is determined as consumers make decisions between countless sources of entertainment and leisure.

        What is confusing is that the quality of a product (and therefore the market demand for it, sometimes) is often related to the cost of production, so it looks like production costs set prices. But remember when Homer designed a car? It was $80,000, and no one wanted to buy it at that price! The consumers decided there were better uses for their car-buying dollars. This is a perfect (although fictional) illustration of why costs != prices in a competitive market.
    • In a way they already do something similar when they sell one chip with various clock speeds. Artificial limitations are nothing new to the tech industry.
      • Re: (Score:3, Informative)

        by jorenko ( 238937 )
        Except that they do this because manufacturing chips is not an exact science -- some turn out better than others, and these are able to handle higher clock speeds with less chance of failure and less power usage. Thus, the quality of each individual chip determines its clock speed and its price. While the enthusiast will usually be able to increase that without problems, that's quite a different thing from selling a chip that's meant to be turned up later. These would need to be good enough to ha
    • by kenh ( 9056 )
      Yes - I know that Sun, IBM, DEC, HP and others have offered this "service" (or option) on some of their larger machines: they essentially stuff the box with the maximum number of CPUs, memory, what have you, then sell you the box at a reduced price reflecting the "activated" hardware you ordered, and you are expected to "upgrade" the box at a later date and "activate" the idle CPUs, RAM, whatever, for an additional fee. The trick is, you don't own the excess CPUs, RAM, etc. - you are storing it until you buy it
  • by foobsr ( 693224 ) on Tuesday January 15, 2008 @04:06AM (#22047880) Homepage Journal
    In related news, an initiative of car manufacturers spearheaded by Ford has introduced an enabling 'cylinder per need' model. Car performance is wirelessly monitored in real time to give the customer the option to add in additional power according to his needs if he has signed up to a plan designed to optimally fit his profile (compiled from his overall lifestyle information). This also creates a new exciting opportunity to reduce individual carbon tyreprints for the consumer.

    CC.
    • by shmlco ( 594907 )
      Yeah, and equally dumb. In both cases the manufacturer had to build it and pay for parts and materials and processing and all of the other costs involved, and you have the entire end product sitting there, whether you're using it to its full potential or not.

      I may only use four of my eight cores most of the time, but there are eight of them there, nonetheless.
    • by peas_n_carrots ( 1025360 ) on Tuesday January 15, 2008 @04:54AM (#22048150)
      Those 100-cylinder engines sure are light. After all, the metal necessary to build such an engine would only make up the majority of the weight of the car. Use 10 cylinders to drag around the rest of the 90, now that's efficiency.
    • It's a valid point. Certainly the European car manufacturers have a "gentleman's agreement" to limit their high-end sports cars to a maximum speed of 155mph (around 250km/h). Now, I know that I wouldn't use that kind of power every day, but it would annoy me to know that the car was capable of more but prevented from doing so by an artificial limitation. If I'm paying for a 500bhp car, I want it to run like a 500bhp car...
      • Your 500 bhp car will still run like a 500 bhp car, up to the agreed 155 mph limit.

        It'll still accelerate like shit off a chrome shovel, and if you really want the 200 mph or so that 500 bhp will give you, it's possible to remap the ECU to remove the limit.

        The best use for disabling cylinders is when driving in traffic - to be able to run on half the normal number of cylinders at idle saves a hell of a lot of fuel, especially in a 500 hp behemoth.

        Disclaimer - I drive a slightly tweaked Scorpio Cossie that k

      • by gnasher719 ( 869701 ) on Tuesday January 15, 2008 @06:53AM (#22048610)

        It's a valid point. Certainly the European car manufacturers have a "gentleman's agreement" to limit their high-end sports cars to a maximum speed of 155mph (around 250km/h). Now, I know that I wouldn't use that kind of power every day, but it would annoy me to know that the car was capable of more but prevented from doing so by an artificial limitation. If I'm paying for a 500bhp car, I want it to run like a 500bhp car...
        I suppose people like you are the reason for the limitation.
        • It's a valid point. Certainly the European car manufacturers have a "gentleman's agreement" to limit their high-end sports cars to a maximum speed of 155mph (around 250km/h). Now, I know that I wouldn't use that kind of power every day, but it would annoy me to know that the car was capable of more but prevented from doing so by an artificial limitation. If I'm paying for a 500bhp car, I want it to run like a 500bhp car...

          I suppose people like you are the reason for the limitation.

          Isn't this one of the attitudes about women put forward by the porn industry? If she comes equipped with three cylinders, I want all three, even if I've only got one piston.

          Steven Pinker has a pretty good article in the NYTimes about moral instincts. By one method of hamming the hog, there are five core instincts: Harm, fairness, community (or group loyalty), authority and purity.

          http://www.nytimes.com/2008/01/13/magazine/13Psychology-t.html [nytimes.com]

          Unfortunately, he leaves out gratification entitlement, which is

    • In related news, an initiative of car manufacturers spearheaded by Ford has introduced an enabling 'cylinder per need' model.


      I suspect that you are not aware that they've been doing this for some time now. [findarticles.com]

      Chris Mattern
  • This should lend itself to a whole new form of hacking - buy the 10-core system and tweak it to use all 100.
  • I've been looking for a new web host recently, and I'm consistently attracted to ones based on the Virtual Private Server concept -- your own box within another box. The multicore economics argument is definitely tied in here, where we can balance demand not just within our own enterprise, but between different consumers of computing time.

    Beyond that, I don't really get it... if I have a certain computational workload X, I'd probably prefer to use more cores temporarily rather than pace the work longer ove
    • It makes sense on web hosts, though. The machine isn't used solely by you, and the resources are limited.

      It doesn't make sense for desktop computers with one user at a time.
  • by k-zed ( 92087 ) on Tuesday January 15, 2008 @04:10AM (#22047912) Homepage Journal
    I don't want to "rent" the processing power of my own computer, thank you. Nor do I want to "rent" my operating system, or my music, or movies. I buy those things, and I'm free to do with them as I wish.

    Renting your own possessions back to you is the sweetest dream of all hardware, software and "entertainment" manufacturers. Never let them do it.
    • "I don't want to "rent" the processing power of my own computer"

      Well in that case you can remove the tinfoil. This is aimed at people who do; they have the money to get it and the business sense to know what to do with it. I don't mean to be rude, but nobody cares if you have your own data center in the basement, unless of course you want to pay someone serious money to look after it for you.

      "Renting your own possessions back" is a practise used by multi-nationals for tax purposes. /fixed
    • by BeanThere ( 28381 ) on Tuesday January 15, 2008 @08:08AM (#22049032)
      ... for CPUs, there are effectively ZERO variable costs to the producer once you've purchased the chip and it's in your hands.

      Dedicated circuitry to create artificial scarcity and control actually adds unnecessary costs.

      This might be useful in very specific scenarios where somebody, say, owns a supercomputer and rents it out, but even there, I'm sure there are far better solutions that don't involve the CPU hardware.

      This is, like you suggest, just a BS wet dream of the manufacturers ... make something once, get money forever.

      Right now we probably have few enough major chip vendors that with a little bit of collusion, if they decided not to compete, they could probably pull something like this on us. This doesn't look likely right now, but it seems possible. Hopefully some other (possibly foreign) company would enter the market if that happened. Competition is healthy for a market.
  • erm... (Score:2, Interesting)

    by Anonymous Coward
    So, Intel is going to charge us less for a processor with 4 cores because we can turn three off most of the time? Or is the power saving supposed to make the cost of the chip less prohibitive?

    Maybe it'll be a subscription service, 9.99 per month and .99 cents per minute every time you turn another core on.

    • Re: (Score:3, Informative)

      by gnasher719 ( 869701 )

      So, Intel is going to charge us less for a processor with 4 cores because we can turn three off most of the time? Or is the power saving supposed to make the cost of the chip less prohibitive?

      First, it seems you are under the impression that this might be Intel's idea. It is not. Second, turning off cores is stupid. If you want to reduce performance of a multi-core chip, you reduce the clock speed as far as possible. Four cores at a quarter of the maximum clock speed use lots less electricity than one core running at full speed.
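gnasher719's claim can be checked with a back-of-envelope model. Assume dynamic power scales as P ∝ V² · f and that supply voltage can be reduced roughly in proportion to frequency (a common simplification; real chips have a voltage floor, so treat the numbers as an upper bound on the savings):

```python
def relative_power(cores, freq_fraction):
    voltage = freq_fraction          # assumption: V scales linearly with f
    return cores * voltage ** 2 * freq_fraction

one_core_full = relative_power(1, 1.0)    # baseline: 1.0
four_quarter  = relative_power(4, 0.25)   # 4 * 0.25**2 * 0.25 = 0.0625
```

Under these assumptions, four cores at quarter speed deliver the same nominal throughput for roughly a sixteenth of the dynamic power.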

  • by Moraelin ( 679338 ) on Tuesday January 15, 2008 @04:17AM (#22047946) Journal
    You know what I still don't get? Why's everyone acting like dividing a CPU into several separate cores is a good thing?

    Let me compare it to, say, a construction company having a number of teams and a number of resources, e.g., vehicles:

    1. One team, 4 vehicles. That's classic single core. Downside, at a given moment it might only need 2 or 3 of those vehicles. (E.g., once you're done digging the foundation, you have a lot less need of the bulldozer.)

    2. Two teams, can pick what they need from a common pool of 4 vehicles. That's classic "hyperthreading". Downside, you're not getting twice the work done. Upside, you still paid only for 4 vehicles, and you're likely to get more out of them.

    3. Two teams, each with 4 vehicles of its own. They can't borrow one from each other. This is "dual core." Downside, now any waste from point 1 is doubled.

    But the one I don't see is, say,

    4. Two teams with a common pool of 8 vehicles. It's got to be more efficient than number 3.

    Basically #4 is the logical extension of hyperthreading, and it seems to me more efficient any way you want to slice it. Even if you add HT to dual-core design, you end up with twice #2 instead of #4 with 4 teams and a common pool. There is no reason why splitting the pool of resources (be it construction vehicles or execution pipelines) should be more efficient than having them all in a larger dynamically-allocated pool.

    So why _are_ we doing that stupidity? Just because AMD at one point couldn't get hyperthreading right and had its marketers convince everyone that worse is better, and up is down?
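Moraelin's #3-versus-#4 comparison can be illustrated with a deterministic toy model (the demand numbers are made up; only the min() logic matters):

```python
# Each tuple is (units team A wants, units team B wants) in one period.
demands = [(6, 2), (1, 7), (4, 4), (8, 0)]

def served_split(a, b, per_team=4):
    # Model #3: each team owns 4 units and cannot borrow.
    return min(a, per_team) + min(b, per_team)

def served_shared(a, b, pool=8):
    # Model #4: both teams draw from one shared pool of 8.
    return min(a + b, pool)

split  = sum(served_split(a, b) for a, b in demands)   # 23
shared = sum(served_shared(a, b) for a, b in demands)  # 32
```

The shared pool never does worse, and does strictly better whenever one team wants more than its fixed share while the other wants less - which is exactly the point of #4.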
    • by lintux ( 125434 ) <slashdot&wilmer,gaast,net> on Tuesday January 15, 2008 @04:22AM (#22047972) Homepage
      You know what I still don't get? Why's everyone acting like dividing a CPU into several separate cores is a good thing?

      AFAIK adding more MHz was getting more and more complicated, so it was time to try a new trick.
      • Well, yes, obviously. But that's not what I was asking. My question was _not_ "why don't they stick to the MHz race?"

        What I'm saying is: OK, so now they have to expand in width, so to speak, instead of in MHz. Fine. But why is (A) two separate sets of, say, 3 pipelines better than (B) a set of 6 with two execution units, allocated dynamically? It's still 6 pipelines, only in the second case they can be dynamically allocated with better results. If one particular thread could use 4 while another used only 2, solutio
    • by RzUpAnmsCwrds ( 262647 ) on Tuesday January 15, 2008 @04:53AM (#22048148)
      Your metaphor on multi-issue CPUs is interesting, but not necessarily valid.

      Instruction scheduling is the biggest fundamental problem facing CPUs today. Even the best pipelined design issues only one instruction per clock, per pipeline (excluding things like macro-op fusion, which combines multiple logical instructions into a single internal instruction). So we add more pipelines. But more pipelines can only get us so far - it becomes increasingly difficult to figure out (schedule) which instructions can be executed on which pipeline at what time.

      There are several potential solutions. One is to use a VLIW architecture where the compiler schedules instructions and packs them into bundles which can be executed in parallel. The problem with VLIW is that many scheduling decisions can only occur at runtime. VLIW is also highly dependent on having excellent compilers. All of these problems (among others) plagued Intel's advanced VLIW (they called it "EPIC") architecture, Itanium.

      Another solution is virtual cores, or HyperThreading. HTT uses instructions from another thread (assuming that one is available) to fill pipeline slots that would otherwise be unused. The problem with HTT is that you still need a substantial amount of decoding logic for the other thread, not to mention a more advanced register system (although modern CPUs already have a very advanced register system, particularly on register-starved architectures like x86) and other associated logic. In addition, if you want to get benefits from pipeline stalls (e.g like on the P4), you need even more logic. This means that HTT isn't particularly beneficial unless you have code that results in a large number of data dependencies or branch mispredicts, or if pipeline stalls are particularly expensive.

      Multicore CPUs have come about for one simple reason: we can't figure out what to do with all of the transistors we have. CPUs have become increasingly complex, yet the fabrication technology keeps marching forward, outpacing the design resources that are available. This has manifested itself in two main ways.

      First, designers started adding larger and larger caches to CPUs (caches are easy to design but take up lots of transistors). But after a point, adding more cache doesn't help. The more cache you have, the slower it operates. So designers added a multi-level cache hierarchy. But this too only goes so far - as you add more cache levels, the performance delta between memory and cache decreases, because there's only a finite level of reference locality in code (data structures like linked lists don't help this). You may be able to get a single function in cache, but it's unlikely that you're going to get the whole data set used by a complex program. The net result is that beyond a certain point, adding more cache doesn't do much.

      What do you do when you can't add more cache? You could add more functional units, but then you're constrained by your front-end logic again, which is a far more difficult problem to solve. You could add more front-end logic, which is what HyperThreading does. But that only helps if your functional units are sitting idle a substantial percentage of the time (as they did on the P4).

      So you look at adding both functional units and more front-end logic. You'll decode many instruction streams and try to schedule them on many pipelines. This is what modern GPUs do, and for them, it works quite well. But most general-purpose code is loaded with data dependencies and branches, which makes it very difficult to schedule more than a very few (say, 4) instructions at a time, regardless of how many pipelines you have. So, now, effectively, you have one thread that is predominantly using 4 pipelines, and one that is predominantly using the other 4.

      Wait, though. If one thread is mostly using one set of pipelines, and one is mostly using the other, we can split the pipelines into two groups. Each will take one thread. This way, our register and cache systems are simpler (because
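The scheduling bottleneck described above (the comment is cut off) can be shown with a toy greedy scheduler. The dependency graph is invented for illustration; the point is that issue width beyond the code's inherent parallelism buys nothing:

```python
# instruction -> instructions whose results it needs
deps = {
    "a": [], "b": [], "c": ["a"], "d": ["a", "b"],
    "e": ["c"], "f": ["d"], "g": ["e", "f"],
}

def schedule(deps, width):
    done, cycles = set(), []
    while len(done) < len(deps):
        ready = [i for i in deps if i not in done
                 and all(d in done for d in deps[i])]
        issue = ready[:width]     # issue at most `width` per cycle
        cycles.append(issue)
        done.update(issue)
    return cycles

# Even with width=8, this code never issues more than 2 per cycle and
# still takes 4 cycles: the chain a -> c -> e -> g limits it.
```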
      • Well, first of all, thanks for the in depth answer.

        Another solution is virtual cores, or HyperThreading. HTT uses instructions from another thread (assuming that one is available) to fill pipeline slots that would otherwise be unused. The problem with HTT is that you still need a substantial amount of decoding logic for the other thread, not to mention a more advanced register system (although modern CPUs already have a very advanced register system, particularly on register-starved architectures like x86)

        • One major problem you're missing is that having an extra decoder on the chip (that is used by another core) is not, and cannot be that useful to the other core. The problem is that accessing the other decoder will incur a huge latency penalty (20+ cycles). During those cycles, dependent instructions will generally stall in the main pipeline, and overall throughput could be decreased. Of course the scheduling to choose the other one is also a nightmare.

          Comparing it with the construction analogy: if you w
    • Re: (Score:2, Informative)

      "..because AMD at one point couldn't get hyperthreading right and had its marketers convince..."

      Quick history lesson. Intel tried pawning off hyperthreading to the market. If you mean that AMD should have done hyperthreading, perhaps you should look at the reviews/benchmarks to see that it reduced performance in many cases. In the future, more software might be able to take advantage of increased thread parallelism, but that future is not now, at least in the x86 world.
      • that future is most certainly now. It's been here for a while.

        Parallel processing is not some weird dream, way off in the future, that lots of people here on slashdot think it is. It's a reality and it's here now.

        In fact it's been with us since the 70s in the form of multi-process software.
        Multithreading has some idiots running scared ("It's so *hard*!" being their favourite lie), but it's been with us for quite some time. I've been writing multi threaded server and workstation software for about 8 years no
        • It is less an issue of being hard and more an issue of thinking differently. And there are more methods than just multithreading. On MasPar systems there was a language called MPL; it was mostly C with parallel-processing features, non-thread-related. They had what were called plural variables, meaning each processor worked on its own variable. So a pseudo example of the code would be like this...

          /*A simple lottery program that will give each processor a random number and will return 1 if any processor has the value gi
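Since the MPL example is cut off, here is a loose analogue of the "plural variable" idea (illustrative Python, not MPL syntax): each virtual processor holds its own value, and the reduction asks whether any of them matched.

```python
import random

def any_match(plural_values, winning_number):
    # The reduction step: 1 if any processing element matched.
    return 1 if winning_number in plural_values else 0

def lottery(n_processors, winning_number, seed=None):
    rng = random.Random(seed)
    # One random value per processing element, like a plural variable.
    plural = [rng.randrange(100) for _ in range(n_processors)]
    return any_match(plural, winning_number)
```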

      • Yes and no (Score:3, Insightful)

        by Moraelin ( 679338 )

        Quick history lesson. Intel tried pawning off hyperthreading to the market. If you mean that AMD should have done hyperthreading, perhaps you should look at the reviews/benchmarks to see that it reduced performance in many cases. In the future, more software might be able to take advantage of increased thread parallelism, but that future is not now, at least in the x86 world.

        While I'll concede the point that Intel's first implementation was flawed, you can't judge and damn a technology for all eternity just

    • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Tuesday January 15, 2008 @05:21AM (#22048302) Journal
      I know that on Linux, I cannot immediately tell the difference between an SMP-enabled kernel on a single-core Hyperthreading system, and an SMP-enabled kernel on a dual-core system with no hyperthreading.

      In either case, I'm fairly sure I see at least two items in /proc/cpuinfo, I need an SMP kernel, etc. So if someone (Intel) suddenly decided to make a dual-core hyperthreaded design in which the "teams" actually shared a common pool, would I notice, short of Intel making an announcement?

      As for your assertion, a quick scan of Wikipedia suggests that you're a bit naively wrong here. (But then, I'm the one pretending to know what I'm talking about from a quick scan of wikipedia; I suppose I'm being naive.) Wikipedia makes a distinction between Instruction level parallelism [wikipedia.org] and Thread level parallelism [wikipedia.org], with advantages and disadvantages for each.

      One of the advantages of thread-level parallelism is that it's software deciding what can be parallelized and how. This is all the threading, locking, message-passing, and general insanity that you have to deal with when writing code to take advantage of more than one CPU. As I understand it, a pipelining processor essentially has to do this work for you, by watching instructions as they come in, and somehow making sure that if instruction A depends on instruction B, they are not executed together. One way of doing this is to delay the entire chain until instruction A finishes. Another is to reorder the instructions.

      But even if you consider this a solved problem, it requires a bit of hardware to solve. I'm guessing at some point, it's easier to just throw more cores at the problem than to try to make each core a more efficient pipeline, just as it's easier to throw more cores at the problem than it is to try to make each core run faster.

      There's also that user-level interface I talked about above. With multicore and no hyperthreading, the OS knows which core is which, and can distribute tasks appropriately -- idle tasks can take up half of one core, the gzip process (or whatever) can take up ALL of another core. With multicore and hyperthreading, the OS might not know -- it might simply see four cores. And with multicore, hyperthreading, and shared pipelines, it gets worse -- as I understand it, there's no longer any way, at that point, that an OS can specify which CPU a particular thread should be sent to. Threading itself may become irrelevant.

      Well, anyway... What confuses me is that we still haven't adopted languages and practices that naturally scale to multiple cores. I'm not talking about complex threading models that make it easy to deadlock -- I'm talking about message-passing systems like Erlang, or wholly-functional systems like Haskell.

      Hint: Erlang programs can easily be ported from single-core to multi-core to a multi-machine cluster. Haskell programs require extra work at the source code level to be made single-threaded, and can (like Make) use an arbitrary number of threads, specifiable at the commandline. They're not perfect, by far; Haskell's garbage collector is single-threaded, I think. But that's an implementation detail; most programs in C and friends, even Perl/Python/Ruby, will not be written with multiple cores in mind, and, in fact, have single-threaded implementations (or stupid things like the GIL).
      • Well, yes, the thing about thread level parallelism vs instruction level parallelism is very insightful and true, but it only says why we're leaving case #1 behind. Cases #2, #3 and #4 all had thread level parallelism.

        As for the languages, good question. I guess because it's cheaper to use existing skills and libraries than to port everything to Erlang? No real idea, though. I'm sure someone is better qualified than me to answer that.
    • Because our software is not made to run on some crazy shit that you just made up. That's the reason we still have the same x86 architecture since like a million years ago, no one is chump enough to make something totally different that nothing runs on. Everything already supports multiple processors though, so making a chip with two cores is the sensible thing to do as far as compatibility goes.
      • by Antity-H ( 535635 ) on Tuesday January 15, 2008 @07:59AM (#22048978) Homepage

        no one is chump enough to make something totally different that nothing runs on.

        I guess that's why IBM did not develop the Cell processor, which is therefore not used in PS3s, and why no supercomputer is built using it.

        All this also explains why IBM did not develop a new product line of cell based blade servers. And neither are grids being built around cell based servers.

        Of course, even if IBM did develop it and Sony did use it in the PS3, it would be unable to run anything, which is why there aren't any games for the PS3 or any Linux distributions for it.

        Sorry, but a different architecture doesn't mean nothing runs on it, nor does it mean no one will develop for it if the promised power is cheap and compelling enough.
  • First of all, most people buy low to mid-range CPUs and other goods, and while this may be enough to cover the production costs, the manufacturers' largest profits are on the high end CPUs, cars, watches, etc. Currently the increased price tag is justified to some extent by the increased quality, performance and even status given by the high end goods. But under the proposed model, there would be no physical difference between the CPUs, other than artificial limitations imposed by the manufacturer. Suddenly
    • by mgblst ( 80109 )
      The only thing stopping us is that they don't make soldering irons that small. There are some physical differences, in that some important interconnects are missing.
  • Why? (Score:3, Interesting)

    by RuBLed ( 995686 ) on Tuesday January 15, 2008 @04:25AM (#22047994)
    If one could make a 5-core processor for $300 and sell it with all 5 cores enabled for $600, why would he sell the same unit for $400 with only 2 cores enabled?

    Wouldn't he profit more if he could sell the 5 core processors all at $600 and make a separate 2 core processor for the price of $200 and sell it for $400?

    Well, if they're going to rent it (as parts of TFA suggest), it would make sense; but if they're not, then profit isn't being maximized.
    • Re:Why? (Score:5, Insightful)

      by Anne Thwacks ( 531696 ) on Tuesday January 15, 2008 @04:49AM (#22048128)
      Because in reality, it costs $4.99 to make the chip, and $10,000,000 to design it.

      The cost of designing one core is the same as the cost of designing 10 or 100 cores, because copy and paste was invented several years ago. The cost of adding a core to the design is about 1%.

      There might be a case for powering down unused processors to save energy, and there is a case for selling cheaper processors with reduced core counts where some cores don't work, but there is no case for disabling working processors for economic reasons.

      Sun's Niagara technology differs, cos it has "virtual cores" which gives you more virtual cores but slower. It's very good if you multi-thread (run Apache) and p*ss-poor if you don't (run Windows).

      • Re:Why? (Score:5, Funny)

        by Alsee ( 515537 ) on Tuesday January 15, 2008 @06:40AM (#22048566) Homepage
        The cost of designing one core is the same as the cost of designing 10 or 100 cores, because copy and paste was invented several years ago.

        Copy-pasting a hundred cores will cost almost ten times as much as copy-pasting 10 cores because you have to pay the patent holder who invented copy-paste.

        -
      • by mblase ( 200735 )
        Sun's Niagara technology differs, cos it has "virtual cores" which gives you more virtual cores but slower.

        Thanks, that cleared it up completely for me.
    • by rm999 ( 775449 )
      "Wouldn't he profit more if he could sell the 5 core processors all at $600 and make a separate 2 core processor for the price of $200 and sell it for $400?"

      Economy of scale says not necessarily. If you can build a factory that only builds one product, you can make it incredibly efficient. One possibility under this plan would be to intelligently disable cores. For example, let's say there is some failure rate in each core. The chips with high failure rates can have the failed cores disabled, and the compan
  • by SmallFurryCreature ( 593017 ) on Tuesday January 15, 2008 @04:26AM (#22048002) Journal

    In theory it makes sense, and some of you might point at mainframes as an example. However, that would be like comparing cars to trucks (real trucks, not big cars): they are both vehicles, and a company might use both, but their usage is totally different.

    PC's just ain't upgraded; either they are good enough or they are replaced. I love building my own computer but am not so crazy as to replace the CPU whenever a new clockspeed comes out, and this means that even a self-builder will often have to bite the bullet and just replace everything.

    Be honest, how often in business do you upgrade your desktops by replacing the CPU?

    We can test this easily: in the era of the P3 a lot of office systems were DUAL ready, so that when your needs increased you could add another P3 and have lots more power. How many of you did that with a P3 that had been in the office for more than a year?

    This scheme seems like overthinking the problem. PC's in my experience either last until they die, by which time it's cheaper to buy new than upgrade/repair, or they are simply replaced with the latest shiny model because tech moves so fast that upgrading just the CPU will turn everything else into a bottleneck. Just check how many different types of memory we have had over the years. Would you really want a quad core on your IDE-33 motherboard? Play DVD's on a single speed cd-rom?

    Either you need all the cores now, or by the time you activate them because your apps need them everything else will need to be upgraded too and a brand new CPU will be available that is far better AND cheaper.

    But in a way we have had this solution for a long time now: instead of activating extra cores when paid for, chipmakers sell defective chips for a reduced price, so you've still got a 4-core inside your machine but only 2 actually function (not sure whether this happens with entire cores, but it is of course the case with cache memory).

    I don't see this happening, especially if you consider that an army of nerds would be trying their best to break the enabling code to get their extra cores for free; just see what happened with the "dual" P2 and cheapo P3's. Intel would have a heart attack.

    • I agree that their approach to the problem is based on a flawed understanding of how processor development works, not to mention the tech industry's marketing strategy ("People like to buy shiny new objects on a regular basis.(tm)") We are still a ways away from reaching a design plateau where we have achieved some ultimate chip design that can no longer be improved on.

      When you buy a computer, you buy it for the worst-case scenario. Your processing needs are probably not going to mysteriously increase ove
  • by Dr. Spork ( 142693 ) on Tuesday January 15, 2008 @04:26AM (#22048004)
    TFA is written really badly, but from what I gather, the "more advanced" models of figuring out how much to charge for chips go like this:

    1. Everybody gets the same chip, but it will be crippled unless you pay the highest price.

    2. Everybody gets the same uncrippled chip, but there's a FLOPS meter on it that phones home, and you pay Intel according to the amount of numbercrunching your chip did for you.

    Both of these models seem completely retarded to me, although the first is already sort of in use in the CPU/GPU market. Have modern processors overshot our needs by so much that our big worry now is to find innovative ways to cripple them? If so, maybe this processor war we're fighting is ultimately not even worth winning.

    • by foobsr ( 693224 )
      If so, maybe this processor war we're fighting is ultimately not even worth winning.

      Probably more a sign of a new kind of software gap, IMHO due to still missing AI (not everyone is dealing with video/visual data), this again caused by an imbalance in investment in basic research which favours 'hard science' (with the assumption that there is much more to AI than 'logic', even if it is 'fuzzy').

      If there were 'intelligent' applications that could fix Joe Sixpack's everyday problems more autonomously
  • CPU economics are all about yields. They will design a chip, say with 8 cores. Some of the cores might have manufacturing problems so they disable them. The chips with all 8 working cores cost more while the chips with 4 or 6 working cores cost less.

    Back in the "olden" days of two years ago the same would happen but with clock speed. The chips that could clock higher without problems got sold as the 1800+ while ones that failed under testing at higher frequencies would get sold as 1600+.
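    The speed binning described above is easy to model; a toy sketch (the clock targets and die values are invented for illustration):

```python
def bin_chip(max_stable_mhz, ratings=(2000, 1800, 1600)):
    """Return the highest advertised speed grade this die passes,
    or None if it fails even the lowest bin and is scrapped."""
    for rating in ratings:      # ratings listed high to low
        if max_stable_mhz >= rating:
            return rating
    return None

# A batch of dies with varying maximum stable clocks:
dies = [2150, 1950, 1820, 1700, 1400]
print([bin_chip(d) for d in dies])  # [2000, 1800, 1800, 1600, None]
```

The economics follow directly: every die comes off the same line, and the test step, not the design, decides which price bracket it lands in.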

    Chips use so
  • Calculators (Score:4, Interesting)

    by Detritus ( 11846 ) on Tuesday January 15, 2008 @04:44AM (#22048108) Homepage
    Someone already mentioned mainframes. Something similar is often done with calculators. Rather than design a new chip for each model, they design a single chip with all of the features. In mid-range and low-end models, it is crippled by the design of the keyboard and/or jumpers. It is often cheaper to dumb down a single hardware design than to produce unique designs for each segment of the market.
  • In the sense of Digital Restriction Managment a part of the article states:

    This can be accomplished with small pieces of logic incorporated into the processor that enables the vendor to disable/enable individual cores

    Now think once or maybe twice about it. The situation could be that of a manager of a datacenter, which probably handles sensitive data, who lets the vendor mess in realtime with the CPUs (and possibly the data) driving the system just because he wants to save a few hundred dollars on a digitally castrated chip. Though idiocy is a widespread illness, I don't see who could be such a moron. This could only be acceptable by the CIA or

  • by WaZiX ( 766733 ) on Tuesday January 15, 2008 @04:55AM (#22048166)
    1) Sell your super high power 20 cores CPU uncrippled.
    2) Make a platform where researchers can rent CPU power.
    3) Allow your customers to rent their unused CPU power/cores.
    4) Charge double what you give to your customers to the researchers.
    5) Profit! (From both the sale and the rental afterwards).

    And there is no ?...
  • ..so why would they bite on this one? Here, you can buy this processor for really cheap, but every time you want to use it, you have to call us and pay a rental fee.

    Ridiculousness. Besides which, it's a no-brainer that it'd be a zero-day hack to enable all the available processing power on a given chip.

    • by Anonymous Coward
      IBM's been doing this for years with some of their smaller servers http://www-03.ibm.com/systems/i/hardware/cod/index.html/ [ibm.com]

      The cost in IT labor and lost productivity during the downtime that old methods need to add processing capacity can be a *lot* for servers hosting your important applications, but it's awfully expensive to pay upfront for enough power to keep up with ordering spikes during the Christmas buying season (for example) if that spikes way beyond your normal needs. Much cheaper to pay for onl
  • by JRHelgeson ( 576325 ) on Tuesday January 15, 2008 @05:02AM (#22048212) Homepage Journal
    So, what would happen if the Microsoft DRM update management and monitoring "feature" has a "bug" and hits 100% utilization as it tries to verify the authenticity and my right to possess my entire music collection... do I have to pay a processor tax for that? What about a runtime condition? An app locks up and hits 100% utilization until it is killed. OOPS, I need to ante up for the Tflop tax. Or when I file my annual procmon return I can apply for earned op/sec credit, filing as head of household...

    I'm not about to pay a tax on other people's poorly written software.
  • by quitte ( 1098453 ) on Tuesday January 15, 2008 @05:12AM (#22048254)
    really. STOP IT!
    • by Arimus ( 198136 )
      Sadly in this instance I think its valid...

      pity it is about the only bloody occurrence where it is, but throw enough darts and one is bound to hit the board...
  • Crippleware... (Score:5, Insightful)

    by Bert64 ( 520050 ) <bertNO@SPAMslashdot.firenzee.com> on Tuesday January 15, 2008 @05:13AM (#22048266) Homepage
    This is crippleware, and a terrible idea for the average consumer...
    Paying more for a product that costs the same to produce, or potentially even less because they don't have to disable the extra cores, is a terrible rip-off, and it happens already...

    The same people who currently overclock will buy the cheaper CPUs with cores disabled and re-enable them... You will also get third parties who make a business out of doing the same, though without the "exceeding design spec" risks of overclocking.

    Personally, I will never pay more for a more expensive version of the same product; I will buy the cheapest available just as soon as people have worked out how to re-enable the disabled cores, and I will help my less technical friends do the same.
  • Search on google for "Intel" and "Larrabee" if you need to know slightly more. Rumors have floated around for almost two years now about that project, with a release date estimated to 2009 or so.

    Also, if you need a job in the multicore business, check out http://www.intel.com/jobs/careers/visualcomputing/ [intel.com]

    In short, visual computing (read: gaming) will use all those cores mentioned in the article; word processing will not. You can be sure of that.
  • Use FOSS, and tell them where to stick their core tax.
  • by The Master Control P ( 655590 ) <ejkeever&nerdshack,com> on Tuesday January 15, 2008 @05:30AM (#22048332)
    So Intel is going to design a CPU with N cores on it, then add hardware that disables half of them, then manufacture the chip with all N cores and sell it for less, even though it actually costs more to design/build because of the added hardware to cripple it, then try and make us pay for access to the other half of the cores and hope we don't notice that our computers have suddenly become a constant expense instead of a one-time purchase?

    And moreover, they apparently forgot which problem they're trying to solve between paragraphs 4 and 5. They start talking about the real problem of many cores creating a very large space of core/memory architectures that would be difficult to choose between and support. Then they veer off into the rent-your-own-hardware-back-to-you idea and never finish reasoning out just how it would work before they come back. A few minor things they ignored:
    • How do they turn cores on? Difficult level: No, you can NOT have a privileged link through my firewall onto my network.
    • How do they stop me from hacking it and enabling it all myself? Difficulty level: Mathematically impossible since you can't stop Eve from listening if Eve and Bob are the same person.
    • How do they propose to bill me? Difficulty: No, I will NOT let my CPU spy on me.
    • Why should I hand you everything you need to force me to upgrade against my will?
    • What happens if you go out of business and leave me stranded?
    • Even if you don't see what's wrong with charging me continually to access my own hardware, do you actually think I won't?
    In conclusion, Profs. Sloan & Kumar of the University of Illinois, I believe the premises and reasoning behind your proposal to be flawed, and the proposal itself to be unworkable and contradictory to openness in computing. Or, as we say on the Internet, wtf r u doin???
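    For what it's worth, the usual answer to "how do they turn cores on" is a vendor-signed unlock token keyed to the chip's serial number, verified against a key baked into the die. A toy sketch, with HMAC standing in for whatever a real vendor would actually use (all names, serials, and keys here are invented):

```python
import hmac
import hashlib

VENDOR_KEY = b"baked-into-the-die-at-the-fab"  # hypothetical secret

def make_unlock_token(chip_serial: str, cores: int) -> str:
    """Vendor side: sign 'serial:cores' so the token only works
    on one specific chip and for one specific core count."""
    msg = f"{chip_serial}:{cores}".encode()
    return hmac.new(VENDOR_KEY, msg, hashlib.sha256).hexdigest()

def chip_accepts(chip_serial: str, cores: int, token: str) -> bool:
    """Chip side: recompute the MAC and compare in constant time."""
    expected = make_unlock_token(chip_serial, cores)
    return hmac.compare_digest(expected, token)

token = make_unlock_token("CPU-0042", 8)
print(chip_accepts("CPU-0042", 8, token))   # True
print(chip_accepts("CPU-0042", 16, token))  # False: can't upgrade yourself
print(chip_accepts("CPU-0099", 8, token))   # False: token is per-chip
```

Of course, the Eve-and-Bob objection above still applies: the scheme only holds as long as the key baked into the silicon can't be read back out by its owner.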
  • by Cyno01 ( 573917 ) <Cyno01@hotmail.com> on Tuesday January 15, 2008 @06:34AM (#22048540) Homepage
    Sort of. As many other people have said about overclocking and such, it's not necessarily a scam; it makes things more cost effective for the company and can benefit the consumer willing to take the effort to overclock. Let's say Intel (or AMD, doesn't matter, they both do it) does a run of chips. The specs call for the chip to run at, for simplicity's sake, 2GHz with stock cooling. But no manufacturing process is perfect, and let's say 25% of the chips aren't good enough to run at 2GHz without frying. They then clock these chips to 1.5GHz and sell them as such. This allows them to do a smaller run of chips specced at 1.5GHz and save themselves money. It's not really a scam, they've been doing this forever, and they aren't the only industry to do it either. In this case, the consumer can benefit. Someone can buy a 1.5GHz chip (although they might have to exchange it till they get one of the ones from the 2GHz production run), and most of the time it'll run fine at the 2GHz speed with improved cooling.
  • Why not just go the whole hog, and sell people more cores than they actually need, then let them use BOINC-style software to rent out their otherwise unused CPU power to other people? Surely with our current technology in terms of the Internet and encryption, it should be relatively safe to farm out certain CPU intensive tasks to strangers and pay them for the privilege of using their processing power, as long as protocols and software exist to avoid the obvious security risks to both parties.
  • This is the greatest opportunity for vendors and consumers to finally have robust systems. Couple specific functions to a core or cores such that, for instance, all security, encryption and housekeeping functions, all patch management and all other back-office requirements are bound to a core and allowed to run flat out all the time. This would require changes to an OS to partition those functions and run them essentially in their own OS image.
  • Of course we all know that in application there will be all sorts of issues and problems.
    From the core hackers, to laws making it illegal to hack your CPU, to embedded spyware, to system failure on serious systems due to accidental lockout, to ......

    And all this for what? A way for the CPU manufacturer to control how much of something you own you can use.
  • Say I had a quad-core system with only two cores enabled and two 'spare' cores. If one burns out, another core takes over on the next boot.

    Booting Linux...
    found SMP MP-table at 000ff780
    Core #3 burn out: #4 taking over
    On node 0 totalpages: 524240
    DMA zone: 4096 pages, LIFO batch:0
    Normal zone: 225280 pages, LIFO batch:31
    HighMem zone: 294864 pages, LIFO batch:31
    DMI present. ...

    Wouldn't that be sweet?

  • This is just mean-spiritedness on the manufacturers' part. If you can sell a multicore chip for a certain amount of money and still make a profit, turning off some of the cores is just ..... mean.
  • by v1 ( 525388 ) on Tuesday January 15, 2008 @08:28AM (#22049202) Homepage Journal
    a unique marketing model for 'manycore' processors

    Nothing UNIQUE about this strategy. It's a model growing in popularity. Traditionally, companies that wanted to capture several levels of a market would make several models of a unit, like a laptop with a better graphics chip or bus speed etc. This cost them more because they had to produce three different units, which triples some of their overhead costs. What this is doing is allowing them to produce one high end product and configure it easily, post-production, into any of the three units they want to market. The same capabilities are present in all models, but features are disabled/crippled/nerfed in the less expensive models. This allows them to sell their product in the lower cost market without losing sales in their high end market, and without the additional expense of producing several different models.
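    The post-production configuration described above amounts to reading a few fuse bits at power-on; a toy sketch (the SKU names and fuse values are invented):

```python
# One physical design; the SKU is chosen after manufacturing by
# blowing "fuse" bits that the firmware reads at power-on.
SKUS = {
    0b00: {"name": "budget",   "cores": 2, "cache_mb": 2},
    0b01: {"name": "mid",      "cores": 4, "cache_mb": 4},
    0b11: {"name": "high-end", "cores": 8, "cache_mb": 8},
}

def configure(fuse_bits):
    """Firmware side: map the blown fuses to an enabled feature set.
    Unrecognized fuse patterns fall back to the most limited SKU."""
    return SKUS.get(fuse_bits, SKUS[0b00])

print(configure(0b01))  # {'name': 'mid', 'cores': 4, 'cache_mb': 4}
```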

    It's a good idea for the manufacturer, but introduces the problem of what happens when the consumer figures out how to "enable" disabled features in their low end model. This always results in a little war of sorts, where the manufacturer takes steps to make de-nerfing difficult or impossible. It always aggravates the consumer to find out that the model he conceded to buying due to cost CAN do everything he wanted; it just refuses to. The consumer feels cheated that he paid for a gadget that CAN do what he wants, but he can't take advantage of it.

    Interestingly, it doesn't become a problem until the consumer realizes the product that they were obviously happy to pay the small amount for can do more than they bargained for. The producer would argue that you didn't pay what they were asking for those additional features and so you should not feel cheated, and that you agreed to the advertised feature set when you purchased the product.

    The consumer then will try to modify the product to restore the disabled features, and can get upset if it's not possible or is made deliberately difficult.

    As much as it causes aggravation in the consumer (that'd be ME), I think it's not a bad idea. What it all boils down to is you can't complain about a product being capable of performing beyond the advertised and accepted expectations at the time you purchased it. You agreed to buy it as advertised. Just because the limitation is deliberate does not change the situation. If it CAN do more than advertised and claimed, and you can make it do that, good for you. If you can't, then too bad.

    In the end, this DOES result in slightly higher cost for the low end model, because the cost of production (or development) of the low end product is higher than it would have been if the company had only been making the low end model, and that money ends up in the pockets of the manufacturers who shave overhead on production. So from that point of view it's not a good thing for the consumer, but not for the reasons they think.
