The Economics of Chips With Many Cores 343

meanonymous writes "HPCWire reports that a unique marketing model for 'manycore' processors is being proposed by University of Illinois at Urbana-Champaign researchers. The current economic model has customers purchasing systems containing processors that meet the average or worst-case computation needs of their applications. The researchers contend that the increasing number of cores complicates the matching of performance needs and applications and makes the cost of buying idle computing power increasingly prohibitive. They speculate that the customer will typically require fewer cores than are physically on the chip, but may want to use more of them in certain instances. They suggest that chips be developed in a manner that allows users to pay only for the computing power they need rather than the peak computing power that is physically present. By incorporating small pieces of logic into the processor, the vendor can enable and disable individual cores, and they offer five models that allow dynamic adjustment of the chip's available processing power."
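The enable/disable scheme described in the summary can be sketched as a toy model. Everything below (class, method names, the burst semantics) is my own illustration of the general idea, not taken from the researchers' actual proposal:

```python
# Toy sketch of pay-per-use core licensing: the chip physically has N
# cores, but on-chip logic enables only as many as the customer paid for,
# with the option to unlock more on demand.

class ManycoreChip:
    def __init__(self, physical_cores):
        self.physical_cores = physical_cores
        self.enabled_cores = 0

    def apply_license(self, licensed_cores):
        """Enable only as many cores as the license covers."""
        self.enabled_cores = min(licensed_cores, self.physical_cores)

    def burst(self, extra_cores):
        """Temporarily unlock extra cores, returning how many were
        granted (the vendor would bill for these separately)."""
        grant = min(extra_cores, self.physical_cores - self.enabled_cores)
        self.enabled_cores += grant
        return grant

chip = ManycoreChip(physical_cores=100)
chip.apply_license(10)    # customer pays for 10 of the 100 cores
granted = chip.burst(20)  # short-term burst up to 30 cores
```

The five models in the paper differ in how and when that unlock is billed; this sketch only captures the common mechanism.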
  • by Xhris ( 97992 ) on Tuesday January 15, 2008 @05:07AM (#22047888)
    This should lend itself to a whole new form of hacking - buy the 10 core system and tweak it to use all 100
  • by k-zed ( 92087 ) on Tuesday January 15, 2008 @05:10AM (#22047912) Homepage Journal
    I don't want to "rent" the processing power of my own computer, thank you. Nor do I want to "rent" my operating system, or my music, or movies. I buy those things, and I'm free to do with them as I wish.

    Renting your own possessions back to you is the sweetest dream of all hardware, software and "entertainment" manufacturers. Never let them do it.
  • by Ed Avis ( 5917 ) <ed@membled.com> on Tuesday January 15, 2008 @05:22AM (#22047970) Homepage
    In mainframes you have pretty much a single vendor (IBM). Even in the days of Amdahl and Hitachi, once you were committed to a single vendor they had a lot of market power over you. So the vendor can set its own price, and squeeze as much money out of each customer as possible by making variable prices that relate to your ability and willingness to pay, rather than to the cost of manufacturing the equipment.

    In a competitive market where 100-core processors cost $100 to produce, a company selling 50-core crippled ones for $101 and 100-core processors for $200 would quickly be pushed out of business by a company making the 100-core processors for $100 and selling them, uncrippled, for $101. I expect the Intel-AMD duopoly leaves Intel some scope to cripple its processors to maintain price differentials (arguably they already do that by selling chips clocked at a lower rate than they are capable of). But they couldn't indulge in this game too much because customers would buy AMD instead (unless AMD agreed to also cripple its multicore chips in the same way, which would probably be illegal collusion).

    Compare software where you have arbitrary limits on the number of seats, incoming connections, or even the maximum file size that can be handled. It costs the vendor nothing more to compile the program with MAX_SEATS = 100 instead of 10, but they charge more for the 'enterprise' version because they can. But only for programs that don't have effective competition willing to give the customer what he wants. Certainly any attempt to apply this kind of crippling to Linux has failed in the market because you can easily change to a different vendor (see Caldera).
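    The compile-time seat limit described above can be made concrete with a toy sketch. The constant and function names are invented for illustration; the point is that the 'standard' and 'enterprise' builds are the same program differing in one baked-in number:

```python
# The vendor ships the identical program with MAX_SEATS = 100 in the
# 'enterprise' build; the marginal cost of the bigger limit is zero.

MAX_SEATS = 10  # compile-time limit in the 'standard' edition

def open_seat(active_seats):
    if len(active_seats) >= MAX_SEATS:
        raise RuntimeError("seat limit reached; buy the enterprise edition")
    active_seats.append(object())

seats = []
for _ in range(MAX_SEATS):
    open_seat(seats)      # fills all 10 licensed seats
try:
    open_seat(seats)      # the 11th is refused purely by policy
    hit_limit = False
except RuntimeError:
    hit_limit = True
```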
  • by lintux ( 125434 ) <slashdot@wilRASP ... .net minus berry> on Tuesday January 15, 2008 @05:22AM (#22047972) Homepage
    You know what I still don't get? Why's everyone acting like dividing a CPU into several separate cores is a good thing?

    AFAIK adding more MHz was getting more and more complicated, so it was time to try a new trick.
  • by Dr. Spork ( 142693 ) on Tuesday January 15, 2008 @05:26AM (#22048004)
    TFA is written really badly, but from what I gather, the "more advanced" models of figuring out how much to charge for chips go like this:

    1. Everybody gets the same chip, but it will be crippled unless you pay the highest price.

    2. Everybody gets the same uncrippled chip, but there's a FLOPS meter on it that phones home, and you pay Intel according to the amount of numbercrunching your chip did for you.

    Both of these models seem completely retarded to me, although the first is already sort of in use in the CPU/GPU market. Have modern processors overshot our needs by so much that our big worry now is to find innovative ways to cripple them? If so, maybe this processor war we're fighting is ultimately not even worth winning.

  • Re:Why? (Score:5, Insightful)

    by Anne Thwacks ( 531696 ) on Tuesday January 15, 2008 @05:49AM (#22048128)
    Because in reality, it costs $4.99 to make the chip, and $10,000,000 to design it.

    The cost of designing one core is the same as the cost of designing 10 or 100 cores, because copy and paste was invented several years ago. The cost of adding a core to the design is about 1%.

    There might be a case for powering down unused cores to save energy, and there is a case for selling cheaper processors with reduced core counts where some cores don't work, but there is no case for disabling working processors for economic reasons.

    Sun's Niagara technology differs, because it has "virtual cores", which give you more cores but slower ones. It's very good if you multi-thread (run Apache) and p*ss-poor if you don't (run Windows).

  • by markus_baertschi ( 259069 ) <markus@mELIOTarkus.org minus poet> on Tuesday January 15, 2008 @05:53AM (#22048142)

    For the individual personal computer, such a model will not fly, as outlined.

    However, in the enterprise market this already exists. IBM has been using such an 'on-demand' model for its System p hardware for several years. For a small fee, IBM installs a bigger configuration (CPU, memory) than the customer bought. The additional hardware is used automatically in case of a failure (built-in replacement parts) or can be unlocked by the customer on the fly.

    In the enterprise case it makes sense:

    • In enterprise servers the hardware cost is small compared to the engineering cost, so installing additional hardware does not cost much. A GB of memory costs much more for a high-end Unix server than for a PC, even if the technology of the components (the SIMMs) is the same. The difference is in the much lower number of these servers sold and the additional complex engineering needed to build these machines.
    • The additional hardware is already there and can be unlocked and added to the configuration on-line. For many enterprise applications this alone is a huge advantage, as maintenance windows are scarce. Typically you have a maintenance window four times a year between Sunday 23:00 and Monday 02:30.

    Markus

  • by Anonymous Coward on Tuesday January 15, 2008 @05:54AM (#22048154)
    To be fair to the graphics companies, they sometimes at least did that because of relatively low yields. If you can take a chip that has ten pipes, two of which are faulty, and disable those two faulty pipes, you've effectively created an eight pipe chip for nothing. This reduces the overall cost of producing a single chip, because a partial failure is still usable.

    This is also why Sony used a Cell with only seven SPUs instead of the eight designed on the chip: a single SPU failing in test (which is much more likely than none failing) still leaves the chip usable. It pushes up yields significantly.

    IOW: you're comparing the wrong business model. The model you're describing is "oh, this chip isn't quite up to spec, let's put it in a lower spec card where it will meet the spec", rather than "let's sell a fully capable chip deliberately crippled, and re-enable the crippled part later if the customer pays for it."
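    The yield argument above can be made concrete with a little binomial arithmetic. The 10% per-SPU defect rate below is purely an illustrative guess, not Sony's real number:

```python
# If each of 8 SPUs fails independently with probability p, shipping
# chips that only need 7 good SPUs salvages every single-defect die.
from math import comb

def yield_with_spares(units, required, p_defect):
    """P(at least `required` of `units` independent blocks are good)."""
    p_good = 1 - p_defect
    return sum(
        comb(units, k) * p_good**k * p_defect**(units - k)
        for k in range(required, units + 1)
    )

all_eight = yield_with_spares(8, 8, 0.10)  # every SPU must work: ~43%
seven_ok  = yield_with_spares(8, 7, 0.10)  # one dud tolerated:   ~81%
```

Tolerating one dead SPU nearly doubles the fraction of sellable dies under this assumed defect rate, which is the whole economics of binning.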
  • by petes_PoV ( 912422 ) on Tuesday January 15, 2008 @05:55AM (#22048164)
    In a competitive market where 100-core processors cost $100 to produce, a company selling 50-core crippled ones for $101 and 100-core processors for $200 would quickly be pushed out of business by a company making the 100-core processors for $100 and selling them, uncrippled, for $101.

    And when your software is licensed per processor at (let's say) $100 per CPU, your extra, unwanted 50 processors quickly become a burden. I'd be willing to pay more for a crippled processor if it saved me money elsewhere and there was no way to slice up domains to reduce the liability.

  • Life just sucks that way

    Microsoft != Life.

  • by argiedot ( 1035754 ) on Tuesday January 15, 2008 @06:02AM (#22048210) Homepage

    Now, explain to me again why it would be in my best economic interest to buy a computer with cores that could be disabled if I don't pay my rent?
    I suppose because you could just buy the ones with some cores disabled and get someone who knows stuff to enable them again, like the way people did for some of the older nVidia cards that had some things disabled. Or maybe I don't know anything about how the two things work.
  • by JRHelgeson ( 576325 ) on Tuesday January 15, 2008 @06:02AM (#22048212) Homepage Journal
    So, what would happen if the Microsoft DRM update management and monitoring "feature" has a "bug" and hits 100% utilization as it tries to verify the authenticity of, and my right to possess, my entire music collection... do I have to pay a processor tax for that? What about a runtime condition? An app locks up and hits 100% utilization until it is killed. OOPS, I need to ante up for the Tflop tax. Or when I file my annual procmon return I can apply for earned op/sec credit, filing as head of household...

    I'm not about to pay a tax on other people's poorly written software.
  • by quitte ( 1098453 ) on Tuesday January 15, 2008 @06:12AM (#22048254)
    really. STOP IT!
  • Crippleware... (Score:5, Insightful)

    by Bert64 ( 520050 ) <bert@[ ]shdot.fi ... m ['sla' in gap]> on Tuesday January 15, 2008 @06:13AM (#22048266) Homepage
    This is crippleware, and a terrible idea for the average consumer...
    Paying more for a product that costs the same to produce, or potentially even less because they don't have to disable the extra cores is a terrible rip off, and it happens already...

    The same people who currently overclock will buy the cheaper CPUs with cores disabled and re-enable them... You will also get third parties who make a business out of doing the same, though without the "exceeding design spec" risks of overclocking.

    Personally, I will never pay more for a more expensive version of the same product. I will buy the cheapest available just as soon as people have worked out how to re-enable the disabled cores, and I will help my less technical friends do the same.
  • Yes and no (Score:3, Insightful)

    by Moraelin ( 679338 ) on Tuesday January 15, 2008 @06:29AM (#22048324) Journal

    Quick history lesson. Intel tried pawning off hyperthreading to the market. If you mean that AMD should have done hyperthreading, perhaps you should look at the reviews/benchmarks to see that it reduced performance in many cases. In the future, more software might be able to take advantage of increased thread parallelism, but that future is not now, at least in the x86 world.


    While I'll concede the point that Intel's first implementation was flawed, you can't judge and damn a technology for all eternity just by its first implementation. In the meantime even Intel's competitors (e.g., Sun) are implementing it, so it can't be that horribly worse than nothing.

    Plus, then by the same kind of historical reasoning we should have said goodbye a long time ago to such stuff as:

    - any kind of computing or calculating machines. After all, Babbage tried pawning off that idea to the market, and his implementation was never even finished.

    - heavier than air airplanes. The first attempts with kites and bird wings were an outright disaster. We should have buried that idea right there and then.

    - using rockets for space travel. There was this medieval Chinese dude who tried it first, with completely disastrous results.

    - breech loaded guns. The first attempts had _major_ problems with sealing the barrel, because of poor tolerances.

    - cavalry. It just wasn't that horribly good before it successively also got a good saddle, horseshoes, stirrups, and specially bred horses. There's a reason why the Romans created their empire with elite infantry, and the cavalry was just some specialized auxiliary.

    - in fact, even earlier, we shouldn't have had even chariots. I mean, until someone invented a harness that allowed horses to pull one, it was pretty much useless. We know that the Sumerians tried using oxen there, and it couldn't have been that horribly effective. Should have discarded that idea right there and then.

    - agriculture. Until the right plants, irrigation and cats became available, it was very much a losing proposition wherever it was tried.

    Etc, etc, etc.
  • by The Master Control P ( 655590 ) <ejkeever@nerdshacFREEBSDk.com minus bsd> on Tuesday January 15, 2008 @06:30AM (#22048332)
    So Intel is going to design a CPU with N cores on it, then add hardware that disables half of them, then manufacture the chip with all N cores and sell it for less, even though it actually costs more to design/build because of the added hardware to cripple it, then try and make us pay for access to the other half of the cores and hope we don't notice that our computers have suddenly become a constant expense instead of a one-time purchase?

    And moreover, they apparently forgot which problem they're trying to solve between paragraphs 4 and 5. They start talking about the real problem of many cores creating a very large space of core/memory architectures that would be difficult to choose between and support. Then they veer off into the rent-your-own-hardware-back-to-you idea and never finish reasoning out just how it would work before they come back. A few minor things they ignored:
    • How do they turn cores on? Difficulty level: No, you can NOT have a privileged link through my firewall onto my network.
    • How do they stop me from hacking it and enabling it all myself? Difficulty level: Mathematically impossible since you can't stop Eve from listening if Eve and Bob are the same person.
    • How do they propose to bill me? Difficulty: No, I will NOT let my CPU spy on me.
    • Why should I hand you everything you need to force me to upgrade against my will?
    • What happens if you go out of business and leave me stranded?
    • Even if you don't see what's wrong with charging me continually to access my own hardware, do you actually think I won't?
    In conclusion, Profs. Sloan & Kumar of the University of Illinois, I believe the premises and reasoning behind your proposal to be flawed, and the proposal itself to be unworkable and contradictory to openness in computing. Or, as we say on the Internet, wtf r u doin???
  • by snoggeramus ( 945056 ) on Tuesday January 15, 2008 @07:26AM (#22048510)
    With regard to multi-core processors for video editing work, we've discovered that it's a waste of money having two quad-core processors on a motherboard.

    Given that you need about 1GB of RAM to make efficient use of a core, a maximum of 4GB on a Win32 machine means you're only going to use 4 cores properly at most. Anything more is a waste of money.
  • by Anonymous Coward on Tuesday January 15, 2008 @08:31AM (#22048790)
    Generally, unlocked or overclocked pc parts burn out faster than if they'd been left alone (e.g. the 6800LE I mentioned died a horrible death, and now doesn't work at all). However if the chip was DESIGNED to be able to be unlocked, it would be perfectly safe.

    Design is one thing. Manufacturing is another. Chip manufacturing is not perfect. It is more likely that the disabled parts failed the full test but the remaining parts were still working (thus making it sellable as a downgraded chip). All you did was enable the defective parts. And then it blew. No surprise there.

  • by ElDuque ( 267493 ) <adw5@nospAm.lehigh.edu> on Tuesday January 15, 2008 @08:50AM (#22048904)
    That's a common misconception there - prices in a competitive market are based on the consumer's willingness to pay, and nothing else. The cost of manufacturing equipment would only come into play in a monopoly situation, where the seller is able to "set" prices (because she will be sure to set them higher than her per-unit production costs.)

    This is the same misconception people often apply to baseball player salaries - they do NOT drive ticket prices. Baseball ticket prices are set at the highest level the market will bear - a price that is determined as consumers make decisions between countless sources of entertainment and leisure.

    What is confusing is that the quality of a product (and therefore the market demand for it, sometimes) is often related to the cost of production, so it looks like production costs set prices. But remember when Homer designed a car? It was $80,000, and no one wanted to buy it at that price! The consumers decided there were better uses for their car-buying dollars. This is a perfect (although fictional) illustration of why costs != prices in a competitive market.
  • by Antity-H ( 535635 ) on Tuesday January 15, 2008 @08:59AM (#22048978) Homepage

    no one is chump enough to make something totally different that nothing runs on.

    I guess that's why IBM did not develop the Cell processor, which is therefore not used in PS3s, and why no supercomputer is built using it.

    All this also explains why IBM did not develop a new product line of Cell-based blade servers. And neither are grids being built around Cell-based servers.

    Of course, even if IBM did develop it and Sony did use it in the PS3, it would be unable to run anything, which is why there aren't any games for the PS3 and why there are no Linux distributions for the PS3.

    Sorry, but a different architecture doesn't mean nothing runs on it, nor does it mean no one will develop for it if the promised power is cheap and plentiful enough.
  • by Lonewolf666 ( 259450 ) on Tuesday January 15, 2008 @09:07AM (#22049028)

    Interestingly, of late it is AMD that is trying to create product differentiation by crippling their processors, or at least by selling processors with one core switched off. They're doing this by selling "tri-core" processors based on their Barcelona/Phenom design, which are nothing more than actual quad-cores with one core turned off, either deliberately or because it is defective. They probably want to position this as a mid-range offering, to make it more competitive against Intel's relatively cheaper quad-cores.

    I guess it is, first of all, a way to get money for processors that would otherwise have to be thrown away. Some money for a "tri-core" is better than no money for a piece of waste silicon.
    On top of that, there may be some crippling of intact quad-cores if there is more demand for the cheap "tri-cores". But I doubt that is the main reason.
  • by Firethorn ( 177587 ) on Tuesday January 15, 2008 @09:26AM (#22049180) Homepage Journal
    Didn't someone predict that we'd only ever need 128 MB of ram and that more ram would be superfluous for most consumers?


    I believe you're thinking of the quote commonly attributed to Bill Gates: '640K ought to be enough for anybody'.

    Right now there are a lot of flashy games out there. Users may want to run many more applications at once (or, more likely, turn on M$'s poorly executed eyecandy and not notice their computers slowing down).

    Until 3D acceleration is so good that you can't tell it from real life, and all other tasks are 'instant' from the viewpoint of the user, greater speed will be in demand.

    Monitor resolutions are still creeping up, placing more demand on video card processing power, and games are being produced that utilize sophisticated physics engines*.

    Still, consider that non-resource-intensive games like Bejeweled will often outsell resource-intensive games like Supreme Commander, Crysis, or BioShock.

    It will probably happen gradually, as most customers are already low-end.

    Agreed; even today I'll recommend economy machines to people who don't play 3D games or do something like video editing.

    *To the point that I now have three games that let you add a daughtercard to the computer to offload the physics processing.
  • by v1 ( 525388 ) on Tuesday January 15, 2008 @09:28AM (#22049202) Homepage Journal
    a unique marketing model for 'manycore' processors

    Nothing UNIQUE about this strategy. It's a model growing in popularity. Traditionally, companies that wanted to capture several levels of a market would make several models of a unit - think of buying a laptop with a better graphics chip or bus speed. This cost them more because producing several different units multiplies some of their overheads. What this does is allow them to produce one high-end product and configure it easily, post-production, into any of the units they want to market. The same capabilities are present in all models, but features are disabled/crippled/nerfed in the less expensive ones. This allows them to sell their product in the lower-cost market without losing sales in their high-end market, and without the additional expense of producing several different models.

    It's a good idea for the manufacturer, but it introduces the problem of what happens when the consumer figures out how to "enable" the disabled features in their low-end model. This always results in a little war of sorts, where the manufacturer takes steps to make de-nerfing difficult or impossible. It always aggravates the consumer to find out that the model he settled for on cost grounds, the one that supposedly didn't do everything he wanted, CAN do it; it just refuses to. The consumer feels cheated that he paid for a gadget that CAN do what he wants but won't let him take advantage of it.

    Interestingly, it doesn't become a problem until the consumer realizes the product they were happy to pay the small amount for can do more than they bargained for. The producer would argue that you didn't pay what they were asking for those additional features, so you should not feel cheated, and that you agreed to the advertised feature set when you purchased the product.

    The consumer then will try to modify the product to restore the disabled features, and can get upset if it's not possible or is made deliberately difficult.

    As much as it causes aggravation in the consumer (that'd be ME), I think it's not a bad idea. What it all boils down to is that you can't complain about a product being capable of performing beyond the advertised and accepted expectations at the time you purchased it. You agreed to buy it. Just because it's done on purpose does not change the situation. If it CAN do more than advertised and claimed, and you can make it do that, good for you. If you can't, then too bad.

    In the end, this DOES result in slightly higher cost for the low-end model, because the cost of production (or development) of the low-end product is higher than it would have been if the company had only been making the low-end model, and that money ends up in the pockets of the manufacturers who shave overhead on production. So from that point of view it's not a good thing for the consumer, but not for the reason they are seeing.

  • by jank1887 ( 815982 ) on Tuesday January 15, 2008 @10:00AM (#22049432)
    In this case, though (the rent-a-core plan), all cores must be fully functional: you're paying for a processor with the potential for using X number of cores. If they aren't all good, they've sold you a defective chip, not a downgraded one. Also, if it's a rental scheme, it can't be a one-way change to upgrade or downgrade; the process must be fully reversible. Sounds to me like all of that makes it a much more appealing hack target.
  • Re:Why? (Score:2, Insightful)

    by mentaldrano ( 674767 ) on Tuesday January 15, 2008 @10:05AM (#22049482)
    While you are right about research cost ($10,000,000) vs production cost ($4.99), your point about adding cores is not well made.

    Yes, you can just copy and paste the individual core design, but heat dissipation, core interconnect, and off-chip bandwidth will kill you if all you do is simply "paste another one on." These problems are easy to get around for few-core chips, say 2-4, but once you go farther than that, it takes real design innovation to stay afloat. Thermals require dynamic core underclocking, work distribution (keep hard working threads on widely separated cores), and split power planes. Core interconnect uses things that sound an awful lot like Ethernet, including routers with routing tables! Off chip bandwidth requirements bring in huge caches and dual buses.
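    The thermal wall can be sketched with the standard first-order model that dynamic power scales roughly as cores × f × V² (ignoring leakage and shared uncore power; the constants below are arbitrary illustrative units, not real chip figures):

```python
# At a fixed power budget, adding cores forces the per-core clock down,
# which is why "just paste another core on" stops working.

def max_frequency(cores, power_budget, k=1.0, voltage=1.0):
    """Highest per-core clock that keeps `cores` within the power
    budget, assuming P = k * cores * f * V^2."""
    return power_budget / (k * cores * voltage**2)

budget = 100.0                   # arbitrary TDP units
f2  = max_frequency(2, budget)   # dual-core: 50.0 frequency units
f16 = max_frequency(16, budget)  # 16 cores:   6.25 frequency units
```

In practice vendors also drop voltage along with frequency, which helps cubically, but the basic trade-off stands: more cores at a fixed TDP means slower cores.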

    Look at the huge deal AMD made of its "native quad core" design vs. Intel's quad core chip. Sure, it didn't end up giving them a huge performance advantage, but this is the way things are going for many-core chips, and AMD does have a head start on production.
  • by AlpineR ( 32307 ) <wagnerr@umich.edu> on Tuesday January 15, 2008 @10:57AM (#22050060) Homepage
    Yeah, there are vehicles that already adjust the number of cylinders on demand. One of them is a large domestic SUV. The advertising slogan is something like "Eight cylinders when you need them. Four when you don't."

    The benefit to the vehicle owner is lower fuel costs, not an economic model to transmit his cylinder utilization to the manufacturer for a reduction in his vehicle loan payments. That'd just be silly.

    If you want a car with less power, you opt for a smaller engine. If you want a single-core processor with less power, you opt for a slower clock speed. The processor you buy might be manufactured alongside the ones sold at higher speeds, but it failed testing or was intentionally crippled to maintain a distinction between high-end and low-end.

    Sometimes it's easier for the manufacturer to make everything the same and then cripple or add on to create different classes. Suppose Initech developed a screaming-fast processor that they could sell for servers at $90,000 a piece. It also happens to cost only $90 to manufacture. They could have priced them at $100 and sold 100 times as many for desktops, but the loss of profit in the server market would make it a net loss. So instead they chop off 90% of the cores or reduce the clock speed by 90% on the processors destined for desktops. It's cheaper for Initech than manufacturing a second low-performance design, and even the crippled processors are a better buy than the competition. It's economically wise and perfectly moral.
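    Running the Initech numbers makes the segmentation logic explicit. The server volume below is an assumption, since the comment only gives the 100:1 desktop-to-server ratio:

```python
# Why crippling the desktop parts beats selling everyone the $100 chip.

unit_cost     = 90
server_price  = 90_000
desktop_price = 100
server_units  = 1_000               # assumed server volume
desktop_units = 100 * server_units  # "100 times as many" desktops

# Strategy A: one uncrippled product at $100 for everyone.
profit_flat = (desktop_price - unit_cost) * (server_units + desktop_units)

# Strategy B: segment the market with crippled desktop parts.
profit_segmented = ((server_price - unit_cost) * server_units
                    + (desktop_price - unit_cost) * desktop_units)
```

Under these assumptions segmentation earns roughly ninety times the flat-price profit, almost all of it from the server market, which is why the crippled desktop part exists at all.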

    The tricky part with manycore processors is that halving the clock speed is usually more crippling than halving the number of cores. But it all depends on how well the software parallelizes. It could make sense to sell the somecore processors at a discount, and then three years later when the customer is thinking about buying new machines say "We could double the performance of your existing hardware for half the cost."

    It might have been dumb of the customer to buy the crippled processors in the first place, but if a competitor can offer uncrippled processors for the same price then the customer won't make that mistake. And sometimes making half of a capital investment now and half later is a good business plan.
  • by Khelder ( 34398 ) on Tuesday January 15, 2008 @11:14AM (#22050266)
    Just wanted to chime in and say I can't remember the last time I saw a comment this good on /. Thanks!
