Intel Expands Core Concept for Chips

Aziabel writes "As most of you have probably heard, Intel plans to come out with chips containing two processing cores next year, but that's just the start. The Santa Clara, Calif.-based chip giant intends to exploit the concept of using multiple processor cores; chips with four cores and eight cores will eventually join dual-core chips, which will begin to appear from Intel next year. The company's research department is also looking at the feasibility of creating chips with hundreds of cores to assist servers and supercomputers with large numbers of relatively repetitive calculations, said Steve Smith, vice president of the desktop platforms group at Intel. The focus on multiple cores arises from Moore's Law, which dictates that the number of transistors on a chip doubles every two years. I say, the more the better. Keep 'em coming, chip-makers!"
  • ...a Beowulf cluster on a chip!
  • Cell Processor (Score:5, Interesting)

    by News for nerds ( 448130 ) on Saturday December 18, 2004 @10:38AM (#11124204) Homepage
    It's nothing more than a catch-up move to Sony/Toshiba/IBM Cell [sony.net], just like EM64T was a catch-up to AMD. Those late and awkward moves are a bad omen for Intel, IMO.
    • Re:Cell Processor (Score:5, Interesting)

      by skids ( 119237 ) on Saturday December 18, 2004 @10:58AM (#11124270) Homepage
      Agreed. And in addition, what they really need to start doing is specializing the cores. Either that, or following the Cell paradigm of reducing the complexity of each core to increase the number of cores, such that you can combine several into a special-function unit.

      But we all know that nothing really changes until memory access changes. Memory continues to be the bottleneck, so if the only thing a processor with more cores can do fast is crunch numbers, you'd get more bang for the buck with better/more vector processing units.

      Now, if/when they come out with memory that can be reorganized on-the-fly, perform large-scale simple massively-parallel operations, and do some content-addressable tricks, that will be a significant development. I don't know how long it would take that to make it into higher-level programming languages, though. It kind of turns the job of writing programs on its head.

      • Re:Cell Processor (Score:3, Interesting)

        by Psychofreak ( 17440 )
        How about putting a significant amount of RAM on the die with the 2 (4 or 8) processors? From my understanding of space and volume, it should be possible to put at least 256 MB on the die with 2 processors in an L3 or similar arrangement. Yes, the complexity is higher, but the overall speed of the system would increase greatly by having a significant amount of RAM at (or near) chip speed.

        This coupled with about 1 GB of system RAM would (hopefully) provide superior performance.

        As a side note, I guess my next b
        • I thought the cache memory was a faster, and much more expensive, type though. Or is it its proximity to the processor that makes it more efficient? Just curious.
        • Re:Cell Processor (Score:5, Informative)

          by wik ( 10258 ) on Saturday December 18, 2004 @03:15PM (#11125449) Homepage Journal
          This increases the fabrication costs for the silicon die because the processes used to create high-performance CMOS logic and high-density DRAM are different. Because of the cost, it's not likely to happen for commodity microprocessors any time soon.
        • Could the next step for the Cell be memory chips and CPUs tightly linked on the same die? Each memory chip would have its own full-speed CPU; add more memory, add more CPU.
      • "Now, if/when they come out with memory that can be reorganized on-the-fly, perform large-scale simple massively-parrallel operations, and do some content-addressable tricks, that will be a signifigant development."

        Isn't that what NUMA is about? AFAICR all it needs is a board that can take advantage of it, along with an OS. I think some of the Opteron boards have NUMA and all that lot.

        N.B I'm not an expert, if you couldn't tell ;)
      • Re:Cell Processor (Score:4, Interesting)

        by lukasz ( 27763 ) on Saturday December 18, 2004 @11:44AM (#11124401)
        >Now, if/when they come out with memory that can be reorganized on-the-fly, perform large-scale
        >simple massively-parallel operations, and do some content-addressable tricks, that will be a
        >significant development. I don't know how long it would take that to make it into higher-level
        >programming languages, though. It kind of turns the job of writing programs on its head.

        Have you ever stumbled upon FPGAs? It's already there. The problem, as I see it, is that it does turn writing programs on its head. Thus, very few people outside the hardware design crowd know what to do with them.
        Just think how many people get exposed to digital design versus programming. How many people go beyond a vague idea of a processor working on data sitting in memory? How many CS graduates are utterly unhappy about digital design classes?

        • >Have you ever stumbled upon FPGAs? It's already there. The problem, as I see it, is that it does turn writing programs on its head. Thus, very few people outside the hardware design crowd know what to do with them.

          This is largely horse pucky. FPGAs are a trade-off of efficiency for generality. FPGA-based coprocessors only provide a benefit where the algorithm can be implemented more efficiently in logic than in conventional code and it's done at a high enough frequency to warrant the trouble. Few situa
      • Re:Cell Processor (Score:2, Insightful)

        by getch(); ( 164701 )
        Why, pray tell, does Intel need to either specialize their cores or move toward a massively parallel architecture like Cell is supposed to be? Do you have some foresight into the performance of as of yet nonexistent computing architectures?

        Anyone even vaguely familiar with multithreaded programming and execution should know that it is difficult (sometimes impossible) to extract parallelism from the majority of the desktop computing applications. As such, throwing more cores at the problem doesn't necessa

        • >Do you have some foresight into the performance of as of yet nonexistent computing architectures?

          Blue Gene/L is nonexistent? ;)
        • it is difficult (sometimes impossible) to extract parallelism from the majority of the desktop computing applications

          You mean it's difficult for the compiler to extract parallelism from pure sequential code. However, a smart programmer could easily make all sorts of tasks run in a separate thread from the main UI thread:

          • Spellcheck in a text editor
          • Recalculation in a complicated spreadsheet
          • Image decoding in a web browser or slideshow app
          • Crypto in a web browser
          • All the various elements of SWF playbac
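          To make the first of those concrete, here is a minimal pthreads sketch (my own illustration, not anything from the article; the document string and the word-counting "spellcheck" are just stand-ins): the long-running scan is handed to a worker thread while the main ("UI") thread keeps servicing events.

            /* Hypothetical sketch: push a spellcheck-style scan off the main thread.
               Build with: gcc -pthread -o bgcheck bgcheck.c */
            #include <pthread.h>
            #include <stdio.h>
            #include <unistd.h>

            /* Worker: stand-in for scanning a document for misspellings. */
            static void *spellcheck_worker(void *arg)
            {
                const char *doc = arg;
                int breaks = 0;
                while (*doc)
                    if (*doc++ == ' ')
                        breaks++;
                printf("[worker] scanned %d word breaks\n", breaks);
                return NULL;
            }

            int main(void)
            {
                char doc[] = "teh quick brown fox jumpd over the lazy dog";
                pthread_t worker;

                pthread_create(&worker, NULL, spellcheck_worker, doc);

                /* The "UI" thread stays free to handle keystrokes meanwhile. */
                for (int i = 0; i < 3; i++) {
                    printf("[ui] still responsive\n");
                    usleep(1000);
                }

                pthread_join(worker, NULL);
                return 0;
            }

          The same binary runs on one core or two; a second core just gives the worker somewhere to execute without stealing time from the UI.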
          • it is difficult (sometimes impossible) to extract parallelism from the majority of the desktop computing applications

            You mean it's difficult for the compiler to extract parallelism from pure sequential code.

            Are they making programmers yet who are better than compilers at this?

    • I was going to post this anonymously because the person who told me about it works at Intel and needs no more grief. But this morning, I was able to find confirmation from VNUNET [vnunet.com] and /. editors are unlikely to put up two Intel chip stories in one day, so I am not submitting a story. Why make it so NOBODY ever hears this news?
      Intel is, no make that was, rumored to be, [no, definitely are] in the process of buying the design group that develops Itanium from HP.
      The vnunet page has a little speculation as t
    • I'm somewhat playing devil's advocate here, but Sony/IBM/Toshiba were also following the idea of Piranha from the Compaq group paper [sbcglobal.net]. The old Compaq project had eight simple cores (in-order Alpha 21164, if I remember correctly) and it really shined in transaction benchmarks.

      In a way, AMD64 is the same idea as when Intel extended x86 from 16-bit to 32-bit. Back in the 90's, Itanium sounded like a good idea, and they (Intel) really were looking ahead then (and they didn't want EM64T to cannibalize Itanium s

  • by OccidentalSlashy ( 809265 ) on Saturday December 18, 2004 @10:39AM (#11124208)
    I am beginning to suspect that Intel does things like this simply to make x86's instruction set harder and harder to emulate well.

    Kind of like what I suspect Microsoft has been trying to do against Lindows for a while now, namely complicate their API more and more. And with IE and HTML.

    Of course they're well within their rights to try. We'll just build a better idiot savant. Or let Steve Jobs keep making Apples that no one can really imitate in the first place.
    • Yeah, it's not like technology gets more and more complicated over time or anything.
    • by Decaff ( 42676 ) on Saturday December 18, 2004 @10:54AM (#11124256)
      I am beginning to suspect that Intel does things like this simply to make x86's instruction set harder and harder to emulate well.

      Why should this have anything to do with the instruction set? The principle is exactly the same as for existing multi-processor systems, but on the same chip.
    • And what does it have to do with MS and Lindows? Maybe MS and Wine....

      As people complain on Slashdot, the only thing MS is not doing is developing IE (yes, I know they started again), and I am sure they were not doing it so that no one else could emulate it.
      Mozilla chooses not to for a good reason.

      Read up on RISC and CISC before you post on Mac emulation; a lot of it has to do with processor architecture and not "Apples".
  • by Anonymous Coward on Saturday December 18, 2004 @10:40AM (#11124215)
    This does not bode well for problems that mathematically cannot be executed in parallel.
  • Great (Score:5, Funny)

    by Anonymous Coward on Saturday December 18, 2004 @10:40AM (#11124216)
    Now I can do away with my furnace.
  • Performance rating (Score:5, Informative)

    by Barny ( 103770 ) on Saturday December 18, 2004 @10:40AM (#11124218) Journal
    The problem with what they (both Intel and AMD) plan to do is that they'll say a dual-core 1.5GHz Centrino (for example) CPU is actually a 3GHz machine (from the PR they have already put out about these chips).

    Read overclockers.com for some good speculation on what the good/bad/ugly features are likely to be.
  • by melonman ( 608440 ) on Saturday December 18, 2004 @10:41AM (#11124221) Journal

    The focus on multiple cores arises from Moore's Law, which dictates that the number of transistors on a chip doubles every two years.

    I don't think non-compliance with Moore's Law is a felony. It's an observation, not a statute. Moore's Law arises from the fact that transistor counts keep doubling, not the other way around.

    Also, doubling the number of transistors in any way possible doesn't necessarily translate into double the power for any given application. In this case, multiple cores are good news for multi-threaded or forking server apps, but rather less interesting for a lot of desktop apps. Intel obviously has a vested interest in pushing ever larger die sizes, because it does large dies better than anyone else. Whether this will always be in the interests of the rest of the industry, let alone the end user, is less obvious.

    • by analog_line ( 465182 ) on Saturday December 18, 2004 @10:51AM (#11124248)
      Gordon Moore, the guy the "law" was named after, works for Intel. Intel puts a fair bit of weight behind the notion, and they even have a page [intel.com] in their research section about it.
    • In fact it is neither Moore's Law nor increasing transistor count that is driving multi-core designs. It is economic and competitive pressures.

      As another reader pointed out, there is a serious drool factor in a dual-core AMD Opteron. Other than gamers and overclockers, one does not need dual cores or multi-GHz clock speeds for most applications. My desktop system is a dual-processor 200MHz Pentium Pro system (circa '97) and my web server, which was /.'ed in August, is a dual-processor 100MHz Pentium

      • "As another reader pointed out there is a serious drool factor in a dual core AMD Opteron. Other than the gamers and overclockers one does not need dual cores or multi-GHz clock speeds for most applications."

        Opterons are aimed at the server and workstation market. Turning a 1U 2-way Opteron pizzabox into a 4-way one by replacing the CPUs and upgrading the BIOS is *extremely* attractive, and despite your little anecdote about the 100MHz slashdotted webserver, multi-GHz and multi-CPU is kinda important whe
    • by danila ( 69889 )
      Most desktop applications can use parallel processing just fine. Word processors, music players, image editors, etc. all can break the job into small independent chunks. But as much as I would appreciate faster processors, I'd enjoy faster storage devices even more. Even though my CPU is at most 10-15% active and only 40-50% of RAM is used, when enough applications want to access the disk simultaneously, the computer can slow to a crawl. I want better caching, more intelligent disk access prioritisation, fa
    • It seems to me like you could parallelize most desktop apps.

      An MP3 player: I would think you should be able to decode one second with one processor, and the next second with the other processor.

      Word processor: I would think that parts of the boot process that do not require the other parts would be able to run independently. Two processors could check alternating lines until the whole document was checked.

      Spreadsheets: I would think that the first half of a giant list could be handled by one processor, t
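    Along those lines, here is a rough pthreads sketch of the spreadsheet case (purely illustrative; the cell "formula" and the array names are made up): one thread recalculates the first half of a column, the other thread the second half.

      /* Hypothetical sketch: recalculate two halves of a "spreadsheet column"
         on two threads.  Build with: gcc -pthread -o halves halves.c */
      #include <pthread.h>
      #include <stdio.h>

      #define CELLS 1000000

      static double input[CELLS];
      static double output[CELLS];

      struct range { int first, last; };

      /* Stand-in for a per-cell formula. */
      static void *recalc(void *arg)
      {
          struct range *r = arg;
          for (int i = r->first; i < r->last; i++)
              output[i] = input[i] * 1.07 + 42.0;
          return NULL;
      }

      int main(void)
      {
          for (int i = 0; i < CELLS; i++)
              input[i] = i;

          struct range lo = { 0, CELLS / 2 };
          struct range hi = { CELLS / 2, CELLS };
          pthread_t t1, t2;

          /* The halves touch disjoint cells, so the threads never contend. */
          pthread_create(&t1, NULL, recalc, &lo);
          pthread_create(&t2, NULL, recalc, &hi);
          pthread_join(t1, NULL);
          pthread_join(t2, NULL);

          printf("last cell = %f\n", output[CELLS - 1]);
          return 0;
      }

    Splits like this, where each worker owns an independent slice of the data, are exactly what a dual-core chip rewards.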
  • hmm (Score:3, Interesting)

    by mattyrobinson69 ( 751521 ) on Saturday December 18, 2004 @10:49AM (#11124240)
    If a kernel is written to take advantage of multiple cores, would this mean applications written on top of it would start using the multiple cores?

    If not, how feasible is a multicore > single core emulation in Linux?
    • Re:hmm (Score:2, Interesting)

      by Ziviyr ( 95582 )
      Unless the cores are unbearably slow, the kernel just needs support for multiple cores. Being effectively separate CPUs, the kernel is already there. It's just a matter of whatever app you're running using enough threads to keep them all busy.

      Just watch, it'll go fairly smoothly.
    • Re:hmm (Score:4, Informative)

      by jackb_guppy ( 204733 ) on Saturday December 18, 2004 @11:13AM (#11124301)
      A core ~= a processor today, so a multi-processor OS is nothing new. Shoot, even Intel Hyper-Threading is not new - it looks to the OS like two processors, but only one is running at a given time.

      You see, an OS runs multiple threads in the first place; it just switches between them as each needs run time.

      But for a given program to be written to use two or more threads (which look to the OS like two or more programs) takes work.

      So take a program that is already written and place it in a multi-core/processor/thread environment with all else being equal - it will run as fast as it did before.

      What will run faster is all of it together. Take two of these old programs and run them in the multi-core/processor/thread environment: each takes the same processing time by itself, but the observed time is shorter because they are both actually running at the SAME time.
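      As a small illustration of that last point (names and numbers are mine, just a sketch): two copies of an unchanged, single-threaded job run as two processes, and an SMP kernel is free to put each one on its own core, so the observed wall-clock time is roughly that of running one.

        /* Hypothetical sketch: two independent single-threaded jobs as two
           processes; an SMP kernel can give each its own core.
           Build with: gcc -o jobs jobs.c */
        #include <stdio.h>
        #include <sys/wait.h>
        #include <unistd.h>

        /* CPU-bound busy work standing in for an "old program". */
        static double crunch(void)
        {
            double x = 0.0;
            for (long i = 1; i < 50000000L; i++)
                x += 1.0 / (double)i;
            return x;
        }

        int main(void)
        {
            for (int job = 0; job < 2; job++) {
                if (fork() == 0) {          /* child: one "old program" */
                    printf("job %d: %f\n", job, crunch());
                    _exit(0);
                }
            }
            while (wait(NULL) > 0)          /* parent: wait for both */
                ;
            return 0;
        }

      Each child still burns the same CPU time it always did; only the observed time shrinks, because the two of them really do run at once.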

      • Shoot Intel Hyper-Threading is not new - It looks to OS as two processors but only 1 is running at given time.

        Actually both virtual CPUs in an HT system are running instructions at the same time on the same core. The intent of HT (and related implementations with differing names) is to help fill the several execution units that exist on today's CPU cores by allowing an intermixing of instructions from two (or more) execution streams (threads). Having more instructions, and often non-dependent instructions, to ch
    • if a kernel is written to take advantage of multiple cores, would this mean applications written ontop of it would start using the multiple cores?

      Something like Linux SMP will see each core as a separate CPU and treat them as such, much as it does with hyperthreading today. The catch is, your applications have to be multithreaded. Then they will take advantage of the multiple cores.
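      For what it's worth, here is a tiny sketch of that (Linux/glibc assumed; the worker body is a placeholder): ask the kernel how many logical CPUs it exposes - each core, or HT sibling, shows up as one - and start that many threads for the scheduler to spread across them.

        /* Hypothetical sketch: size a thread pool to however many CPUs the
           kernel reports.  Build with: gcc -pthread -o percpu percpu.c */
        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        static void *worker(void *arg)
        {
            long *id = arg;
            printf("worker %ld running\n", *id);   /* placeholder work */
            return NULL;
        }

        int main(void)
        {
            long ncpus = sysconf(_SC_NPROCESSORS_ONLN);  /* CPUs the kernel exposes */
            if (ncpus < 1)
                ncpus = 1;

            pthread_t *threads = malloc(ncpus * sizeof *threads);
            long *ids = malloc(ncpus * sizeof *ids);

            for (long i = 0; i < ncpus; i++) {
                ids[i] = i;
                pthread_create(&threads[i], NULL, worker, &ids[i]);
            }
            for (long i = 0; i < ncpus; i++)
                pthread_join(threads[i], NULL);

            printf("kernel reports %ld logical CPUs\n", ncpus);
            free(threads);
            free(ids);
            return 0;
        }

      On a dual-core chip it starts two workers, on an eight-core part eight, without recompiling.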

  • Yeah, yeah. (Score:3, Insightful)

    by eddy ( 18759 ) on Saturday December 18, 2004 @10:54AM (#11124255) Homepage Journal

    > will begin to appear from Intel next year.

    Very likely this is marketing sp33k for "will be paper-launched on the last day of next year"

  • Supercomputers my ass, when will Intel admit that the real driving force behind faster hardware is lots of gibbing and blood splashing around the screen in real-time! This sounds like a very good idea and one that could maybe lead to a demise in separate graphics cards? your graphics could be handled on a separate core that gets its instructions from another core that maybe even separates collision detection and AI into other cores? I know purpose built hardware is always going to be faster at its job, but
    • I can tell you like question marks?

      You put them at the end of your statements?

      Do you think such usage is correct.

    • Given that I can't get good working 3D drivers for FreeBSD (by which I mean ones that let me upgrade my kernel from time to time without worry) or Linux on any fast 3D video card, I'm looking for anything that will give me good 3D graphics without hassle.

  • by Anonymous Coward on Saturday December 18, 2004 @11:01AM (#11124279)
    IBM have been doing this for years - and their biggest technological success story, the POWER5 chip, shows that Intel are blatantly only playing catch-up with this announcement.

    http://www.theregister.co.uk/2004/11/26/ibm_power5_moores_law/ [theregister.co.uk] shows this perfectly.

    It reminds me of the way that Intel pretended that they invented integrated wireless technology with its Centrino chip only after Apple had been shipping laptops for nearly two years with internal wireless cards.

    Normally, asking if they had no shame would be appropriate but it is unfortunately clear (without the need to ask) that they don't.
  • by I_am_Rambi ( 536614 ) on Saturday December 18, 2004 @11:12AM (#11124299) Homepage
    Most of the reports that I have read have said that AMD will be releasing theirs next year and Intel the following year. Intel, though, didn't start talking about dual cores until AMD started talking about theirs. From the research that I have done, each manufacturer has some mighty issues to overcome with the single core before dual cores can be implemented nicely.

    AMD has said that dual cores will be clocked anywhere from 600MHz to 1GHz slower than their single-core counterparts, mainly because of heat issues. There are many more issues that arise with dual cores; here are a few:

    Cache coherence
    Bus contention
    Software implementation
    plus more

    It will be interesting nonetheless to see how each manufacturer overcomes the issues with multi-core chips, and what the benefits of multi-core will be to the user.
    • Cache coherence

      Cache coherence. Already solved. Both AMD and Intel handle this just fine.

      Bus contention

      AMD have largely solved this with their HyperTransport links between the processors, though obviously a dual-core Opteron will have less memory bandwidth than two single-core chips.

      Still quite a problem for Intel.

      software implementation

      Not a problem. Existing SMP code will work perfectly on multi-core chips.
    • by phoenix_rizzen ( 256998 ) on Saturday December 18, 2004 @12:17PM (#11124539)
      Cache coherence, cache access, and bus contention are only problems for Intel. AMD solved most of these with the Athlon MP and HyperTransport, and solved the rest with the Opteron's integrated memory controller.

      In AMD SMP systems, each CPU has its own separate link to RAM and peripherals. Each CPU also has a link to each other CPU. If CPU A needs something in CPU B's cache, it just asks CPU B to send it that data across the inter-CPU link.

      As you add CPUs in an Opteron server, you actually increase the RAM/system bandwidth. Compare that to a Xeon system where adding CPUs reduces the bandwidth available to each CPU (system/RAM bandwidth is constant).

      There's a beautiful set of articles over at Ars Technica describing the SMP abilities of the Athlon, the Opteron, and the Xeon. It's amazing Intel has been able to sell any 8-way systems.
  • A few years ago I thought of a different kind of twist on computer architecture that I labelled OOH.

    The basic idea is that a computer could comprise many, many tiny CPUs, each with its own tiny local memory.

    A given (CPU+RAM) could be designated to operate as RAM for another CPU, so the MMU/OS could balance the number of processes needing memory with those needing processors.

    A (CPU+RAM) could also be labeled as a slave to others, so a multithreaded application could have the number of processors it needed
  • Years ago (early 90s?) there was a lot of talk about "wafer computing", but it never seemed to come to anything. Maybe now we'll see it take off.

    When chips are manufactured, they're made in wafers. Then the wafer is cut up into its component parts, and each part is sealed in its own case. It would seem to be more efficient to just stick the whole damn wafer in a single case.

    It would give a whole new meaning to "pizza box server", as the wafer and case would closely match the size of a pizza and box, respe
    • It isn't going to happen. Gene Amdahl (of Amdahl Corporation) founded a company called Trilogy to try this in the '80s to the tune of something like $200M. It was one of the most spectacular failures in the history of Silicon Valley up until the Dot.Com debacle.

      It turns out that unless you have very high yield across the entire wafer it's impossible to get everything you need for a "real" computer (CPUs, cache, memory, bus controllers, etc.) wired together and operating properly. Because each wafer is g

      • I didn't know about the Trilogy debacle, although as I said, wafer computing never seemed to come to anything.

        You say that it's a nightmare getting CPUs, memory, etc on the same wafer and wired up, but what I was suggesting is a wafer of, say, processors only.

        So the wafer could be tested after manufacturing, and then sold as an X core processor, depending on how many working processor cores were found.

        Essentially, I'm just taking the topic of this thread to its logical conclusion - processors that have s
        • The problem then is not processor numbers but bus contention. This article [theinquirer.net], posted by another user, goes into some of the problems. If you start putting the Level II, Level III and DRAM on other wafers the delays are going to kill you unless you stack the wafers in 3D -- and *that* is going to be really expensive technology.
    • Ah, yes, Wafer Scale Integration.

      IIRC, the main problem that was encountered is that it is extremely unusual to produce an entire wafer of defect free components. While some work was done on producing wafers that were able to route around damaged sections, I don't think it ever got very far.

      I'm sure we'll see it return at some point, though.
  • According to Sun's Jonathan Schwartz [sun.com] they have 8-way chips already.
  • Sun's new UltraSPARC IV, shipped in the SunFire 490s and larger servers, already does this. The plans right now are to scale this up to 32 cores per CPU. The only issue that I see is that the memory controller is onboard the CPU, so while you may have 2/4/8/16/32 cores, you still only have a single memory controller, which limits the amount of RAM you can have. I'm sure they have a solution for this, but I don't know what it is.
  • Intel just canned their 8-way chip and replaced it with a variant of Montecito, or more likely a Montvale derivative. Here is a bit on it:
    http://www.theinquirer.net/?article=20270
    http://www.theinquirer.net/?article=20286
    Needless to say, their long term strategies are a tad up in the air right now.

    As for their desktop (i.e. P4-based) dual-core plans, there are 2 generations planned. The first is a simple pairing of 2 current cores with a minimum of tweaks, basically a scared response to AMD. The second one is really the first one they planned, and it is a lot more sophisticated.

    AMD was there from long before Day One, and have the most coherent philosophy on dual cores for the desktop/server.

    Rather than re-write all my own articles here, here is a link where I break down all of Intel's dual core plans as well as some of AMDs.
    http://www.theinquirer.net/?article=17906

    Sorry for all the self links, but I don't really want to keep re-writing that stuff, links are the reason behind the web, right? :)

    -Charlie
  • by MROD ( 101561 ) on Saturday December 18, 2004 @11:44AM (#11124402) Homepage
    Although multiple cores can be a boon for some non-memory-intensive, highly threaded applications, for many applications this won't be a boost in performance at all.

    Remember that each of these processing cores will have to share their memory bandwidth and possibly level 2 cache as well. As it is Intel's EM64T Xeon processors really feel the bandwidth bottleneck in their memory interface and can easily saturate it.

    I can see a dual core Xeon being able to saturate its memory bus on its own. Similarly, the dual core Opteron, unlike a dual processor Opteron, will have to share a single memory bus and hence be slower than a dual processor machine.

    Adding extra cores merely moves the computing bottleneck elsewhere, it's not a panacea.
    • Not a problem (Score:4, Interesting)

      by Groo Wanderer ( 180806 ) <{charlie} {at} {semiaccurate.com}> on Saturday December 18, 2004 @12:53PM (#11124675) Homepage
      This is much less of a problem than you might think, not because it isn't a real problem, but because it is so obvious. Everyone already has a workaround, most of which involve FB-DIMMs.

      Niagara (see my post above) is bandwidth-rich, and the AMD solutions are also. The only one with a looming problem is Intel, until CSI comes along in a few years, but that is manageable.

      Moral, Sun OK, AMD OK, Intel solid plan.

      -Charlie
    • Remember that each of these processing cores will have to share their memory bandwidth and possibly level 2 cache as well. As it is Intel's EM64T Xeon processors really feel the bandwidth bottleneck in their memory interface and can easily saturate it.

      What you're saying is, for applications with poor cache performance, multi-core processors will be no better than single-core. Personally, I can live with that. Most of the processor developments of the last 10 years have favored applications with good c

  • Yer Laws (Score:4, Interesting)

    by Doc Ruby ( 173196 ) on Saturday December 18, 2004 @11:45AM (#11124408) Homepage Journal
    "Laws" like Moore's, Newton's, Ohm's and others, don't "dictate" anything. They "describe" observations. Intel doesn't meet integration targets based on some hoary old directive from Gordon Moore from the late 1960s. They meet production deadlines projected as close to their maximum productivity. Moore observed the logarithmic rate of transistor integration increase way back then, and described it as invariable as gravity.

    Engineers especially must understand that "laws" of nature, including human innovation, are governed by an "invisible hand". Not some imaginary deity, or some government, or some mythic genius. Rather, there is a deeper order to events, like the way every triangle has 180 degrees, the Sun "comes up" every morning, controversial Slashdot posts will get mod'ded "Troll", without any false statements or duplicity. We're engineers: our job is to engage the deeper order, understand it, model it, and exploit it, without further mystifying it.
    • ...like the way every triangle has 180 degrees

      Mine only has 179! And some Gauss-Bonnet person told me I could get one for the same price with 270 degrees! Where the hell is Euclid when you need a refund?

    • Physical laws such as Newton's, Ohm's and others are based upon observation (like all such "laws".) But they describe basic physical processes from which no significant statistical variation has ever been observed, and because of that fundamental stability, scientific and engineering predictions can be made from them with considerable accuracy. Consequently, these laws (really, mathematical models) have been elevated to the status of Physical Law, because the Universe itself dictates the behavior of objects
      • To include "Moore's Law", which is an educated (and heretofore correct), guess at the progress of transistor design, with laws that consitently and correctly dictate how the world around us works, is silly.
        • laws that consistently and correctly dictate how the world around us works

          Physical laws such as Newton's and Ohm's are also "educated (and heretofore correct) guess[es] at" the behavior of matter and energy, observations which are made by fallible man. Newton's laws fail at submicroscopic distances and additionally at high velocities. Ohm's law fails on non-ohmic materials. Likewise, Moore's law may fail at specific points on the timeline of production.

          • Yes, I can pretty much guarantee that Moore's "Law" will fail at some point. That's ScrewMaster's Corollary to Moore's Law. We'll see if I'm right. Fifty bucks says that I am.

            But that is completely irrelevant ... physical law is dependent upon the nature of matter, energy and spacetime and behaves the way it does regardless of how much money is invested in R&D. Moore's Law, if you insist on calling it that, is at best an expression of how much progress could be made given the diversion of sufficien
  • I know some are pointing to the Cell project as the inspiration here, but Tera was hard at work on this long ago in the form of the MTA [ucsd.edu]

    The MTA was a commercial failure. Tera's inability to execute as a company was a major reason.

    It is fun to watch Intel chase AMD.
    • Tera was hard at work on this long ago

      Excuse me, but IIRC the Tera is more a multi-threaded processor than a multi-core one. It was intended to run 128 threads simultaneously, and solve the memory latency problem by running each thread in succession. The idea was that if a thread was stalled by a need to access main memory, by the time it got back around to that thread again the data would have arrived. Overall throughput was supposed to put it into the supercomputer class.

      You're right that the processor didn

      • Have you any information on why the Tera processor didn't succeed? (My gut feeling: a lack of threads available to use the CPU successfully.)

        On an unrelated topic, I disagree with your sig: I live in France, where we are very strict about separating private matters such as religion from public matters such as school or government (the idea of a president swearing on the Bible seems stupid to us), and AFAIK, except for the veil problem, there is not too much problem for religious people to live their re
  • I say, the more the better.

    Shouldn't that be, the Moore, the better?

  • It's because they can't speed up the clock rate any more. Nobody wants to admit it so they switch over to multicores and try to distract you from the fact that it's the same clock as in your current computer. They're terrified that people are going to stop upgrading.
  • The ultimate limit on the number of cores you can put on a single chip is the available pin bandwidth. At some point, there simply isn't enough bandwidth available to supply instructions and data.
  • can it play Duke Nukem Atomic Edition?
  • The focus on multiple cores arises from Moore's Law, which dictates that the number of transistors on a chip doubles every two years.
    That's every 18 months, not 2 years. It was even correct in TFA, you dolt.
  • by doc modulo ( 568776 ) on Saturday December 18, 2004 @05:31PM (#11126303)
    I read that some functional programming languages can automatically multithread a program so that the task is split up over multiple processors. The programmer would just program as for a single CPU and change nothing or very little.

    Examples of functional programming languages are Lisp and OCaml.

    Oh, correction, from a previous /. thread: [slashdot.org]

    OTOH, it is theoretically possible to automatically multithread purely functional programs, especially if they're lazy like Haskell. So it could end up being a very important language on multi-processor and distributed systems.

    The only way I see multi-core processors or cluster-like processors (Cell) succeeding is if programmers switch to languages like that. Any other way would introduce too many bugs in programs. Computers should make life easier, not harder. Even for programmers.

    Eventually, multi-core/multi-processor is the only way forward, long before single processors have to heat up to supernova temperatures to increase speed.

    We're just at the beginning of computing. Looking back, programmers of the future will pity us poor folk who had to make do with only 1 CPU. However, we need the right tools to move forward. Anyone know if there's an automatically multithreading (functional) programming language in existence or being invented?
  • Well, one of the first questions to cross my mind was how to tie the multiple cores together. But if the VP of this endeavor is Steve Smith [pbs.org], maybe the answer is the "handyman's secret weapon..."

    -d
