IBM's Chief Architect Says Software is at Dead End 334

j2xs writes "In an InformationWeek article entitled 'Where's the Software to Catch Up to Multicore Computing?' the Chief Architect at IBM gives some fairly compelling reasons why your favorite software will soon be rendered deadly slow because of new hardware architectures. Software, she says, just doesn't understand how to do work in parallel to take advantage of 16, 64, 128 cores on new processors. Intel just stated in an SD Times article that 100% of its server processors will be multicore by end of 2007. We will never, ever return to single processor computers. Architect Catherine Crawford goes on to discuss some of the ways developers can harness the 'tiny supercomputers' we'll all have soon, and some of the applications we can apply this brute force to."
  • by AKAImBatman ( 238306 ) * <[moc.liamg] [ta] [namtabmiaka]> on Tuesday January 30, 2007 @11:34AM (#17814996) Homepage Journal

    In an InformationWeek article entitled 'Where's the Software to Catch Up to Multicore Computing?' the Chief Architect at IBM gives some fairly compelling reasons why your favorite software will soon be rendered deadly slow because of new hardware architectures.
    *THUNK*

    owww... my head...

    There are a couple of serious problems with this statement. The most important one is that the article doesn't say that existing software will get slower, and there's a reason for that: existing software will continue to run on individual processor cores, as it has for a long time. Old software may not get any faster as the focus shifts from increased core speed to parallelism, but it's not going to suddenly come to a screeching halt any more than my DOS programs from 15 years ago have.

    Secondly, multicore systems are not a problem. Software (especially server software!) has been written around multi-processing capabilities for a long time now. Chucking more cores into a single chip won't change that situation. So my J2EE server will happily scale on IBM's latest multicore Xenon PowerPC 64 processor.

    Finally, what the article is really talking about is the difficulty of programming for the Cell architecture. Cell is, in effect, an entire supercomputer architecture shrunk to a single microprocessor. It has one PowerPC core that can do some heavy lifting, but its design counts on the programmers to code in 90%+ SIMD instructions to get the absolute fastest performance. By that, I mean that you need to write software that does the same transformation simultaneously across reasonably large datasets. (A simplification, but close enough for purposes of discussion.) What this means is that the Cell processor is the ultimate Digital Signal Processor, achieving incredible throughput as long as the dataset is conducive to SIMD processing.
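    As a toy illustration of that distinction (plain Python, not real SPU code; the function names are invented): a transformation that applies the same arithmetic to every element is SIMD-friendly, while per-element branching is not.

```python
# Toy sketch of SIMD-friendly vs SIMD-hostile code. Real Cell code would
# use SPU intrinsics or a vectorizing compiler; Python just shows the shape.
from array import array

def uniform_transform(samples, gain, offset):
    # SIMD-friendly: identical arithmetic on every element, no branching,
    # so hardware could process many elements per instruction.
    return array('f', (gain * s + offset for s in samples))

def branchy_transform(samples, gain, offset):
    # SIMD-hostile: control flow depends on each element's value.
    out = array('f', [])
    for s in samples:
        out.append(gain * s + offset if s >= 0 else s)
    return out

data = array('f', [0.0, 1.0, -2.0, 3.0])
print(list(uniform_transform(data, 2.0, 1.0)))  # [1.0, 3.0, -3.0, 7.0]
```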

    The "problem" the article refers to is that most programs are not targeted toward massive SIMD architectures, which means that Cell is just a pretty piece of silicon to most customers. Articles like this are trying to change that by convincing customers that they'd be better served by targeting Cell rather than a more general-purpose architecture.

    With that out of the way, here's my opinion: The Cell Broadband Architecture is a specialized microprocessor that is orthogonal to the general market's needs. It has a lot of potential uses in more specialized applications (many of which are mentioned in the article), but I don't think that companies are ready to throw away their investment in Java, .NET, and PHP server systems. (Especially since they just finished divorcing themselves from specific chip architectures!) Architectures like the SPARC T1 and IBM's multicore POWER/PowerPC chips are the more logical path, as they introduce parallelism that is highly compatible with existing software systems. The Cell will live on, but it will create new markets where its inexpensive supercomputing power will open new doors for data analysis and real-time processing.
    • by Red Flayer ( 890720 ) on Tuesday January 30, 2007 @11:44AM (#17815138) Journal

      The "problem" the article refers to is that most programs are not targeted toward massive SIMD architectures, which means that Cell is just a pretty piece of silicon to most customers. Articles like this are trying to change that by convincing customers that they'd be better served by targeting Cell rather than a more general-purpose architecture.

      In other words, a spokesperson from $COMPANY is trying to convince the market that they'll soon need to use $PRODUCT if they want the best results, and $PRODUCT is, conveniently, sold by $COMPANY?

      Imagine that.

      Sorry, cynical today.
      • Re: (Score:2, Informative)

        by ArcherB ( 796902 ) *
        In other words, a spokesperson from $COMPANY is trying to convince the market that they'll soon need to use $PRODUCT if they want the best results, and $PRODUCT is, conveniently, sold by $COMPANY?

        IBM is not the only company releasing multi-core processors. Single-core processors will soon go the way of the 386SX, which died out when 32-bit computing became the norm.
        • IBM is not the only company releasing multi-core processors.
          It's not about multi-core processors, it's about the Cell architecture, for parts of which IBM holds many patents and makes a lot of money on licensing.
          • by ArcherB ( 796902 ) *
            It's not about multi-core processors, it's about the Cell architecture, for parts of which IBM holds many patents and makes a lot of money on licensing.

            You are correct. But I should point out that IBM is not the only company producing the Cell either. Sony and Toshiba also own rights to the Cell, but I doubt they will start putting them into servers.
            • True, it was a collaborative effort from STI. However, IBM is the chief patent-holder, and stands to gain the most from promoting development for the Cell architecture.
            • by rifter ( 147452 )

              "It's not about multi-core processors, it's about the Cell architecture, for parts of which IBM holds many patents and makes a lot of money on licensing."

              You are correct. But I should point out that IBM is not the only company producing the Cell either. Sony and Toshiba also own rights to the Cell, but I doubt they will start putting them into servers.

              They are, however, the only company working on the Roadrunner [ibm.com] project, which TFA was actually about and which TFA (or rather IBM's spokesperson in TFA) says

              • by Lazerf4rt ( 969888 ) on Tuesday January 30, 2007 @01:32PM (#17816978)

                TFA is nothing more than a press release announcing the plan to develop a supercomputer in Los Alamos, New Mexico. Yeah, it'll be made by IBM and based on Cell (and Opteron). In an attempt to make it more interesting, the article seems to struggle to make another point... and the point is difficult to discern from its river of vague generalities, lame statistics and other banalities. Best I can fathom is that the writer (IBM's chief architect) simply hopes that new, multicore-centered development tools will somehow emerge as a result of the computer's development:

                We are inviting industry partners to define the components (APIs, tools, etc.) of the programming methodology so that the multicore systems are accessible to those partners as well.

                Fair enough. The Slashdot summary is a horrible spin on TFA, and the attached "-dept." tagline is just embarrassing. Too bad there wasn't more information content here, because multicore processing is indeed the future, and this could have been a much more interesting read.

      • Re: (Score:3, Insightful)

        by cmacb ( 547347 )
        So many misconceptions, so little time...

        I'm not sure if anyone above read past the third paragraph, but I see no evidence of it.

        Noteworthy in the article was the combination of conventional x86 technology and Cell technology along with some state-of-the-art memory management. (I'm not employed by or invested in any of the companies involved, just reporting the facts, ma'am.)

        For the average user there is NO downside to multi core technology, so any statement to the contrary (in the article summary in this case,
    • by Salvance ( 1014001 ) * on Tuesday January 30, 2007 @11:53AM (#17815262) Homepage Journal
      The argument that software will get slower assumes that most consumer software will continue to have additional CPU requirements without being coded for multi-core applications. This doesn't make sense. The average consumer uses an Office product, e-mail, and a browser. None of these use anywhere close to 100% of the CPU for very long even on a Pentium 3, let alone on a 2GHz+ core in a multi-core processor.

      Workstation computing will suffer some until software vendors catch up, but this is already happening (e.g. most CAD, Animation, Video Processing are starting to come out with multi-core optimized software). Sure, some apps will continue to be single-threaded, but eventually, who would buy them? Software vendors aren't dumb.

      Games will probably speed up significantly as well. Imagine the possibilities of a game engine where each AI character utilizes 100% of a single core. Game designers aren't going to sit around designing games that run on single-core engines; they always push the boundaries and will continue to do so.
      • by Pxtl ( 151020 ) on Tuesday January 30, 2007 @12:03PM (#17815420) Homepage
        Even worst-case-scenario, minimally-threaded workstation software can still allow for manual multitasking - if the render-loop of your 3D-modelling app is only using a small amount of the available processor, then at least the others remain available for continuing to work in the main app.

        The real problem is that procedural languages are fugly for working on this stuff. Even the "modern" commercial languages like Java/DotNet are still somewhat cumbersome in the world of threading, compared to other languages where the threading metaphors are deeper in the logic (or more mutable languages, like Lisp, where creating new core metaphors is trivial).
        • Re: (Score:3, Interesting)

          by Pojut ( 1027544 )
          Well, programming languages come and go... of course, some of the "classics" are still in limited use (COBOL, Pascal, C), but for the most part programming languages go the way of the dodo eventually.

          I would imagine that if these new multi-multi core procs are released into the wild in mass numbers, new programming languages will be developed that will enable things to be done more efficiently and easier....or perhaps a hybrid language: One half of the language is for writing processes for individual cores,
          • Well, programming languages come and go... of course, some of the "classics" are still in limited use (COBOL, Pascal, C), but for the most part programming languages go the way of the dodo eventually.

            I hate to see C lumped in there. Most supercomputing and parallel code in the world is written in C or Fortran. Sure, there is the rogue C++, Python, or something, but for now, parallel programming is mostly C/Fortran. They're fast, efficient, and well entrenched. One of the things that is available is addon
            • by Pojut ( 1027544 )
              I know, that's why I included it as a classic that is still in use :-) Maybe not as limited as the others, but still in use :-)
          • Re: (Score:3, Informative)

            by Pausanias ( 681077 )
            Er, C is in "limited use"? How about all of GNU software, the Linux kernel, and all of GNOME, which are written in C (not C++ or anything else), and even large parts of Apple's OS X Darwin kernel?

            Regarding multicore CPUs, there are already plenty of parallelization packages linkable directly into C (e.g. the various MPI implementations [open-mpi.org]). All you have to do is structure your for loops to make use of it. Once you do that, you can run each iteration of the loop on a separate core via MPI or something similar,
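            The shape of that loop restructuring can be sketched as follows (Python's multiprocessing is used here purely as a stand-in for MPI, which would be the real tool in C or Fortran):

```python
# Each loop iteration is independent, so iterations can be handed out to
# separate cores; Pool.map stands in for scattering work across MPI ranks.
from multiprocessing import Pool

def body(i):
    # One independent iteration of the original for loop.
    return i * i

if __name__ == "__main__":
    with Pool() as pool:
        results = pool.map(body, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```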
      • Re: (Score:2, Funny)

        The average consumer uses an Office product, e-mail, and a browser. None of these use anywhere close to 100% of the CPU for very long even on a Pentium 3, let alone on a 2GHz+ core in a multi-core processor.

        Sounds like you haven't tried Vista, Office 2007, and IE7.
      • I don't think workstation computing will suffer at all, for the reason you mentioned. :)

        The average consumer uses an Office product, e-mail, and a browser.

        Multi-core processors will hopefully schedule each of those on a different core, which will give the user added performance in task-switching. I, for one, am sick and tired of waiting for my computer to catch up whenever I switch tasks. When I'm "in the zone" on some program I'm working on, the cost of task-switching goes up significantly when those

        • Multi-core processors will hopefully schedule each of those on a different core, which will give the user added performance in task-switching.

          They already do, but it won't. That's because what having multiple cores buys you is a reduction in context switches. When you switch applications what's slow is waiting for the cached elements of the GUI to be uncached, the paged memory of the idle application to be swapped back in, and the graphics to be redrawn. Multi-buffering solves some of this but the biggest

      • by rifter ( 147452 )

        The argument that software will get slower assumes that most consumer software will continue to have additional CPU requirements without being coded for multi-core applications. This doesn't make sense. The average consumer uses an Office product, e-mail, and a browser. None of these use anywhere close to 100% of the CPU for very long even on a Pentium 3, let alone on a 2GHz+ core in a multi-core processor.

        Be that as it may, I think that the focus here was more on scientific computing and enterprise softw

    • Re: (Score:3, Informative)

      by xtracto ( 837672 )
      (many of which are mentioned in the article), but I don't think that companies are ready to throw away their investment in Java, .NET, and PHP server systems.

      The Cell will live on, but it will create new markets where its inexpensive supercomputing power will open new doors for data analysis and real-time processing.

      I just wanted to quote these two sentences you said relevant to my comment. The problem with multicore computing power is not in the software developers but in the available tools. The pro
      • Re: (Score:3, Informative)

        by AKAImBatman ( 238306 ) *

        The problem with multicore computing power is not in the software developers but in the available tools. The problem with having, say, 60 cores able to run in parallel is that our computation methods (Turing-based machine computation) are based on the basic "serial algorithm".

        I think you're misunderstanding the subject here. The Cell Processor is designed for chewing through massive data sets at an unprecedented rate. It has almost nothing to do with multiprocessing, and everything to do with fast transforma

      • by Rycross ( 836649 )
        Functional programming languages are highly suited to parallelization. I think in the future we'll see functional languages become more mainstream, and/or a lot of functional programming paradigms
      • by TheRaven64 ( 641858 ) on Tuesday January 30, 2007 @12:53PM (#17816326) Journal

        The problem with having, say, 60 cores able to run in parallel is that our computation methods (Turing-based machine computation) are based on the basic "serial algorithm"

        We've had theoretical foundations for parallel processing since Turing (see the non-deterministic Turing Machine) and rigorous theoretical frameworks such as π-calculus and CSP for decades. We even have languages, like some Haskell dialects and Erlang, that are built using these as foundations.

        If you choose to use languages designed for a PDP-11, that's up to you, but the rest of us are quite happy writing code for 64+ processors in languages designed for parallelism.

    • by adam31 ( 817930 ) <adam31@gmELIOTail.com minus poet> on Tuesday January 30, 2007 @12:46PM (#17816198)
      but its design counts on the programmers to code in 90%+ SIMD instructions to get the absolute fastest performance.


      This is an often-repeated misconception. Cell abandons the practice of having different fp, integer, and vector registers... all registers are 128-bit and any instruction can be issued on any of them, and those instructions are generated by a C++ compiler. So saying that programmers code in these SIMD instructions is like saying that "x86's design counts on programmers to shuffle values between the fp stack, integer and vector registers, and code in separate fp, integer, and vector instructions to get the absolute fastest performance".

      The reality is that Cell was targeted more at solving the memory problem than just doing SIMD stream processing. Engineers looked around and decided a 32kb L1 cache was silly... not having a cache-snooping DMA engine (or prefetch engine) would be silly. Putting nine cores on a bus with 7 GB/s bandwidth would be silly. Not being able to overlap memory latency with execution is silly. To solve all these problems, you give up having a single coherent address space.

      But there is even more power in Java, .NET investments now... It is completely within the realm of possibility to write a runtime that executes your Java thread on SPU, or JITs the .NET to SPU code. It's a nice benefit that these are already handle-based rather than pointer based languages, so the memory-mapping is a task of the runtime and transparent to the code. And IBM is working hard on native C++ code generation that is agnostic of the address space problem.

  • by The-Bus ( 138060 ) on Tuesday January 30, 2007 @11:36AM (#17815020)
    I see no need for why we would ever need anything more than 640 cores per processor in the future.
  • But Developers do? (Score:4, Insightful)

    by Ckwop ( 707653 ) * on Tuesday January 30, 2007 @11:37AM (#17815026) Homepage

    Software, she says, just doesn't understand how to do work in parallel to take advantage of 16, 64, 128 cores on new processors.

    But the developers do? When these processors become prevalent, people will design their software to utilise the parallel processing capability. What am I missing here?

    Simon

    • Concurrency is hard. (Score:5, Informative)

      by argent ( 18001 ) <peter@@@slashdot...2006...taronga...com> on Tuesday January 30, 2007 @11:52AM (#17815252) Homepage Journal
      Concurrency is a hard problem, and unexpected interactions between asynchronous events in concurrent environments have been a periodic bugbear for almost as long as computers have been interactive.

      It's what made the Amiga look less reliable than its competitors... if you only ran one native program at a time it was a lot more stable than MacOS or MS-DOS, because the OS provided a much richer set of services so applications didn't have to replicate them... but most people took advantage of the multitasking and when something crashed in the background the lack of memory protection meant the whole thing went down, and non-native software that wasn't written with multitasking in mind could produce the most entertaining crashes.

      These days we all have good protected mode multitasking operating systems, but we don't have good easy ways to distribute an application across multiple cores. Until we do, most applications are going to be written to run single-threaded and depend on the OS to use the other cores to speed up the rest of the system, both at the application level and doing things like running graphics libraries on another core.

      Until we have so many cores that the OS can't make effective use of them, I don't think there's even going to be much of an attempt by most developers to make use of them. And then we're going to go through a painful period like the one we went through before Microsoft discovered multitasking.
      • Concurrency is... (Score:5, Informative)

        by TodMinuit ( 1026042 ) <todminuit@gm a i l .com> on Tuesday January 30, 2007 @12:04PM (#17815438)
        Concurrency [wikipedia.org] is [erlang.org] easy [vitanuova.com].
        • Concurrency is easy, threading is hard. If all of your threads are in the same address space, or communicating synchronously then your debugging complexity scales roughly according to the factorial of your degree of concurrency. If you use an asynchronous model with no shared address space then it scales roughly linearly with your degree of parallelism.

          Pick the right tool for the job.
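          As a rough Python illustration of the shared-address-space half of that trade-off: every thread below mutates the same variable, so even this trivial program is correct only because of the lock.

```python
# Shared-state threading: the read-modify-write on `count` is not atomic,
# so concurrent increments can interleave and lose updates without the lock.
import threading

count = 0
lock = threading.Lock()

def worker(iterations):
    global count
    for _ in range(iterations):
        with lock:  # remove this and the final total may silently come up short
            count += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)  # 40000, deterministic only because of the lock
```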

      • by Trails ( 629752 )
        Concurrency is hard, but so is Object Oriented programming... to those who aren't used to it.

        Concurrency comes with its own set of hurdles, traps, etc., and the reason people fall into them is that the majority of programmers have only limited experience with multi-threading. One could also make the argument that existing mainstream APIs are a little "thin", expecting a lot from each implementation.

        The shift in hardware towards multi-threading presages a shift in software towards the same.
      • It's what made the Amiga look less reliable than its competitors... if you only ran one native program at a time it was a lot more stable than MacOS or MS-DOS, because the OS provided a much richer set of services so applications didn't have to replicate them... but most people took advantage of the multitasking and when something crashed in the background the lack of memory protection meant the whole thing went down

        What made the Amiga unstable was the lack of memory protection, not the multitasking. What

      • Re: (Score:3, Informative)

        by texag98 ( 1057646 )
        I agree... anyone who's developed multi-threaded code for an SMP has probably run into the problems of debugging asynchronous thread events. This can make debugging, which is already tedious, even more tedious and time-consuming.

        As the number of cores increases different algorithmic approaches will need to be pursued to get the maximum performance. Many algorithms which are great for serial processors will perform poorly on a parallel architecture.

        I think many people don't realize just yet how big of
    • by jd ( 1658 )
      The first mistake is in assuming that the program has to understand concurrency. There are parallelizing compilers which will take standard C code, spot the parallelizable segments, and build those to run in parallel threads. Sure, GCC isn't one of them - OpenMP is manually-described parallelism - but there is no obvious reason why it couldn't do this. It already has a profile round that is followed by a build-to-profile optimization round, so why not have something similar for segmenting the code?

      The sec

    • by theStorminMormon ( 883615 ) <(theStorminMormon) (at) (gmail.com)> on Tuesday January 30, 2007 @12:09PM (#17815500) Homepage Journal
      How does current virtualization software fare with multi-core architecture? I mean, if the hype is to be even somewhat believed, even SMBs will be able to afford off-the-shelf "supercomputers". Of course, relative to the real supercomputers these machines will be slow. But relative to actual requirements, they should be, well, supercomputers.

      Suddenly the old "everyone's moving to thin-clients/mainframes/dumb terminals/etc" story (as recently as today: http://it.slashdot.org/article.pl?sid=07/01/30/1340210 [slashdot.org] ) becomes interesting again. If virtualization software works, then we don't need to wait for a golden age of multi-threaded software development. SMBs (and large companies too) will be able to deploy dumb terminals linked to multi-core monsters, install virtualization software to get as many servers as they need, offload individual instances of programs to the various cores as is natural, and voilà: now you can actually realize all those TCO savings the thin-client crowd has been raving about all these years.

      This transition to multi-core is what we need because, as far as I know, actually getting individual programs to run nicely on multiple cores is (with notable exceptions) something we're not really ready for yet in terms of development.

      -stormin
  • by ruiner13 ( 527499 ) on Tuesday January 30, 2007 @11:39AM (#17815058) Homepage
    What the author fails to take into account is that multi-core allows each program to effectively use a separate core to do its work, regardless of how it is programmed. All it takes is for the OS to be smart enough to assign each program to a free core, if available. The programs don't have to be specifically written to be multi-core aware as long as the OS is smart enough to send processes to the idle cores. The programs that need more power than one core can deliver will usually have multi-core support built in, as many games are starting to do now that the technology is taking off.

    Notice I took the high ground and didn't make the obligatory windows virus scan jokes... :)
    • by ArcherB ( 796902 ) * on Tuesday January 30, 2007 @11:56AM (#17815312) Journal
      The programs don't have to be specifically written to be multi-core aware as long as the OS is smart enough to send processes to the idle cores.

      While that is true of multi-core general-purpose processors like the x86, I don't think it works too well when talking about the Cell processor. The OS can't just assign a PowerPC-compiled app to an SPU and expect it to run. Apps have to be specifically coded to take advantage of the SPUs on the Cell.

      • by MysteriousPreacher ( 702266 ) on Tuesday January 30, 2007 @12:16PM (#17815608) Journal
        I'm just dipping my toe in to the world of programming so bear with me if this is silly.

        Couldn't the programs inherit the benefits of a multi-core system if the APIs they call are written to distribute the work to the cores? I know this probably isn't optimal but there must be some benefits from this.

        I could take an old library (QuickDraw for example) and totally rewrite it to take advantage of a new architecture as long as it accepts the old calls and returns the expected results. This is probably an over-simplification though.
        • Couldn't the programs inherit the benefits of a multi-core system if the APIs they call are written to distribute the work to the cores? I know this probably isn't optimal but there must be some benefits from this.

          Yes! Reusable code is the answer to this dilemma. This works best when the APIs are part of the OS, or when they are Open Source. Both increase their use.

        • by Jerf ( 17166 ) on Tuesday January 30, 2007 @01:16PM (#17816716) Journal

          Couldn't the programs inherit the benefits of a multi-core system if the APIs they call are written to distribute the work to the cores? I know this probably isn't optimal but there must be some benefits from this.

          In a word, no.

          The more complicated answer is "Yes, in rare cases".

          The problem is that programs written in your normal languages (C, C++, Java, C#, basically anything you've ever heard of) are totally synchronous; you cannot proceed to the next statement until the previous one completes.

          Thus, trying to parallelize something at the API is virtually worthless. I don't win anything if my "drawWindow" or "displayMPEGFrame" function flies off to another processor to do its work, if I still have to wait for it to complete before I can move on.

          (This can be helpful if you have two types of processors, so in fact 3D graphics APIs can be looked at as working just this way. But we already have that.)
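          A minimal Python sketch of that point (draw_window is a made-up stand-in for any synchronous API call, and a thread pool stands in for "another processor"):

```python
# Hypothetical illustration: shipping a synchronous call off to another
# worker buys nothing if the caller immediately blocks waiting for it.
from concurrent.futures import ThreadPoolExecutor

def draw_window(frame):
    # Pretend this is an expensive rendering call.
    return f"drew frame {frame}"

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(draw_window, 1)  # runs on another thread...
    result = future.result()              # ...but we block right here anyway
print(result)  # drew frame 1
```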

          You might say, "But there are some operations that I can do that with, like loading a webpage!" We already can do that. It's called asynchronous IO; you fire your IO request, the hardware (with software assist) does its thing, and you get the results later. You might even fire off a lot of things and process them in the non-deterministic order they come back. UNIX has been doing that for about as long as it has been UNIX, via the select call.
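          That select-style asynchronous I/O fits in a few lines of Python (a socketpair stands in for a real network peer):

```python
# Fire off I/O, then let select() tell us when the result is ready,
# instead of blocking on each request in turn.
import select
import socket

a, b = socket.socketpair()
b.sendall(b"response")  # pretend a remote server replied

# select() returns only the descriptors that are ready to read.
readable, _, _ = select.select([a], [], [], 1.0)
if a in readable:
    print(a.recv(16))  # b'response'
a.close()
b.close()
```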

          The easy stuff has been done. To write programs that actually fill a multi-core CPU's capacity is going to require a paradigm change. Shared-memory threading isn't looking very good (too complex for any human to correctly implement). There are several candidate paradigms, but there is no clear winner at the moment, some of them may never work, and they all have one thing in common: They look nothing like current coding practices with threads (because, as I said, that's looking pretty useless if we can't get it working in the decades we've had to play with it).

          The claims I've seen so far:

          • Erlang-style concurrency: This is a ton of little threads that communicate solely through message passing, no shared state. On the plus side, it's got a working implementation that you can use today. On the down side (and this is my personal opinion), I'm not sure you really need the "functional" part of Erlang to use it (I think you just need threads that share nothing, and if you did that in a more conventional OO language it'd be fine), and Erlang's still quite short on libraries for anything outside of its core competency of network programming.
          • Pure functional programming: Pure functional programming has the idea of no mutable state, which allows you to do certain things out of order automatically without fear of the system behaving non-deterministically. A lot of people are still making bold claims about this one, but I tend to agree with the papers that show the amount of implicit parallelism in real-world programs is fairly minimal; you're going to need to tell the system where the parallelism is for the foreseeable future.
          • Stream programming: Probably ultimately a special case of Erlang-style processes, and only useful in certain domains (like sound processing).
          • And of course, I'd be remiss to not mention the "suck it up and use threads" school of thought, but my feeling is that if programmers in general haven't gotten it right after 20 years, the claim that programmers are especially stupid becomes less plausible, and "the technology is uselessly complex in practice" must be the right answer.

          This isn't exhaustive, it's off the top of my head, and there are endless variations on each of those themes.

          If I had to lay money down, I'd go with "a language that used threadlets like Erlang and rigidly enforced no sharing, in an OO environment" winning, which does not really exist yet. (Probably the closest you get today would be Stackless Python with a manual enforcement of sending only immutables across t
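          The share-nothing, message-passing model described above can be roughly approximated even in today's languages; here is a Python sketch using isolated processes and queues in place of Erlang processes and mailboxes:

```python
# Erlang-style (approximately): isolated workers that share no state and
# communicate only by sending messages over queues.
from multiprocessing import Process, Queue

def worker(inbox, outbox):
    # The worker owns its state; nothing is shared with other processes.
    total = 0
    for msg in iter(inbox.get, None):  # None is the stop message
        total += msg
        outbox.put(total)

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    for n in (1, 2, 3):
        inbox.put(n)
    print([outbox.get() for _ in range(3)])  # [1, 3, 6]
    inbox.put(None)
    p.join()
```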

          • Re: (Score:3, Interesting)

            But in what way are threads that don't share anything different from several processes that communicate (e.g. through sockets or pipes)?
          • Re: (Score:3, Informative)

            by jstott ( 212041 )

            The problem is that programs written in your normal languages (C, C++, Java, C#, basically anything you've ever heard of) are totally synchronous; you can not proceed on to the next statement until the previous one completes.

            *cough* Fortran *cough*

            Fortran has had support for this type of thing since F90 came out 15+ years ago. The language standard defines what are (and are not) asynchronous operations. For example, the WHERE...ELSEWHERE...END WHERE construct is designed to be implemented asynchronously.

    • I keep thinking about this as well. Really, sitting down and trying to write code that runs optimally on multiple processors is a huge headache, and, frankly, judging by the code I've seen in my life, most coders aren't up to it... It would be far better to put a VM or a specialty compiler between the code and the system, one that is capable of taking regular code and making it more multi-core friendly.

      Sure it'll add overhead, but the number of cores we're going to be working with at a time is going to continue to change, and the only way to not write immediately obsolete code is to have an intermediate control layer that is smart enough to translate.
    • Re: (Score:3, Interesting)

      And if thread management is good/fast enough then there's no reason things like GUI widgets can't run on their own thread/core. (I doubt the spell checker in Firefox runs in its own thread though)
    • Re: (Score:3, Informative)

      by cweber ( 34166 )
      Exactly, and as it stands right now this is already happening.

      Scientific computing has gone from exhaustive and detailed simulation to exhaustive analyses of an entire parameter space with the advent of new life science branches such as Genomics, Proteomics and all the other broadly based omics-type endeavors. This means embarrassingly parallel computation at a massive scale. At my institution we keep roughly 3000 cores humming around the clock without any difficulty, largely using legacy code.

      At home I hav
  • by Dr Kool, PhD ( 173800 ) on Tuesday January 30, 2007 @11:39AM (#17815064) Homepage Journal
    If you look at single-thread performance on Intel and AMD's dual/quad core chips, they meet or beat the best that single-core has to offer. I don't see why a multi-core system in the future will run single-thread apps any slower than right now. If anything I'd expect single-thread performance to increase incrementally as Intel and AMD are able to increase clock speeds.
  • by GillBates0 ( 664202 ) on Tuesday January 30, 2007 @11:43AM (#17815126) Homepage Journal
    Has Netcraft confirmed this yet?
  • by NullProg ( 70833 ) on Tuesday January 30, 2007 @11:44AM (#17815140) Homepage Journal
    Herb Sutter wrote about this topic two years ago. A great read for anyone who is interested.
    http://www.gotw.ca/publications/concurrency-ddj.htm [www.gotw.ca]

    Enjoy,
    • Currently I have VLC, Opera, a bunch of Notepads, and Visual Studio open. None of these would benefit all too greatly from concurrency.
      • Visual Studio would - even if you consider the speed improvement from compiling concurrently (except that it'll still bottleneck on the disk), I'm sure the many stupid threads that update intellisense or the object tree or whatever it is that makes my box hang for a couple of seconds when I want it to do stuff will benefit.
      • Re: (Score:3, Insightful)

        by AKAImBatman ( 238306 ) *
        VLC would benefit greatly from concurrent SIMD cores like the Cell Broadband Engine. Depending on the particular compression stream, the cores could be decoding multiple frames of video/audio in parallel at a much faster rate than a regular PC. Even if the particular compression stream didn't allow for multiprocessing, the SIMD design would still allow for the decompression stream to be completed in a fraction of the time it would take general purpose instructions to perform. (The exact fraction is dependen
  • by BritneySP2 ( 870776 ) on Tuesday January 30, 2007 @11:46AM (#17815156)
    IMHO, multi-cores are good for multitasking, which does not cover the whole problem of parallelism. Software (at least, in principle) _is_ ready: pure functional languages, for example, are perfectly suited for parallel processing; the problem is the lack of CPU architectures that support internal concurrency within a single core (as opposed to merely supporting multi-threading across multiple cores)...
  • by maynard ( 3337 ) on Tuesday January 30, 2007 @11:55AM (#17815292) Journal
    A New Kind of Science [wikipedia.org]. Converting a range of standard CS algorithms into Cellular Automata networks is the very solution our brains use; a combination of message passing and feedback loops. If we want our computers to scale in parallel, we might want to look at how biology has solved the problem. A lot of people laughed at Wolfram when he initially published that book. I think he might yet have the last laugh.
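The flavor of computation being described here is easy to sketch: in a 1D cellular automaton, every cell's next value depends only on its local neighborhood, so all cells of one step can in principle be updated in parallel. A minimal rule-90 sketch (wrap-around boundary chosen for simplicity):

```python
def step_rule90(cells):
    # One synchronous update: each cell becomes the XOR of its two
    # neighbors (rule 90). Every cell is computed independently from
    # the *previous* generation, so this loop is trivially parallelizable.
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

row = [0, 0, 0, 1, 0, 0, 0]
row = step_rule90(row)
```

Iterating this from a single 1 produces the well-known Sierpinski-triangle pattern.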
  • I rather feel hardware is at an end rather than software. It seems we've got to the point where stacking things on top of each other is the only way to keep improving things.
  • Forgive my ignorance, but can't the OS just make each new app run on its own core? That would probably give us some overall apparent-speed-of-computer increases, without having to completely modify all existing stuff.
    • by Overzeetop ( 214511 ) on Tuesday January 30, 2007 @12:12PM (#17815542) Journal
      You are correct. And given the multitude of things that modern OSes need to do to "help us", we need these cores. I wish my laptop could be upgraded to a multi-core system, as there are too many things that will bog down the system that have to run in the background. Having a processor (or two) for them would significantly increase the responsiveness of my system.

      A decade ago I had a dual PP200 that was one of the nicest machines I had ever run. I ran some unruly apps at the time, and having an extra idle processor to cut those processes off at the knee without rebooting was a nice benefit. Nothing was multi-threaded back then, but having two processors was still valuable.
    • can't the OS just make each new app run on it's own core?

      that's my question. why not put some scheduler thing in the OS to load balance between cores?

      That would probably give us some overall apparent-speed-of-computer increases, without having to completely modify all existing stuff.

      with all of this talk about virtualization as the future of rock and roll, why can't you just stick your legacy apps into some container and have the container take a single core all to itself? presumably clockspeeds will
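In fact, mainstream OS schedulers already load-balance processes across cores, and per-process core pinning (the "give the container a core to itself" idea) is exposed directly. A small Python sketch of querying this (the sched_* affinity calls are Linux-only, hence the guard):

```python
import os

# Number of logical cores the machine reports.
ncores = os.cpu_count()

# On Linux, the set of cores the scheduler may run this process on.
# Restricting this set (via os.sched_setaffinity) is how you would give
# a legacy, single-threaded app a core all to itself.
if hasattr(os, "sched_getaffinity"):
    allowed = os.sched_getaffinity(0)
else:  # platforms without sched_* (e.g. macOS): assume all cores
    allowed = set(range(ncores))
```

Nothing in the legacy app has to change; the restriction is imposed from outside by the OS.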

      • "with one core that rules the others, lord of the rings style."

        In this case, you'd have the command O/S run on the "lead" core, and the apps on the other cores, right? I mean, am I a rocket scientist here, or shouldn't this stuff all be obvious?
        • well, i thought that the beauty of grid computing was that it could be made of disperate architectures and that the translating was handled by the "client"... or whatever you call the program that lets a node talk to the grid.

          isn't it the same deal, only for interprocess communication? it seems that the cluster/grid engineers already solved this one and you just have to miniaturize the whole thing to work between cores instead of between nodes.

          one thing i learned in the very brief dotcom era experience

    • by Sique ( 173459 )
      That's not the problem. But if you have a CPU intensive, but non-threaded program in your 128-processor-computer, then one processor runs at 100% load for your program, another one maybe at 28% to keep the I/O running, and 126 processors are idle. Thus your computer runs at 128%/128 = 1% of its capacity, even though the problem you are trying to solve on it is solely CPU limited.
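Spelling out that arithmetic:

```python
# One core pegged at 100%, another at 28% handling I/O, 126 idle.
busy = 100 + 28               # total load, in percent-of-one-core units
capacity = 128 * 100          # 128 cores at 100% each
utilization = busy / capacity # fraction of the machine actually in use
```
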

  • If you read the article, you will see that Mrs. Crawford does not even come close to saying that "Software is at Dead End". She says software needs to catch up with the hardware.

    Computers have more and more processors (and different kinds of processors, like GPUs), and currently most software isn't designed for that kind of environment. IBM has developed some clever ways to program these types of systems in a "general purpose" way.

    That's the worst summary of a headline that I've ever read.
  • Clearly this person hasn't heard of our lord and master (and commander) Tim O'Reilly. Web 2.0, with its centralised software-as-a-service paradigm will synergize the blah blah blah.

    Ok, ok, buzz words aside (synergize!), with the move to web apps, desktop terminals, people browsing on gaming consoles, etc... I just don't see what the person is whinging about.

    Ok, so hardware is moving towards parallelism (new buzzword? It's mine, and I've trademarked it, use it and I'll sue you!!). Software will need

  • by barnacle ( 522370 ) on Tuesday January 30, 2007 @12:17PM (#17815624) Homepage
    was an interesting article, particularly the part about the hybrid "roadrunner" architecture.

    However what is more relevant to today's non-supercomputing needs is SMP scalability.

    One of the challenges with SMP scalability is cache coherency; synchronizing the caches on the processors is a costly operation (this is necessary to ensure that each processor has the same view of certain memory at the same time), normally (always?) done with a cache invalidation.

    So the more invalidations you do, the more often the processor has to fetch memory from main memory, and the less it's using its cache. Processing slows down dramatically.

    I've tried to design the qore programming language http://qore.sourceforge.net/ [sourceforge.net] to be scalable on SMP systems. The new version (released today) has some interesting optimizations that have resulted in a large performance boost on SMP machines. The optimizations involve reducing the number of cache invalidations to a minimum; this is more than just reducing locking, although that is a part of it too. Even an atomic update (for example, on Intel, an assembly lock-and-increment) involves a cache invalidation and is therefore an expensive operation on SMP platforms. There is more work to be done, but in simple benchmarks of affected code paths, the optimized version was between 2 and 3 times as fast on the same qore code.
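A language-agnostic version of that optimization is simply: keep hot-loop state thread-private and touch shared memory once. A Python sketch of the pattern (Python's GIL hides the cache effects, but the locking structure is the point - one lock acquisition per thread instead of one per increment):

```python
import threading

N_THREADS, N_INCREMENTS = 4, 10_000
total = 0
total_lock = threading.Lock()

def count():
    global total
    # Accumulate in a thread-local variable: no locks, and no shared
    # cache lines touched in the hot loop.
    local = 0
    for _ in range(N_INCREMENTS):
        local += 1
    # Publish once at the end: a single lock acquisition (and a single
    # shared-memory write) per thread.
    with total_lock:
        total += local

threads = [threading.Thread(target=count) for _ in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The result is identical to locking around every increment, but the contended operation happens N_THREADS times instead of N_THREADS * N_INCREMENTS times.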

    Anyway it would be interesting to know if other high-level programming languages have also taken the same approach (or will do so); as we go forward, it's clear that SMP scalability will be an important topic for the future...
  • Writing software is hard. Very hard. So hard that it can take many years to appreciate all the interactions between all components. These components are both concrete and abstract. Multi core is yet another layer on top of everything else.

    One of the first problems which Developers will need to master is Cache-Consistency. We are not being helped very much here. At the moment this is all very level/library specific. All the tricks for, say, elegantly sharing a session in a Hibernate [hibernate.org] based J2EE webapp are di

    • by geekoid ( 135745 )
      Writing software is insanely easy.
      Engineering software is very hard.

      Just thought I would clarify for our Java and .net folks.

  • I think multiple cores are great. Now each of the spyware apps on my Windows box will have their own dedicated processor, and I can actually get something done on the ones that are freed up.

    Bring it on!
  • by Phat_Tony ( 661117 ) on Tuesday January 30, 2007 @12:28PM (#17815834)
    "We will never, ever return to single processor computers"

    Does anyone think that's anything other than a stupid thing to say?

    I mean, maybe we never will, and maybe it's really unlikely that we will anytime soon. But it seems that anytime there's a real revolutionary (rather than evolutionary) jump in processors, we may well go back to a single "core." For example, if they invented a fully optical processor that was insanely faster than anything in silicon, but they were very expensive to produce per core, and the price scaled linearly with the number of cores... sounds like we'd have single core computers around again for a while. And what about quantum computers? I don't even know what a "core" would be for a quantum computer, but are they by nature going to have a design that works on multiple problems simultaneously without being able to use that capacity to work on an individual problem faster? Even if that is the case, does the author know that, or are they just ignoring any possibility of non-silicon architectures?

    Even within silicon, is it out of the realm of conceivability that someone will develop a radical new architecture that can use more transistors to make a single core faster such that it's competitive with using the same transistor count for multiple cores?

    Considering how computers have spent a good 40 years continuously changing more quickly than any other technology in history, I'd be a bit more reserved in making sweeping generalizations about all possible future developments that might occur in the next forever.

    Still, computer scientists seem to be in rough agreement that current software development models mostly don't produce programs that are multi-threaded enough to take optimal advantage of the current trend toward increased cores. Maybe it just sounds too boring when worded that way.

    • Even within silicon, is it out of the realm of conceivability that someone will develop a radical new architecture that can use more transistors to make a single core faster such that it's competitive with using the same transistor count for multiple cores?

      It's not inconceivable, but what is inconceivable is that anyone would bother using it for a desktop system until the price came down - and also that people wouldn't be using multiple cores, because they would. Such a system would be most desirable in

    • by geekoid ( 135745 )
      1. They are talking in terms of today's technology. Silicon, fabs, etc. So to say "what if a magic computer happens" is a little outside the context of this discussion.

      That said, if I create an 'optical' chip, why wouldn't I create a multicore optical chip?
  • by Tablizer ( 95088 ) on Tuesday January 30, 2007 @12:28PM (#17815840) Journal
    Most apps get slow for these reasons:

    1. Disk is slow
    2. Network is slow
    3. Junkware hogging CPU
    4. Some prima donna process decided against my will that it wants to run a scan, Java RTE update, registry cleaning, etc., using up disk head movements, RAM, and CPU.

    CPU is usually not the bottleneck except when other crap makes it the bottleneck.
  • Turing imagined massively parallel machines, but the success of von Neumann architectures (hardware/software) has led to the current state of computers.
    There are things that can only be achieved with massive parallel processing, but after 60+ years we are still trying to extend the current architecture to multiple (not massive) units.
    I can understand the motivations and advantages but they are still fundamentally wrong.

  • by david.emery ( 127135 ) on Tuesday January 30, 2007 @12:45PM (#17816170)
    I believe a big part of our problem is our piss-poor set of programming languages and their support for concurrency. C/C++ threads packages and Java's low level synchronization primitives make developing parallel/concurrent programs much more difficult than it should be. (Ada95/Ada05 gets it better, at least by raising the level of abstraction and supporting one approach to unifying concurrency synchronization, concurrency avoidance, and object-oriented programming.)

    Additionally, there's the related problems of understanding concurrency. In the 80's and 90s in particular, there were a lot of fundamental research results in reasoning about concurrent systems. Nancy Lynch's work at MIT (http://theory.csail.mit.edu/tds/lynch-pubs.html) comes to my mind. I'm always dismayed at how little both new CS grads and practicing programmers know about distributed systems, and how poor their ability is collectively to reason about concurrency. It seems like most of the time when I say "race condition" or "deadlock", eyes glaze over and I have to go back and explain 'concurrency 101' to folks who I think should know this.

    Wasn't it Jim Gray (I sure hope he shows up safe and sound!) who coined the terms "Heisenbugs" and "Bohrbugs" to help describe concurrency and faults? (Wikipedia attributes this to Bruce Lindsay, http://en.wikipedia.org/wiki/Heisenbug [wikipedia.org]) Not only is developing concurrent programs hard, debugging them is -really hard-, and our tools (starting with programming languages and emphasizing development tools/checkers), should be focused on substantially reducing or eliminating the need for debugging, or development effort will continue to grow.

    Until we have more powerful tools -and training- (both academic and industrial) in using those tools, the Sapir-Whorf hypothesis (http://en.wikipedia.org/wiki/Sapir-Whorf_hypothesis [wikipedia.org]) will apply: The lack of a language (programming language as well as 'spoken language') to talk about concurrency will make it nearly impossible for most programmers to develop concurrent programs. This applies to both MIMD and SIMD kinds of parallelism.

              dave
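Since "eyes glaze over" at deadlock: the classic case is two threads taking the same two locks in opposite orders, each ending up holding one lock while waiting forever on the other. The classic fix is just as old - impose a single global lock order. A toy Python sketch of the safe version:

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
log = []

def task(name):
    # Both threads acquire the locks in the SAME order (a, then b).
    # If one thread instead took b first, the pair could deadlock:
    # thread 1 holds a and waits for b, thread 2 holds b and waits for a.
    with lock_a:
        with lock_b:
            log.append(name)

t1 = threading.Thread(target=task, args=("t1",))
t2 = threading.Thread(target=task, args=("t2",))
t1.start(); t2.start()
t1.join(); t2.join()
```

With consistent ordering, both threads always complete; only the order of the two log entries is scheduler-dependent.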
  • by Animats ( 122034 ) on Tuesday January 30, 2007 @12:47PM (#17816218) Homepage

    I've met some of the architects of the Cell processor, and they have a "build it and they will come" attitude. They've designed the computer; it's up to others to make it useful. This is probably not going to fly.

    The Cell is a non-shared memory multiprocessor with quite limited memory per processor. There's only 256K per processor, which takes us back to before the 640K IBM PC. There are DMA channels to a bigger memory, but no caching. Architecturally, it's very retro; it's very similar to the NCube of the mid-1980s. It's not even superscalar. Cell processors are dumb RISC engines, like the old low-end MIPS machines. They clock fast, but not much gets done per clock.

    Yes, you get lots of CPUs, but that may not help. On a server, what are you going to run in a Cell? Not your Java or Perl or Python server app; there's not enough memory. No way will an instance of Apache fit. You could put a copy of the TCP/IP stack in a Cell, but that's not where the CPU time goes in a web server. One IBM document suggests putting "XML acceleration" (i.e. XML parsing) in the server, but that's an answer looking for a problem. It might be useful for streaming video or audio; that's a pipelined process. If you need to compress or decompress or transcode or decrypt, the Cell might be useful. But for most web services, those jobs are done once, not during playout. Even MPEG4 compression might be too much for a Cell; you need at least two frames of storage, and it doesn't have enough memory for that.

    Now if they had, say, 16MB per CPU, it might be different.

    The track record of non-shared memory supercomputers is terrible. There's a long history of dead ends, from the ILLIAC IV to the BBN Butterfly to the NCube to the Connection Machine. They're easy to design and build, but just not that useful for general purpose computing. Some volumetric simulation problems, like weather prediction, structural analysis, and fluid dynamics can be crammed into those machines, so there are jobs for them, but the applications are limited.

    Shared-memory microprocessors look much more promising as general purpose computers. Having eight or sixteen CPUs in a shared-memory multicore configuration is quite useful. That's how SGI servers worked, and they had a good track record. Scaling up today's multicore shared-memory CPUs is repeating that idea, but smaller and cheaper.

    At some point, you have to go to non-shared memory, but that doesn't have to happen until you hit maybe 16 CPUs sharing a few gigabytes of memory, which is about when the cache interconnects start to choke and speed of light lag to the far side of the RAM starts to hurt. That might even be pushed harder; there's been talk of 80 CPUs in a shared memory configuration. That's optimistic. But we know 16 will work; SGI had that years ago.

    Then you go to a cluster on a chip, which is also well understood.

    That's the near future. Not the Cell.

  • build good compilers that support these options?

  • by kscguru ( 551278 ) on Tuesday January 30, 2007 @01:07PM (#17816588)
    Instead of an IBM executive, how about David Patterson. Hint: he wrote The Book on computer architecture.

    Berkeley tech report (inc. Patterson as author) [berkeley.edu]

    Brief summary (I heard the same talk when he spoke at PARC), computational problems are divisible into one of thirteen categories that range from matrix multiplication to finite state automata. Most existing research (academia and industry) into parallelism tends to focus on about seven of those categories that are most easily parallelized - think supercomputer cluster. Most apps that you or I use fall into the graph traversal or finite-state categories (think compilers, apps with an event loop, etc.), in which there is essentially no research. Patterson even suspects that finite state machines are inherently serial and CANNOT be parallelized.

    So ... the apps that we already use can't really get faster on parallel cores without major, fundamental advances in computer science that don't seem to be approaching. Which means we'll be using our current apps for a LONG time.

    Additional note: IBM (and other chip manufacturers) have a vested interest in telling everyone that parallelism is the future. They can't make faster chips anymore, they can only compete on sheer number of cores.
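Why finite-state machines resist parallelization, in miniature: each transition consumes the state produced by the previous one, so the dependency chain is exactly as long as the input. A tiny sketch (the states and transition table are made up - a two-state parity machine):

```python
# next_state[state][symbol] -> state: a toy machine tracking
# whether it has seen an even or odd number of '1' symbols.
next_state = {
    "even": {"1": "odd",  "0": "even"},
    "odd":  {"1": "even", "0": "odd"},
}

def run(fsm, start, symbols):
    state = start
    for s in symbols:          # inherently serial: step i needs the
        state = fsm[state][s]  # state computed by step i-1
    return state

final = run(next_state, "even", "1101")
```

You can't compute step 1000 without first computing steps 1 through 999, which is exactly the property that frustrates naive parallelization.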

  • by Tracy Reed ( 3563 ) <{treed} {at} {ultraviolet.org}> on Tuesday January 30, 2007 @03:39PM (#17818768) Homepage
    ...is the only way we are going to take advantage of multi-core cpu's and continue to improve our software. Only through purely functional code can you make guarantees about what can be executed simultaneously and let the machine sort it all out. I'm learning Haskell for this very reason.
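The purity argument in a nutshell: if f has no side effects, then f(x) for different x can run in any order, or all at once, and the result is identical. A Python sketch (Python isn't pure, but writing f as a pure function makes the same guarantee hold here):

```python
from concurrent.futures import ThreadPoolExecutor

def f(x):
    # Pure: the result depends only on x; no shared state is touched.
    return x * x + 1

data = list(range(10))

# Serial and concurrent evaluation of a pure function agree exactly
# (Executor.map also preserves input order in its results).
serial = [f(x) for x in data]
with ThreadPoolExecutor(max_workers=4) as ex:
    parallel = list(ex.map(f, data))
```

In a pure language like Haskell the compiler can verify this property instead of the programmer promising it, which is the poster's point.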

Don't get suckered in by the comments -- they can be terribly misleading. Debug only code. -- Dave Storer
