IBM's Chief Architect Says Software is at Dead End
j2xs writes "In an InformationWeek article entitled 'Where's the Software to Catch Up to Multicore Computing?' the Chief Architect at IBM gives some fairly compelling reasons why your favorite software will soon be rendered deadly slow because of new hardware architectures. Software, she says, just doesn't understand how to do work in parallel to take advantage of 16, 64, 128 cores on new processors. Intel just stated in an SD Times article that 100% of its server processors will be multicore by end of 2007. We will never, ever return to single processor computers. Architect Catherine Crawford goes on to discuss some of the ways developers can harness the 'tiny supercomputers' we'll all have soon, and some of the applications we can apply this brute force to."
Clearing things up a bit (Score:5, Insightful)
owww... my head...
There are a couple of serious problems with this statement. The most important one is that the article doesn't say that existing software will get slower. And there's a reason for that: Existing software will continue to run on the individual processor cores, something it has done for a long time. Old software may not get any faster due to a change in focus toward parallelism vs. increased core speed, but it's not going to suddenly come to a screeching halt any more than my DOS programs from 15 years ago are.
Secondly, multicore systems are not a problem. Software (especially server software!) has been written around multi-processing capabilities for a long time now. Chucking more cores into a single chip won't change that situation. So my J2EE server will happily scale on IBM's latest multicore Xenon PowerPC 64 processor.
Finally, what the article is really talking about is the difficulties in programming for the Cell architecture. Cell is, in effect, an entire supercomputer architecture shrunk to a single microprocessor. It has one PowerPC core that can do some heavy lifting, but its design counts on the programmers to code in 90%+ SIMD instructions to get the absolute fastest performance. By that, I mean that you need to write software that does the same transformation simultaneously across reasonably large datasets. (A simplification, but close enough for purposes of discussion.) What this means is that the Cell processor is the ultimate Digital Signal Processor, achieving incredible throughput as long as the dataset is conducive to SIMD processing.
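As a toy illustration of that "same transformation across a dataset" style (a Python sketch of the program shape only; real SIMD work happens in hardware vector registers, and none of this is Cell code):

```python
def scale_and_offset(samples, gain, bias):
    """Apply the identical operation gain*x + bias to every element."""
    # The per-element work is uniform and branch-free, which is exactly
    # the property that lets a vector unit process many elements at once.
    return [gain * x + bias for x in samples]

print(scale_and_offset([0, 1, 2, 3], 2, 1))  # [1, 3, 5, 7]
```

The moment the per-element work diverges (different branches per element), the dataset stops being conducive to SIMD, which is the commenter's point.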
The "problem" the article refers to is that most programs are not targeted toward massive SIMD architectures. Which means that Cell is just a pretty piece of silicon to most customers. Articles like this are trying to change that by convincing customers that they'd be better served by targeting Cell rather than a more general purpose architecture.
With that out of the way, here's my opinion: The Cell Broadband Architecture is a specialized microprocessor that is orthogonal to the general market's needs. It has a lot of potential uses in more specialized applications (many of which are mentioned in the article), but I don't think that companies are ready to throw away their investment in Java,
Re:Clearing things up a bit (Score:5, Insightful)
In other words, a spokesperson from $COMPANY is trying to convince the market that they'll soon need to use $PRODUCT if they want the best results, a product which, conveniently, is sold by $COMPANY?
Imagine that.
Sorry, cynical today.
Re: (Score:2, Informative)
IBM is not the only company releasing multi-core processors. Single core processors will soon go out the same way the 386SX did when 32-bit computing became the norm.
Re: (Score:2)
Re: (Score:2)
You are correct. But I should point out that IBM is not the only company producing the Cell either. Sony and Toshiba also own rights to the Cell, but I doubt they will start putting them into servers.
Re: (Score:2)
Re: (Score:2)
"It's not about multi-core processors, it's about the Cell architecture, for parts of which IBM holds many patents and makes a lot of money on licensing."
You are correct. But I should point out that IBM is not the only company producing the Cell either. Sony and Toshiba also own rights to the Cell, but I doubt they will start putting them into servers.
They are, however, the only company working on the Roadrunner [ibm.com] project, which TFA was actually about and which TFA (or rather IBM's spokesperson in TFA) says
Re:Clearing things up a bit (Score:4, Interesting)
TFA is nothing more than a press release announcing the plan to develop a supercomputer in Los Alamos, New Mexico. Yeah, it'll be made by IBM and based on Cell (and Opteron). In an attempt to make it more interesting, the article seems to struggle to make another point... and the point is difficult to discern from its river of vague generalities, lame statistics and other banalities. Best I can fathom is that the writer (IBM's chief architect) simply hopes that new, multicore-centered development tools will somehow emerge as a result of the computer's development:
Fair enough. The Slashdot summary is a horrible spin on TFA, and the attached "-dept." tagline is just embarrassing. Too bad there wasn't more information content here, because multicore processing is indeed the future, and this could have been a much more interesting read.
Re: (Score:2)
[1] Of course, IMO, the market for the PS3 should be for about 5 or 6 units... but that's just because of how I feel about Sony.
Re: (Score:3, Insightful)
I'm not sure if anyone above read past the third paragraph, but I see no evidence of it.
Noteworthy in the article was the combination of conventional x86 technology and Cell technology along with some state-of-the-art memory management. (I'm not employed by or invested in any of the companies involved, just reporting the facts, ma'am).
For the average user there is NO downside to multi core technology, so any statement to the contrary (in the article summary in this case,
Re: (Score:2)
And because the "Cell architecture" has a ton of patents on it, which means that it is indeed a global value, as far as most IP law is concerned; ditto for the trademarked IBM.
You hit the nail right on the head (Score:5, Interesting)
Workstation computing will suffer some until software vendors catch up, but this is already happening (e.g. most CAD, Animation, Video Processing are starting to come out with multi-core optimized software). Sure, some apps will continue to be single-threaded, but eventually, who would buy them? Software vendors aren't dumb.
Games will probably speed up significantly as well. Imagine the possibilities of having a game engine where each AI character utilizes 100% of a single core. Game designers aren't going to sit around designing games that run on single core engines; they always push the boundaries and will continue to do so.
Re:You hit the nail right on the head (Score:5, Interesting)
The real problem is that procedural languages are fugly for working on this stuff. Even the "modern" commercial languages like Java/DotNet are still somewhat cumbersome in the world of threading, compared to other languages where the threading metaphors are deeper in the logic (or more mutable languages, like Lisp, where creating new core metaphors is trivial).
Re: (Score:3, Interesting)
I would imagine that if these new multi-multi core procs are released into the wild in large numbers, new programming languages will be developed that will enable things to be done more efficiently and easily... or perhaps a hybrid language: one half of the language is for writing processes for individual cores,
Re: (Score:2)
I hate to see C lumped in there. Most supercomputing and parallel code in the world is written in C or Fortran. Sure, there is the rogue C++, Python, or something, but for now, parallel code runs mostly on C/Fortran. They're fast, efficient, and well entrenched. One of the things that is available is addon
Re: (Score:2)
Re: (Score:3, Informative)
Regarding multicore CPUs, there are already plenty of parallelization packages linkable directly into C (e.g. the various MPI implementations [open-mpi.org]). All you have to do is structure your for loops to make use of it. Once you do that, you can run each iteration of the loop on a separate core via MPI or something similar,
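Running real MPI needs an MPI runtime, but the loop decomposition it implies can be sketched with Python's standard concurrent.futures pool (a hedged analogy, not MPI itself; the function names are made up for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def body(i):
    # Stand-in for one independent iteration of the original for loop.
    return i * i

def parallel_loop(n, workers=4):
    # Dispatch each iteration to a worker and gather the results in
    # order: the same decomposition an MPI program expresses with ranks
    # and send/receive. (For CPU-bound work in CPython you would reach
    # for processes, e.g. ProcessPoolExecutor or mpi4py; threads keep
    # this sketch short and portable.)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(body, range(n)))
```

The key requirement is the one the comment states: the iterations must be independent of each other, or the loop cannot be split this way.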
Re: (Score:3, Insightful)
Mapping 1:1 to the underlying OS is not the be-all and end-all of linguistic constructs. Consider Actors-model languages, or dataflow-model languages, or the native rendezvous concepts from Ada. I'm not saying that any of these are ideal approaches (I hate Ada, for example). I'm just saying that Algol-descended languages w
Re: (Score:2, Funny)
Sounds like you haven't tried Vista, Office 2007, and IE7.
Re: (Score:2)
I don't think workstation computing will suffer at all, for the reason you mentioned. :)
Multi-core processors will hopefully schedule each of those on a different core, which will give the user added performance in task-switching. I, for one, am sick and tired of waiting for my computer to catch up whenever I switch tasks. When I'm "in the zone" on some program I'm working on, the cost of task-switching goes up significantly when those
Re: (Score:2)
They already do, but it won't. That's because what having multiple cores buys you is a reduction in context switches. When you switch applications what's slow is waiting for the cached elements of the GUI to be uncached, the paged memory of the idle application to be swapped back in, and the graphics to be redrawn. Multi-buffering solves some of this but the biggest
Re: (Score:2)
The argument that software will get slower assumes that most consumer software will continue to have additional CPU requirements without being coded for multi-core applications. This doesn't make sense. The average consumer uses an Office product, e-mail, and a browser. None of these use anywhere close to 100% of the CPU for very long even on a Pentium 3, let alone on a 2GHz+ core in a multi-core processor.
Be that as it may, I think that the focus here was more on scientific computing and enterprise softw
Re: (Score:3, Informative)
The Cell will live on, but it will create new markets where its inexpensive supercomputing power will open new doors for data analysis and real-time processing.
I just wanted to quote these two sentences you said relevant to my comment. The problem with multicore computing power is not in the software developers but in the available tools. The pro
Re: (Score:3, Informative)
I think you're misunderstanding the subject here. The Cell Processor is designed for chewing through massive data sets at an unprecedented rate. It has almost nothing to do with multiprocessing, and everything to do with fast transforma
Re: (Score:2)
Re:Clearing things up a bit (Score:5, Insightful)
We've had theoretical foundations for parallel processing since Turing (see non-deterministic Turing Machine) and rigorous theoretical frameworks such as π-calculus and CSP for decades. We even have languages like some Haskell dialects and Erlang that are built using these as foundations.
If you choose to use languages designed for a PDP-11, that's up to you, but the rest of us are quite happy writing code for 64+ processors in languages designed for parallelism.
Re: (Score:3, Insightful)
Things like audio/photo processing could benefit a lot, though. I imagine all you would need is a worker-thread design pattern, and let the OS do the rest.
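A minimal sketch of that worker-thread pattern in Python (apply_filter is a hypothetical stand-in for whatever per-photo or per-audio-buffer transform the application actually does):

```python
import queue
import threading

def apply_filter(item):
    # Hypothetical stand-in for an expensive per-item transform.
    return item * 2

def run_worker_pool(items, n_workers=4):
    tasks = queue.Queue()
    results = []
    results_lock = threading.Lock()

    def worker():
        while True:
            item = tasks.get()
            if item is None:          # poison pill: no more work
                return
            out = apply_filter(item)
            with results_lock:
                results.append(out)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for item in items:
        tasks.put(item)
    for _ in threads:
        tasks.put(None)               # one pill per worker
    for t in threads:
        t.join()
    return sorted(results)            # completion order is nondeterministic
```

As the comment says, the OS does the rest: each worker thread is free to land on a different core.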
Re: (Score:3, Insightful)
With any language I'm aware of you just have to implement it.
But there's the rub ... technically you can implement anything you want. Heck, you can do it in assembler! But do people do it and do they do it right? How much extra time and energy does it cost you and how difficult is the task? Why did database application developers care whether the database inherently supported transactions when they could just implement it in the application? Why did anyone care about Java's garbage collector when it c
Re:Clearing things up a bit (Score:5, Informative)
This is an often-repeated misconception. Cell abandons the practice of having different fp, integer, and vector registers... all registers are 128-bit and any instruction can be issued on any of them, and those instructions are generated by a C++ compiler. So saying that programmers code in these SIMD instructions is like saying that "x86's design counts on programmers to shuffle values between the fp stack, integer and vector registers, and code in separate fp, integer, and vector instructions to get the absolute fastest performance".
The reality is that Cell was targeted more at solving the memory problem than just doing SIMD stream processing. Engineers looked around and decided a 32kb L1 cache was silly... not having a cache-snooping DMA engine (or prefetch engine) would be silly. Putting nine cores on a bus with 7 GB/s bandwidth would be silly. Not being able to overlap memory latency with execution is silly. To solve all these problems, you give up having a single coherent address space.
But there is even more power in Java, .NET investments now... It is completely within the realm of possibility to write a runtime that executes your Java thread on an SPU, or JITs the .NET to SPU code. It's a nice benefit that these are already handle-based rather than pointer-based languages, so the memory mapping is a task of the runtime and transparent to the code. And IBM is working hard on native C++ code generation that is agnostic of the address-space problem.
That's ridiculous. (Score:5, Funny)
But Developers do? (Score:4, Insightful)
But the developers do? When these processors become prevalent, people will design their software to utilise the parallel processing capability. What am I missing here?
Simon
Concurrency is hard. (Score:5, Informative)
It's what made the Amiga look less reliable than its competitors... if you only ran one native program at a time it was a lot more stable than MacOS or MS-DOS, because the OS provided a much richer set of services so applications didn't have to replicate them... but most people took advantage of the multitasking and when something crashed in the background the lack of memory protection meant the whole thing went down, and non-native software that wasn't written with multitasking in mind could produce the most entertaining crashes.
These days we all have good protected mode multitasking operating systems, but we don't have good easy ways to distribute an application across multiple cores. Until we do, most applications are going to be written to run single-threaded and depend on the OS to use the other cores to speed up the rest of the system, both at the application level and doing things like running graphics libraries on another core.
Until we have so many cores that the OS can't make effective use of them, I don't think there's even going to be much of an attempt by most developers to make use of them. And then we're going to go through a painful period like we went through before Microsoft discovered multitasking.
Concurrency is... (Score:5, Informative)
MOD PARENT UP (Score:2)
Pick the right tool for the job.
Re: (Score:2)
Concurrency comes with its own set of hurdles, traps, etc... etc... and the reason why people fall into them is that the majority of programmers have only limited experience with multi-threading. One could also make the argument that existing mainstream API's are a little "thin", expecting a lot from each implementation.
The shift in hardware towards multi-threading presages a shift in software towards the same.
Re: (Score:2)
What made the Amiga unstable was the lack of memory protection, not the multitasking. What
Re: (Score:3, Informative)
http://support.microsoft.com/kb/78326 [microsoft.com]
http://support.microsoft.com/kb/79749 [microsoft.com]
Re: (Score:3, Informative)
As the number of cores increases different algorithmic approaches will need to be pursued to get the maximum performance. Many algorithms which are great for serial processors will perform poorly on a parallel architecture.
I think many people don't realize just yet how big of
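One classic example of such a restructuring is reduction: a serial sum is a chain of n-1 dependent additions, while a pairwise tree exposes parallelism at every level. A plain-Python sketch showing only the dependency structure (no actual threads):

```python
def tree_sum(values):
    # Pairwise (tree) reduction: log2(n) levels instead of a serial
    # chain. Within each level, every pair could be added simultaneously
    # on parallel hardware, because the pairs are independent.
    level = list(values)
    if not level:
        return 0
    while len(level) > 1:
        if len(level) % 2:
            level.append(0)           # pad so elements pair up evenly
        level = [level[i] + level[i + 1] for i in range(0, len(level), 2)]
    return level[0]
```

Run serially this does the same work as sum(); the point is that its data dependencies allow parallel execution where the serial chain's do not.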
Re: (Score:2)
The sec
Re:But Developers do? (Score:4, Interesting)
Suddenly the old "everyone's moving to thin-clients/mainframes/dumb terminals/etc" story (as recently as today: http://it.slashdot.org/article.pl?sid=07/01/30/13
This transition to multi-core is what we need because, as far as I know, actually getting individual programs to run nicely on multiple cores is (with notable exceptions) something we're not really ready for yet in terms of development.
-stormin
Yeah, if you only run one program at a time.. (Score:5, Insightful)
Notice I took the high ground and didn't make the obligatory windows virus scan jokes...
Re:Yeah, if you only run one program at a time.. (Score:4, Informative)
That is true of multi-core general-purpose processors like the x86, but I don't think it works too well when talking about the Cell processor. The OS can't just assign a PowerPC-compiled app to an SPU and expect it to run. Apps have to be specifically coded to take advantage of the SPUs on the Cell.
Re:Yeah, if you only run one program at a time.. (Score:4, Interesting)
Couldn't the programs inherit the benefits of a multi-core system if the APIs they call are written to distribute the work to the cores? I know this probably isn't optimal but there must be some benefits from this.
I could take an old library (QuickDraw for example) and totally rewrite it to take advantage of a new architecture as long as it accepts the old calls and returns the expected results. This is probably an over-simplification though.
Re: (Score:2)
Yes! Reusable code is the answer to this dilemma. This works best when the APIs are part of the OS, or when they are Open Source. Both increase their use.
Re:Yeah, if you only run one program at a time.. (Score:5, Informative)
In a word, no.
The more complicated answer is "Yes, in rare cases".
The problem is that programs written in your normal languages (C, C++, Java, C#, basically anything you've ever heard of) are totally synchronous; you cannot proceed to the next statement until the previous one completes.
Thus, trying to parallelize something at the API is virtually worthless. I don't win anything if my "drawWindow" or "displayMPEGFrame" function flies off to another processor to do its work, if I still have to wait for it to complete before I can move on.
(This can be helpful if you have two types of processors, so in fact 3D graphics APIs can be looked at as working just this way. But we already have that.)
You might say, "But there are some operations that I can do that with, like loading a webpage!" We already can do that. It's called asynchronous IO; you fire your IO request, the hardware (with software assist) does its thing, and you get the results later. You might even fire off a lot of things and process them in the non-deterministic order they come back. UNIX has been doing that for about as long as it has been UNIX, via the select call.
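That select-style loop can be sketched with Python's standard selectors module and a socket pair (illustrative only; a real server would register many descriptors, not one):

```python
import selectors
import socket

def drain_when_ready(payloads):
    # Fire the "requests", then collect results whenever the OS reports
    # the descriptor readable: the select()-style pattern described above.
    sel = selectors.DefaultSelector()
    reader, writer = socket.socketpair()
    writer.sendall(b"".join(payloads))
    writer.close()                      # EOF marks the end of the stream
    sel.register(reader, selectors.EVENT_READ)
    received = b""
    while True:
        for key, _ in sel.select():
            chunk = key.fileobj.recv(4096)
            if not chunk:               # peer closed: all results are in
                sel.unregister(reader)
                reader.close()
                return received
            received += chunk
```

Between calls to sel.select() the program is free to do other work, which is exactly the overlap asynchronous IO buys you.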
The easy stuff has been done. To write programs that actually fill a multi-core CPU's capacity is going to require a paradigm change. Shared-memory threading isn't looking very good (too complex for any human to correctly implement). There are several candidate paradigms, but there is no clear winner at the moment, some of them may never work, and they all have one thing in common: They look nothing like current coding practices with threads (because, as I said, that's looking pretty useless if we can't get it working in the decades we've had to play with it).
The claims I've seen so far:
This isn't exhaustive, it's off the top of my head, and there are endless variations on each of those themes.
If I had to lay money down, I'd go with "a language that used threadlets like Erlang and rigidly enforced no sharing, in an OO environment" winning, which does not really exist yet. (Probably the closest you get today would be Stackless Python with a manual enforcement of sending only immutables across t
Re: (Score:3, Interesting)
Re: (Score:3, Informative)
*cough* Fortran *cough*
Fortran has had support for this type of thing since F90 came out 15+ years ago. The language standard defines what are (and are not) asynchronous operations. For example, the WHERE ... ELSEWHERE ... END WHERE construct is designed to be implemented asynchronously.
Re:Yeah, if you only run one program at a time.. (Score:4, Interesting)
Sure it'll add overhead, but the number of cores we're going to be working with at a time is going to continue to change, and the only way to not write immediately obsolete code is to have an intermediate control layer that is smart enough to translate.
Re: (Score:3, Interesting)
Re: (Score:3, Informative)
Scientific computing has gone from exhaustive and detailed simulation to exhaustive analyses of an entire parameter space with the advent of new life science branches such as Genomics, Proteomics and all the other broadly based omics-type endeavors. This means embarrassingly parallel computation at a massive scale. At my institution we keep roughly 3000 cores humming around the clock without any difficulty, largely using legacy code.
At home I hav
rendered deadly slow? (Score:4, Informative)
Re:rendered deadly slow? (Score:4, Insightful)
Yeah, but... (Score:3, Funny)
Concurrency in software (Score:5, Informative)
http://www.gotw.ca/publications/concurrency-ddj.h
Enjoy,
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
With decent state management and locks, it should be safe enough.
Re: (Score:3, Insightful)
Multi-cores vs. internal parallelism (Score:4, Informative)
Stephen Wolfram has a solution (Score:5, Interesting)
hardware (Score:2)
Can't the OS just bump apps to their own procs? (Score:4, Interesting)
Re:Can't the OS just bump apps to their own procs? (Score:5, Interesting)
A decade ago I had a dual PP200 that was one of the nicest machines I had ever run. I ran some unruly apps at the time, and having an extra idle processor to cut those processes off at the knees without rebooting was a nice benefit. Nothing was multi-threaded back then, but having two processors was still valuable.
Re: (Score:2)
that's my question. why not put some scheduler thing in the OS to load balance between cores?
with all of this talk about virtualization as the future of rock and roll, why can't you just stick your legacy apps into some container and have the container take a single core all to itself? presumably clockspeeds will
Re: (Score:2)
In this case, you'd have the command O/S run on the "lead" core, and the apps on the other cores, right? I mean, am I a rocket scientist here, or shouldn't this stuff all be obvious?
Re: (Score:2)
well, i thought that the beauty of grid computing was that it could be made of disparate architectures and that the translating was handled by the "client"... or whatever you call the program that lets a node talk to the grid.
isn't it the same deal, only for interprocess communication? it seems that the cluster/grid engineers already solved this one and you just have to miniaturize the whole thing to work between cores instead of between nodes.
one thing i learned in the very brief dotcom era experience
Re: (Score:2)
Did the submitter even read the article? (Score:2, Informative)
If you read the article, you will see that Mrs. Crawford does not even come close to saying that "Software is at Dead End". She says software needs to catch up with the hardware.
Computers have more and more processors (and different kinds of processors, like GPUs), and currently most software isn't designed for that kind of environment. IBM has developed some clever ways to program these types of systems in a "general purpose" way.
That's the worst summary of a headline that I've ever read.
It's called web 2.0, DUH!!!! (Score:2)
Clearly this person hasn't heard of our lord and master (and commander) Tim O'Reilly. Web 2.0, with its centralised software-as-a-service paradigm, will synergize the blah blah blah.
Ok, ok, buzz words aside (synergize!), with the move to web apps, desktop terminals, people browsing on gaming consoles, etc... I just don't see what the person is whinging about.
Ok, so hardware is moving towards parallelism (new buzzword? It's mine, and I've trademarked it, use it and I'll sue you!!). Software will need
programming for multi-core architectures (Score:5, Informative)
However what is more relevant to today's non-supercomputing needs is SMP scalability.
One of the challenges with SMP scalability is cache coherency; synchronizing the caches on the processors is a costly operation (this is necessary to ensure that each processor has the same view of certain memory at the same time), normally (always?) done with a cache invalidation.
So the more invalidations you do, the more often the processor has to fetch memory from main memory, and the less it's using its cache. Processing slows down dramatically.
I've tried to design the qore programming language http://qore.sourceforge.net/ [sourceforge.net] to be scalable on SMP systems. The new version (released today) has some interesting optimizations that have resulted in a large performance boost on SMP machines. The optimizations involve reducing the number of cache invalidations to the minimum, which is more than just reducing locking (although that is a part of it): even an atomic update (for example, on Intel, an assembly lock-and-increment) involves a cache invalidation and is therefore an expensive operation on SMP platforms. There is more work to be done, but in simple benchmarks of affected code paths, the same qore code ran between 2 and 3 times as fast with the optimizations.
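The structural idea (touch shared state rarely, accumulate privately) can be sketched in Python, with the loud caveat that CPython's GIL hides the hardware effect entirely: only the shape of the two approaches is shown, not their relative speed.

```python
import threading

def count_with_shared_counter(n_threads, iters):
    # Every increment writes one shared location. On real SMP hardware,
    # each write invalidates that cache line in every other core, which
    # is the cost described above.
    total = 0
    lock = threading.Lock()

    def work():
        nonlocal total
        for _ in range(iters):
            with lock:
                total += 1

    threads = [threading.Thread(target=work) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total

def count_with_private_counters(n_threads, iters):
    # Each thread accumulates privately and publishes once at the end,
    # so shared memory is written n_threads times instead of
    # n_threads * iters times.
    partials = [0] * n_threads

    def work(index):
        local = 0
        for _ in range(iters):
            local += 1
        partials[index] = local

    threads = [threading.Thread(target=work, args=(i,))
               for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(partials)
```

Both functions compute the same answer; in a language with real shared-memory threads, the second form is the one that scales.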
Anyway it would be interesting to know if other high-level programming languages have also taken the same approach (or will do so); as we go forward, it's clear that SMP scalability will be an important topic for the future...
Consistent Frameworks PLEASE!!! (Score:2)
One of the first problems which Developers will need to master is Cache-Consistency. We are not being helped very much here. At the moment this is all very level/library specific. All the tricks for, say, elegantly sharing a session in a Hibernate [hibernate.org] based J2EE webapp are di
Re: (Score:2)
Engineering software is very hard.
Just thought I would clarify for our Java and
It'll speed things up... (Score:2)
Bring it on!
Never, ever, not in a million years! (Score:3, Insightful)
Does anyone think that's anything other than a stupid thing to say?
I mean, maybe we never will, and maybe it's really unlikely that we will anytime soon. But it seems that anytime there's a real revolutionary (rather than evolutionary) jump in processors, we may well go back to a single "core." For example, if they invented a fully optical processor that was insanely faster than anything in silicon, but they were very expensive to produce per core, and the price scaled linearly with the number of cores... sounds like we'd have single core computers around again for a while. And what about quantum computers? I don't even know what a "core" would be for a quantum computer, but are they by nature going to have a design that works on multiple problems simultaneously without being able to use that capacity to work on an individual problem faster? Even if that is the case, does the author know that, or are they just ignoring any possibility of non-silicon architectures?
Even within silicon, is it out of the realm of conceivability that someone will develop a radical new architecture that can use more transistors to make a single core faster such that it's competitive with using the same transistor count for multiple cores?
Considering how computers have spent a good 40 years continuously changing more quickly than any other technology in history, I'd be a bit more reserved in making sweeping generalizations about all possible future developments that might occur in the next forever.
Still, computer scientists seem to be in rough agreement that current software development models mostly don't produce programs that are multi-threaded enough to take optimal advantage of the current trend toward increased cores. Maybe it just sounds too boring when worded that way.
Re: (Score:2)
It's not inconceivable, but what is inconceivable is that anyone would bother using it for a desktop system until the price came down - and also that people wouldn't be using multiple cores, because they would. Such a system would be most desirable in
Re: (Score:2)
That said, if I create an 'optical' chip, why wouldn't I create a multicore optical chip?
CPU not the bottleneck (Score:5, Insightful)
1. Disk is slow
2. Network is slow
3. Junkware hogging CPU
4. Some prima donna process decided against my will that it wants to run a scan, Java RTE update, registry cleaning, etc., using up disk head movements, RAM, and CPU.
CPU is usually not the bottleneck except when other crap makes it the bottleneck.
still old wrong news (Score:2)
Turing imagined massively parallel machines, but the success of von Neumann architectures (hardware/software) has led to the actual state of computers.
There are things that can only be achieved with massive parallel processing, but after 60+ years we are still trying to extend the current architecture to multiple (not massive) units.
I can understand the motivations and advantages but they are still fundamentally wrong.
It's the Language, stupid! (Score:3, Insightful)
Additionally, there are the related problems of understanding concurrency. In the 80s and 90s in particular, there were a lot of fundamental research results in reasoning about concurrent systems. Nancy Lynch's work at MIT (http://theory.csail.mit.edu/tds/lynch-pubs.html) comes to my mind. I'm always dismayed at how little both new CS grads and practicing programmers know about distributed systems, and how poor their ability is collectively to reason about concurrency. It seems like most of the time when I say "race condition" or "deadlock", eyes glaze over and I have to go back and explain 'concurrency 101' to folks who I think should know this.
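For readers at the glazed-eyes stage: here is the textbook deadlock and its standard fix, lock ordering, in a few lines of Python (a generic illustration, not tied to any system discussed here):

```python
import threading

balances = {"a": 100, "b": 100}
locks = {name: threading.Lock() for name in balances}

def transfer(src, dst, amount):
    # Classic deadlock recipe: thread 1 does a->b while thread 2 does
    # b->a; with naive ordering each grabs its first lock and waits
    # forever for the other's. The fix is a single global acquisition
    # order (alphabetical here), so both threads contend for the same
    # lock first and one always makes progress.
    first, second = sorted((src, dst))
    with locks[first]:
        with locks[second]:
            balances[src] -= amount
            balances[dst] += amount

t1 = threading.Thread(target=transfer, args=("a", "b", 10))
t2 = threading.Thread(target=transfer, args=("b", "a", 10))
t1.start()
t2.start()
t1.join()
t2.join()
print(balances)  # both threads complete and money is conserved
```

The deadlocking variant differs only in lock order, which is exactly why these bugs survive testing: they depend on scheduling, the "Heisenbug" behavior mentioned below.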
Wasn't it Jim Gray (I sure hope he shows up safe and sound!) who coined the terms "Heisenbugs" and "Bohrbugs" to help describe concurrency and faults? (Wikipedia attributes this to Bruce Lindsay, http://en.wikipedia.org/wiki/Heisenbug [wikipedia.org]) Not only is developing concurrent programs hard, debugging them is -really hard-, and our tools (starting with programming languages and emphasizing development tools/checkers) should be focused on substantially reducing or eliminating the need for debugging, or development effort will continue to grow.
Until we have more powerful tools -and training- (both academic and industrial) in using those tools, the Sapir-Whorf hypothesis (http://en.wikipedia.org/wiki/Sapir-Whorf_hypothe
dave
"Build it and they will come" attitude (Score:5, Insightful)
I've met some of the architects of the Cell processor, and they have a "build it and they will come" attitude. They've designed the computer; it's up to others to make it useful. This is probably not going to fly.
The Cell is a non-shared memory multiprocessor with quite limited memory per processor. There's only 256K per processor, which takes us back to before the 640K IBM PC. There are DMA channels to a bigger memory, but no caching. Architecturally, it's very retro; it's very similar to the NCube of the mid-1980s. It's not even superscalar. Cell processors are dumb RISC engines, like the old low-end MIPS machines. They clock fast, but not much gets done per clock.
Yes, you get lots of CPUs, but that may not help. On a server, what are you going to run in a Cell? Not your Java or Perl or Python server app; there's not enough memory. No way will an instance of Apache fit. You could put a copy of the TCP/IP stack in a Cell, but that's not where the CPU time goes in a web server. One IBM document suggests putting "XML acceleration" (i.e. XML parsing) in the server, but that's an answer looking for a problem. It might be useful for streaming video or audio; that's a pipelined process. If you need to compress or decompress or transcode or decrypt, the Cell might be useful. But for most web services, those jobs are done once, not during playout. Even MPEG4 compression might be too much for a Cell; you need at least two frames of storage, and it doesn't have enough memory for that.
Now if they had, say, 16MB per CPU, it might be different.
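The 256K constraint above roughly dictates the programming style: stream data through the local store in small tiles, DMA-in / compute / DMA-out. Here's a rough Python sketch of that tiling pattern (the names, the halved buffer budget, and the kernel are all illustrative assumptions, not the Cell SDK):

```python
# Sketch: processing a large array in tiles small enough to fit a
# 256K local store, mimicking the DMA-in / compute / DMA-out cycle
# an SPE-style program would use. Pure illustration, not the Cell SDK.

LOCAL_STORE = 256 * 1024                # bytes available to one core
ELEM_SIZE = 4                           # assume 32-bit values
TILE = LOCAL_STORE // (2 * ELEM_SIZE)   # half for input, half for output

def process_tile(tile):
    # The "SIMD-friendly" kernel: same transform on every element.
    return [x * 2 + 1 for x in tile]

def stream_process(data):
    out = []
    for start in range(0, len(data), TILE):
        tile = data[start:start + TILE]   # stands in for DMA-in
        out.extend(process_tile(tile))    # compute on the local copy
    return out                            # stands in for DMA-out

print(stream_process([1, 2, 3]))
```

The point of the sketch is that anything which doesn't decompose into independent tiles like this (pointer-chasing, big working sets) simply doesn't fit the machine.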
The track record of non-shared memory supercomputers is terrible. There's a long history of dead ends, from the ILLIAC IV to the BBN Butterfly to the NCube to the Connection Machine. They're easy to design and build, but just not that useful for general purpose computing. Some volumetric simulation problems, like weather prediction, structural analysis, and fluid dynamics can be crammed into those machines, so there are jobs for them, but the applications are limited.
Shared-memory microprocessors look much more promising as general purpose computers. Having eight or sixteen CPUs in a shared-memory multicore configuration is quite useful. That's how SGI servers worked, and they had a good track record. Scaling up today's multicore shared-memory CPUs is repeating that idea, but smaller and cheaper.
At some point, you have to go to non-shared memory, but that doesn't have to happen until you hit maybe 16 CPUs sharing a few gigabytes of memory, which is about when the cache interconnects start to choke and speed of light lag to the far side of the RAM starts to hurt. That might even be pushed harder; there's been talk of 80 CPUs in a shared memory configuration. That's optimistic. But we know 16 will work; SGI had that years ago.
Then you go to a cluster on a chip, which is also well understood.
That's the near future. Not the Cell.
When will you (Score:2)
How about the David Patterson perspective? (Score:4, Interesting)
Berkeley tech report (inc. Patterson as author) [berkeley.edu]
Brief summary (I heard the same talk when he spoke at PARC): computational problems are divisible into thirteen categories that range from matrix multiplication to finite state automata. Most existing research (academia and industry) into parallelism tends to focus on the roughly seven of those categories that are most easily parallelized - think supercomputer cluster. Most apps that you or I use fall into the graph traversal or finite-state categories (think compilers, apps with an event loop, etc.), in which there is essentially no research. Patterson even suspects that finite state machines are inherently serial and CANNOT be parallelized.
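The finite-state point is easy to see concretely: each transition needs the state produced by the previous one, so there is no obvious way to split the input across cores. A minimal Python illustration (the machine itself is made up for the example):

```python
# A toy FSM that tracks whether it has seen an even or odd number
# of 'a' characters. state(n) depends on state(n-1), so processing
# is a strictly sequential fold over the input -- which is exactly
# what resists naive parallelization across cores.
TRANSITIONS = {
    ("even", "a"): "odd",
    ("odd", "a"): "even",
}

def run(machine_input):
    state = "even"
    for ch in machine_input:
        state = TRANSITIONS.get((state, ch), state)
    return state

print(run("abcab"))  # two 'a's -> "even"
```

You can't hand the first half of the string to one core and the second half to another without first knowing what state the first half ends in.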
So ... the apps that we already use can't really get faster on parallel cores without major, fundamental advances in computer science that don't seem to be approaching. Which means we'll be using our current apps for a LONG time.
Additional note: IBM (and other chip manufacturers) have a vested interest in telling everyone that parallelism is the future. They can't make faster chips anymore, they can only compete on sheer number of cores.
Purely Functional Programming... (Score:3, Insightful)
Re: (Score:3, Interesting)
I commend you for broadening your programming horizons by learning Haskell. I also caution against drinking the Kool-Aid too much. A lot of the support for Haskell is in the academic community, and their priorities are very different from those of industrial software developers.
When you start to combine Haskell with real world problems, a lot of that natural elegance starts to look very artificial. Then you introduce concepts that mimic imperative programming with shared state (albeit based on more mathemat
Re: (Score:3, Informative)
Re: (Score:2)
Re: (Score:3, Insightful)
Of course OSes can do load balancing on several cores with several processes, that's trivial... What's not is real parallel code.
Re: (Score:3, Interesting)
If you run the additions in a serial fashion, you get a = 50 and c = 70. If you run them in parallel, you get a = 50
Re: (Score:3, Insightful)
I think what you meant was "attempts to parallelize the operations incorrectly may yield incorrect results."
The example that you had given above, where you manually converted an algorithm from sequential to potentially parallel processing, could easily be handled by a compiler. If your brain can handle the optimization, so can a compiler, given enough time. When writing in a higher level language (i.e. Not Assembly or Machine Code), like you used in y
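The hazard in the a/c example above can be shown deterministically, without real threads: it is purely a question of whether the second statement sees the first statement's result. A sketch (the initial values a=0, b=20, c=30, d=20 are assumed so the serial result matches the a = 50, c = 70 figures quoted above):

```python
# Two dependent statements:  a = b + c  then  c = a + d.
# Run in order, the second statement sees the new a; "run in
# parallel" (both statements reading the old values), the second
# statement sees the stale a and produces a different c.
def serial(a, b, c, d):
    a = b + c          # a = 50
    c = a + d          # reads the NEW a -> c = 70
    return a, c

def racy(a, b, c, d):
    new_a = b + c      # both "threads" read the old state...
    new_c = a + d      # ...so this one sees the stale a
    return new_a, new_c

print(serial(0, 20, 30, 20))
print(racy(0, 20, 30, 20))
```

Any compiler (or programmer) parallelizing this has to prove the second statement doesn't depend on the first before reordering them -- which is exactly the dependence analysis being discussed.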
Re: (Score:3, Informative)
Indeed. Thanks for the correction. :)
Of course. Which is why I showed exactly how the code could be handled in parallel. The key is that compilers already do this. Adding a new core won't help. It will just add overhead that will slow th
Re: (Score:2)
What we really need is for people to start using and developing the parallel languages (like Charm++) so that we can simply phase the older stuff out of existence.
-WS
Re: (Score:2)
Yes, the logical step of RTFA.
Sounds like someone should heed their own advice.
Either that, or go post on Fark where that comment belongs.
Re: (Score:2)
I personally am not worried at this point because it's very hard to find anyone that can do any type of complex multithreaded programming. Heck, it's hard to find someone that can just explain different types of semaphores. As such, the technology window might be ripe but the talent required to leverage it is rather weak and small. I don't expect th
Re: (Score:3, Informative)
Erlang [erlang.org] and Limbo [vitanuova.com] have concurrency primitives built in. Both used CSP as a launching point. Both give the programmer easy-to-use, lightweight processes and message passing. Processes share nothing.
However, neither have built-in support for multiple cores or multiple CPUs at the moment. It's just not a priority for the teams behind them. You can cheat such a setup with Erlang, however, as you can spawn processes on remote mach
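The share-nothing, message-passing style those languages build in can be approximated in most languages by discipline. A rough Python sketch (this is an analogy only -- Python threads stand in for Erlang/Limbo's far lighter processes, and nothing enforces the no-shared-state rule):

```python
import queue
import threading

# Each "process" owns its state and communicates only through
# message queues, in the spirit of Erlang/Limbo. The accumulator
# never exposes its running total; it only receives numbers and
# eventually sends the answer back.
def accumulator(inbox, outbox):
    total = 0
    while True:
        msg = inbox.get()
        if msg == "stop":
            outbox.put(total)
            return
        total += msg

inbox, outbox = queue.Queue(), queue.Queue()
threading.Thread(target=accumulator, args=(inbox, outbox)).start()
for n in (1, 2, 3):
    inbox.put(n)
inbox.put("stop")
print(outbox.get())  # 6
```

Because no state is shared, there is nothing to lock, and the same code works whether the two sides run on one core, many cores, or (with a network transport) different machines -- which is why this model scales past shared memory.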
Re: (Score:2)
A lot of good libraries are available which help with multi-threaded execution: for example, the Concurrency and Coordination Runtime [msdn.com] is an excellent framework.
I use OpenMT [microsoft.com] for a lot of my work; it scales well for up to 8 processors. Beyond that, shared memory has proved to be a bottleneck (I don't have NUMA hardware).
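The common pattern behind frameworks like these is a data-parallel map: apply a pure function to independent items and gather the results, with no synchronization beyond the final join. A generic Python sketch of that pattern (not the CCR or OpenMT API, just the shape of the idea):

```python
from concurrent.futures import ThreadPoolExecutor

# Data-parallel map: a pure function applied independently to each
# input, so the only coordination needed is collecting the results.
def work(x):
    return x * x

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(work, range(8)))

print(results)
```

The 8-processor ceiling mentioned above bites when `work` isn't pure and the workers start contending for shared memory instead of running independently.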
Re: (Score:2)
Re: (Score:2)
From this I get three questions in my head: -Can compilers be improved to automatically use multiple cores, and where are the limits of this?
Not in a general way. Programs that depend on the order of execution will break if run in parallel without proper synchronization. Synchronization is hard, and depends a lot on the program's logic.
-Multiple cores? Why not just treat it as multiple computers?
Multiple cores are very different from multiple computers. Some applications scale well across multiple computers (SETI@Home, for example), while some others really only scale well on multicore single-image systems.
-Besides this, is there a solution to this in the form of new programming languages?
There's a solution in pretty much all programming languages, multithreading. The har
Re: (Score:3, Insightful)
First issue... who says "most" programs CAN be recompiled? The first gen dual cores were basically duplicates of full processors, but as multicore becomes more popular, the cores will be more efficient and may start leaving out 100% compatibility in favor of sending threads to the better processor... that could save millions of gates per chip by tailoring some cores for FPU and some for SSE3 etc. This means in the future multicore processors won't automatically handle
Re: (Score:3, Insightful)
Overzeetop, PE
Re: (Score:2)
When I look at a microphotograph of a chip, what I see is an urban landscape viewed from above, its patterns elegant and satisfying in their own internal logic.
If this is not architecture, I don't know what is.