Supercomputing IT Hardware

World's Most Powerful x86 Supercomputer Boots Up in Germany 151

Nerval's Lobster writes "Europe's most powerful supercomputer — and the fourth most powerful in the world — has been officially inaugurated. The SuperMUC, ranked fourth in the June TOP500 supercomputing listing, contains 147,456 cores using Intel Xeon 2.7-GHz, 8-core E5-2680 chips. IBM, which built the supercomputer, stated in a recent press release that the supercomputer actually includes more than 155,000 processor cores. It is located at the Leibniz-Rechenzentrum (Leibniz Supercomputing Centre) in Garching, Germany, near Munich. According to the TOP500 list, the SuperMUC is the world's most powerful x86-based supercomputer. The Department of Energy's 'Sequoia' supercomputer at the Lawrence Livermore National Laboratory in Livermore, Calif., the world's [overall] most powerful, relies on 16-core, 1.6-GHz POWER BQC chips."
This discussion has been archived. No new comments can be posted.

  • Just in time... (Score:0, Informative)

    by Anonymous Coward on Thursday July 26, 2012 @03:32PM (#40782233)

    Windows 8 is a hog.

  • Re:power to x86 (Score:5, Informative)

    by rgbrenner ( 317308 ) on Thursday July 26, 2012 @03:45PM (#40782517)

    Wow... you're right. The i860 had one whole core and ran at up to 50 MHz.

    Imagine if they built this supercomputer out of those. Instead of 155,000, it would only need 8,370,000.

    Now THAT's a super computer.
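    For the curious, the scaling joke checks out; here's a quick back-of-the-envelope in Python (clock-for-clock only, ignoring IPC, vector units, and everything else that makes the comparison absurd):

    ```python
    # How many 50 MHz i860s would match 155,000 cores at 2.7 GHz,
    # counting raw clock cycles only?
    modern_cores = 155_000
    modern_hz = 2.7e9   # 2.7 GHz Xeon E5-2680
    i860_hz = 50e6      # 50 MHz i860

    per_chip_ratio = modern_hz / i860_hz        # 54x slower per chip
    i860_needed = int(modern_cores * per_chip_ratio)
    print(i860_needed)  # 8370000
    ```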

    On a serious note, wikipedia [wikipedia.org] says:

    On paper, performance was impressive for a single-chip solution; however, real-world performance was anything but. One problem, perhaps unrecognized at the time, was that runtime code paths are difficult to predict, meaning that it becomes exceedingly difficult to order instructions properly at compile time.

    Sounds like an earlier version of Itanium.

  • Re:Slashdot Grab Bag (Score:3, Informative)

    by damien_kane ( 519267 ) on Thursday July 26, 2012 @03:56PM (#40782719)

    1) Does it run Linux?

    Yes, SUSE

    2) I for one, would like to welcome our new register constrained overlord.

    Then you had better get started on its AI routines

    3) Can you imagine a Beowulf cluster of these?

    Why yes, yes I can (and it would be huge, power hungry, and require it be run at the bottom of the ocean for heat-dissipation)
    BTW, from TFA this system is cooled "by dumping water directly on the microprocessors", after which the warmed water is used to heat the rest of the building in winter

    4) In Soviet Russia supercomputer run YOU!

    Only if you get around to finishing Point 2

    5) There is no God, I reject your fairytales.

    You mean you won't buy them any more?
    Damnit, what am I going to do with this gross of tails I tore off of faeries last week? Not cool, man...

  • by six ( 1673 ) on Thursday July 26, 2012 @04:59PM (#40783609) Homepage

    Don't mix up addressing and computing.

    The whole Internet would fit in a 64-bit address space; there is really no need at all for more than 64 bits of address in a CPU. That's why x86_64 and other 64-bit archs are here to stay, and you'll probably never see "128-bit" processors at all.

    On the other hand, today's x86_64 CPUs are capable of 128-bit (SSE) and 256-bit (AVX) computation. The width of the compute units is also bound to increase for some time, with Intel already planning to go to 1024 bits in the not-so-far future.
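    The distinction the parent draws — address width versus compute width — can be made concrete with a little arithmetic (a sketch; the 4-lane figure assumes 64-bit lanes in a 256-bit AVX register):

    ```python
    # Address width: 2^64 bytes is 16 EiB -- room for "the whole Internet".
    address_space_bytes = 2 ** 64
    print(address_space_bytes // 2 ** 60)  # 16 (EiB)

    # Compute width: a 256-bit AVX register is not a 256-bit address;
    # it's four independent 64-bit lanes processed by one instruction.
    avx_register_bits = 256
    lane_bits = 64
    print(avx_register_bits // lane_bits)  # 4 lanes
    ```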

  • by slew ( 2918 ) on Thursday July 26, 2012 @07:07PM (#40784989)

    I think we're going to be stuck in the same ghz range until we're past silicon.

    That's what I remember being told, anyway...

    Even when we get past "silicon", there are some fundamental issues that will likely constrain clock speeds until we solve them.

    First, the design of small low-power devices (e.g., switching transistors) is currently problematic. Minimum-sized geometries tend to "leak" more power, and potential substitutes for silicon that are faster also tend to have leakier transistors (like graphene). We can make the devices bigger to minimize this, but then they switch slower, and there is more distance to traverse between devices. This is problematic in that the perf-per-watt tradeoff isn't great: if you are doubling the watts to get 50% more perf (as an example), sometimes it doesn't make sense to go so fast.
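    The tradeoff in that example is easy to put numbers on (the figures are the hypothetical ones from the comment, not measurements):

    ```python
    # Doubling power for 50% more performance: efficiency drops by a quarter.
    base_perf_per_watt = 1.0 / 1.0   # baseline: 1 unit of perf at 1 W
    fast_perf_per_watt = 1.5 / 2.0   # 50% more perf at 2x the power
    print(fast_perf_per_watt)                           # 0.75
    print(1 - fast_perf_per_watt / base_perf_per_watt)  # 0.25 efficiency loss
    ```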

    Second, we are reaching manufacturability limits. Today, one of the biggest problems with silicon is parametric yield loss. This is basically device-to-device variation of circuit parameters due to manufacturing variation, and it requires quite a bit of over-engineering of margin, which reduces the ability to use any of the intrinsic speed advances. We are now using nearly every trick in the book to get small devices, laying down features whose dimensions you can count in atoms on your fingers and toes, so parametric yield loss — where an average change of plus or minus one atom causes a 5-10% variation — isn't likely to go away very soon.

    Third, re-synchronization uncertainty is now a big problem and getting worse. If a re-synchronizer circuit (say, one that harmonizes two sides of an asynchronous FIFO across two clock domains running at the same nominal frequency) is designed so that it would only fail one-in-a-million times, you could get a reasonable failure rate by cascading a few of them. If you are running 10 or 100 times faster, that's not a scalable strategy. Nowadays, even the jitter from phase-locked and delay-locked loops, or from two slightly mismatched clock trees on different parts of a chip, can be several clock periods long, so what used to be a fairly simple synchronization problem would likely be 10-100 times harder if it ran 10-100 times faster (jitter isn't improving as fast as the potential clock rate).
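    A toy model of why cascading stops scaling: per-stage failure probability falls roughly exponentially with the settling time a synchronizer is given, so shrinking the clock period hurts much more than linearly. Assuming P(fail) ~ exp(-t/tau), a stage that fails one-in-a-million at the original clock looks like this when sped up (illustrative numbers, not from the comment):

    ```python
    # P(fail) ~ exp(-t_settle / tau); shrinking the settling window by a
    # factor k raises the per-stage failure probability to p0 ** (1/k).
    p0 = 1e-6  # one-in-a-million at the original clock rate
    for speedup in (1, 10, 100):
        p = p0 ** (1 / speedup)
        print(f"{speedup:3d}x clock -> per-stage failure odds ~ {p:.3g}")
    ```

    At 10x the clock the per-stage odds jump from 1e-6 to roughly 0.25, which is why simply cascading more stages can't buy the margin back.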

    Of course, if we stop using electrons in lattices for computational circuits (e.g., use photons in crystals) and develop new structured circuit-realization technologies that reduce some of the engineering margins required to yield devices, some of these limitations might be solved in different ways — but those types of advances are probably quite far off... I'm willing to wager that we will start to leverage alternate computation technologies (e.g., ubiquitous parallel operation, or even quantum) before we get there, so maybe going so fast won't seem as critical as it does today...
