Supercomputing IT Hardware

World's Most Powerful x86 Supercomputer Boots Up in Germany

Nerval's Lobster writes "Europe's most powerful supercomputer — and the fourth most powerful in the world — has been officially inaugurated. The SuperMUC, ranked fourth in the June TOP500 supercomputing listing, contains 147,456 cores using Intel Xeon 2.7-GHz, 8-core E5-2680 chips. IBM, which built the supercomputer, stated in a recent press release that it actually includes more than 155,000 processor cores. It is located at the Leibniz-Rechenzentrum (Leibniz Supercomputing Centre) in Garching, Germany, near Munich. According to the TOP500 list, the SuperMUC is the world's most powerful x86-based supercomputer. The Department of Energy's 'Sequoia' supercomputer at the Lawrence Livermore National Laboratory in Livermore, Calif., the world's [overall] most powerful, relies on 16-core, 1.6-GHz POWER BQC chips."
This discussion has been archived. No new comments can be posted.

  • by Rinikusu ( 28164 ) on Thursday July 26, 2012 @03:26PM (#40782119)

    My fatass almost got excited for a second.. a supercomputer fueled by BBQ... :(

  • power to x86 (Score:5, Interesting)

    by SlashDev ( 627697 ) on Thursday July 26, 2012 @03:28PM (#40782167) Homepage
    "Powerful" and "x86" are oxymorons. Try the i860 architecture; now THAT's a processor. It's ancient, I know.
    • Re:power to x86 (Score:4, Insightful)

      by Joce640k ( 829181 ) on Thursday July 26, 2012 @03:37PM (#40782375) Homepage

      I'm pretty sure they'll be running them in x64 mode, not x86.

      I'm sure modern Intel CPUs with multiple instruction dispatch and SSE for math instead of x87 will give the i860 a run for its money.

      But yeah ... some of those old chips were cool (even if they didn't have a proper divide or sqrt instruction :-)

      • by ackthpt ( 218170 )

        I'm pretty sure they'll be running them in x64 mode, not x86.

        I'm sure modern Intel CPUs with multiple instruction dispatch and SSE for math instead of x87 will give the i860 a run for its money.

        But yeah ... some of those old chips were cool (even if they didn't have a proper divide or sqrt instruction :-)

        A big thank you to AMD for proving Intel wrong, that we really do need 64-bit processors. =)

        • To be fair, Intel was right. We didn't need it at that time for most desktop apps. They never said we would never need 64-bit processors, just that there were other, more pressing issues they were tackling first. Seems it paid off pretty well for them. How is AMD faring these days? When was the last time they were competitive, or turned a profit?

          • They were wrong. Try to find their original timetable for 64-bits on the desktop. You were supposed to wait for Itanium (which at the time was called either IA-64 or EPIC) to filter down.
    • Re:power to x86 (Score:5, Informative)

      by rgbrenner ( 317308 ) on Thursday July 26, 2012 @03:45PM (#40782517)

      wow... you're right. The i860 had 1 whole core and ran at up to 50 MHz.

      Imagine if they built this supercomputer out of those. Instead of 155,000, it would only need 8,370,000.

      Now THAT's a super computer.

      On a serious note, wikipedia [wikipedia.org] says:

      On paper, performance was impressive for a single-chip solution; however, real-world performance was anything but. One problem, perhaps unrecognized at the time, was that runtime code paths are difficult to predict, meaning that it becomes exceedingly difficult to order instructions properly at compile time.

      Sounds like an earlier version of Itanium

      • by 0123456 ( 636235 )

        I worked with i860s years ago. They weren't bad for graphics, but the guys who had to do general-purpose work on the i860 workstations hated them.

        Except the one who was promoted to i860 from the Clipper machine. I don't know why he always got the worst jobs.

    • Oxymorons.
      You keep using that word. I do not think it means what you think it means.

    • i860? Wow, you must be from the far future, we only have i7 here.
  • by j. andrew rogers ( 774820 ) on Thursday July 26, 2012 @03:29PM (#40782189)

    I need to find myself some of these Power BBQ Chips mentioned in the summary. Fast and tangy without the downside of Cheetoh fingers.

  • crysis won't run on highest settings. Sjeez.
    • Update your jokes. I have literally maxed out Crysis on a laptop. All settings on ultra, 1920x1080, 16x MSAA, 16x anisotropic filtering. Doesn't dip below 60fps.

      • Can you send a screencast?

        At those settings, it must be better than real life.

        • I'll take some screencaps next time I play, but yeah, it does look "better" than real life, in the same way that a big-budget movie looks better than real life.

          Of course, it all falls apart if you can see anyone's face clearly (especially if they're talking). Or if there's fire. Or something breaking. Or rotor wash. Or a million other things that look almost, but not entirely, right.

          The game looks awesome, especially for its age (it's about on par with Skyrim, which came out about four years later). In som

      • by santax ( 1541065 )
        Ok, next time I will say Crysis 2. Must be a nice laptop, btw! Haven't tried Crysis here in years, but referring to the game is a bit of a meme here. Just like "you insensitive clod" ;)
        • by KDR_11k ( 778916 )

          Crysis 2 is less demanding, it was designed for consoles first and foremost.

          • Until you install that patch, whose purpose seems to be to make the game run worse, not look better. It's almost deliberately unoptimized. For instance, with the patch installed, if there is water anywhere on the map, the water is rendered across the full map, even when completely invisible. And certain objects have had their polycount bumped into orbit - I suspect they simply took the high-res meshes they used to bake normal maps, and used them as render meshes.

      • Yes, but can it run Crysis on my matrix of 3x3 (9 for those that can't multiply) monitors, and then real-time compress it to x264 for streaming live, all at full resolution (5760x3600 @ 60FPS), under Wine so it can run on Linux?

        • No, because it only has three video outputs.

          It might be able to do 2x2, but I actually don't have three more monitors to try it with.

  • by slashmydots ( 2189826 ) on Thursday July 26, 2012 @03:31PM (#40782223)
    But the real question is, can it run bitcoin mining software? See, you thought I was going to say Linux or Crysis, didn't you? lol.

    P.S. most miners are run on Linux btw ;-)
    • Actually, I was going to say "Could you imagine a Beowulf cluster of those?", although I suppose that would be kind of a "Yo Dawg, I herd you like clusters of clusters" type of comment.
  • If it was instead running Itanium (yes, I know that nobody uses Itanium any more) it would have been well suited to be called "Power BBQ" just by the heat output.
  • by Anonymous Coward

    Now if the Fed can just loan them our financial modeling software,
    maybe we can save Spain.

    On 2nd thought Goldman-Sachs might be a better place to shop for software.

  • I skimmed the article and couldn't find mention of what it's going to be calculating.

    • I think it's designed to run complex calculations on how the price of tea in China is related to everything.
    • I skimmed the article and couldn't find mention of what it's going to be calculating.

      Well, it's a center for supercomputing -- so likely many different things.

      It doesn't sound like it's built for a specific purpose, but rather for the many things people use supercomputers for; fancy physics simulations and the like seem popular. I'm sure some grad student will use it for something goofy now and then. No doubt industry will pay to get some time on it to solve specific problems.

    • by ackthpt ( 218170 )

      I skimmed the article and couldn't find mention of what it's going to be calculating.

      It's for figuring out how long it will take for Greece to pay back their Euro loans.

      and at the rate Spain is going they'll need to add a few more CPUs.

    • Greece's deficit.

  • It's EUROPE's fastest supercomputer - but the topic title says "World." Last time I checked, Europe wasn't quite THAT big...

      You might want to read the article again. It says it is Europe's fastest supercomputer AND the world's fastest x86 supercomputer.

      It's #4 on the top500, and the other 3 are not x86 (POWER BQC for #1 and #2, and SPARC VIIIfx for #3).

  • That's quite a heater! Mine only goes up to 1.5 kilowatts.
    So is Xeon a faster CPU than the i7?

    • Xeons are available in 8 and 10 core models (this computer uses the 8 core version) whereas i7s are only 4 and 6 core.

      • by afidel ( 530433 )
        This is a Sandy Bridge Xeon; the ten-core models are Westmere-based and so get significantly fewer MIPS/watt (though they have larger caches and more QPI links, so they're great for DB work).
  • by Roachie ( 2180772 ) on Thursday July 26, 2012 @03:42PM (#40782473)

    1) Does it run Linux?
    2) I for one, would like to welcome our new register constrained overlord.
    3) Can you imagine a Beowulf cluster of these?
    4) In Soviet Russia supercomputer run YOU!
    5) There is no God, I reject your fairytales.

    • Re: (Score:3, Informative)

      by damien_kane ( 519267 )

      1) Does it run Linux?

      Yes, SUSE

      2) I for one, would like to welcome our new register constrained overlord.

      Then you had better get started on its AI routines

      3) Can you imagine a Beowulf cluster of these?

      Why yes, yes I can (and it would be huge, power hungry, and require it be run at the bottom of the ocean for heat-dissipation)
      BTW, from TFA this system is cooled "by dumping water directly on the microprocessors", after which the warmed water is used to heat the rest of the building in winter

      4) In Soviet Russia supercomputer run YOU!

      Only if you get around to finishing Point 2

      5) There is no God, I reject your fairytales.

      You mean you won't buy them any more?
      Damnit, what am I going to do with this gross of tails I tore off of faeries l

      • BTW, from TFA this system is cooled "by dumping water directly on the microprocessors", after which the warmed water is used to heat the rest of the building in winter

        Um, the article actually says "Aquasar system pumps water directly over the microprocessors" not "dumping water directly on"

      • Ah! Yes, I overlooked the lovable, ubiquitous /. pedantry. Good catch, my man.

        At least you have a sense of humor, fuck.

    • Re: (Score:2, Funny)

      by slew ( 2918 )

      1) Does it run Linux?
      2) I for one, would like to welcome our new register constrained overlord.
      3) Can you imagine a Beowulf cluster of these?
      4) In Soviet Russia supercomputer run YOU!
      5) There is no God, I reject your fairytales.

      6) ???
      7) Profit!

    • To answer the absent "6)", Crysis is rendered at 40FPS.

      In software.
  • But they couldn't get the money out of Greece fast enough.

  • by Impy the Impiuos Imp ( 442658 ) on Thursday July 26, 2012 @03:47PM (#40782571) Journal

    > actually includes more than 155,000 processor cores

    Scientists and engineers toyed with putting Windows 8 on it, but Windows 8 with 150,000-200,000 core support was over $73 trillion.

  • Das Packard Bell -- ist nicht jus fur Walmart anymur

  • by jxander ( 2605655 ) on Thursday July 26, 2012 @03:55PM (#40782699)

    We're number four!!
    We're number four!!
    We're number four!!

  • play a game?
  • Has anyone created a 'super computer' out of raspberry pi's yet?

    • Has anyone created a 'super computer' out of raspberry pi's yet?

      With what supplies? There are not enough of them for single-unit buyers, let alone the bulk necessary to build a supercomputer. Yes, people could use the schematics and build tens (if not hundreds) of thousands of them to make a supercomputer, cluster, whatever. But the economics would simply not scale (compared to buying already-built hardware by the truckload).

      Now, a better question would be "has anyone created a Beowulf cluster" with a bunch of Raspberry Pis? That is a more reasonable enterprise, methinks.

    • The proper phraseology around these parts is, "Imagine a beowulf cluster of raspberry pi's". Alternatively, you are also allowed to imagine beowulf clusters of supercomputers, iPad 3's, and Jeri Ryan.
  • Every year I keep reading about new technologies that will allow processors to go up to 128-, 256-, 512- and 1024-bit computing (and beyond), or that at least should be seen in the next year or two, but it never happens. I've been hearing about this since the turn of the century. So could somebody be kind enough to explain this to me, please, and whether or not it has a use for anyone?

    • by Jaywu ( 1126885 )
      The 16, 32, 64, 128-bit computing refers to the standard register size for integers and pointers in a processor. Specifically, a 32-bit computer can generally access 2^32 locations of memory, which is 4GB. A "true" 64-bit processor would be able to address 2^64 (18 quintillion) bytes of memory. However, x86-64 only use 40 bytes for addressing, which will handle 1 TB of RAM. Additionally, doubling the data size makes every operation take significantly longer, so clock speeds have to suffer. Since very
      • Not always true. The 68000 has 8 x 32-bit address and 8 x 32-bit data registers, with a 24-bit address bus. Nearly all operations can be performed as 32-bit. Yet it's regarded as a 16-bit CPU.
        • by alexo ( 9335 )

          Not always true. The 68000 has 8 x 32-bit address and 8 x 32-bit data registers, with a 24-bit address bus. Nearly all operations can be performed as 32-bit.
          Yet it's regarded as a 16-bit CPU.

          The 68000 had a 16-bit external bus and 32-bit internal architecture.
          It was regarded as a 16/32-bit processor.

      • by Junta ( 36770 )

        However, x86-64 only use 40 bytes for addressing, which will handle 1 TB of RAM.

        40 *bytes* would be overkill (and would address far more than a TB), and even 40 bits is outdated information:

        "Current implementations of the AMD64 architecture (starting from AMD 10h microarchitecture) extend this to 48-bit physical addresses[9] and therefore can address up to 256 TB of RAM. The architecture permits extending this to 52 bits in the future"

        So without a new architecture, we can hit a petabyte of RAM. There are already systems with multiple terabytes of RAM in a single machine (see the IBM x3850 X5, for example).

    • by Anonymous Coward

      64 bits is enough to perform any kind of computation you're likely to need. More bits would imply larger instructions, larger memory pointers, and less usable cache space; basically a waste.

      Just think: there is now a proposal (on Linux) to switch many applications to 32-bit pointers while still using 64-bit registers, so that more space is left usable in the L1, L2 and L3 caches.

      Hope this helps, cheers

    • by six ( 1673 ) on Thursday July 26, 2012 @04:59PM (#40783609) Homepage

      Don't mix up addressing and computing.

      The whole internet would fit in a 64-bit address space. There is really no need at all for more than 64 bits of address in a CPU; that's why x86_64 and other 64-bit archs are here to stay, and you'll probably never see "128-bit" processors at all.

      On the other hand, today's x86_64 CPUs are already capable of 128-bit (SSE) and 256-bit (AVX) computing. The width of the compute units is also bound to keep increasing for some time, with Intel already planning to go 1024-bit in the not-so-far future.

  • by barlevg ( 2111272 ) on Thursday July 26, 2012 @04:17PM (#40783041)
    I really hate how the focus these days is on more cores, not faster cores.

    Not every task is trivially parallelizable, and even with those that are, the speedup you get from running on N cores is always going to be less than Nx.

    I'd be much more impressed by a supercomputer running, say, 1/4 as many 4.0 GHz+ processors.

    Also: if what you're going for is massively parallelizable tasks, x86 is so last century--GPGPUs are where it's at.
    • by ThorGod ( 456163 )

      I think we're going to be stuck in the same GHz range until we're past silicon.

      That's what I remember being told, anyway...

      • by slew ( 2918 ) on Thursday July 26, 2012 @07:07PM (#40784989)

        I think we're going to be stuck in the same GHz range until we're past silicon.

        That's what I remember being told, anyway...

        Even when we get past "silicon", there are some fundamental issues that will likely constrain clock speeds until we solve them.

        First, the design of small low-power devices (e.g., switching transistors) is currently problematic. Minimum-sized geometries tend to "leak" more power, and potential substitutes for silicon that are faster also tend to have leakier transistors (like graphene). We can make the devices bigger to minimize this, but then they switch slower, and there is more distance to traverse between devices. This is problematic in that the perf-per-watt tradeoff isn't great if you are doubling the watts to get 50% more perf (as an example); sometimes it doesn't make sense to go so fast.

        Second, we are reaching manufacturability limits. Today, one of the biggest problems with silicon is parametric yield loss. This is basically device-to-device variation of circuit parameters due to manufacturing variation. It requires quite a bit of over-engineering of margin, which reduces the ability to use any of the intrinsic speed advances. We are now using nearly every trick in the book to lay down devices whose smallest features you can count in atoms on your fingers and toes, so parametric yield loss from a plus-or-minus-one-atom change in average dimension causing a 5-10% variation isn't likely to go away very soon.

        Third, re-synchronization uncertainty is now a big problem and getting worse. If a re-synchronizer circuit (say, one that harmonizes the two sides of an asynchronous FIFO across two clock domains running at the same nominal frequency) is designed so that it would only fail one in a million times, you could have a reasonable failure rate by cascading a few of them. If you are running 10 or 100 times faster, that's not a scalable strategy. Nowadays, even the jitter from phase-locked and delay-locked loops, or from two slightly mismatched clock trees on different parts of a chip, can be several clock periods long, so what used to be a fairly simple synchronization problem would now likely be 10-100 times harder if it were 10-100 times faster (jitter isn't improving as fast as the potential clock rate).

        Of course, if we stop using electrons in lattices for computational circuits (e.g., use photons in crystals) and develop new structured circuit-realization technologies that allow reducing some of the engineering margins required to yield devices, some of these limitations might be solved in different ways, but those types of advances are probably quite far off... I'm willing to wager that we will start to leverage alternate computation technologies (e.g., ubiquitous parallel operation, or even quantum) before we get there, so maybe going so fast won't seem as critical as it does today...

    • by afidel ( 530433 )
      Max turbo on the E5-2680 is 3.5 GHz, and with a higher IPC than any previous processor it's going to get more done per core per unit of wall time than anything but the E5-2687W (150W TDP) or the E5-2690, which is significantly more expensive (almost 20% more).
    • by Anonymous Coward

      You are incorrect. You can get superlinear speedups, for example when you parallelize a problem across multiple machines and each machine needs to hold less data. If you cross the threshold from out-of-cache to in-cache, the resulting system can run at more than Nx throughput.

    • by AReilly ( 9339 )

      The limiting factor these days is how much power you can get in and out of the box; they will have optimized for that. And these processors probably do have GPUs on them.

    • by Anonymous Coward

      So your complaint is that "not every task is trivially parallelizable", and then you suggest GPGPUs as a solution? You realize that GPUs are only efficient at problems that can be split into millions of tiny, independent pieces that execute the same function on different data, which makes it nearly impossible for some programs to even run on GPUs, let alone see any kind of speedup. The next problem with GPUs is their lack of RAM: while you can easily cram 128 GB per CPU onto a regular motherboard, even the Tesla

      • So your complaint is that " Not every task is trivially parallelizable " and then suggest "GPGPUs" as a solution ?

        Actually, what I said is that, if it's not massively parallelizable, I'd prefer greater clock speed on fewer cores, and if it is, then I'd prefer GPGPUs.

        The Xeon chips used for this machine occupy a very uninteresting "middle ground" in my view.

        The rest of your points are very interesting: I hadn't really thought about the RAM limitations (I use a GPGPU for my own research occasionally but have only once or twice come up against the RAM limits), and non-clockspeed bottlenecks are honestly not somethi

    • by pla ( 258480 )
      I really hate how the focus these days is on more cores, not faster cores.

      TFA refers to a supercomputer, not a high-end gaming rig.

      Yes, some problems have a "hard" sequential component, and nothing but "faster" will make a dent in it. We don't really have any options in that realm, however. For a near-infinite amount of money, you could conceivably build something 100x faster than the current best commercially available chip - though in a decade (of which the building of this superchip might take a s
    • The traces in the chips are 2.5 atoms thick, and the distance between traces is 22 nm with the most modern production technique. There is some room for squeezing it down a bit more, but there aren't many more significant drops in size that can be made.
      GPGPU has a long way to go to be flexible enough for general-purpose work.
      Besides, the real push needs to be for less power per op: at 1 GF/watt, an exaflop machine would need a gigawatt.
  • by Anonymous Coward

    Something that can boot Windows 8 in a reasonable amount of time...

  • I want to get a Super Mac.
  • will consume 50% of the CPU. If they run windows, 80% of the rest will be consumed by DPCs. That leaves 10% for the intended function.
  • by StripedCow ( 776465 ) on Thursday July 26, 2012 @07:41PM (#40785321)

    World's Most Powerful x86 Supercomputer Boots Up in Germany

    So, how fast does it boot?

  • Imagine a beowulf cluster of SuperMUCs, or a cluster of clusters which would be semantically reduced to just 'cluster'....so just imagine a beowulf cluster.
  • But what sort of video card has it got?

  • Sure, it takes a while to manufacture the physical components, but they should have switched to a Flex configuration even midway in the installation process - would have saved money in the long run.
