Supercomputing

Russia, Europe Seek Divorce From U.S. Tech Vendors 201

dcblogs writes "The Russians are building a 10-petaflop supercomputer as part of a goal to build an exascale system by 2018-20, in the same timeframe as the US. The Russians, as well as Europe and China, want to reduce reliance on U.S. tech vendors and believe that exascale system development will lead to breakthroughs that could seed new tech industries. 'Exascale computing is a challenge, and indeed an opportunity for Europe to become a global HPC leader,' said Leonardo Flores Anover, who is the European Commission's project officer for the European Exascale Software Initiative. 'The goal is to foster the development of a European industrial capability,' he said. Think what Europe accomplished with Airbus. For Russia: 'You can expect to see Russia holding its own in the exascale race with little or no dependence on foreign manufacturers,' said Mike Bernhardt, who writes The Exascale Report. For now, Russia is relying on Intel and Nvidia."
AMD

ORNL's Newest Petaflop Climate Computer To Come Online For NOAA 66

bricko writes with a description of NOAA's Gaea supercomputer, being assembled at the Oak Ridge National Laboratory. It's some big iron: 1.1 petaflops, based on 16-core Interlagos chips from AMD, and built by Cray. "The system, which is used for climate modeling and research, also includes two separate Lustre parallel file systems 'that handle data sets that rank among the world's largest,' ORNL said. 'NOAA research partners access the system remotely through speedy wide area connections. Two 10-gigabit (billion bit) lambdas, or optical waves, pass data to NOAA's national research network through peering points at Atlanta and Chicago.'"
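For a sense of scale, the quoted WAN figures work out as follows; a quick sketch using only the numbers in the summary (the GB/s figure is derived, not quoted by ORNL):

```python
# Aggregate WAN bandwidth implied by the two 10-gigabit lambdas.
lambdas = 2
gbits_per_lambda = 10
gbytes_per_sec = lambdas * gbits_per_lambda / 8  # 8 bits per byte
print(f"~{gbytes_per_sec:.1f} GB/s to NOAA's research network")  # ~2.5 GB/s
```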
Supercomputing

JPMorgan Rolls Out (Another) FPGA Supercomputer 210

An anonymous reader writes "JP Morgan is expanding its use of dataflow supercomputers to speed up more of its fixed income trading operations. Earlier this year, the bank revealed how it reduced the time it took to run an end-of-day risk calculation from eight hours down to just 238 seconds. The new dataflow supercomputer, where the computer chips are tailored to perform specific, bespoke tasks (as explained in this Wall Street Journal article) — will be equivalent to more than 12,000 conventional x86 cores, providing 128 Teraflops of performance."
IBM

Cray Replaces IBM To Build $188M Supercomputer 99

wiredmikey writes "Supercomputer maker Cray today said that the University of Illinois' National Center for Supercomputing Applications (NCSA) awarded the company a contract to build a supercomputer for the National Science Foundation's Blue Waters project. The supercomputer will be powered by new 16-core AMD Opteron 6200 Series processors (formerly code-named 'Interlagos') a next-generation GPU from NVIDIA, called 'Kepler,' and a new integrated storage solution from Cray. IBM was originally selected to build the supercomputer in 2007, but terminated the contract in August 2011, saying the project was more complex and required significantly increased financial and technical support beyond its original expectations. Once fully deployed, the system is expected to have a sustained performance of more than one petaflops on demanding scientific applications."
Japan

Fujitsu Announces 16-core SPARC64 IXfx (and the Supercomputer It Powers) 68

First time accepted submitter A12m0v writes with a link to Fujitsu's announcement of its next generation of supercomputer, from which he pastes: "PRIMEHPC FX10 runs on the newly-developed SPARC64 IXfx processors, which offer a very significant boost in performance over the SPARC64 VIIIfx processor on which they are based and which power the K computer. Each processor has 16 cores and achieves world-class standalone performance levels of 236.5 gigaflops and performance per watt of over 2 gigaflops." Not that K is any slouch.
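Fujitsu's per-chip numbers imply a few figures the announcement doesn't state outright; a rough derivation (the wattage is an upper bound inferred from "over 2 gigaflops" per watt):

```python
chip_gflops = 236.5     # standalone peak per SPARC64 IXfx
cores = 16
gflops_per_watt = 2.0   # stated as "over 2", so treat as a floor

print(f"~{chip_gflops / cores:.2f} gigaflops per core")       # ~14.78
print(f"<= ~{chip_gflops / gflops_per_watt:.0f} W per chip")  # <= ~118 W implied
```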
Supercomputing

Japanese Supercomputer K Hits 10.51 Petaflops 125

coondoggie writes "The Japanese supercomputer ranked #1 on the Top 500 list of the fastest supercomputers broke its own record this week by hitting 10 quadrillion calculations per second (10.51 petaflops), according to its operators, Fujitsu and Riken. The supercomputer 'K' consists of 864 racks, comprising a total of 88,128 interconnected CPUs, and has a theoretical calculation speed of 11.28 petaflops, the companies said."
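Dividing the quoted totals gives a feel for how efficient K is; the per-chip peak below is derived from the summary's numbers, not something Fujitsu and Riken quote directly:

```python
measured_pflops = 10.51
peak_pflops = 11.28
cpus = 88_128

print(f"Sustained efficiency: {measured_pflops / peak_pflops:.1%}")  # ~93.2%
print(f"Per-CPU peak: ~{peak_pflops * 1e6 / cpus:.0f} gigaflops")    # ~128
```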
China

China Builds 1-Petaflop Homegrown Supercomputer 185

MrSeb writes "Drawing yet another battle line between the incumbent oligarchs of the West and the developing hordes of the East, China has unveiled a new supercomputer that uses entirely-homegrown processors — 8,704 of them, to be exact. The computer is called Sunway BlueLight MPP and it has a peak performance of just over 1 petaflop — or around the 15th fastest supercomputer in the world. Sunway uses the ShenWei SW-3 1600, a 16-core, 64-bit MIPS-compatible (RISC) CPU. The process used to make the chips is not known, but it is likely 65 or 45nm, a few generations behind Intel's latest and greatest. Each of the 139,264 cores runs at 1.1GHz, the entire system has 150TB of memory and 2PB of storage, and of course it's water-cooled. The ShenWei chips are based on the Loongson/Godson architecture, which China — as in, the country itself — probably reverse engineered from a DEC Alpha CPU in 2001 and has been developing ever since. Sunway is significant for two reasons: a) It's very low-power; it consumes just one megawatt, about half of its contemporaries and one seventh of the US's Jaguar — and b) This is China's first significant supercomputer to be built without Intel or AMD processors."
Supercomputing

Jaguar Supercomputer Being Upgraded To Regain Fastest Cluster Crown 89

MrSeb writes with an article in ExtremeTech about the Titan supercomputer. From the article: "Cray, AMD, Nvidia, and the Department of Energy have announced that the Oak Ridge National Laboratory's Jaguar supercomputer will soon be upgraded to yet again become the fastest HPC installation in the world. The new, mighty-morphing computer will feature thousands of Cray XK6 blades, each one accommodating up to four 16-core AMD Opteron 6200 (Interlagos) chips and four Nvidia Tesla 20-series GPGPU coprocessors. The Jaguar name will be suitably inflated, too: the new behemoth will be called Titan. The exact specs of Titan haven't been revealed, but the Jaguar supercomputer currently sports 200 cabinets of Cray XT5 blades, and each cabinet, in theory, can be upgraded to hold 24 XK6 blades. That's a total of 4,800 blades, or 38,400 processors: 19,200 Opteron 6200s and 19,200 Tesla GPUs. ... that's 307,200 CPU cores, and with 512 shaders in each Tesla chip that's 9,830,400 compute units. In other words, Titan should be capable of massive parallelism of more than ten million concurrent operations. When the upgrade is complete, towards the end of 2012, Titan will be capable of between 10 and 20 petaflops, and should recapture the crown of Fastest Supercomputer in the World from the Japanese 'K' computer."
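The article's arithmetic checks out; here is the same back-of-envelope calculation spelled out (all inputs are the article's assumptions about a full upgrade, not confirmed specs):

```python
cabinets = 200             # current Jaguar XT5 cabinets
blades_per_cabinet = 24    # theoretical XK6 capacity per cabinet
cpus_per_blade = gpus_per_blade = 4
cores_per_cpu, shaders_per_gpu = 16, 512

blades = cabinets * blades_per_cabinet   # 4,800
cpus = blades * cpus_per_blade           # 19,200
gpus = blades * gpus_per_blade           # 19,200
cpu_cores = cpus * cores_per_cpu         # 307,200
gpu_shaders = gpus * shaders_per_gpu     # 9,830,400
print(f"{cpu_cores + gpu_shaders:,} total compute units")  # 10,137,600
```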
Australia

New Supercomputer Boosts Aussie SKA Telescope Bid 32

angry tapir writes "Australian academic supercomputing consortium iVEC has acquired another major supercomputer, Fornax, to be based at the University of Western Australia, to further the country's ability to conduct data-intensive research. The SGI GPU-based system, also known as iVEC@UWA, is made up of 96 nodes, each containing two 6-core Intel Xeon X5650 CPUs, an NVIDIA Tesla C2050 GPU, 48GB of RAM and 7TB of storage. All up, the system has 1,152 cores, 96 GPUs and an additional dedicated 500TB global filesystem on fabric-attached storage. The system is a boost to the Australian-NZ bid to host the Square Kilometer Array radio telescope."
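The totals follow directly from the per-node specs; a quick check (the node-local storage total is derived, not quoted):

```python
nodes = 96
print(nodes * 2 * 6)  # 1,152 CPU cores (two 6-core Xeon X5650s per node)
print(nodes * 1)      # 96 GPUs (one Tesla C2050 per node)
print(nodes * 7)      # ~672 TB node-local storage, plus the 500TB global filesystem
```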
IBM

Behind the Parting of IBM and Blue Waters 36

An anonymous reader writes "The News-Gazette has an article about the troubled Blue Waters supercomputer project, providing some new information about why IBM and the University of Illinois parted ways back in August. Quoting: 'More than three dozen changes, most suggested by IBM, would have delayed the Blue Waters project by a year ... The requested changes caused friction as early as December 2010, eight months before IBM pulled out, leaving the project to look for a new vendor for the supercomputer. Documents released under the Freedom of Information Act show Big Blue and the Big U asserting their rights in lengthy and increasingly testy, but always polite, language. In the documents, IBM suggested that if changes were not made, the project would become overly expensive.'"
Supercomputing

10-Petaflops Supercomputer Being Built For Open Science Community 55

An anonymous reader tips news that Dell, Intel, and the Texas Advanced Computing Center will be working together to build "Stampede," a supercomputer project aiming for peak performance of 10 petaflops. The National Science Foundation is providing $27.5 million in initial funding, and it's hoped that Stampede will be "a model for supporting petascale simulation-based science and data-driven science." From the announcement: "When completed, Stampede will comprise several thousand Dell 'Zeus' servers with each server having dual 8-core processors from the forthcoming Intel Xeon Processor E5 Family (formerly codenamed "Sandy Bridge-EP") and each server with 32 gigabytes of memory. ... [It also incorporates Intel 'Many Integrated Core' co-processors,] designed to process highly parallel workloads and provide the benefits of using the most popular x86 instruction set. This will greatly simplify the task of porting and optimizing applications on Stampede to utilize the performance of both the Intel Xeon processors and Intel MIC co-processors. ... Altogether, Stampede will have a peak performance of 10 petaflops, 272 terabytes of total memory, and 14 petabytes of disk storage."
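The announcement never says how many "several thousand" servers is, but the memory figures bound it; a rough estimate (an upper bound, since the MIC co-processors carry memory of their own):

```python
total_memory_tb = 272        # quoted system total
memory_per_server_gb = 32    # quoted per Zeus server
upper_bound = total_memory_tb * 1000 / memory_per_server_gb
print(f"<= ~{upper_bound:,.0f} servers")  # ~8,500 at most
```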
Power

Whither Moore's Law; Introducing Koomey's Law 105

Joining the ranks of accepted submitters, Beorytis writes "MIT Technology Review reports on a recent paper by Stanford professor Dr. Jon Koomey, which claims to show that the energy efficiency of computing doubles every 1.5 years. Note that efficiency is considered in terms of a fixed computing load, a point soon to be lost on the mainstream press. Also interesting is a graph in a related blog post that really highlights the meaning of the 'fixed computing load' assumption by plotting computations per kWh vs. time. An early hobbyist computer, the Altair 8800, sits right near the Cray-1 supercomputer of the same era."
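Stated as a formula, Koomey's law is just exponential growth with a 1.5-year doubling period; a minimal sketch:

```python
def efficiency_multiplier(years: float, doubling_period: float = 1.5) -> float:
    """Growth in computations per kWh after `years`, per Koomey's law."""
    return 2 ** (years / doubling_period)

print(f"{efficiency_multiplier(15):,.0f}x")  # ~1,024x over 15 years
```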
Networking

Ask Slashdot: Best Use For a New Supercomputing Cluster? 387

Supp0rtLinux writes "In about two weeks' time I will be receiving everything necessary to build the largest x86_64-based supercomputer on the east coast of the U.S. (at least until someone takes the title away from us). It's spec'd to start with 1,200 dual-socket six-core servers. We primarily do life-science/health/biology-related tasks on our existing (fairly small) HPC. We intend to continue this usage, but also to open it up for new uses (energy comes to mind). Additionally, we'd like to lease access to recoup some of our costs. So, what's the best Linux distro for something of this size and scale? Any that include a chargeback option/module? Additionally, due to cost contracts, we have to choose either InfiniBand or 10Gb Ethernet for the backend: which would Slashdot readers go with if they had to choose? Either way, all nodes will have four 1Gbps Ethernet ports. Finally, all nodes include only a basic onboard GPU. We intend to put powerful GPUs into the PCI-e slots and open up the new HPC for GPU-related crunching. Any suggestions on the most powerful Linux-friendly PCI-e GPU available?"
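For scale, the cluster as spec'd works out to 14,400 cores; the peak-flops figure below is hypothetical, since the post names neither the CPU model nor its clock:

```python
nodes = 1_200
cores = nodes * 2 * 6   # dual-socket, six-core: 14,400 cores

# Hypothetical peak, assuming ~2.6 GHz parts doing 4 double-precision
# flops per cycle per core (typical for Westmere-class Xeons).
peak_tflops = cores * 2.6e9 * 4 / 1e12
print(f"{cores:,} cores, ~{peak_tflops:.0f} CPU-only teraflops peak")  # ~150
```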
AI

IBM's Watson To Help Diagnose, Treat Cancer 150

Lucas123 writes "IBM's Jeopardy-playing supercomputer, Watson, will be turning its data compiling engine toward helping oncologists diagnose and treat cancer. According to IBM, the computer is being assembled in the Richmond, Va. data center of WellPoint, the country's largest Blue Cross, Blue Shield-based healthcare company. Physicians will be able to input a patient's symptoms and Watson will use data from a patient's electronic health record, insurance claims data, and worldwide clinical research to come up with both a diagnosis and treatment based on evidence-based medicine. 'If you think about the power of [combining] all our information along with all that comparative research and medical knowledge... that's what really creates this game changing capability for healthcare,' said Lori Beer, executive vice president of Enterprise Business Services at WellPoint."
Communications

How Killing the Internet Helped Revolutionaries 90

An anonymous reader writes "In a widely circulated American Political Science Association conference paper, Yale scholar Navid Hassanpour argues that shutting down the internet made things difficult for sustaining a centralized revolutionary movement in Egypt. But, he adds, the shutdown actually encouraged the development of smaller revolutionary uprisings at local levels where the face-to-face interaction between activists was more intense and the mobilization of inactive lukewarm dissidents was easier. In other words, closing down the internet made the revolution more diffuse and more difficult for the authorities to contain." As long as we're on the subject, reader lecheiron points out news of research into predicting revolutions by feeding millions of news articles into a supercomputer and using word analysis to chart national sentiment. So far it's pretty good at predicting things that have already happened, but we should probably wait until it finds something new before contacting Hari Seldon.
Data Storage

IBM Building 120PB Cluster Out of 200,000 Hard Disks 290

MrSeb writes "Smashing all known records by some margin, IBM Research Almaden, California, has developed hardware and software technologies that will allow it to strap together 200,000 hard drives to create a single storage cluster of 120 petabytes — 120 million gigabytes. The data repository, which currently has no name, is being developed for an unnamed customer, but with a capacity of 120PB, it's most likely use will be a storage device for a governmental (or Facebook) supercomputer. With IBM's GPFS (General Parallel File System), over 30,000 files can be created per second — and with massive parallelism, and no doubt thanks to the 200,000 individual drives in the array, single files can be read or written at several terabytes per second."
AI

IBM Shows Off Brain-Inspired Microchips 106

An anonymous reader writes "Researchers at IBM have created microchips inspired by the basic functioning of the human brain. They believe the chips could perform tasks that humans excel at but computers normally don't. So far they have been taught to recognize handwriting, play Pong, and guide a car around a track. The same researchers previously modeled this kind of neurologically inspired computing using supercomputer simulations, and claimed to have simulated the complexity of a cat's cortex — a claim that sparked a firestorm of controversy at the time. The new hardware is designed to run this same software much more efficiently."
Supercomputing

JPMorgan Rolls Out FPGA Supercomputer 194

An anonymous reader writes "As heterogeneous computing starts to take off, JP Morgan have revealed they are using an FPGA based supercomputer to process risk on their credit portfolio. 'Prior to the implementation, JP Morgan would take eight hours to do a complete risk run, and an hour to run a present value, on its entire book. If anything went wrong with the analysis, there was no time to re-run it. It has now reduced that to about 238 seconds, with an FPGA time of 12 seconds.' Also mentioned is a Stanford talk given in May."
Supercomputing

A Million Node Supercomputer 116

An anonymous reader writes "Veteran of microcomputing Steve Furber, in his role as ICL Professor of Computer Engineering in the School of Computer Science at the University of Manchester, has called upon some old friends for his latest project: a brain-simulating supercomputer based on more than a million ARM processors." More detailed information can be found in the research paper.
