Supercomputing

Jaguar Supercomputer Being Upgraded To Regain Fastest Cluster Crown 89

MrSeb writes with an article in ExtremeTech about the Titan supercomputer. From the article: "Cray, AMD, Nvidia, and the Department of Energy have announced that the Oak Ridge National Laboratory's Jaguar supercomputer will soon be upgraded to yet again become the fastest HPC installation in the world. The new, mighty-morphing computer will feature thousands of Cray XK6 blades, each one accommodating up to four 16-core AMD Opteron 6200 (Interlagos) chips and four Nvidia Tesla 20-series GPGPU coprocessors. The Jaguar name will be suitably inflated, too: the new behemoth will be called Titan. The exact specs of Titan haven't been revealed, but the Jaguar supercomputer currently sports 200 cabinets of Cray XT5 blades — and each cabinet, in theory, can be upgraded to hold 24 XK6 blades. That's a total of 4,800 servers, or 38,400 processors in total: 19,200 Opteron 6200s and 19,200 Tesla GPUs. ... That's 307,200 CPU cores — and with 512 shaders in each Tesla chip, that's 9,830,400 compute units. In other words, Titan should be capable of massive parallelism of more than one million concurrent operations. When the server is complete, towards the end of 2012, Titan will be capable of between 10 and 20 petaflops, and should recapture the crown of Fastest Supercomputer in the World from the Japanese 'K' computer."
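The quoted totals follow straight from the cabinet and blade counts; here's a quick back-of-the-envelope check in Python, using only the figures from the summary (the per-blade counts are the article's "up to" numbers, not confirmed specs):

```python
# Back-of-the-envelope check of the Titan figures quoted above
# (all inputs are the article's numbers, not confirmed specs).
cabinets = 200
blades_per_cabinet = 24      # XK6 blades per cabinet, "in theory"
cpus_per_blade = 4           # 16-core Opteron 6200s
gpus_per_blade = 4           # Tesla 20-series GPGPUs
cores_per_cpu = 16
shaders_per_gpu = 512

blades = cabinets * blades_per_cabinet        # 4,800
cpus = blades * cpus_per_blade                # 19,200
gpus = blades * gpus_per_blade                # 19,200
cpu_cores = cpus * cores_per_cpu              # 307,200
gpu_shaders = gpus * shaders_per_gpu          # 9,830,400

print(blades, cpus + gpus, cpu_cores, gpu_shaders)
# 4800 38400 307200 9830400
```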
Australia

New Supercomputer Boosts Aussie SKA Telescope Bid 32

angry tapir writes "Australian academic supercomputing consortium iVEC has acquired another major supercomputer, Fornax, to be based at the University of Western Australia, to further the country's ability to conduct data-intensive research. The SGI GPU-based system, also known as iVEC@UWA, is made up of 96 nodes, each containing two 6-core Intel Xeon X5650 CPUs, an NVIDIA Tesla C2050 GPU, 48 GB of RAM and 7TB of storage. All up, the system has 1,152 cores, 96 GPUs and an additional dedicated 500TB global filesystem on fabric-attached storage. The system is a boost to the Australian-NZ bid to host the Square Kilometre Array radio telescope."
IBM

Behind the Parting of IBM and Blue Waters 36

An anonymous reader writes "The News-Gazette has an article about the troubled Blue Waters supercomputer project, providing some new information about why IBM and the University of Illinois parted ways back in August. Quoting: 'More than three dozen changes, most suggested by IBM, would have delayed the Blue Waters project by a year ... The requested changes caused friction as early as December 2010, eight months before IBM pulled out, leaving the project to look for a new vendor for the supercomputer. Documents released under the Freedom of Information Act show Big Blue and the Big U asserting their rights in lengthy and increasingly testy, but always polite, language. In the documents, IBM suggested that if changes were not made, the project would become overly expensive.'"
Supercomputing

10-Petaflops Supercomputer Being Built For Open Science Community 55

An anonymous reader tips news that Dell, Intel, and the Texas Advanced Computing Center will be working together to build "Stampede," a supercomputer project aiming for peak performance of 10 petaflops. The National Science Foundation is providing $27.5 million in initial funding, and it's hoped that Stampede will be "a model for supporting petascale simulation-based science and data-driven science." From the announcement: "When completed, Stampede will comprise several thousand Dell 'Zeus' servers with each server having dual 8-core processors from the forthcoming Intel Xeon Processor E5 Family (formerly codenamed "Sandy Bridge-EP") and each server with 32 gigabytes of memory. ... [It also incorporates Intel 'Many Integrated Core' co-processors,] designed to process highly parallel workloads and provide the benefits of using the most popular x86 instruction set. This will greatly simplify the task of porting and optimizing applications on Stampede to utilize the performance of both the Intel Xeon processors and Intel MIC co-processors. ... Altogether, Stampede will have a peak performance of 10 petaflops, 272 terabytes of total memory, and 14 petabytes of disk storage."
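As a rough sanity check on the "several thousand servers" figure, the quoted totals imply a node count in the high thousands. This is a sketch that assumes all 272 TB of the total memory is node DRAM and ignores any memory on the MIC co-processors:

```python
# Rough sanity check on the Stampede figures quoted above.
# Assumes the 272 TB total is plain DRAM across the Dell nodes
# (i.e., ignores any memory on the MIC co-processors).
total_memory_tb = 272
memory_per_node_gb = 32
nodes = total_memory_tb * 1024 / memory_per_node_gb
cores = nodes * 2 * 8        # dual 8-core Xeon E5 per node
print(round(nodes), round(cores))
# ~8704 nodes, ~139264 Xeon cores
```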
Power

Whither Moore's Law; Introducing Koomey's Law 105

Joining the ranks of accepted submitters, Beorytis writes "MIT Technology Review reports on a recent paper by Stanford professor Dr. Jon Koomey, which claims to show that the energy efficiency of computing doubles every 1.5 years. Note that efficiency is considered in terms of a fixed computing load, a point soon to be lost on the mainstream press. Also interesting is a graph in a related blog post that really highlights the meaning of the 'fixed computing load' assumption by plotting computations per kWh vs. time. An early hobbyist computer, the Altair 8800, sits right near the Cray-1 supercomputer of the same era."
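To put that doubling rate in perspective, here's a minimal sketch of what "twice the computations per kWh every 1.5 years" compounds to (illustrative arithmetic only, not figures from the paper):

```python
# Koomey's law: computations per kWh double roughly every 1.5 years.
def efficiency_gain(years, doubling_period=1.5):
    return 2 ** (years / doubling_period)

for years in (1.5, 10, 20):
    print(f"{years:>4} years -> ~{efficiency_gain(years):,.0f}x more computations per kWh")
# 1.5 years -> ~2x, 10 years -> ~102x, 20 years -> ~10,321x
```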
Networking

Ask Slashdot: Best Use For a New Supercomputing Cluster? 387

Supp0rtLinux writes "In about two weeks' time I will be receiving everything necessary to build the largest x86_64-based supercomputer on the east coast of the U.S. (at least until someone takes the title away from us). It's spec'd to start with 1,200 dual-socket six-core servers. We primarily do life-science/health/biology-related tasks on our existing (fairly small) HPC. We intend to continue this usage, but also to open it up for new uses (energy comes to mind). Additionally, we'd like to lease access to recoup some of our costs. So, what's the best Linux distro for something of this size and scale? Any that include a chargeback option/module? Additionally, due to cost contracts, we have to choose either InfiniBand or 10Gb Ethernet for the backend: which would Slashdot readers go with if they had to choose? Either way, all nodes will have four 1Gbps Ethernet ports. Finally, all nodes include only a basic onboard GPU. We intend to put powerful GPUs into the PCI-e slots and open up the new HPC for GPU-related crunching. Any suggestions on the most powerful Linux-friendly PCI-e GPU available?"
AI

IBM's Watson To Help Diagnose, Treat Cancer 150

Lucas123 writes "IBM's Jeopardy-playing supercomputer, Watson, will be turning its data compiling engine toward helping oncologists diagnose and treat cancer. According to IBM, the computer is being assembled in the Richmond, Va. data center of WellPoint, the country's largest Blue Cross, Blue Shield-based healthcare company. Physicians will be able to input a patient's symptoms and Watson will use data from a patient's electronic health record, insurance claims data, and worldwide clinical research to come up with both a diagnosis and treatment based on evidence-based medicine. 'If you think about the power of [combining] all our information along with all that comparative research and medical knowledge... that's what really creates this game changing capability for healthcare,' said Lori Beer, executive vice president of Enterprise Business Services at WellPoint."
Communications

How Killing the Internet Helped Revolutionaries 90

An anonymous reader writes "In a widely circulated American Political Science Association conference paper, Yale scholar Navid Hassanpour argues that shutting down the internet made things difficult for sustaining a centralized revolutionary movement in Egypt. But, he adds, the shutdown actually encouraged the development of smaller revolutionary uprisings at local levels where the face-to-face interaction between activists was more intense and the mobilization of inactive lukewarm dissidents was easier. In other words, closing down the internet made the revolution more diffuse and more difficult for the authorities to contain." As long as we're on the subject, reader lecheiron points out news of research into predicting revolutions by feeding millions of news articles into a supercomputer and using word analysis to chart national sentiment. So far it's pretty good at predicting things that have already happened, but we should probably wait until it finds something new before contacting Hari Seldon.
Data Storage

IBM Building 120PB Cluster Out of 200,000 Hard Disks 290

MrSeb writes "Smashing all known records by some margin, IBM Research Almaden, California, has developed hardware and software technologies that will allow it to strap together 200,000 hard drives to create a single storage cluster of 120 petabytes — 120 million gigabytes. The data repository, which currently has no name, is being developed for an unnamed customer, but with a capacity of 120PB, it's most likely use will be a storage device for a governmental (or Facebook) supercomputer. With IBM's GPFS (General Parallel File System), over 30,000 files can be created per second — and with massive parallelism, and no doubt thanks to the 200,000 individual drives in the array, single files can be read or written at several terabytes per second."
AI

IBM Shows Off Brain-Inspired Microchips 106

An anonymous reader writes "Researchers at IBM have created microchips inspired by the basic functioning of the human brain. They believe the chips could perform tasks that humans excel at but computers normally don't. So far they have been taught to recognize handwriting, play Pong, and guide a car around a track. The same researchers previously modeled this kind of neurologically inspired computing using supercomputer simulations, and claimed to have simulated the complexity of a cat's cortex — a claim that sparked a firestorm of controversy at the time. The new hardware is designed to run this same software much more efficiently."
Supercomputing

JPMorgan Rolls Out FPGA Supercomputer 194

An anonymous reader writes "As heterogeneous computing starts to take off, JP Morgan have revealed they are using an FPGA based supercomputer to process risk on their credit portfolio. 'Prior to the implementation, JP Morgan would take eight hours to do a complete risk run, and an hour to run a present value, on its entire book. If anything went wrong with the analysis, there was no time to re-run it. It has now reduced that to about 238 seconds, with an FPGA time of 12 seconds.' Also mentioned is a Stanford talk given in May."
Supercomputing

A Million Node Supercomputer 116

An anonymous reader writes "Veteran of microcomputing Steve Furber, in his role as ICL Professor of Computer Engineering in the School of Computer Science at the University of Manchester, has called upon some old friends for his latest project: a brain-simulating supercomputer based on more than a million ARM processors." More detailed information can be found in the research paper.
Security

Los Alamos Fire Idles NSA Supercomputer 65

ygslash writes "Among the many facilities shut down since Monday at Los Alamos National Laboratory due to the approaching wildfire is Cielo, one of the most powerful supercomputers in the world. The National Nuclear Security Administration's three national laboratories - Los Alamos, Sandia, and Lawrence Livermore - all share computing time on Cielo, according to Associated Press." Update: 06/30 14:48 GMT by S : As readers have pointed out, this article refers to the National Nuclear Security Administration, not the National Security Agency. Summary updated to reflect that.
AMD

AMD Gains In the TOP500 List 77

MojoKid writes "AMD recently announced its share of the TOP500 supercomputer list has grown 15 percent in the past six months. The company credits industry trends, upgrade paths, and competitive pricing for the increase. Of the 68 Opteron-based systems on the list, more than half use the Opteron 6100 series processors. The inflection point was AMD's launch of its Magny-Cours architecture more than a year ago; the lineup includes the twelve-core Opteron 6180 SE at 2.5GHz at one end and two low-power parts at the other. Magny-Cours adoption is important: companies typically don't upgrade HPC clusters with new CPUs, but AMD is billing its next-gen Interlagos architecture as a drop-in option for Magny-Cours. As such, it'll offer up to 2x the cores as well as equal or faster clock speeds."
Supercomputing

Could Wikipedia Become a Supercomputer? 165

An anonymous reader writes "Large websites represent an enormous resource of untapped computational power. This short post explains how a large website like Wikipedia could give a tremendous contribution to science, by harnessing the computational power of its readers' CPUs and help solve difficult computational problems." It's an interesting thought experiment, at least — if such a system were practical to implement, what kind of problems would you want it chugging away at?
Supercomputing

Intel Aims For Exaflops Supercomputer By 2018 66

siliconbits writes "Intel has laid out its computing-performance roadmap for the next seven years in a press release and, in a slide deck shown last week, revealed its expectations out to 2027. The semiconductor chip maker wants a supercomputer capable of reaching 1000 petaflops (or one exaflops) to be unveiled by the end of 2018 (just in time for the company's 50th anniversary), with four exaflops as the upper-end target by the end of the decade. The slides also show that Intel wants to smash the zettaflops barrier — that's one million petaflops — sometime before 2030. This, Intel expects, will allow for significant strides in the field of genomics research, as well as much more accurate weather prediction (assuming Skynet or the Matrix hasn't taken over the world)."
Supercomputing

Japan's 8-petaflop K Computer Is Fastest On Earth 179

Stoobalou writes "An eight-petaflop Japanese supercomputer has grabbed the title of fastest computer on earth in the new Top 500 Supercomputing List to be officially unveiled at the International Supercomputing Conference in Hamburg today. The K Computer is based at the RIKEN Advanced Institute for Computational Science in Kobe, Japan, and smashes the previous supercomputing records with a processing power of more than 8 petaflop/s (quadrillion calculations per second) — three times that of its nearest rival."
China

Chinese Tianhe-1A Supercomputer Starts Churning Out the Science 103

gupg writes "When China built the world's fastest supercomputer based on NVIDIA GPUs last year, a lot of naysayers said this was just a stunt machine. Well, guess what — here comes the science! They are working on better materials for solar panels, and they ran the world's fastest simulation ever. NVIDIA (whose GPUs accelerate these applications as co-processors) blogged about this a while ago, talking about how the US really needs to up its investment in high performance computing."
