Bug

Too Much Gold Delays World's Fastest Supercomputer 111

Nerval's Lobster writes "The fastest supercomputer in the world, Oak Ridge National Laboratory's 'Titan,' has been delayed because an excess of gold on its motherboard connectors has prevented it from working properly. Titan was originally turned on last October and climbed to the top of the Top500 list of the fastest supercomputers shortly thereafter. Problems with Titan were first discovered in February, when the supercomputer just missed its stability requirement. At that time, the problems with the connectors were isolated as the culprit, and ORNL decided to take some of Titan's 200 cabinets offline and ship their motherboards back to the manufacturer, Cray, for repairs. The connectors affected the ability of the GPUs in the system to talk to the main processors. Oak Ridge Today's John Huotari noted the problem was due to too much gold mixed in with the solder."
AI

Computers Shown To Be Better Than Docs At Diagnosing, Prescribing Treatment 198

Lucas123 writes "Applying the same technology used for voice recognition and credit card fraud detection to medical treatments could cut healthcare costs and improve patient outcomes by almost 50%, according to new research. Scientists at Indiana University found that using patient data with machine-learning algorithms can drastically improve both the cost and quality of healthcare through simulation modeling. The artificial intelligence models used for diagnosing and treating patients achieved a 30% to 35% increase in positive patient outcomes, the research found. This is not the first time AI has been used to diagnose and suggest treatments. Last year, IBM announced that its Watson supercomputer would be used in evaluating evidence-based cancer treatment options for physicians, driving the decision-making process down to a matter of seconds."
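For readers curious what "using patient data with machine-learning algorithms" looks like in practice, here is a minimal sketch that trains a generic classifier to predict treatment outcomes from synthetic patient features. The data, the feature names, and the model choice are all illustrative assumptions; the summary does not describe the Indiana University team's actual methods.

    # Minimal sketch: predicting treatment outcomes from patient data with a
    # generic classifier. Synthetic data and model choice are assumptions;
    # the Indiana University study's actual algorithms are not shown here.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_patients = 1000
    # Hypothetical features: age, blood pressure, lab score, prior visits
    X = rng.normal(size=(n_patients, 4))
    # Hypothetical label: 1 = positive response to a treatment
    y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n_patients) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))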
Robotics

Supercomputer Designer Asked To Improve Robo-Bugs 21

Nerval's Lobster writes "The man who designed the world's most energy-efficient supercomputer in 2011 has taken on a new task: improving how robo-bugs fly. Wu-chun Feng, an associate professor of computer science in the College of Engineering at Virginia Tech, previously built Green Destiny, a 240-node supercomputer that consumed 3.2 kilowatts of power—the equivalent of a couple of hair dryers. That was before the Green500, a list that Feng and his team began compiling in 2005, which ranks the world's fastest supercomputers by performance per watt. On Feb. 5, the Air Force's Office of Scientific Research announced it had awarded Feng $3.5 million over three years, plus an option to add $2.5 million funding over an additional two years. The contract's goal: speed up how quickly a supercomputer can simulate the computational fluid dynamics of micro-air vehicles (MAVs), or unmanned aerial vehicles. MAVs can be as small as about five inches, with an aircraft close to insect size expected in the near future. While the robo-bugs can obviously be used for military purposes, they could also serve as scouts in rescue operations."
Education

IBM's Watson Goes To College To Extend Abilities 94

An anonymous reader writes in with news that IBM's Jeopardy-winning supercomputer is going back to school. "A modified version of the powerful IBM Watson computer system, able to understand natural spoken language and answer complex questions, will be provided to Rensselaer Polytechnic Institute in New York, making it the first university to receive such a system. IBM announced Wednesday that the Watson system is intended to enable upstate New York-based RPI to find new uses for Watson and deepen the system's cognitive computing capabilities - for example, by broadening the volume, types, and sources of data Watson can draw upon to answer questions."
IBM

Stanford Uses Million-Core Supercomputer To Model Supersonic Jet Noise 66

coondoggie writes "Stanford researchers said this week they had used a supercomputer with 1,572,864 compute cores to predict the noise generated by a supersonic jet engine. 'Computational fluid dynamics simulations test all aspects of a supercomputer. The waves propagating throughout the simulation require a carefully orchestrated balance between computation, memory and communication. Supercomputers like Sequoia divvy up the complex math into smaller parts so they can be computed simultaneously. The more cores you have, the faster and more complex the calculations can be. And yet, despite the additional computing horsepower, the difficulty of the calculations only becomes more challenging with more cores. At the one-million-core level, previously innocuous parts of the computer code can suddenly become bottlenecks.'"
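The "divvy up the complex math into smaller parts" step the quote describes is domain decomposition. Below is a minimal sketch using a toy 1-D heat equation and local worker processes rather than MPI across nodes, so the split-update-stitch pattern is visible; all sizes and constants are made-up illustration values, not anything from the Stanford runs.

    # Minimal sketch of domain decomposition: split a 1-D heat-equation grid
    # into chunks, update each chunk in parallel, then stitch the results.
    # Real codes do this across many nodes with MPI and halo exchanges; this
    # toy version just uses local processes.
    import numpy as np
    from multiprocessing import Pool

    NX, STEPS, ALPHA = 400, 100, 0.1
    CHUNKS = 4

    def update_chunk(args):
        # Each worker receives its slice plus one ghost cell on each side.
        lo, hi, u = args
        padded = u[max(lo - 1, 0):min(hi + 1, len(u))]
        new = padded.copy()
        new[1:-1] = padded[1:-1] + ALPHA * (padded[:-2] - 2 * padded[1:-1] + padded[2:])
        # Return only the interior cells this worker owns.
        start = 1 if lo > 0 else 0
        stop = -1 if hi < len(u) else None
        return new[start:stop]

    if __name__ == "__main__":
        u = np.zeros(NX)
        u[NX // 2] = 1.0  # initial hot spot
        bounds = np.linspace(0, NX, CHUNKS + 1, dtype=int)
        with Pool(CHUNKS) as pool:
            for _ in range(STEPS):
                args = [(bounds[i], bounds[i + 1], u) for i in range(CHUNKS)]
                u = np.concatenate(pool.map(update_chunk, args))
        print("total heat (approximately conserved):", u.sum())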
Supercomputing

DOE Asks For 30-Petaflop Supercomputer 66

Nerval's Lobster writes "The U.S. Department of Science has presented a difficult challenge to vendors: deliver a supercomputer with roughly 10 to 30 petaflops of performance, yet filled with energy-efficient multi-core architecture. The draft copy (.DOC) of the DOE's requirements provide for two systems: 'Trinity,' which will offer computing resources to the Los Alamos National Laboratory (LANL), Sandia National Laboratories (SNL), and Lawrence Livermore National Laboratory (LLNL), during the 2016-2020 timeframe; and NERSC-8, the replacement for the current NERSC-6 'Hopper' supercomputer first deployed in 2010 for the DOE facilities. Hopper debuted at number five in the list of Top500 supercomputers, and can crunch numbers at the petaflop level. The DOE wants a machine with performance at between 10 to 30 times Hopper's capabilities, with the ability to support one compute job that could take up over half of the available compute resources at any one time."
Supercomputing

Three-Mile-High Supercomputer Poses Unique Challenges 80

Nerval's Lobster writes "Building and operating a supercomputer at more than three miles above sea level poses some unique problems, the designers of the recently installed Atacama Large Millimeter/submillimeter Array (ALMA) Correlator discovered. The ALMA computer serves as the brains behind the ALMA astronomical telescope, a partnership between Europe, North American, and South American agencies. It's the largest such project in existence. Based high in the Andes mountains in northern Chile, the telescope includes an array of 66 dish-shaped antennas in two groups. The telescope correlator's 134 million processors continually combine and compare faint celestial signals received by the antennas in the ALMA array, which are separated by up to 16 kilometers, enabling the antennas to work together as a single, enormous telescope, according to Space Daily. The extreme high altitude makes it nearly impossible to maintain on-site support staff for significant lengths of time, with ALMA reporting that human intervention will be kept to an absolute minimum. Data acquired via the array is archived at a lower-altitude support site. The altitude also limited the construction crew's ability to actually build the thing, requiring 20 weeks of human effort just to unpack and install it."
Supercomputing

Supercomputer Repossessed By State, May Be Sold In Pieces 123

1sockchuck writes "A supercomputer that was the third-fastest machine in the world in 2008 has been repossessed by the state of New Mexico and will likely be sold in pieces to three universities in the state. The state has been unable to find a buyer for the Encanto supercomputer, which was built and maintained with $20 million in state funding. The supercomputer had the enthusiastic backing of Gov. Bill Richardson, who saw the project as an economic development tool for New Mexico. But the commercial projects did not materialize, and Richardson's successor, Susana Martinez, says the supercomputer is a 'symbol of excess.'"
Space

All Systems Go For Highest Altitude Supercomputer 36

An anonymous reader writes "One of the most powerful supercomputers in the world has now been fully installed and tested at its remote, high altitude site in the Andes of northern Chile. It's a critical part of the Atacama Large Millimeter/submillimeter Array (ALMA), the most elaborate ground-based astronomical telescope in history. The special-purpose ALMA correlator has over 134 million processors and performs up to 17 quadrillion operations per second, a speed comparable to the fastest general-purpose supercomputer in operation today."
Ubuntu

Mark Shuttleworth Answers Your Questions 236

A couple of weeks ago you had a chance to ask Canonical Ltd. and Ubuntu Foundation founder Mark Shuttleworth anything about software and vacationing in space. Below you'll find his answers to your questions. Make sure to look for our live discussion tomorrow with free software advocate and Rhombus Tech CTO Luke Leighton. The interview will start at 1:30 EST.
The Internet

Study Finds Similar Structures In the Universe, Internet, and Brain 171

A reader writes "The structure of the universe and the laws that govern its growth may be more similar than previously thought to the structure and growth of the human brain and other complex networks, such as the Internet or a social network of trust relationships between people, according to a new study. 'By no means do we claim that the universe is a global brain or a computer,' said Dmitri Krioukov, co-author of the paper, published by the Cooperative Association for Internet Data Analysis (CAIDA), based at the San Diego Supercomputer Center (SDSC) at the University of California, San Diego. 'But the discovered equivalence between the growth of the universe and complex networks strongly suggests that unexpectedly similar laws govern the dynamics of these very different complex systems,' Krioukov noted."
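One classic example of a simple growth law producing complex network structure is preferential attachment, where new nodes link to well-connected existing nodes. The toy sketch below illustrates that idea only; it is a generic illustration and not the specific model analyzed in the CAIDA paper.

    # Minimal sketch of preferential attachment: new nodes connect to existing
    # nodes with probability proportional to their degree, producing hubs.
    # This is a generic toy, not the Krioukov et al. model.
    import random
    from collections import Counter

    random.seed(0)
    edges, targets = [(0, 1)], [0, 1]   # start with a single link between nodes 0 and 1

    for new_node in range(2, 2000):
        # Picking uniformly from the edge-endpoint list is equivalent to
        # choosing an existing node with probability proportional to its degree.
        old_node = random.choice(targets)
        edges.append((new_node, old_node))
        targets.extend([new_node, old_node])

    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    print("highest-degree hubs (node, degree):", degree.most_common(5))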
Intel

Cray Unveils XC30 Supercomputer 67

Nerval's Lobster writes "Cray has unveiled a XC30 supercomputer capable of high-performance computing workloads of more than 100 petaflops. Originally code-named 'Cascade,' the system relies on Intel Xeon processors and Aries interconnect chipset technology, paired with Cray's integrated software environment. Cray touts the XC30's ability to utilize a wide variety of processor types; future versions of the platform will apparently feature Intel Xeon Phi and Nvidia Tesla GPUs based on the Kepler GPU computing architecture. Cray leveraged its work with DARPA's High Productivity Computing Systems program in order to design and build the XC30. Cray's XC30 isn't the only supercomputer aiming for that 100-petaflop crown. China's Guangzhou Supercomputing Center recently announced the development of a Tianhe-2 supercomputer theoretically capable of 100 petaflops, but that system isn't due to launch until 2015. Cray also faces significant competition in the realm of super-computer makers: it only built 5.4 percent of the systems on the Top500 list, compared to IBM with 42.6 percent and Hewlett-Packard with 27.6 percent."
Image

How To Build a Supercomputer In 24 Hours 161

An anonymous reader writes with a link to this "time-lapse video of students and postdocs at the University of Zurich constructing the zBox4 supercomputer. The machine has a theoretical compute capacity of ~1% of the human brain and will be used for simulating the formation of stars, planets and galaxies." That rack has "3,072 2.2GHz Intel Xeon cores and over 12TB of RAM." Also notable: for once, several of the YouTube comments are worth reading for more details on the construction and specs.
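The quoted core count and clock speed make the theoretical peak throughput easy to estimate. The sketch below assumes 8 double-precision FLOPs per core per cycle, which is a guess for an AVX-era Xeon rather than a figure given in the video.

    # Back-of-the-envelope theoretical peak for the zBox4 figures quoted above.
    # The 8 FLOPs per core per cycle is an assumption, not a published spec.
    cores = 3072
    clock_hz = 2.2e9
    flops_per_cycle = 8          # assumed

    peak_flops = cores * clock_hz * flops_per_cycle
    print(f"theoretical peak: {peak_flops / 1e12:.1f} TFLOPS")   # ~54 TFLOPS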
China

China Building a 100-petaflop Supercomputer Using Domestic Processors 154

concealment writes "As the U.S. launched what's expected to be the world's fastest supercomputer at 20 petaflops, China is building a machine that is intended to be five times faster when it is deployed in 2015. China's Tianhe-2 supercomputer will run at 100 petaflops (quadrillion floating-point calculations per second), according to the Guangzhou Supercomputing Center, where the machine will be housed. Tianhe-2 could help keep China competitive with the future supercomputers of other countries, as industry experts estimate machines will start reaching 1,000-petaflop performance by 2018." And, naturally, it's planned to use a domestically developed MIPS processor.
Supercomputing

Titan Supercomputer Debuts for Open Scientific Research 87

hypnosec writes "The Oak Ridge National Laboratory has unveiled a new supercomputer – Titan, which it claims is the world's most powerful supercomputer, capable of 20 petaflops of performance. The Cray XK7 supercomputer contains a total of 18,688 nodes, and each node is based on a 16-core AMD Opteron 6274 processor and an Nvidia Tesla K20 Graphics Processing Unit (GPU). To be used for researching climate change and other data-intensive tasks, the supercomputer is equipped with more than 700 terabytes of memory."
Google

How Google Cools Its 1 Million Servers 87

1sockchuck writes "As Google showed the world its data centers this week, it disclosed one of its best-kept secrets: how it cools its custom servers in high-density racks. All the magic happens in enclosed hot aisles, including supercomputer-style steel tubing that transports water — sometimes within inches of the servers. How many of those servers are there? Google has deployed at least 1 million servers, according to Wired, which got a look inside the company's North Carolina data center. The disclosures accompany a gallery of striking photos by architecture photographer Connie Zhou, who discusses the experience and her approach to the unique assignment."
Earth

Climate Change Research Gets Petascale Supercomputer 121

dcblogs writes "The National Center for Atmospheric Research (NCAR) has begun using a 1.5-petaflop IBM system, called Yellowstone. For NCAR researchers it is an enormous leap in compute capability — a roughly 30x improvement over its existing 77-teraflop supercomputer. Yellowstone is capable of 1.5 quadrillion calculations per second using 72,288 Intel Xeon cores. The supercomputer gives researchers new capabilities. They can run more experiments with increased complexity and at a higher resolution. The new system may be able to refine model resolution to as fine as 10 km (6.2 miles), giving scientists the ability to examine climate impacts in greater detail. Increased complexity allows researchers to add more conditions to their models, such as the effect of methane gas released from thawing tundra on polar sea ice. NCAR believes it is the world's most powerful computer dedicated to geosciences."
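A rough rule of thumb shows why finer climate resolution demands so much more compute: refining the horizontal grid spacing by a factor k multiplies the number of grid columns by k squared and, through the timestep limit, the number of steps by k, so the cost grows roughly as k cubed. The 25 km baseline in the sketch below is an assumption for illustration, not a figure from the NCAR announcement.

    # Rough scaling of climate-model cost with horizontal resolution.
    # The 25 km starting resolution is assumed for illustration only.
    old_dx_km, new_dx_km = 25.0, 10.0
    k = old_dx_km / new_dx_km                # 2.5x finer grid spacing
    cost_factor = k ** 3                     # columns scale as k^2, timesteps as k
    print(f"refining {old_dx_km:.0f} km -> {new_dx_km:.0f} km costs about "
          f"{cost_factor:.0f}x more compute")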
Moon

A Supercomputer On the Moon To Direct Deep Space Traffic 166

Hugh Pickens writes "NASA currently controls its deep space missions through a network of 13 giant antennas in California, Spain, and Australia known as the Deep Space Network (DSN), but the network is obsolete and just not up to the job of transmitting the growing workload of extra-terrestrial data from deep space missions. That's why Ouliang Chang has proposed building a massive supercomputer in a deep, dark crater on the side of the moon facing away from Earth and all of its electromagnetic chatter. Nuclear-powered, it would accept signals from space, store them, process them if needed, and then relay the data back to Earth as time and bandwidth allow. The supercomputer would run in frigid regions near one of the moon's poles, where cold temperatures would make cooling it easier, and it would communicate with spaceships and Earth using a system of inflatable, steerable antennas suspended over moon craters, giving the Deep Space Network a second focal point away from Earth. As well as boosting humanity's space-borne communication abilities, Chang's presentation at a space conference (PDF) in Pasadena, California also suggests that the moon-based dishes could work in unison with those on Earth to perform very-long-baseline interferometry, which allows multiple telescopes to be combined to emulate one huge telescope. Best of all, the project has the potential to excite the imagination of future spacegoers and get men back on the moon."
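The interferometry payoff comes from the diffraction limit: angular resolution scales as observing wavelength divided by baseline length, so an Earth-Moon baseline far outperforms any Earth-bound array. A minimal sketch of that arithmetic follows; the 1 cm observing wavelength is an assumed example value, not something from Chang's proposal.

    # Minimal sketch of why an Earth-Moon baseline helps VLBI: resolution ~ wavelength / baseline.
    import math

    wavelength_m = 0.01                 # assumed 1 cm (30 GHz) observation
    earth_moon_baseline_m = 3.844e8     # mean Earth-Moon distance
    earth_baseline_m = 1.2e7            # rough upper limit for Earth-only VLBI

    for name, b in [("Earth-only VLBI", earth_baseline_m),
                    ("Earth-Moon VLBI", earth_moon_baseline_m)]:
        theta_rad = wavelength_m / b
        theta_uas = math.degrees(theta_rad) * 3600 * 1e6   # micro-arcseconds
        print(f"{name}: ~{theta_uas:.1f} micro-arcseconds")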
Supercomputing

Parallella: an Open Multi-Core CPU Architecture 103

First time accepted submitter thrae writes "Adapteva has just released the architecture and software reference manuals for their many-core Epiphany processors. Adapteva's goal is to bring massively parallel programming to the masses with a sub-$100 16-core system and a sub-$200 64-core system. The architecture has advantages over GPUs in terms of future scaling and ease of use. Adapteva is planning to make the products open source. Ars Technica has a nice overview of the project."
NASA

How Cosmological Supercomputers Evolve the Universe All Over Again 144

the_newsbeagle writes "To study the mysterious phenomena of dark matter and dark energy, astronomers are turning to supercomputers that can simulate the entire evolution of the universe. One such effort, the Bolshoi simulation, recently completed a full run-through. It started with the state the universe was in around 13.7 billion years ago (not long after the Big Bang) and modeled the evolution of dark matter and dark energy up to the present day. The run used 14,000 CPUs on NASA's fastest supercomputer."
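At heart, such runs advance enormous numbers of self-gravitating particles through time. The sketch below is a deliberately tiny direct-summation toy in arbitrary units; production codes like the one behind Bolshoi use tree or mesh gravity in expanding cosmological coordinates, which this does not attempt.

    # Minimal sketch of an N-body step: self-gravitating particles advanced
    # with a leapfrog integrator. Direct summation, toy units, tiny particle
    # count; purely illustrative of the kind of math a cosmological code does.
    import numpy as np

    rng = np.random.default_rng(2)
    G, dt, softening = 1.0, 0.01, 0.05
    n = 200
    pos = rng.uniform(-1, 1, size=(n, 3))
    vel = np.zeros((n, 3))
    mass = np.full(n, 1.0 / n)

    def accelerations(pos):
        # Pairwise gravitational acceleration with softening to avoid blow-ups.
        diff = pos[None, :, :] - pos[:, None, :]          # r_j - r_i
        dist2 = (diff ** 2).sum(-1) + softening ** 2
        inv_r3 = dist2 ** -1.5
        np.fill_diagonal(inv_r3, 0.0)
        return G * (diff * inv_r3[:, :, None] * mass[None, :, None]).sum(axis=1)

    for _ in range(100):                  # kick-drift-kick leapfrog steps
        vel += 0.5 * dt * accelerations(pos)
        pos += dt * vel
        vel += 0.5 * dt * accelerations(pos)

    print("mean particle distance from the origin after the run:",
          float(np.linalg.norm(pos, axis=1).mean()))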
