Supercomputing

Does Being First Still Matter In America? 220

Posted by timothy
from the by-jingo dept.
dcblogs writes At the supercomputing conference, SC14, this week, a U.S. Dept. of Energy official said the government has set a goal of 2023 as its delivery date for an exascale system. It may be taking a risky path with that amount of lead time because of increasing international competition. There was a time when the U.S. didn't settle for second place. President John F. Kennedy delivered his famous "we choose to go to the moon" speech in 1962, and seven years later a man walked on the moon. The U.S. exascale goal is nine years away. China, Europe and Japan all have major exascale efforts, and the U.S. has already fallen behind in supercomputing. The European forecast of Hurricane Sandy in 2012 was so far ahead of U.S. models in predicting the storm's path that the National Oceanic and Atmospheric Administration was called before Congress to explain how it happened. It was told by a U.S. official that NOAA wasn't keeping up in computational capability. It's still not keeping up. Cliff Mass, a professor of meteorology at the University of Washington, wrote on his blog last month that the U.S. is "rapidly falling behind leading weather prediction centers around the world" because it has yet to catch up with Europe in computational capability. That criticism followed the recent $128 million purchase of a Cray supercomputer by the U.K.'s Met Office, its meteorological agency.
Supercomputing

US DOE Sets Sights On 300 Petaflop Supercomputer 127

Posted by timothy
from the who-is-this-we-paleface? dept.
dcblogs writes U.S. officials Friday announced plans to spend $325 million on two new supercomputers, one of which may eventually be built to support speeds of up to 300 petaflops. The U.S. Department of Energy, the major funder of supercomputers used for scientific research, wants to have the two systems – each with a base speed of 150 petaflops – possibly running by 2017. Going beyond the base speed to reach 300 petaflops will take additional government approvals. If the world stands still, the U.S. may conceivably regain the lead in supercomputing speed from China with these new systems. How adequate this planned investment will look three years from now is a question. Lawmakers weren't reading from the same script as U.S. Energy Secretary Ernest Moniz when it came to assessing the U.S.'s place in the supercomputing world. Moniz said the awards "will ensure the United States retains global leadership in supercomputing." But Rep. Chuck Fleischmann (R-Tenn.) put U.S. leadership in the past tense. "Supercomputing is one of those things that we can step up and lead the world again," he said.
Earth

Interviews: Ask CMI Director Alex King About Rare Earth Mineral Supplies 62

Posted by timothy
from the dude-I-loved-their-2nd-album dept.
The modern electronics industry relies on inputs and supply chains, both material and technological, and none of them are easy to bypass. These include, besides expertise and manufacturing facilities, the actual materials that go into electronic components. Some of them are as common as silicon; rare earth minerals, not so much. One story linked from Slashdot a few years back predicted that then-known supplies would be exhausted by 2017, though such predictions of scarcity are notoriously hard to get right, as people (and prices) adjust to changes in supply. There's no denying that there's been a crunch on rare earths, though, over the last several years. The minerals themselves aren't necessarily rare in an absolute sense, but they're expensive to extract. The most economically viable deposits are found in China, and rising prices for exports to the U.S., the EU, and Japan have raised political hackles. At the same time, those rising prices have spurred exploration and reexamination of known deposits off the coast of Japan, in the midwestern U.S., and elsewhere.

Alex King is director of the Critical Materials Institute, a part of the U.S. Department of Energy's Ames Laboratory. CMI is heavily involved in making rare earth minerals slightly less rare by means of supercomputer analysis; researchers there are approaching the ongoing crunch by looking both for substitute materials for things like gallium, indium, and tantalum, and easier ways of separating out the individual rare earths (a difficult process). One team there is working with "ligands – molecules that attach with a specific rare-earth – that allow metallurgists to extract elements with minimal contamination from surrounding minerals" to simplify the extraction process. We'll be talking with King soon; what questions would you like to see posed? (This 18-minute TED talk from King is worth watching first, as is this Q&A.)
Supercomputing

16-Petaflops, £97m Cray To Replace IBM At UK Meteorological Office 125

Posted by Soulskill
from the crayzy-powerful dept.
Memetic writes: The UK weather forecasting service is replacing its IBM supercomputer with a Cray XC40 containing 17 petabytes of storage and capable of 16 PetaFLOPS. This is Cray's biggest contract outside the U.S. With 480,000 CPUs, it should be 13 times faster than the current system. It will weigh 140 tons. The aim is to enable more accurate modeling of the unstable UK climate, with UK-wide forecasts at a resolution of 1.5km run hourly, rather than every three hours, as currently happens. (Here's a similar system from the U.S.)
Supercomputing

Supercomputing Upgrade Produces High-Resolution Storm Forecasts 77

Posted by samzenpus
from the clearer-pictures dept.
dcblogs writes A supercomputer upgrade is paying off for the U.S. National Weather Service, with new high-resolution models that will offer better insight into severe weather. The National Oceanic and Atmospheric Administration, which runs the weather service, put into production two new IBM supercomputers, each 213 teraflops, running Linux on Intel processors. These systems replaced four-year-old, 74-teraflop systems. More computing power means the systems can run more mathematics, increasing the resolution or detail of the maps from 8 miles to 2 miles.
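The jump from 8-mile to 2-mile resolution gives a feel for why a modest-sounding detail improvement demands a large hardware upgrade. As a rough back-of-the-envelope sketch (an illustrative rule of thumb, not NOAA's actual cost accounting):

```python
def compute_cost_factor(old_spacing_miles, new_spacing_miles):
    """Rough rule of thumb for gridded weather models: refining the
    horizontal grid spacing by a factor r multiplies the number of
    grid points by r**2 and (via the CFL stability condition) forces
    roughly r times more timesteps, so raw compute cost grows roughly
    as r**3. This ignores vertical levels, physics packages, and I/O."""
    r = old_spacing_miles / new_spacing_miles
    return r ** 3

# Going from 8-mile to 2-mile cells is a 4x refinement,
# or roughly a 64x increase in raw compute cost.
```

By this crude estimate, the resolution bump alone wants far more than the roughly 3x flops the new systems provide, which is why such upgrades usually arrive alongside model and efficiency improvements.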
Cloud

IBM Opens Up Its Watson Supercomputer To Researchers 28

Posted by samzenpus
from the try-it-out dept.
An anonymous reader writes IBM has announced the "Watson Discovery Advisor," a cloud-based tool that will let researchers comb through massive troves of data, looking for insights and connections. The company says it's a major expansion in capabilities for the Watson Group, which IBM seeded with a $1 billion investment. "Scientific discovery takes us to a different level as a learning system," said Steve Gold, vice president of the Watson Group. "Watson can provide insights into the information independent of the question. The ability to connect the dots opens up a new world of possibilities."
Supercomputing

How a Supercomputer Beat the Scrap Heap and Lived On To Retire In Africa 145

Posted by Unknown Lamer
from the spread-the-computing dept.
New submitter jorge_salazar (3562633) writes Pieces of the decommissioned Ranger supercomputer, 40 racks in all, were shipped to researchers in South Africa, Tanzania, and Botswana to help seed their supercomputing aspirations. They say they'll need supercomputers to solve their growing science problems in astronomy, bioinformatics, climate modeling and more. Ranger's own beginnings were described by the co-founder of Sun Microsystems as a "historic moment in petaflop computing."
Programming

545-Person Programming War Declares a Winner 57

Posted by Soulskill
from the bring-me-the-severed-subroutine-of-your-fallen-foe dept.
An anonymous reader writes: A while back we discussed Code Combat, a multiplayer game that lets players program their way to victory. They recently launched a tournament called Greed, where coders had to write algorithms for competitively collecting coins. 545 programmers participated, submitting over 126,000 lines of code, which resulted in 390 billion statements being executed on a 673-core supercomputer. The winner, going by the name of "Wizard Dude," won 363 matches, tied 14, and lost none! He explains his strategy: "My coin-collecting algorithm uses a novel forces-based mechanism to control movement. Each coin on the map applies an attractive force on collectors (peasants/peons) proportional to its value over distance squared. Allied collectors and the arena edges apply a repulsive force, pushing other collectors away. The sum of these forces produces a vector indicating the direction in which the collector should move this turn. The result is that: 1) collectors naturally move towards clusters of coins that give the greatest overall payoff, 2) collectors spread out evenly to cover territory. Additionally, the value of each coin is scaled depending on its distance from the nearest enemy collector, weighting in favor of coins with an almost even distance. This encourages collectors not to chase lost coins, but to deprive the enemy of contested coins first and leave safer coins for later."
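The winner's description maps naturally onto a small vector-sum routine. Here is a minimal Python sketch of the forces mechanism he describes (illustrative only: the tournament was played in Code Combat's own scripting environment, and the function names and the enemy-distance scaling below are hypothetical):

```python
import math

def move_direction(collector, coins, allies, enemies, repulse_strength=1.0):
    """Compute a unit movement vector for one collector: each coin
    attracts with force (scaled value) / distance**2, while allied
    collectors repel so the team spreads out over the territory."""
    fx = fy = 0.0
    cx, cy = collector
    for (x, y, value) in coins:
        dx, dy = x - cx, y - cy
        d2 = dx * dx + dy * dy
        if d2 == 0:
            continue
        d = math.sqrt(d2)
        # Scale coin value down when an enemy collector is closer to it,
        # so we contest close coins first (hypothetical scaling rule).
        if enemies:
            enemy_d = min(math.hypot(x - ex, y - ey) for ex, ey in enemies)
            value *= enemy_d / (enemy_d + d)
        fx += value * dx / (d2 * d)   # attractive force, magnitude value/d^2
        fy += value * dy / (d2 * d)
    for (ax, ay) in allies:
        dx, dy = cx - ax, cy - ay
        d2 = dx * dx + dy * dy
        if d2 == 0:
            continue
        d = math.sqrt(d2)
        fx += repulse_strength * dx / (d2 * d)  # repulsion pushes away from allies
        fy += repulse_strength * dy / (d2 * d)
    norm = math.hypot(fx, fy)
    return (fx / norm, fy / norm) if norm else (0.0, 0.0)
```

The appeal of the design is that neither clustering on rich coin patches nor spreading out is coded explicitly; both emerge from summing the attractive and repulsive terms.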
Science

Making Graphene Work For Real-World Devices 18

Posted by Soulskill
from the more-better-faster-lighter-cheaper dept.
aarondubrow writes: "Graphene, a one-atom-thick form of the carbon material graphite, is strong, light, nearly transparent and an excellent conductor of electricity and heat, but a number of practical challenges must be overcome before it can emerge as a replacement for silicon in electronics or energy devices. One particular challenge concerns the question of how graphene diffuses heat, in the form of phonons. Thermal conductivity is critical in electronics, especially as components shrink to the nanoscale. Using the Stampede supercomputer at the Texas Advanced Computing Center, Professor Li Shi simulated how phonons (heat-carrying vibrations in solids) scatter as a function of the thickness of the graphene layers. He also investigated how graphene interacts with substrate materials and how phonon scattering can be controlled. The results were published in the Proceedings of the National Academy of Sciences, Applied Physics Letters and Energy & Environmental Science."
Intel

Intel and SGI Test Full-Immersion Cooling For Servers 102

Posted by samzenpus
from the cooling-it-down dept.
itwbennett (1594911) writes "Intel and SGI have built a proof-of-concept supercomputer that's kept cool using a fluid developed by 3M called Novec that is already used in fire suppression systems. The technology, which could replace fans and eliminate the need to use tons of municipal water to cool data centers, has the potential to slash data-center energy bills by more than 90 percent, said Michael Patterson, senior power and thermal architect at Intel. But there are several challenges, including the need to design new motherboards and servers."
Supercomputing

Pentago Is a First-Player Win 136

Posted by timothy
from the heads-I-win-tails-you-lose dept.
First time accepted submitter jwpeterson writes "Like chess and go, pentago is a two player, deterministic, perfect knowledge, zero sum game: there is no random or hidden state, and the goal of the two players is to make the other player lose (or at least tie). Unlike chess and go, pentago is small enough for a computer to play perfectly: with symmetries removed, there are a mere 3,009,081,623,421,558 (3e15) possible positions. Thus, with the help of several hours on 98304 threads of Edison, a Cray supercomputer at NERSC, pentago is now strongly solved. 'Strongly' means that perfect play is efficiently computable for any position. For example, the first player wins."
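"Strongly solved" means the game-theoretic value of any position can be computed efficiently, not just the opening. The same idea can be shown at a vastly smaller scale with a memoized solver (a toy illustration using simple Nim, not pentago; the real computation required symmetry reduction and 98304 threads of Edison):

```python
from functools import lru_cache

# Illustrative only: a memoized perfect-play solver in the style used
# for small solved games. This toy solves Nim where each player takes
# 1-3 stones and the player who takes the last stone wins.
@lru_cache(maxsize=None)
def first_player_wins(stones):
    """Return True if the player to move wins with perfect play."""
    if stones == 0:
        return False  # the previous player took the last stone; the mover has lost
    # The mover wins if any legal move leaves the opponent in a losing position.
    return any(not first_player_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)
```

The memo table is the analogue of pentago's position database: once filled, answering "who wins from here?" for any position is a lookup rather than a search.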
Programming

Ask Slashdot: What's the Most Often-Run Piece of Code -- Ever? 533

Posted by Soulskill
from the and-how-quickly-could-EC2-win-the-crown dept.
Hugo Villeneuve writes "What piece of code, in a non-assembler format, has been run the most often, ever, on this planet? By 'most often,' I mean the highest number of executions, regardless of CPU type. For the code in question, let's set a lower limit of 3 consecutive lines. For example, is it:
  • A UNIX kernel context switch?
  • A SHA2 algorithm for Bitcoin mining on an ASIC?
  • A scientific calculation running on a supercomputer?
  • A 'for-loop' inside an obscure microcontroller that runs in all GE appliances since the '60s?"
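For the Bitcoin candidate, the contested inner loop is tiny: hash a block header twice with SHA-256 and compare the result against a target. A minimal Python sketch of that loop (illustrative only; real miners hash 80-byte headers in dedicated ASIC hardware, and the header layout here is simplified to a base plus a 4-byte nonce):

```python
import hashlib
import struct

def double_sha256(data: bytes) -> bytes:
    """Bitcoin hashes block headers with SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header_base: bytes, target: int, max_nonce: int = 1_000_000):
    """Toy mining loop: try successive nonces until the double
    SHA-256 of the header falls below the target. ASICs execute
    this same inner loop billions of times per second."""
    for nonce in range(max_nonce):
        header = header_base + struct.pack('<I', nonce)
        # Bitcoin compares the digest, read as a little-endian integer,
        # against the difficulty target.
        if int.from_bytes(double_sha256(header), 'little') < target:
            return nonce
    return None
```

Given that the global mining network executes this loop on the order of quintillions of times per second, it is a serious contender for the question posed above.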
IBM

IBM Dumping $1 Billion Into New Watson Group 182

Posted by samzenpus
from the eggs-in-one-basket dept.
Nerval's Lobster writes "IBM believes its Watson supercomputing platform is much more than a gameshow-winning gimmick: its executives are betting very big that the software will fundamentally change how people and industries compute. In the beginning, IBM assigned 27 core researchers to the then-nascent Watson. Working diligently, those scientists and developers built a tough 'Jeopardy!' competitor. Encouraged by that success on live television, Big Blue devoted a larger team to commercializing the technology—a group it made a point of hiding in Austin, Texas, so its members could better focus on hardcore research. After years of experimentation, IBM is now prepping Watson to go truly mainstream. As part of that upgraded effort (which includes lots of hype-generating), IBM will devote a billion dollars and thousands of researchers to a dedicated Watson Group, based in New York City at 51 Astor Place. The company plans on pouring another $100 million into an equity fund for Watson's growing app ecosystem. If everything goes according to IBM's plan, Watson will help kick off what CEO Ginni Rometty refers to as a third era in computing. The 19th century saw the rise of a "tabulating" era: the birth of machines designed to count. In the latter half of the 20th century, developers and scientists initiated the 'programmable' era—resulting in PCs, mobile devices, and the Internet. The third (potential) era is 'cognitive,' in which computers become adept at understanding and solving, in a very human way, some of society's largest problems. But no matter how well Watson can read, understand and analyze, the platform will need to earn its keep. Will IBM's clients pay lots of money for all that cognitive power? Or will Watson ultimately prove an overhyped sideshow?"
Supercomputing

Using Supercomputers To Find a Bacterial "Off" Switch 30

Posted by samzenpus
from the bug-crunching dept.
Nerval's Lobster writes "The comparatively recent addition of supercomputing to the toolbox of biomedical research may already have paid off in a big way: Researchers have used a bio-specialized supercomputer to identify a molecular 'switch' that might be used to turn off bad behavior by pathogens. They're now trying to figure out what to do with that discovery by running even bigger tests on the world's second-most-powerful supercomputer. The 'switch' is a pair of amino acids called Phe396 that helps control the ability of the E. coli bacteria to move under its own power. Phe396 sits on a chemoreceptor that extends through the cell wall, so it can pass information about changes in the local environment to proteins on the inside of the cell. Its role was discovered by a team of researchers from the University of Tennessee and the ORNL Joint Institute for Computational Sciences using a specialized supercomputer called Anton, which was built specifically to simulate biomolecular interactions among proteins and other molecules to give researchers a better way to study details of how molecules interact. 'For decades proteins have been viewed as static molecules, and almost everything we know about them comes from static images, such as those produced with X-ray crystallography,' according to Igor Zhulin, a researcher at ORNL and professor of microbiology at UT, in whose lab the discovery was made. 'But signaling is a dynamic process, which is difficult to fully understand using only snapshots.'"
Hardware

Elevation Plays a Role In Memory Error Rates 190

Posted by Soulskill
from the another-reason-not-to-calculate-prime-numbers-on-mt.-everest dept.
alphadogg writes "With memory, as with real estate, location matters. A group of researchers from AMD and the Department of Energy's Los Alamos National Laboratory have found that the altitude at which SRAM resides can influence how many random errors the memory produces. In a field study of two high-performance computers, the researchers found that L2 and L3 caches had more transient errors on the supercomputer located at a higher altitude, compared with the one closer to sea level. They attributed the disparity largely to lower air pressure and higher cosmic ray-induced neutron strikes. Strangely, higher elevation even led to more errors within a rack of servers, the researchers found. Their tests showed that memory modules on the top of a server rack had 20 percent more transient errors than those closer to the bottom of the rack. However, it's not clear what causes this smaller-scale effect."
Supercomputing

Warning At SC13 That Supercomputing Will Plateau Without a Disruptive Technology 118

Posted by Unknown Lamer
from the series-of-nanotubes dept.
dcblogs writes "At this year's supercomputing conference, SC13, there is worry that supercomputing faces a performance plateau unless a disruptive processing tech emerges. 'We have reached the end of the technological era' of CMOS, said William Gropp, chairman of the SC13 conference and a computer science professor at the University of Illinois at Urbana-Champaign. Gropp likened the supercomputer development terrain today to the advent of CMOS, the foundation of today's standard semiconductor technology. The arrival of CMOS was disruptive, but it fostered an expansive age of computing. The problem is 'we don't have a technology that is ready to be adopted as a replacement for CMOS,' said Gropp. 'We don't have anything at the level of maturity that allows you to bet your company on.' Peter Beckman, a top computer scientist at the Department of Energy's Argonne National Laboratory, and head of an international exascale software effort, said large supercomputer system prices have topped off at about $100 million 'so performance gains are not going to come from getting more expensive machines, because these are already incredibly expensive and powerful. So unless the technology really has some breakthroughs, we are imagining a slowing down.'" Carbon nanotube-based processors, though, are showing promise (Stanford project page; the group is at SC13 giving a talk about their MIPS CNT processor).
IBM

IBM To Offer Watson Services In the Cloud 56

Posted by samzenpus
from the silver-lining dept.
jfruh writes "Have you ever wanted to write code for Watson, IBM's Jeopardy-winning supercomputer? Well, now you can, sort of. Big Blue has created a standardized server that runs Watson's unique learning and language-recognition software, and will be selling developers access to these boxes as a cloud-based service. No pricing has been announced yet."
Cloud

1.21 PetaFLOPS (RPeak) Supercomputer Created With EC2 54

Posted by Unknown Lamer
from the when-do-we-get-to-jigga-flops dept.
An anonymous reader writes "In honor of Doc Brown, Great Scott! Ars has an interesting article about a 1.21 PetaFLOPS (RPeak) supercomputer created on Amazon EC2 Spot Instances. From HPC software company Cycle Computing's blog, it ran Professor Mark Thompson's research to find new, more efficient materials for solar cells. As Professor Thompson puts it: 'If the 20th century was the century of silicon materials, the 21st will be all organic. The question is how to find the right material without spending the entire 21st century looking for it.' El Reg points out this 'virty' super's low cost. Will the cloud democratize access to HPC for research?"
Supercomputing

Scientists Using Supercomputers To Puzzle Out Dinosaur Movement 39

Posted by Soulskill
from the turns-out-they-sucked-at-ballet dept.
Nerval's Lobster writes "Scientists at the University of Manchester in England figured out how the largest animal ever to walk on Earth, the 80-ton Argentinosaurus, actually walked on earth. Researchers led by Bill Sellers, Rodolfo Coria and Lee Margetts at the N8 High Performance Computing facility in northern England used a 320 gigaflop/second SGI High Performance Computing Cluster supercomputer called Polaris to model the skeleton and movements of Argentinosaurus. The animal was able to reach a top speed of about 5 mph, with 'a slow, steady gait,' according to the team (PDF). Extrapolating from a few feet of bone, paleontologists were able to estimate the beast weighed between 80 and 100 tons and grew up to 115 feet in length. Polaris not only allowed the team to model the missing parts of the dinosaur and make them move, it did so quickly enough to beat the deadline for PLOS ONE Special Collection on Sauropods, a special edition of the site focusing on new research on sauropods that 'is likely to be the "de facto" international reference for Sauropods for decades to come,' according to a statement from the N8 HPC center. The really exciting thing, according to Coria, was how well Polaris was able to fill in the gaps left by the fossil records. 'It is frustrating there was so little of the original dinosaur fossilized, making any reconstruction difficult,' he said, despite previous research that established some rules of weight distribution, movement and the limits of dinosaurs' biological strength."
