
Supercomputing

How a Supercomputer Beat the Scrap Heap and Lived On To Retire In Africa 145

Posted by Unknown Lamer
from the spread-the-computing dept.
New submitter jorge_salazar (3562633) writes: "Pieces of the decommissioned Ranger supercomputer, 40 racks in all, were shipped to researchers in South Africa, Tanzania, and Botswana to help seed their supercomputing aspirations. They say they'll need supercomputers to solve their growing science problems in astronomy, bioinformatics, climate modeling and more. Ranger's own beginnings were described by the co-founder of Sun Microsystems as a 'historic moment in petaflop computing.'"
Programming

545-Person Programming War Declares a Winner 57

Posted by Soulskill
from the bring-me-the-severed-subroutine-of-your-fallen-foe dept.
An anonymous reader writes: A while back we discussed Code Combat, a multiplayer game that lets players program their way to victory. They recently launched a tournament called Greed, where coders had to write algorithms for competitively collecting coins. 545 programmers participated, submitting over 126,000 lines of code, which resulted in 390 billion statements being executed on a 673-core supercomputer. The winner, going by the name of "Wizard Dude," won 363 matches, tied 14, and lost none! He explains his strategy: "My coin-collecting algorithm uses a novel forces-based mechanism to control movement. Each coin on the map applies an attractive force on collectors (peasants/peons) proportional to its value over distance squared. Allied collectors and the arena edges apply a repulsive force, pushing other collectors away. The sum of these forces produces a vector indicating the direction in which the collector should move this turn. The result is that: 1) collectors naturally move towards clusters of coins that give the greatest overall payoff, 2) collectors spread out evenly to cover territory. Additionally, the value of each coin is scaled depending on its distance from the nearest enemy collector, weighting in favor of coins with an almost even distance. This encourages collectors not to chase lost coins, but to deprive the enemy of contested coins first and leave safer coins for later."
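The forces-based controller described above can be sketched in a few lines of Python. The snippet below is only an illustration of the idea, not the winning entry: the coin and collector objects (assumed to have x, y and value attributes) and the exact weighting formula are assumptions.

    import math

    def move_direction(me, allies, coins, enemies, arena_size):
        """Sketch of a forces-based controller: coins attract with a force
        proportional to value / distance^2, allies and arena edges repel, and
        the normalized sum of forces is the direction to move this turn."""
        fx = fy = 0.0

        for coin in coins:
            dx, dy = coin.x - me.x, coin.y - me.y
            d2 = dx * dx + dy * dy + 1e-6
            if enemies:
                enemy_d = min(math.hypot(coin.x - e.x, coin.y - e.y) for e in enemies)
                # Favor coins roughly equidistant from us and the nearest enemy
                # (one plausible reading of the "almost even distance" weighting).
                weight = coin.value / (1.0 + abs(math.sqrt(d2) - enemy_d))
            else:
                weight = coin.value
            fx += weight * dx / d2
            fy += weight * dy / d2

        for ally in allies:
            if ally is me:
                continue
            dx, dy = me.x - ally.x, me.y - ally.y
            d2 = dx * dx + dy * dy + 1e-6
            fx += dx / d2  # repulsion from allies spreads collectors over the map
            fy += dy / d2

        # Arena edges push the collector back toward the interior.
        fx += 1.0 / (me.x + 1e-6) - 1.0 / (arena_size - me.x + 1e-6)
        fy += 1.0 / (me.y + 1e-6) - 1.0 / (arena_size - me.y + 1e-6)

        norm = math.hypot(fx, fy) or 1.0
        return fx / norm, fy / norm  # unit vector: this turn's heading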
Science

Making Graphene Work For Real-World Devices 18

Posted by Soulskill
from the more-better-faster-lighter-cheaper dept.
aarondubrow writes: "Graphene, a one-atom-thick form of the carbon material graphite, is strong, light, nearly transparent and an excellent conductor of electricity and heat, but a number of practical challenges must be overcome before it can emerge as a replacement for silicon in electronics or energy devices. One particular challenge concerns the question of how graphene diffuses heat, in the form of phonons. Thermal conductivity is critical in electronics, especially as components shrink to the nanoscale. Using the Stampede supercomputer at the Texas Advanced Computing Center, Professor Li Shi simulated how phonons (heat-carrying vibrations in solids) scatter as a function of the thickness of the graphene layers. He also investigated how graphene interacts with substrate materials and how phonon scattering can be controlled. The results were published in the Proceedings of the National Academy of Sciences, Applied Physics Letters and Energy & Environmental Science."
Intel

Intel and SGI Test Full-Immersion Cooling For Servers 102

Posted by samzenpus
from the cooling-it-down dept.
itwbennett (1594911) writes "Intel and SGI have built a proof-of-concept supercomputer that's kept cool using a fluid developed by 3M called Novec that is already used in fire suppression systems. The technology, which could replace fans and eliminate the need to use tons of municipal water to cool data centers, has the potential to slash data-center energy bills by more than 90 percent, said Michael Patterson, senior power and thermal architect at Intel. But there are several challenges, including the need to design new motherboards and servers."
Supercomputing

Pentago Is a First-Player Win 136

Posted by timothy
from the heads-I-win-tails-you-lose dept.
First time accepted submitter jwpeterson writes "Like chess and go, pentago is a two player, deterministic, perfect knowledge, zero sum game: there is no random or hidden state, and the goal of the two players is to make the other player lose (or at least tie). Unlike chess and go, pentago is small enough for a computer to play perfectly: with symmetries removed, there are a mere 3,009,081,623,421,558 (3e15) possible positions. Thus, with the help of several hours on 98304 threads of Edison, a Cray supercomputer at NERSC, pentago is now strongly solved. 'Strongly' means that perfect play is efficiently computable for any position. For example, the first player wins."
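For readers unfamiliar with the terminology: 'strongly solved' means the outcome and a best move are computable for every legal position, not just the opening one. The pentago database itself is enormous, but the underlying idea can be illustrated with a memoized minimax on a much smaller game; the Python sketch below uses a toy Nim variant purely as a stand-in, not the actual solver.

    from functools import lru_cache

    # Toy strong solution of Nim: players alternately take 1-3 stones and
    # whoever takes the last stone wins. Once the cache is filled, the value
    # and best move of *every* position are available on demand, which is
    # what "strongly solved" means.

    @lru_cache(maxsize=None)
    def solve(stones):
        """Return (value, best_take): value is +1 if the side to move wins
        with perfect play, -1 if it loses."""
        if stones == 0:
            return -1, None  # the opponent just took the last stone
        best = (-2, None)
        for take in (1, 2, 3):
            if take <= stones:
                opp_value, _ = solve(stones - take)
                if -opp_value > best[0]:
                    best = (-opp_value, take)
        return best

    for n in range(1, 9):
        value, move = solve(n)
        print(n, "win" if value > 0 else "loss", "take", move)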
Programming

Ask Slashdot: What's the Most Often-Run Piece of Code -- Ever? 533

Posted by Soulskill
from the and-how-quickly-could-EC2-win-the-crown dept.
Hugo Villeneuve writes "What piece of code, in a non-assembler format, has been run the most often, ever, on this planet? By 'most often,' I mean the highest number of executions, regardless of CPU type. For the code in question, let's set a lower limit of 3 consecutive lines. For example, is it:
  • A UNIX kernel context switch?
  • A SHA2 algorithm for Bitcoin mining on an ASIC?
  • A scientific calculation running on a supercomputer?
  • A 'for-loop' inside an obscure microcontroller that runs in all GE appliances since the '60s?"
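To make the Bitcoin candidate concrete: the hot loop of a miner is a double SHA-256 of an 80-byte block header, repeated with different nonces until the digest falls below a difficulty target. The Python below is a toy sketch of that loop; the all-zero header prefix and the easy target are placeholders, and real miners run this on ASICs, not in Python.

    import hashlib
    import struct

    def mine(header_prefix: bytes, target: int, max_nonce: int = 1_000_000):
        """Toy Bitcoin-style mining loop: append a 4-byte little-endian nonce
        to the 76-byte header prefix, double-SHA-256 the result, and stop when
        the digest, read as a little-endian integer, is below the target."""
        for nonce in range(max_nonce):
            header = header_prefix + struct.pack("<I", nonce)
            digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
            if int.from_bytes(digest, "little") < target:
                return nonce, digest.hex()
        return None, None

    # Placeholder 76-byte prefix; a real header encodes version, previous
    # block hash, merkle root, timestamp and difficulty bits.
    nonce, digest = mine(b"\x00" * 76, target=2 ** 240)
    print(nonce, digest)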
IBM

IBM Dumping $1 Billion Into New Watson Group 182

Posted by samzenpus
from the eggs-in-one-basket dept.
Nerval's Lobster writes "IBM believes its Watson supercomputing platform is much more than a gameshow-winning gimmick: its executives are betting very big that the software will fundamentally change how people and industries compute. In the beginning, IBM assigned 27 core researchers to the then-nascent Watson. Working diligently, those scientists and developers built a tough 'Jeopardy!' competitor. Encouraged by that success on live television, Big Blue devoted a larger team to commercializing the technology—a group it made a point of hiding in Austin, Texas, so its members could better focus on hardcore research. After years of experimentation, IBM is now prepping Watson to go truly mainstream. As part of that upgraded effort (which includes lots of hype-generating), IBM will devote a billion dollars and thousands of researchers to a dedicated Watson Group, based in New York City at 51 Astor Place. The company plans on pouring another $100 million into an equity fund for Watson's growing app ecosystem. If everything goes according to IBM's plan, Watson will help kick off what CEO Ginni Rometty refers to as a third era in computing. The 19th century saw the rise of a "tabulating" era: the birth of machines designed to count. In the latter half of the 20th century, developers and scientists initiated the 'programmable' era—resulting in PCs, mobile devices, and the Internet. The third (potential) era is 'cognitive,' in which computers become adept at understanding and solving, in a very human way, some of society's largest problems. But no matter how well Watson can read, understand and analyze, the platform will need to earn its keep. Will IBM's clients pay lots of money for all that cognitive power? Or will Watson ultimately prove an overhyped sideshow?"
Supercomputing

Using Supercomputers To Find a Bacterial "Off" Switch 30

Posted by samzenpus
from the bug-crunching dept.
Nerval's Lobster writes "The comparatively recent addition of supercomputing to the toolbox of biomedical research may already have paid off in a big way: Researchers have used a bio-specialized supercomputer to identify a molecular 'switch' that might be used to turn off bad behavior by pathogens. They're now trying to figure out what to do with that discovery by running even bigger tests on the world's second-most-powerful supercomputer. The 'switch' is a pair of amino acids called Phe396 that helps control the ability of the E. coli bacteria to move under its own power. Phe396 sits on a chemoreceptor that extends through the cell wall, so it can pass information about changes in the local environment to proteins on the inside of the cell. Its role was discovered by a team of researchers from the University of Tennessee and the ORNL Joint Institute for Computational Sciences using a specialized supercomputer called Anton, which was built specifically to simulate biomolecular interactions among proteins and other molecules to give researchers a better way to study details of how molecules interact. 'For decades proteins have been viewed as static molecules, and almost everything we know about them comes from static images, such as those produced with X-ray crystallography,' according to Igor Zhulin, a researcher at ORNL and professor of microbiology at UT, in whose lab the discovery was made. 'But signaling is a dynamic process, which is difficult to fully understand using only snapshots.'"
Hardware

Elevation Plays a Role In Memory Error Rates 190

Posted by Soulskill
from the another-reason-not-to-calculate-prime-numbers-on-mt.-everest dept.
alphadogg writes "With memory, as with real estate, location matters. A group of researchers from AMD and the Department of Energy's Los Alamos National Laboratory have found that the altitude at which SRAM resides can influence how many random errors the memory produces. In a field study of two high-performance computers, the researchers found that L2 and L3 caches had more transient errors on the supercomputer located at a higher altitude, compared with the one closer to sea level. They attributed the disparity largely to lower air pressure and a higher rate of cosmic-ray-induced neutron strikes. Strangely, elevation mattered even within a single rack of servers, the researchers found. Their tests showed that memory modules at the top of a server rack had 20 percent more transient errors than those closer to the bottom of the rack. However, it's not clear what causes this smaller-scale effect."
Supercomputing

Warning At SC13 That Supercomputing Will Plateau Without a Disruptive Technology 118

Posted by Unknown Lamer
from the series-of-nanotubes dept.
dcblogs writes "At this year's supercomputing conference, SC13, there is worry that supercomputing faces a performance plateau unless a disruptive processing tech emerges. 'We have reached the end of the technological era' of CMOS, said William Gropp, chairman of the SC13 conference and a computer science professor at the University of Illinois at Urbana-Champaign. Gropp likened the supercomputer development terrain today to the advent of CMOS, the foundation of today's standard semiconductor technology. The arrival of CMOS was disruptive, but it fostered an expansive age of computing. The problem is 'we don't have a technology that is ready to be adopted as a replacement for CMOS,' said Gropp. 'We don't have anything at the level of maturity that allows you to bet your company on.' Peter Beckman, a top computer scientist at the Department of Energy's Argonne National Laboratory, and head of an international exascale software effort, said large supercomputer system prices have topped off at about $100 million 'so performance gains are not going to come from getting more expensive machines, because these are already incredibly expensive and powerful. So unless the technology really has some breakthroughs, we are imagining a slowing down.'" Carbon nanotube-based processors are showing promise, though (Stanford project page; the group is at SC13 giving a talk about their MIPS CNT processor).
IBM

IBM To Offer Watson Services In the Cloud 56

Posted by samzenpus
from the silver-lining dept.
jfruh writes "Have you ever wanted to write code for Watson, IBM's Jeopardy-winning supercomputer? Well, now you can, sort of. Big Blue has created a standardized server that runs Watson's unique learning and language-recognition software, and will be selling developers access to these boxes as a cloud-based service. No pricing has been announced yet."
Cloud

1.21 PetaFLOPS (RPeak) Supercomputer Created With EC2 54

Posted by Unknown Lamer
from the when-do-we-get-to-jigga-flops dept.
An anonymous reader writes "In honor of Doc Brown, Great Scott! Ars has an interesting article about a 1.21 PetaFLOPS (RPeak) supercomputer created on Amazon EC2 Spot Instances. From HPC software company Cycle Computing's blog, it ran Professor Mark Thompson's research to find new, more efficient materials for solar cells. As Professor Thompson puts it: 'If the 20th century was the century of silicon materials, the 21st will be all organic. The question is how to find the right material without spending the entire 21st century looking for it.' El Reg points out this 'virty super's low cost.' Will cloud democratize access to HPC for research?"
Supercomputing

Scientists Using Supercomputers To Puzzle Out Dinosaur Movement 39

Posted by Soulskill
from the turns-out-they-sucked-at-ballet dept.
Nerval's Lobster writes "Scientists at the University of Manchester in England figured out how the largest animal ever to walk on Earth, the 80-ton Argentinosaurus, actually walked on earth. Researchers led by Bill Sellers, Rudolfo Coria and Lee Margetts at the N8 High Performance Computing facility in northern England used a 320 gigaflop/second SGI High Performance Computing Cluster supercomputer called Polaris to model the skeleton and movements of Argentinosaurus. The animal was able to reach a top speed of about 5 mph, with 'a slow, steady gait,' according to the team (PDF). Extrapolating from a few feet of bone, paleontologists were able to estimate the beast weighed between 80 and 100 tons and grew up to 115 feet in length. Polaris not only allowed the team to model the missing parts of the dinosaur and make them move, it did so quickly enough to beat the deadline for PLOS ONE Special Collection on Sauropods, a special edition of the site focusing on new research on sauropods that 'is likely to be the "de facto" international reference for Sauropods for decades to come,' according to a statement from the N8 HPC center. The really exciting thing, according to Coria, was how well Polaris was able to fill in the gaps left by the fossil records. 'It is frustrating there was so little of the original dinosaur fossilized, making any reconstruction difficult,' he said, despite previous research that established some rules of weight distribution, movement and the limits of dinosaurs' biological strength."
Supercomputing

National Weather Service Upgrades Storm-Tracking Supercomputers 34

Posted by Soulskill
from the maybe-the-weatherman-will-stop-lying-to-me-now dept.
Nerval's Lobster writes "Just in time for hurricane season, the National Weather Service has finished upgrading the supercomputers it uses to track and model super-storms. 'These improvements are just the beginning and build on our previous success. They lay the foundation for further computing enhancements and more accurate forecast models that are within reach,' National Weather Service director Louis W. Uccellini wrote in a statement. The National Weather Service's 'Tide' supercomputer — along with its 'Gyre' backup — are capable of operating at a combined 213 teraflops. The National Oceanic and Atmospheric Administration (NOAA), which runs the Service, has asked for funding that would increase that supercomputing power even more, to 1,950 teraflops. The National Weather Service uses that hardware for projects such as the Hurricane Weather Research and Forecasting (HWRF) model, a complex bit of forecasting that allows the organization to more accurately predict storms' intensity and movement. The HWRF can leverage real-time data taken from Doppler radar installed in the NOAA's P3 hurricane hunter aircraft."
Supercomputing

Supercomputer Becomes Massive Router For Global Radio Telescope 60

Posted by Soulskill
from the go-big-or-go-home dept.
Nerval's Lobster writes "Astrophysicists at MIT and the Pawsey supercomputing center in Western Australia have discovered a whole new role for supercomputers working on big-data science projects: They've figured out how to turn a supercomputer into a router. (Make that a really, really big router.) The supercomputer in this case is a Cray Cascade system with a top performance of 0.3 petaflops — to be expanded to 1.2 petaflops in 2014 — running on a combination of Intel Ivy Bridge, Haswell and MIC processors. The machine, which is still being installed at the Pawsey Centre in Kensington, Western Australia and isn't scheduled to become operational until later this summer, had to go to work early after researchers switched on the world's most sensitive radio telescope June 9. The Murchison Widefield Array is a 2,000-antenna radio telescope located at the Murchison Radio-astronomy Observatory (MRO) in Western Australia, built with the backing of universities in the U.S., Australia, India and New Zealand. Though it is the most powerful radio telescope in the world right now, it is only one-third of the Square Kilometer Array — a spread of low-frequency antennas that will be spread across a kilometer of territory in Australia and Southern Africa. It will be 50 times as sensitive as any other radio telescope and 10,000 times as quick to survey a patch of sky. By comparison, the Murchison Widefield Array is a tiny little thing stuck out as far in the middle of nowhere as Australian authorities could find to keep it as far away from terrestrial interference as possible. Tiny or not, the MWA can look farther into the past of the universe than any other human instrument to date. What it has found so far is data — lots and lots of data. More than 400 megabytes of data per second come from the array to the Murchison observatory, before being streamed across 500 miles of Australia's National Broadband Network to the Pawsey Centre, which gets rid of most of it as quickly as possible."
Supercomputing

Adapteva Parallella Supercomputing Boards Start Shipping 98

Posted by Unknown Lamer
from the value-of-btc-drops-again dept.
hypnosec writes "Adapteva has started shipping its $99 Parallella parallel processing single-board supercomputer to initial Kickstarter backers. Parallella is powered by Adapteva's 16-core and 64-core Epiphany multicore processors, which are designed for parallel computing, unlike other commercial off-the-shelf (COTS) devices such as the Raspberry Pi that don't support parallel computing natively. The first model to be shipped has the following specifications: a Zynq-7020 dual-core ARM A9 CPU complemented with the Epiphany Multicore Accelerator (16 or 64 cores), 1GB RAM, a MicroSD card, two USB 2.0 ports, four optional expansion connectors, Ethernet, and an HDMI port." They are also releasing documentation, examples, and an SDK (brief overview; it's Free Software, too). And the device runs GNU/Linux for the non-parallel parts (Ubuntu is the suggested distribution).
Supercomputing

Meet the Stampede Supercomputing Cluster's Administrator (Video) 34

Posted by timothy
from the them's-blinkenlights-y'all dept.
UT Austin tends not to do things by half measures, as illustrated by the Texas Advanced Computing Center, which has been home to an evolving family of supercomputing clusters. The latest of these, Stampede, was first mentioned here back in 2011, before it was actually constructed. In the time since, Stampede has been not only completed, but upgraded; it's just completed a successful six months since its last major update — the labor-intensive installation of Xeon Phi processors throughout 106 densely packed racks. I visited TACC, camera in hand, to take a look at this megawatt-eating electronic hive (well, herd) and talk with director of high-performance computing Bill Barth, who has insight into what it's like to work with Stampede both as an end-user (commercial and academic projects alike get time on it) and as an administrator of such a big system.
Virtualization

Cray X-MP Simulator Resurrects Piece of Computer History 55

Posted by timothy
from the just-plain-cray-zee-ness! dept.
An anonymous reader writes "If you have a fascination with old supercomputers, like I do, this project might tickle your interest: A functional simulation of a Cray X-MP supercomputer, which can boot to its old batch operating system, called COS. It's complete with hard drive and tape simulation (no punch card readers, sorry) and consoles. Source code and binaries are available. You can also read about the journey that got me there, like recovering the OS image from a 30 year old hard drive or reverse-engineering CRAY machine code to understand undocumented tape drive operation and disk file-systems."
Supercomputing

Breaking Supercomputers' Exaflops Barrier 96

Posted by Soulskill
from the have-you-tried-hitting-the-turbo-button dept.
Nerval's Lobster writes "Breaking the exaflops barrier remains a development goal for many who research high-performance computing. Some developers predicted that China's new Tianhe-2 supercomputer would be the first to break through. Indeed, Tianhe-2 did pretty well when it was finally revealed — knocking the U.S.-based Titan off the top of the Top500 list of the world's fastest supercomputers. Yet despite sustained performance of 33 petaflops to 35 petaflops and peaks ranging as high as 55 petaflops, even the world's fastest supercomputer couldn't make it past (or even close to) the big barrier. Now, the HPC market is back to chattering over who'll first build an exascale computer, and how long it might take to bring such a platform online. Bottom line: It will take a really long time, combined with major breakthroughs in chip design, power utilization and programming, according to Nvidia chief scientist Bill Dally, who gave the keynote speech at the 2013 International Supercomputing Conference last week in Leipzig, Germany. In a speech he called 'Future Challenges of Large-scale Computing' (and in a blog post covering similar ground), Dally described some of the incredible performance hurdles that need to be overcome in pursuit of the exaflops barrier."

"The pyramid is opening!" "Which one?" "The one with the ever-widening hole in it!" -- The Firesign Theatre

Working...