Supercomputing

Warning At SC13 That Supercomputing Will Plateau Without a Disruptive Technology 118

dcblogs writes "At this year's supercomputing conference, SC13, there is worry that supercomputing faces a performance plateau unless a disruptive processing tech emerges. 'We have reached the end of the technological era' of CMOS, said William Gropp, chairman of the SC13 conference and a computer science professor at the University of Illinois at Urbana-Champaign. Gropp likened the supercomputer development terrain today to the advent of CMOS, the foundation of today's standard semiconductor technology. The arrival of CMOS was disruptive, but it fostered an expansive age of computing. The problem is 'we don't have a technology that is ready to be adopted as a replacement for CMOS,' said Gropp. 'We don't have anything at the level of maturity that allows you to bet your company on.' Peter Beckman, a top computer scientist at the Department of Energy's Argonne National Laboratory, and head of an international exascale software effort, said large supercomputer system prices have topped off at about $100 million, 'so performance gains are not going to come from getting more expensive machines, because these are already incredibly expensive and powerful. So unless the technology really has some breakthroughs, we are imagining a slowing down.'" Carbon nanotube-based processors are showing promise, though (Stanford project page; the group is at SC13 giving a talk about their MIPS CNT processor).
IBM

IBM To Offer Watson Services In the Cloud 56

jfruh writes "Have you ever wanted to write code for Watson, IBM's Jeopardy-winning supercomputer? Well, now you can, sort of. Big Blue has created a standardized server that runs Watson's unique learning and language-recognition software, and will be selling developers access to these boxes as a cloud-based service. No pricing has been announced yet."
Cloud

1.21 PetaFLOPS (RPeak) Supercomputer Created With EC2 54

An anonymous reader writes "In honor of Doc Brown, Great Scott! Ars has an interesting article about a 1.21 PetaFLOPS (RPeak) supercomputer created on Amazon EC2 Spot Instances. From HPC software company Cycle Computing's blog, it ran Professor Mark Thompson's research to find new, more efficient materials for solar cells. As Professor Thompson puts it: 'If the 20th century was the century of silicon materials, the 21st will be all organic. The question is how to find the right material without spending the entire 21st century looking for it.' El Reg points out this 'virty super's low cost.' Will cloud democratize access to HPC for research?"
Supercomputing

Scientists Using Supercomputers To Puzzle Out Dinosaur Movement 39

Nerval's Lobster writes "Scientists at the University of Manchester in England figured out how the largest animal ever to walk on Earth, the 80-ton Argentinosaurus, actually walked on earth. Researchers led by Bill Sellers, Rudolfo Coria and Lee Margetts at the N8 High Performance Computing facility in northern England used a 320 gigaflop/second SGI High Performance Computing Cluster supercomputer called Polaris to model the skeleton and movements of Argentinosaurus. The animal was able to reach a top speed of about 5 mph, with 'a slow, steady gait,' according to the team (PDF). Extrapolating from a few feet of bone, paleontologists were able to estimate the beast weighed between 80 and 100 tons and grew up to 115 feet in length. Polaris not only allowed the team to model the missing parts of the dinosaur and make them move, it did so quickly enough to beat the deadline for PLOS ONE Special Collection on Sauropods, a special edition of the site focusing on new research on sauropods that 'is likely to be the "de facto" international reference for Sauropods for decades to come,' according to a statement from the N8 HPC center. The really exciting thing, according to Coria, was how well Polaris was able to fill in the gaps left by the fossil records. 'It is frustrating there was so little of the original dinosaur fossilized, making any reconstruction difficult,' he said, despite previous research that established some rules of weight distribution, movement and the limits of dinosaurs' biological strength."
Supercomputing

National Weather Service Upgrades Storm-Tracking Supercomputers 34

Nerval's Lobster writes "Just in time for hurricane season, the National Weather Service has finished upgrading the supercomputers it uses to track and model super-storms. 'These improvements are just the beginning and build on our previous success. They lay the foundation for further computing enhancements and more accurate forecast models that are within reach,' National Weather Service director Louis W. Uccellini wrote in a statement. The National Weather Service's 'Tide' supercomputer — along with its 'Gyre' backup — are capable of operating at a combined 213 teraflops. The National Oceanic and Atmospheric Administration (NOAA), which runs the Service, has asked for funding that would increase that supercomputing power even more, to 1,950 teraflops. The National Weather Service uses that hardware for projects such as the Hurricane Weather Research and Forecasting (HWRF) model, a complex bit of forecasting that allows the organization to more accurately predict storms' intensity and movement. The HWRF can leverage real-time data taken from Doppler radar installed in the NOAA's P3 hurricane hunter aircraft."
Supercomputing

Supercomputer Becomes Massive Router For Global Radio Telescope 60

Nerval's Lobster writes "Astrophysicists at MIT and the Pawsey supercomputing center in Western Australia have discovered a whole new role for supercomputers working on big-data science projects: They've figured out how to turn a supercomputer into a router. (Make that a really, really big router.) The supercomputer in this case is a Cray Cascade system with a top performance of 0.3 petaflops — to be expanded to 1.2 petaflops in 2014 — running on a combination of Intel Ivy Bridge, Haswell and MIC processors. The machine, which is still being installed at the Pawsey Centre in Kensington, Western Australia and isn't scheduled to become operational until later this summer, had to go to work early after researchers switched on the world's most sensitive radio telescope June 9. The Murchison Widefield Array is a 2,000-antenna radio telescope located at the Murchison Radio-astronomy Observatory (MRO) in Western Australia, built with the backing of universities in the U.S., Australia, India and New Zealand. Though it is the most powerful radio telescope in the world right now, it is only one-third of the Square Kilometer Array — a spread of low-frequency antennas that will be spread across a kilometer of territory in Australia and Southern Africa. It will be 50 times as sensitive as any other radio telescope and 10,000 times as quick to survey a patch of sky. By comparison, the Murchison Widefield Array is a tiny little thing stuck out as far in the middle of nowhere as Australian authorities could find to keep it as far away from terrestrial interference as possible. Tiny or not, the MWA can look farther into the past of the universe than any other human instrument to date. What it has found so far is data — lots and lots of data. More than 400 megabytes of data per second come from the array to the Murchison observatory, before being streamed across 500 miles of Australia's National Broadband Network to the Pawsey Centre, which gets rid of most of it as quickly as possible."
Supercomputing

Adapteva Parallella Supercomputing Boards Start Shipping 98

hypnosec writes "Adapteva has started shipping its $99 Parallella parallel-processing single-board supercomputer to initial Kickstarter backers. Parallella is powered by Adapteva's 16-core and 64-core Epiphany multicore processors, which are designed for parallel computing, unlike other commercial off-the-shelf (COTS) devices such as the Raspberry Pi that don't support parallel computing natively. The first model to be shipped has the following specifications: a Zynq-7020 dual-core ARM A9 CPU complemented with the Epiphany Multicore Accelerator (16 or 64 cores), 1GB RAM, a MicroSD card, two USB 2.0 ports, four optional expansion connectors, Ethernet, and an HDMI port." They are also releasing documentation, examples, and an SDK (brief overview; it's Free Software too). And the device runs GNU/Linux for the non-parallel parts (Ubuntu is the suggested distribution).
Supercomputing

Meet the Stampede Supercomputing Cluster's Administrator (Video) 34

UT Austin tends not to do things by half measures, as illustrated by the Texas Advanced Computing Center, which has been home to an evolving family of supercomputing clusters. The latest of these, Stampede, was first mentioned here back in 2011, before it was actually constructed. In the time since, Stampede has been not only completed but upgraded; it has just wrapped up a successful six months since its last major update — the labor-intensive installation of Xeon Phi processors throughout 106 densely packed racks. I visited TACC, camera in hand, to take a look at this megawatt-eating electronic hive (well, herd) and talk with director of high-performance computing Bill Barth, who has insight into what it's like both as an end-user (both commercial and academic projects get to use Stampede) and as an administrator on such a big system.
Virtualization

Cray X-MP Simulator Resurrects Piece of Computer History 55

An anonymous reader writes "If you have a fascination with old supercomputers, like I do, this project might tickle your interest: A functional simulation of a Cray X-MP supercomputer, which can boot to its old batch operating system, called COS. It's complete with hard drive and tape simulation (no punch card readers, sorry) and consoles. Source code and binaries are available. You can also read about the journey that got me there, like recovering the OS image from a 30 year old hard drive or reverse-engineering CRAY machine code to understand undocumented tape drive operation and disk file-systems."
Supercomputing

Breaking Supercomputers' Exaflops Barrier 96

Nerval's Lobster writes "Breaking the exaflops barrier remains a development goal for many who research high-performance computing. Some developers predicted that China's new Tianhe-2 supercomputer would be the first to break through. Indeed, Tianhe-2 did pretty well when it was finally revealed — knocking the U.S.-based Titan off the top of the Top500 list of the world's fastest supercomputers. Yet despite sustained performance of 33 petaflops to 35 petaflops and peaks ranging as high as 55 petaflops, even the world's fastest supercomputer couldn't make it past (or even close to) the big barrier. Now, the HPC market is back to chattering over who'll first build an exascale computer, and how long it might take to bring such a platform online. Bottom line: It will take a really long time, combined with major breakthroughs in chip design, power utilization and programming, according to Nvidia chief scientist Bill Dally, who gave the keynote speech at the 2013 International Supercomputing Conference last week in Leipzig, Germany. In a speech he called 'Future Challenges of Large-scale Computing' (and in a blog post covering similar ground), Dally described some of the incredible performance hurdles that need to be overcome in pursuit of the exaflops barrier."
IBM

Harvard, IBM Crunch Data For More Efficient Solar Cells 65

Nerval's Lobster writes "Harvard's Clean Energy Project (CEP) is using IBM's World Community Grid, a 'virtual supercomputer' that leverages volunteers' surplus computing power, to determine which organic carbon compounds are best suited for converting sunlight into electricity. IBM claims that the resulting database of compounds is the 'most extensive investigation of quantum chemicals ever performed.' In theory, all that information can be utilized to develop organic semiconductors and solar cells. Roughly a thousand of the molecular structures explored by the project are capable of converting 11 percent (or more) of captured sunlight into electricity—a significant boost from many organic cells currently in use, which convert between 4 and 5 percent of sunlight. That's significantly less than solar cells crafted from silicon, which can produce efficiencies of up to nearly 20 percent (at least in the case of black silicon solar cells). But silicon solar cells can be costly to produce, experiments with low-grade materials notwithstanding; organic cells could be a cheap and recyclable alternative, provided researchers can make them more efficient. The World Community Grid asks volunteers to download a small program (called an 'agent') onto their PC. Whenever the machine is idle, it requests data from whatever project is on the World Community Grid's server, which it crunches before sending back (and requesting another data packet). Several notable projects have embraced grid computing as a way to analyze massive datasets, including SETI@Home."
Intel

Intel Announces New Enterprise Xeons, More Powerful Xeon Phi Cards 57

MojoKid writes "Intel announced a set of new enterprise products today aimed at furthering its strengths in the TOP500 supercomputing market. As of today, the Chinese Tianhe-2 supercomputer (aka Milky Way 2) is now the fastest supercomputer on the planet at roughly 54 PFLOPS. Intel is putting its own major push behind heterogeneous computing with the Tianhe-2. Each node contains two Ivy Bridge sockets and three Xeon Phi cards. Each node, therefore, contains 422.4 GFLOPS in Ivy Bridge performance — but 3.43 TFLOPS worth of Xeon Phi. In addition, we'll see new Xeons based on this technology later this year, in the 22nm E5-2600 V2 family, built on Ivy Bridge technology and offering up to 12 cores / 24 threads. The new Xeons, however, aren't really the interesting part of the story. Today, Intel is adding cards to the current Xeon Phi lineup — the 7120P, 3120P, 3120A, and 5120D. The 3120P and 3120A are the same card — the 'P' is passively cooled, while the 'A' integrates a fan. Both of these solutions have 57 cores and 6GB of RAM. Intel states that they offer ~1 TFLOPS of performance, which puts them on par with the 5110P that launched last year, but with slightly less memory and presumably a lower price point. At the top of the line, Intel is introducing the 7120P and 7120X — the 7120P comes with an integrated heat spreader, the 7120X doesn't. Clock speeds are higher on this card; it has 61 cores instead of 60, 16GB of GDDR5, and 352GB/s of memory bandwidth. Customers who need lots of cores and not much RAM can opt for one of the cheaper 3100 cards, while the 7100 family allows for much greater data sets."
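As a rough check on the node math, using publicly reported Tianhe-2 figures (16,000 nodes, 2.2GHz 12-core Ivy Bridge parts, 1.1GHz 57-core Phi cards; these are assumptions drawn from public reports, not stated in the summary), the per-node peak and the system total both line up with the ~54 PFLOPS headline figure:

```typescript
// Peak-performance arithmetic for one Tianhe-2 node, using publicly
// reported clock and core counts (assumptions, not from the summary).
const ivyBridgeGflops = 2 * 12 * 2.2 * 8;  // 2 sockets x 12 cores x 2.2GHz x 8 FLOPs/cycle = 422.4
const xeonPhiGflops = 3 * 57 * 1.1 * 16;   // 3 cards x 57 cores x 1.1GHz x 16 FLOPs/cycle = 3009.6
const nodeGflops = ivyBridgeGflops + xeonPhiGflops;      // 3432 GFLOPS, i.e. ~3.43 TFLOPS per node
const systemPflops = (nodeGflops * 16_000) / 1_000_000;  // 16,000 nodes
console.log(nodeGflops.toFixed(1), systemPflops.toFixed(1)); // 3432.0 GFLOPS/node, 54.9 PFLOPS peak
```

Note that under these assumptions the 3.43 TFLOPS works out to the whole node (Phi cards plus Ivy Bridge) rather than the Phi cards alone, and the ~54 PFLOPS is a peak figure, not sustained.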
Supercomputing

China Bumps US Out of First Place For Fastest Supercomputer 125

An anonymous reader writes "China's Tianhe-2 is the world's fastest supercomputer, according to the latest semiannual Top 500 list of the 500 most powerful computer systems in the world. Developed by China's National University of Defense Technology, the system appeared two years ahead of schedule and will be deployed at the National Supercomputer Center in Guangzho, China, before the end of the year."
Databases

A Database of Brains 25

aarondubrow writes "Researchers recently created OpenfMRI, a web-based, supercomputer-powered tool that makes it easier for researchers to process, share, compare and rapidly analyze fMRI brain scans from many different studies. Applying supercomputing to the fMRI analysis allows researchers to conduct larger studies, test more hypotheses, and accommodate the growing spatial and time resolution of brain scans. The ultimate goal is to collect enough brain data to develop a bottom-up understanding of brain function."
The Internet

Hackers Spawn Web Supercomputer On Way To Chess World Record 130

New submitter DeathGrippe sends in an article from Wired about a new take on distributed computing efforts like SETI@Home. From Wired: "By inserting a bit of JavaScript into a webpage, Pethiyagoda says, a site owner could distribute a problem amongst all the site's visitors. Visitors' computers or phones would be running calculations in the background while they read a page. With enough visitors, he says, a site could farm out enough small calculations to solve some difficult problems. ... With this year's run on the value of Bitcoins — the popular digital currency — security expert Mikko Hyppönen thinks that criminals might soon start experimenting with this type of distributed computing too. He believes that crooks could infect websites with JavaScript code that would turn visitors into unsuspecting Bitcoin miners. As long as you're visiting the website, you're mining coins for someone else."
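The mechanism is straightforward to prototype. Here is a minimal, hypothetical sketch of what such embedded page code could look like (written as TypeScript); the /task and /result endpoints and the prime-counting work unit are invented for illustration, and the Web Worker keeps the page responsive while it computes:

```typescript
// A minimal, hypothetical sketch of the technique described above: a
// script embedded in a page pulls a small work unit, computes it in a
// Web Worker so the page stays responsive, and posts the answer back.
// The /task and /result endpoints and the work unit are invented.

const workerSource = `
  self.onmessage = (e) => {
    const { start, end } = e.data;
    let count = 0; // count primes in [start, end) by trial division
    for (let n = start; n < end; n++) {
      let isPrime = n > 1;
      for (let d = 2; d * d <= n; d++) {
        if (n % d === 0) { isPrime = false; break; }
      }
      if (isPrime) count++;
    }
    self.postMessage(count);
  };
`;

async function visitorLoop(): Promise<void> {
  const blob = new Blob([workerSource], { type: "text/javascript" });
  const worker = new Worker(URL.createObjectURL(blob));
  while (true) {
    const task = await (await fetch("/task")).json(); // e.g. { start, end }
    const result = await new Promise<number>((resolve) => {
      worker.onmessage = (e) => resolve(e.data as number);
      worker.postMessage(task);
    });
    await fetch("/result", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ task, result }),
    });
  }
}

visitorLoop();
```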
China

Full Details Uncovered on Chinese Tianhe-2 Supercomputer 56

An anonymous reader writes "With help from a draft report (PDF) from Oak Ridge National Laboratory's Jack Dongarra, who also spearheads the process of verifying the top of the pack supercomputer, we get a detailed look at China's Tianhe-2 system. As noted previously, the system will be housed at the National Supercomputer Center in Guangzhou and has been aimed at providing an open platform for research and education and to provide a high performance computing service for southern China. From Jack's details: '... was sent results showing a run of HPL benchmark using 14,336 nodes, that run was made using 50 GB of the memory of each node and achieved 30.65 petaflops out of a theoretical peak of 49.19 petaflops, or an efficiency of 62.3% of theoretical peak performance taking a little over 5 hours to complete.The fastest result shown was using 90% of the machine. They are expecting to make improvements and increase the number of nodes used in the test.'"
Biotech

Researchers Determine Chemical Structure of HIV Capsid 90

adeelarshad82 writes "Researchers at the University of Illinois at Urbana-Champaign (UIUC) have determined the precise chemical structure of the HIV 'capsid,' a protein shell that protects the virus's genetic material and is a key to its virulence. The experiment involved mapping an incredible 64 million atoms to simulate the HIV capsid, pictured here. Interestingly no current HIV drugs target the HIV capsid and researchers believe that understanding the structure of the HIV capsid may hold the key to the development of new and more effective antiretroviral drugs. What makes this whole experiment even more fascinating is the use of Blue Waters, a Cray XK7 supercomputer with 3,000 Nvidia Tesla K20X GPU accelerators."
Handhelds

Motorola Building "Self-Aware" Smartphone 117

Nerval's Lobster writes "Back in the ancient days of 2009, Motorola Mobility earned considerable buzz with its Droid smartphone. Marketed as an iPhone alternative, the device featured a sliding QWERTY keyboard and a chunky black body that seemed positively Schwarzenegger-esque in comparison to its svelte Apple rival. But Motorola failed to translate that buzz into sustained momentum in the smartphone space. Instead, Samsung became the dominant Android smartphone manufacturer, battling toe-to-toe with Apple for market-share and profits. Even Google acquiring Motorola for the princely sum of $12.1 billion didn't really seem to alter the equation very much. Motorola CEO Dennis Woodside wants to change all that. In a May 29 talk at AllThingsD's D11 conference, he told the audience that Motorola has a 'hero phone' in the works, dubbed the Moto X—and that it's self-aware. 'It anticipates my needs,' he said, according to AllThingD's live blog of the event. But what does that actually mean? Thanks to embedded sensors, the phone knows when the user removes it from his or her pocket; in theory, that capability could serve broader applications, such as the phone recognizing where the user is located within a city and serving up content and applications accordingly. In fact, it sounds a bit like Google Now on steroids—or like the smartphone precursor to SkyNet, the supercomputer from the Terminator movies that's so intelligent, it decides that the world would be better off if it ruled over humanity."
Supercomputing

Supercomputers At TACC Getting a Speed Boost 14

Nerval's Lobster writes "The Texas Advanced Computing Center (TACC) at The University of Texas at Austin is going to get a major speed boost this summer, and it won't come from new CPUs. Internet2, the research project that acts as a test bed for new Internet technologies, will take TACC's massive computing system from 10GB to 100GB of Ethernet throughput. TACC supercomputers are regularly found near the top of the Top 500 supercomputer list, which ranks the world's fastest supercomputers. But while the supercomputers were fast, the connectivity wasn't quite up to snuff. So TACC began the emigration to the Internet2 network. TACC is a key partner in the UT Research Cyberinfrastructure, which provides a combination of advanced computing, high-bandwidth network connectivity, and large data storage to all 15 of the UT system schools. So not only is TACC upgraded to Internet2s 100GB and 8.8 terabit-per-second optical network, platform, services and technologies, so is the entire UT system. 'This Internet2 bandwidth upgrade will enable researchers to achieve a tenfold increase in moving data to/from TACC's supercomputing, visualization and data storage systems, greatly increasing their productivity and their ability to make new discoveries,' TACC director Jay Boisseau wrote in a statement."
Technology

Computer Network Piecing Together a Jigsaw of Ancient Jewish Lore 127

First time accepted submitter aravenwood writes "The New York Times and the Times of Israel report today that artificial intelligence and a network of 100 computers in a basement at Tel Aviv University are being used to match 320,000 fragments of documents dating as far back as the 9th century in an attempt to reassemble the original documents. Since the trove of documents from the Jewish community of Cairo was discovered in 1896, only about 4,000 of them have been pieced together. The hope is that the new technique, which involves taking photographs of the fragments and using image recognition and other algorithms to match the language, spacing, and handwriting style of the text, along with the shape of the fragment, to other fragments, could revolutionize not only the study of this trove of documents, which has been split up into 67 different collections around the world since its discovery, but also how humanities disciplines study documents like these. They expect to make 12 billion comparisons of different fragments before the project is completed — they have already performed 2.8 billion. Among the documents, some dating from 950, were letters by Moses Maimonides and evidence that Cairene Jews were involved in the import of flax, linen, and sheep cheese from Sicily."
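For a sense of scale (an inference from the numbers, not a claim in the articles): naively comparing all 320,000 fragments pairwise would take over 51 billion comparisons, so the planned 12 billion suggests the shape, spacing, and handwriting features are used to prune candidate pairs first.

```typescript
// All-pairs comparison count for n fragments: n(n-1)/2.
const n = 320_000;
const allPairs = (n * (n - 1)) / 2;
console.log(allPairs.toLocaleString("en-US")); // 51,199,840,000, i.e. over 51 billion vs. the 12 billion planned
```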
