Supercomputing

National Weather Service Upgrades Storm-Tracking Supercomputers 34

Nerval's Lobster writes "Just in time for hurricane season, the National Weather Service has finished upgrading the supercomputers it uses to track and model super-storms. 'These improvements are just the beginning and build on our previous success. They lay the foundation for further computing enhancements and more accurate forecast models that are within reach,' National Weather Service director Louis W. Uccellini wrote in a statement. The National Weather Service's 'Tide' supercomputer and its 'Gyre' backup are capable of operating at a combined 213 teraflops. The National Oceanic and Atmospheric Administration (NOAA), which runs the Service, has asked for funding that would increase that supercomputing power even further, to 1,950 teraflops. The National Weather Service uses that hardware for projects such as the Hurricane Weather Research and Forecasting (HWRF) model, a complex forecasting model that allows the organization to predict storms' intensity and movement more accurately. The HWRF can leverage real-time data taken from the Doppler radar installed in NOAA's P3 hurricane hunter aircraft."
Supercomputing

Supercomputer Becomes Massive Router For Global Radio Telescope 60

Nerval's Lobster writes "Astrophysicists at MIT and the Pawsey supercomputing center in Western Australia have discovered a whole new role for supercomputers working on big-data science projects: They've figured out how to turn a supercomputer into a router. (Make that a really, really big router.) The supercomputer in this case is a Cray Cascade system with a top performance of 0.3 petaflops — to be expanded to 1.2 petaflops in 2014 — running on a combination of Intel Ivy Bridge, Haswell and MIC processors. The machine, which is still being installed at the Pawsey Centre in Kensington, Western Australia, and isn't scheduled to become operational until later this summer, had to go to work early after researchers switched on the world's most sensitive radio telescope on June 9. The Murchison Widefield Array is a 2,000-antenna radio telescope located at the Murchison Radio-astronomy Observatory (MRO) in Western Australia, built with the backing of universities in the U.S., Australia, India and New Zealand. Though it is the most powerful radio telescope in the world right now, it is only one-third of the Square Kilometer Array, a collection of low-frequency antennas with a combined collecting area of roughly one square kilometer that will be distributed across sites in Australia and Southern Africa. The completed array will be 50 times as sensitive as any other radio telescope and 10,000 times as quick to survey a patch of sky. By comparison, the Murchison Widefield Array is a tiny little thing stuck out as far in the middle of nowhere as Australian authorities could find, to keep it as far away from terrestrial interference as possible. Tiny or not, the MWA can look farther into the past of the universe than any other human instrument to date. What it has found so far is data — lots and lots of data. 
More than 400 megabytes of data per second come from the array to the Murchison observatory, before being streamed across 500 miles of Australia's National Broadband Network to the Pawsey Centre, which gets rid of most of it as quickly as possible."
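Discarding most of that stream quickly is less surprising once you work out the raw volume. A back-of-the-envelope calculation, using only the 400 MB/s figure quoted above (illustrative, decimal units):

```python
# Back-of-the-envelope data volume for the Murchison Widefield Array,
# using the >400 MB/s figure quoted in the summary (a lower bound).

RATE_MB_PER_S = 400
SECONDS_PER_DAY = 86_400

daily_tb = RATE_MB_PER_S * SECONDS_PER_DAY / 1_000_000  # MB -> TB (decimal)
yearly_pb = daily_tb * 365 / 1_000                      # TB -> PB

print(f"~{daily_tb:.1f} TB/day, ~{yearly_pb:.1f} PB/year")
```

Tens of terabytes a day — keeping it all would be a storage project in its own right.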
Supercomputing

Adapteva Parallella Supercomputing Boards Start Shipping 98

hypnosec writes "Adapteva has started shipping its $99 Parallella parallel processing single-board supercomputer to initial Kickstarter backers. Parallella is powered by Adapteva's 16-core and 64-core Epiphany multicore processors, which are designed for parallel computing, unlike other commercial off-the-shelf (COTS) devices like the Raspberry Pi that don't support parallel computing natively. The first model to be shipped has the following specifications: a Zynq-7020 dual-core ARM A9 CPU complemented with the Epiphany Multicore Accelerator (16 or 64 cores), 1GB RAM, a MicroSD card, two USB 2.0 ports, four optional expansion connectors, Ethernet, and an HDMI port." They are also releasing documentation, examples, and an SDK (brief overview; it's Free Software too). And the device runs GNU/Linux for the non-parallel parts (Ubuntu is the suggested distribution).
Supercomputing

Meet the Stampede Supercomputing Cluster's Administrator (Video) 34

UT Austin tends not to do things by half measures, as illustrated by the Texas Advanced Computing Center, which has been home to an evolving family of supercomputing clusters. The latest of these, Stampede, was first mentioned here back in 2011, before it was actually constructed. In the time since, Stampede has been not only completed, but upgraded; it has just completed a successful six months since its last major update — the labor-intensive installation of Xeon Phi processors throughout 106 densely packed racks. I visited TACC, camera in hand, to take a look at this megawatt-eating electronic hive (well, herd) and talk with director of high-performance computing Bill Barth, who has insight into what it's like both as an end-user (both commercial and academic projects get to use Stampede) and as an administrator on such a big system.
Virtualization

Cray X-MP Simulator Resurrects Piece of Computer History 55

An anonymous reader writes "If you have a fascination with old supercomputers, like I do, this project might tickle your interest: A functional simulation of a Cray X-MP supercomputer, which can boot to its old batch operating system, called COS. It's complete with hard drive and tape simulation (no punch card readers, sorry) and consoles. Source code and binaries are available. You can also read about the journey that got me there, like recovering the OS image from a 30-year-old hard drive or reverse-engineering Cray machine code to understand undocumented tape drive operation and disk file-systems."
Supercomputing

Breaking Supercomputers' Exaflops Barrier 96

Nerval's Lobster writes "Breaking the exaflops barrier remains a development goal for many who research high-performance computing. Some developers predicted that China's new Tianhe-2 supercomputer would be the first to break through. Indeed, Tianhe-2 did pretty well when it was finally revealed — knocking the U.S.-based Titan off the top of the Top500 list of the world's fastest supercomputers. Yet despite sustained performance of 33 petaflops to 35 petaflops and peaks ranging as high as 55 petaflops, even the world's fastest supercomputer couldn't make it past (or even close to) the big barrier. Now, the HPC market is back to chattering over who'll first build an exascale computer, and how long it might take to bring such a platform online. Bottom line: It will take a really long time, combined with major breakthroughs in chip design, power utilization and programming, according to Nvidia chief scientist Bill Dally, who gave the keynote speech at the 2013 International Supercomputing Conference last week in Leipzig, Germany. In a speech he called 'Future Challenges of Large-scale Computing' (and in a blog post covering similar ground), Dally described some of the incredible performance hurdles that need to be overcome in pursuit of the exaflops barrier."
IBM

Harvard, IBM Crunch Data For More Efficient Solar Cells 65

Nerval's Lobster writes "Harvard's Clean Energy Project (CEP) is using IBM's World Community Grid, a 'virtual supercomputer' that leverages volunteers' surplus computing power, to determine which organic carbon compounds are best suited for converting sunlight into electricity. IBM claims that the resulting database of compounds is the 'most extensive investigation of quantum chemicals ever performed.' In theory, all that information can be utilized to develop organic semiconductors and solar cells. Roughly a thousand of the molecular structures explored by the project are capable of converting 11 percent (or more) of captured sunlight into electricity—a significant boost from many organic cells currently in use, which convert between 4 and 5 percent of sunlight. That's significantly less than solar cells crafted from silicon, which can produce efficiencies of up to nearly 20 percent (at least in the case of black silicon solar cells). But silicon solar cells can be costly to produce, experiments with low-grade materials notwithstanding; organic cells could be a cheap and recyclable alternative, provided researchers can make them more efficient. The World Community Grid asks volunteers to download a small program (called an 'agent') onto their PC. Whenever the machine is idle, it requests data from whatever project is on the World Community Grid's server, which it crunches before sending back (and requesting another data packet). Several notable projects have embraced grid computing as a way to analyze massive datasets, including SETI@Home."
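The agent's fetch-compute-return cycle described above is simple enough to sketch. Everything here is illustrative — it is not the real World Community Grid agent, whose protocol also handles validation, redundancy, and scheduling:

```python
# A sketch of the volunteer-computing loop described above: fetch a work
# unit while the machine is idle, crunch it, "send" the result back, and
# ask for more. All names are hypothetical stand-ins.

from dataclasses import dataclass

@dataclass
class WorkUnit:
    unit_id: int
    numbers: list      # stand-in payload; real units carry chemistry data

def fetch_work_unit(queue):
    """Stand-in for a server call: hand out the next pending unit, if any."""
    return queue.pop(0) if queue else None

def crunch(unit):
    """Stand-in for the expensive quantum-chemistry calculation."""
    return sum(x * x for x in unit.numbers)

def agent_loop(queue, machine_is_idle=lambda: True):
    """Run work units until the queue is empty or the machine gets busy."""
    results = {}
    while machine_is_idle():
        unit = fetch_work_unit(queue)
        if unit is None:
            break              # no more work on the "server"
        results[unit.unit_id] = crunch(unit)
    return results

# Tiny demo queue standing in for the project server.
server_queue = [WorkUnit(1, [1, 2, 3]), WorkUnit(2, [4, 5])]
results = agent_loop(server_queue)
print(results)
```

The `machine_is_idle` hook is the key design point: the agent only consumes cycles the volunteer wasn't using anyway.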
Intel

Intel Announces New Enterprise Xeons, More Powerful Xeon Phi Cards 57

MojoKid writes "Intel announced a set of new enterprise products today aimed at furthering its strengths in the TOP500 supercomputing market. As of today, the Chinese Tianhe-2 supercomputer (aka Milky Way 2) is the fastest supercomputer on the planet at roughly 54 PFLOPS, and it represents a major Intel push into heterogeneous computing. Each node contains two Ivy Bridge sockets and three Xeon Phi cards; the Ivy Bridge chips contribute 422.4 GFLOPS per node, while the Xeon Phi cards supply the overwhelming bulk of the roughly 3.43 TFLOPS per-node total. In addition, we'll see new Xeons based on this technology later this year in the 22nm E5-2600 V2 family, built on Ivy Bridge technology and offering up to 12 cores / 24 threads. The new Xeons, however, aren't really the interesting part of the story. Today, Intel is adding cards to the current Xeon Phi lineup — the 7120P, 3120P, 3120A, and 5120D. The 3120P and 3120A are the same card — the 'P' is passively cooled, while the 'A' integrates a fan. Both of these solutions have 57 cores and 6GB of RAM. Intel states that they offer ~1 TFLOPS of performance, which puts them on par with the 5110P that launched last year, but with slightly less memory and presumably a lower price point. At the top of the line, Intel is introducing the 7120P and 7120X — the 7120P comes with an integrated heat spreader, the 7120X doesn't. Clock speeds are higher on these cards, which have 61 cores instead of 60, 16GB of GDDR5, and 352GB/s of memory bandwidth. Customers who need lots of cores and not much RAM can opt for one of the cheaper 3100 cards, while the 7100 family allows for much larger data sets."
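The Ivy Bridge per-node figure is just the standard peak-FLOPS formula: sockets × cores × clock × FLOPs per cycle. A quick sanity check, assuming the commonly cited 2.2 GHz clock and 8 double-precision FLOPs per cycle for these parts (neither number appears in the summary):

```python
# Peak-FLOPS sanity check for the per-node figure quoted above.
# Assumed inputs (commonly cited for Tianhe-2, not stated in the summary):
# 2.2 GHz Ivy Bridge cores performing 8 double-precision FLOPs per cycle.

def peak_gflops(sockets, cores_per_socket, ghz, flops_per_cycle):
    """Theoretical peak in GFLOPS: sockets x cores x clock x FLOPs/cycle."""
    return sockets * cores_per_socket * ghz * flops_per_cycle

ivy = peak_gflops(sockets=2, cores_per_socket=12, ghz=2.2, flops_per_cycle=8)
print(f"Ivy Bridge per node: {ivy:.1f} GFLOPS")
```

The result lands exactly on the 422.4 GFLOPS quoted, which is a good sign the assumed clock and issue width are the ones the article used.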
Supercomputing

China Bumps US Out of First Place For Fastest Supercomputer 125

An anonymous reader writes "China's Tianhe-2 is the world's fastest supercomputer, according to the latest semiannual Top 500 list of the 500 most powerful computer systems in the world. Developed by China's National University of Defense Technology, the system appeared two years ahead of schedule and will be deployed at the National Supercomputer Center in Guangzhou, China, before the end of the year."
Databases

A Database of Brains 25

aarondubrow writes "Researchers recently created OpenfMRI, a web-based, supercomputer-powered tool that makes it easier for researchers to process, share, compare and rapidly analyze fMRI brain scans from many different studies. Applying supercomputing to the fMRI analysis allows researchers to conduct larger studies, test more hypotheses, and accommodate the growing spatial and time resolution of brain scans. The ultimate goal is to collect enough brain data to develop a bottom-up understanding of brain function."
The Internet

Hackers Spawn Web Supercomputer On Way To Chess World Record 130

New submitter DeathGrippe sends in an article from Wired about a new take on distributed computing efforts like SETI@Home. From Wired: "By inserting a bit of JavaScript into a webpage, Pethiyagoda says, a site owner could distribute a problem amongst all the site's visitors. Visitors' computers or phones would be running calculations in the background while they read a page. With enough visitors, he says, a site could farm out enough small calculations to solve some difficult problems. ... With this year's run on the value of Bitcoins — the popular digital currency — security expert Mikko Hyppönen thinks that criminals might soon start experimenting with this type of distributed computing too. He believes that crooks could infect websites with JavaScript code that would turn visitors into unsuspecting Bitcoin miners. As long as you're visiting the website, you're mining coins for someone else."
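The scheme is easiest to see from the server's side: split one big problem into visitor-sized work units, let each visitor's browser crunch one in the background, and aggregate what comes back. A purely hypothetical Python sketch of that flow (the real implementation would be JavaScript embedded in the page, and none of these names come from the article):

```python
# Server-side sketch of browser-based distributed computing: partition a
# problem into small work units, hand one to each page visitor, and
# combine the returned partial results. Hypothetical illustration only.

def make_work_units(numbers, chunk_size):
    """Split the input into visitor-sized chunks."""
    return [numbers[i:i + chunk_size] for i in range(0, len(numbers), chunk_size)]

def visitor_compute(chunk):
    """What the embedded script would run in one visitor's browser;
    here a trivial stand-in computation."""
    return sum(chunk)

def aggregate(partials):
    """Combine the partial results returned by visitors."""
    return sum(partials)

units = make_work_units(list(range(1, 101)), chunk_size=10)  # 10 "visitors"
total = aggregate(visitor_compute(u) for u in units)
print(total)
```

The same split/aggregate shape is what makes the Bitcoin-mining abuse Hyppönen describes possible: the visitor never needs to know what the work unit actually computes.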
China

Full Details Uncovered on Chinese Tianhe-2 Supercomputer 56

An anonymous reader writes "With help from a draft report (PDF) from Oak Ridge National Laboratory's Jack Dongarra, who also spearheads the process of verifying the top-ranked supercomputers, we get a detailed look at China's Tianhe-2 system. As noted previously, the system will be housed at the National Supercomputer Center in Guangzhou and is aimed at providing an open platform for research and education and a high-performance computing service for southern China. From Jack's details: '... was sent results showing a run of the HPL benchmark using 14,336 nodes. That run was made using 50 GB of the memory of each node and achieved 30.65 petaflops out of a theoretical peak of 49.19 petaflops, or an efficiency of 62.3% of theoretical peak performance, taking a little over 5 hours to complete. The fastest result shown was using 90% of the machine. They are expecting to make improvements and increase the number of nodes used in the test.'"
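The quoted efficiency follows directly from the two performance numbers in Dongarra's report:

```python
# Reproducing the HPL efficiency figure quoted from the draft report.
achieved_pf = 30.65   # petaflops sustained on the 14,336-node run
peak_pf = 49.19       # theoretical peak of those nodes

efficiency = achieved_pf / peak_pf * 100
print(f"{efficiency:.1f}% of theoretical peak")
```

For context, top-ranked homogeneous systems often run HPL at well over 80% of peak; the gap here reflects how hard it is to keep accelerator-heavy nodes fully fed.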
Biotech

Researchers Determine Chemical Structure of HIV Capsid 90

adeelarshad82 writes "Researchers at the University of Illinois at Urbana-Champaign (UIUC) have determined the precise chemical structure of the HIV 'capsid,' a protein shell that protects the virus's genetic material and is a key to its virulence. The experiment involved mapping an incredible 64 million atoms to simulate the HIV capsid, pictured here. Interestingly, no current HIV drugs target the capsid, and researchers believe that understanding its structure may hold the key to the development of new and more effective antiretroviral drugs. What makes this whole experiment even more fascinating is the use of Blue Waters, a Cray XK7 supercomputer with 3,000 Nvidia Tesla K20X GPU accelerators."
Handhelds

Motorola Building "Self-Aware" Smartphone 117

Nerval's Lobster writes "Back in the ancient days of 2009, Motorola Mobility earned considerable buzz with its Droid smartphone. Marketed as an iPhone alternative, the device featured a sliding QWERTY keyboard and a chunky black body that seemed positively Schwarzenegger-esque in comparison to its svelte Apple rival. But Motorola failed to translate that buzz into sustained momentum in the smartphone space. Instead, Samsung became the dominant Android smartphone manufacturer, battling toe-to-toe with Apple for market share and profits. Even Google acquiring Motorola for the princely sum of $12.1 billion didn't really seem to alter the equation very much. Motorola CEO Dennis Woodside wants to change all that. In a May 29 talk at AllThingsD's D11 conference, he told the audience that Motorola has a 'hero phone' in the works, dubbed the Moto X—and that it's self-aware. 'It anticipates my needs,' he said, according to AllThingsD's live blog of the event. But what does that actually mean? Thanks to embedded sensors, the phone knows when the user removes it from his or her pocket; in theory, that capability could serve broader applications, such as the phone recognizing where the user is located within a city and serving up content and applications accordingly. In fact, it sounds a bit like Google Now on steroids—or like the smartphone precursor to SkyNet, the supercomputer from the Terminator movies that's so intelligent, it decides that the world would be better off if it ruled over humanity."
Supercomputing

Supercomputers At TACC Getting a Speed Boost 14

Nerval's Lobster writes "The Texas Advanced Computing Center (TACC) at The University of Texas at Austin is going to get a major speed boost this summer, and it won't come from new CPUs. Internet2, the research consortium that acts as a test bed for new Internet technologies, will take TACC's massive computing system from 10 Gbps to 100 Gbps of Ethernet throughput. TACC supercomputers are regularly found near the top of the Top 500 supercomputer list, which ranks the world's fastest supercomputers. But while the supercomputers were fast, the connectivity wasn't quite up to snuff, so TACC began the migration to the Internet2 network. TACC is a key partner in the UT Research Cyberinfrastructure, which provides a combination of advanced computing, high-bandwidth network connectivity, and large data storage to all 15 of the UT system schools. So not only is TACC upgraded to Internet2's 100 Gbps Ethernet and 8.8 terabit-per-second optical network, platform, services and technologies; so is the entire UT system. 'This Internet2 bandwidth upgrade will enable researchers to achieve a tenfold increase in moving data to/from TACC's supercomputing, visualization and data storage systems, greatly increasing their productivity and their ability to make new discoveries,' TACC director Jay Boisseau wrote in a statement."
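To put the tenfold link upgrade in concrete terms, here is the transfer time for a 10 TB dataset (an assumed, illustrative size) at each line rate, ignoring protocol overhead:

```python
# Time to move a 10 TB dataset (illustrative size) over a 10 Gbps link
# versus a 100 Gbps link, ignoring protocol overhead.

DATASET_BITS = 10 * 8e12          # 10 TB in bits (decimal units)

times = {}
for gbps in (10, 100):
    times[gbps] = DATASET_BITS / (gbps * 1e9) / 3600   # hours
    print(f"{gbps:3d} Gbps: {times[gbps]:.1f} hours")
```

Real throughput over a shared research network would be lower than line rate, but the tenfold ratio between the two cases is the point.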
Technology

Computer Network Piecing Together a Jigsaw of Ancient Jewish Lore 127

First-time accepted submitter aravenwood writes "The New York Times and the Times of Israel report today that artificial intelligence and a network of 100 computers in a basement at Tel Aviv University are being used to match 320,000 fragments of documents, dating as far back as the 9th century, in an attempt to reassemble the original documents. Since the trove of documents from the Jewish community of Cairo was discovered in 1896, only about 4,000 of them have been pieced together. The new technique photographs the fragments and uses image recognition and other algorithms to match the language, spacing, and handwriting style of the text, along with the shape of each fragment, against other fragments. The hope is that it could revolutionize not only the study of this trove of documents, which has been split among 67 different collections around the world since its discovery, but also how humanities disciplines study documents like these. The researchers expect to make 12 billion comparisons of fragments before the project is completed; they have already performed 2.8 billion. Among the documents, some dating from 950, were letters by Moses Maimonides and evidence that Cairene Jews were involved in the import of flax, linen, and sheep cheese from Sicily."
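A sense of scale: comparing every fragment against every other would mean n(n-1)/2 pairs, far more than the 12 billion comparisons the project plans, which suggests (my inference, not the articles') that the matching algorithms prune candidate pairs aggressively:

```python
# Scale check for the matching job described above: an all-against-all
# comparison of the fragments is n*(n-1)/2 candidate pairs.

n = 320_000
all_pairs = n * (n - 1) // 2
print(f"{all_pairs / 1e9:.1f} billion possible pairs")
```

At 12 billion planned comparisons versus roughly 51 billion possible pairs, features like fragment shape and handwriting style are evidently doing a lot of filtering before the expensive image matching runs.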
Supercomputing

Some Scientists Question Whether Quantum Computer Really Is Quantum 170

gbrumfiel writes "Last week, Google and NASA announced a partnership to buy a new quantum computer from Canadian firm D-Wave Systems. But NPR News reports that many scientists are still questioning whether the new machine really is quantum. Long-time critic and computer scientist Scott Aaronson has a long post detailing the current state of affairs. At issue is whether the 512 quantum bits at the processor's core are 'entangled' together. Measuring that entanglement directly destroys it, so D-Wave has had a hard time convincing skeptics. As with all things quantum mechanical, the devil is in the details. Still, it may not matter: D-Wave's machine appears to be far faster at solving certain kinds of problems (PDF), regardless of how it works."
Earth

NWS Announces Big Computer Upgrade 161

riverat1 writes "After being embarrassed when the Europeans did a better job forecasting Sandy than the National Weather Service, Congress allocated $25 million ($23.7 million after sequestration) in the Sandy relief bill for upgrades to forecasting and supercomputing resources. The NWS announced that its main forecasting computer will be upgraded from the current 213 teraflops to 2,600 teraflops by fiscal year 2015, over a twelve-fold increase. The upgrade is expected to refine the horizontal grid resolution by a factor of 3, allowing more precise forecasting of local weather features. Some of the allocated funds will also be used to hire contract scientists to improve the forecast model physics and enhance the collection and assimilation of data."
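A quick check of the "twelve-fold" arithmetic from the figures in the summary:

```python
# Verifying the "over a twelve-fold increase" claim.
old_tflops = 213
new_tflops = 2_600

ratio = new_tflops / old_tflops
print(f"{ratio:.1f}x")
```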
Supercomputing

Why We Should Build a Supercomputer Replica of the Human Brain 393

An anonymous reader sends this excerpt from Wired: "[Henry] Markram was proposing a project that has bedeviled AI researchers for decades, that most had presumed was impossible. He wanted to build a working mind from the ground up. ... The self-assured scientist claims that the only thing preventing scientists from understanding the human brain in its entirety — from the molecular level all the way to the mystery of consciousness — is a lack of ambition. If only neuroscience would follow his lead, he insists, his Human Brain Project could simulate the functions of all 86 billion neurons in the human brain, and the 100 trillion connections that link them. And once that's done, once you've built a plug-and-play brain, anything is possible. You could take it apart to figure out the causes of brain diseases. You could rig it to robotics and develop a whole new range of intelligent technologies. You could strap on a pair of virtual reality glasses and experience a brain other than your own."
Supercomputing

Japan Planning Exascale Computer For 2020 38

Nerval's Lobster writes "Japan has thrown its hat into the ring for exascale computing, reported the country's newspapers. The goal: achieve one exaFLOPS of performance by 2020. Japan's finance ministry has agreed to begin work next fiscal year on a supercomputer with a performance capability 100 times that of the K computer, a 10-petaFLOPS computer that debuted as the most powerful supercomputer in the world in 2011. The midterm report for the new supercomputer was concluded Thursday, the Asahi Shimbun business daily reported. The Japan Times was slightly more conservative, reporting that the Education, Culture, Sports, Science and Technology Ministry will seek funding to design the new machine in its fiscal 2014 budget request — implying that the project has not necessarily been approved. The science ministry is hoping to keep the cost of the new supercomputer below the ¥110 billion mark ($1.08 billion) that was required to develop the K computer, the paper reported. (Slashdot couldn't find any evidence that the project had been approved on the ministry's website, although the K computer was mentioned several times in a discussion of public-private partnerships.)"
