IT

HPE Announces World's Largest ARM-based Supercomputer (zdnet.com) 57

The race to exascale speed is getting a little more interesting with the introduction of HPE's Astra -- what will be the world's largest ARM-based supercomputer. From a report: HPE is building Astra for Sandia National Laboratories and the US Department of Energy's National Nuclear Security Administration (NNSA). The NNSA will use the supercomputer to run advanced modeling and simulation workloads for things like national security, energy, science and health care.

HPE is involved in building other ARM-based supercomputing installations, but when Astra is delivered later this year, "it will hands down be the world's largest ARM-based supercomputer ever built," Mike Vildibill, VP of Advanced Technologies Group at HPE, told ZDNet. The HPC system comprises 5,184 ARM-based processors -- Cavium's ThunderX2. Each processor has 28 cores and runs at 2 GHz. Astra will deliver over 2.3 theoretical peak petaflops of performance, which should put it well within the world's top 100 supercomputers -- a milestone for an ARM-based machine, Vildibill said.
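As a sanity check, the 2.3-petaflop figure can be reproduced from the specs quoted above. The FLOPs-per-cycle value below is an assumption on our part (consistent with two 128-bit FMA units per ThunderX2 core), not something stated in the summary:

```python
# Theoretical peak = processors x cores x clock x FLOPs per core per cycle.
processors = 5184          # ThunderX2 sockets (from the article)
cores = 28                 # cores per processor (from the article)
clock_hz = 2.0e9           # 2 GHz (from the article)
flops_per_cycle = 8        # ASSUMED: 2x 128-bit NEON FMA units per core

peak = processors * cores * clock_hz * flops_per_cycle
print(f"{peak / 1e15:.2f} petaflops")  # → 2.32 petaflops
```

The result, about 2.32 petaflops, matches the "over 2.3" claim, which suggests the quoted figure is a standard double-precision peak number.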

Hardware

US Once Again Boasts the World's Fastest Supercomputer (zdnet.com) 85

The US Department of Energy on Friday unveiled Summit, a supercomputer capable of performing 200 quadrillion calculations per second, or 200 petaflops. Its performance should put it at the top of the list of the world's fastest supercomputers, which is currently dominated by China. From a report (thanks to reader cb_abq for the tip): Summit, housed at the Oak Ridge National Laboratory (ORNL), was built for AI. IBM designed a new heterogeneous architecture for Summit, which combines IBM POWER9 CPUs with Nvidia GPUs. It has approximately 4,600 nodes, with six Nvidia Volta Tensor Core GPUs per node -- more than 27,000 GPUs in total. The last US supercomputer to top the list of the world's fastest was Titan, in 2012. ORNL, which houses Titan as well, says Summit will deliver more than five times the computational performance of Titan's 18,688 nodes.
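The GPU count and the headline figure hang together arithmetically. The per-GPU peak used below (7.8 teraflops of FP64 per Volta V100) is an assumed nominal figure, not from the summary:

```python
nodes = 4600               # "approximately 4,600 nodes" (from the article)
gpus_per_node = 6          # from the article
v100_fp64_tflops = 7.8     # ASSUMED: nominal FP64 peak per Volta V100

total_gpus = nodes * gpus_per_node
gpu_peak_pflops = total_gpus * v100_fp64_tflops / 1000
print(total_gpus)              # → 27600 ("more than 27,000")
print(round(gpu_peak_pflops))  # → 215, in line with the 200-petaflop claim
```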
AI

NVIDIA Unveils 2 Petaflop DGX-2 AI Supercomputer With 32GB Tesla V100, NVSwitch Tech 41

bigwophh writes from a report via HotHardware: NVIDIA CEO Jensen Huang took to the stage at GTC today to unveil a number of GPU-powered innovations for machine learning, including a new AI supercomputer and an updated version of the company's powerful Tesla V100 GPU that now sports a hefty 32GB of on-board HBM2 memory. A follow-on to last year's DGX-1 AI supercomputer, the new NVIDIA DGX-2 can be equipped with double the number of Tesla V100 processing modules for double the GPU horsepower. The DGX-2 can also have four times the available memory space, thanks to the updated Tesla V100's larger 32GB of memory. NVIDIA's new NVSwitch technology is a fully connected crossbar GPU interconnect fabric that allows NVIDIA's platform to scale up to 16 GPUs and utilize their memory space contiguously, whereas the previous DGX-1 platform was limited to 8 GPUs and their associated memory. NVIDIA claims NVSwitch is five times faster than the fastest PCI Express switch and offers an aggregate 2.4TB per second of bandwidth. A new Quadro card was also announced. Called the Quadro GV100, it, too, is powered by Volta. The Quadro GV100 packs 32GB of memory and supports NVIDIA's recently announced RTX real-time ray tracing technology.
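The "four times the available memory" claim follows from the numbers above. The DGX-1 configuration below (8 GPUs with 16GB each) is an assumption about the original launch spec, not stated in the summary:

```python
dgx2_gpus, dgx2_mem_gb = 16, 32   # DGX-2 with updated V100s (from the article)
dgx1_gpus, dgx1_mem_gb = 8, 16    # ASSUMED: original DGX-1 with 16GB V100s

dgx2_total = dgx2_gpus * dgx2_mem_gb  # contiguous space via NVSwitch
dgx1_total = dgx1_gpus * dgx1_mem_gb
print(dgx2_total)                     # → 512 (GB of contiguous GPU memory)
print(dgx2_total // dgx1_total)       # → 4, the claimed memory multiple
```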
Google

Google Unveils 72-Qubit Quantum Computer With Low Error Rates (tomshardware.com) 76

An anonymous reader quotes a report from Tom's Hardware: Google announced a 72-qubit universal quantum computer that promises the same low error rates the company saw in its first 9-qubit quantum computer. Google believes that this quantum computer, called Bristlecone, will be able to bring us to an age of quantum supremacy. In a recent announcement, Google said: "If a quantum processor can be operated with low enough error, it would be able to outperform a classical supercomputer on a well-defined computer science problem, an achievement known as quantum supremacy. These random circuits must be large in both number of qubits as well as computational length (depth). Although no one has achieved this goal yet, we calculate quantum supremacy can be comfortably demonstrated with 49 qubits, a circuit depth exceeding 40, and a two-qubit error below 0.5%. We believe the experimental demonstration of a quantum processor outperforming a supercomputer would be a watershed moment for our field, and remains one of our key objectives."

According to Google, the minimum error rate for a useful quantum computer needs to be below roughly 1%, coupled with close to 100 qubits. Google seems to have achieved this so far with the 72-qubit Bristlecone and its 1% error rate for readout, 0.1% for single-qubit gates, and 0.6% for two-qubit gates. Quantum computers will begin to become highly useful in solving real-world problems when we can achieve error rates of 0.1-1% coupled with hundreds of thousands to millions of qubits. According to Google, an ideal quantum computer would have at least hundreds of millions of qubits and an error rate lower than 0.01%. That may take several decades to achieve, even if we assume some kind of "Moore's Law" for quantum computers (which so far seems to hold, judging by the progress of Google, IBM, and D-Wave over the past few years).
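A crude way to see why these error rates matter: under an independent-error model, a random circuit's end-to-end fidelity decays exponentially with gate count. The gate-count bookkeeping below (one single-qubit gate per qubit per layer, 24 two-qubit gates per layer) is an illustrative assumption, applied to the 49-qubit, depth-40 benchmark from Google's quote and Bristlecone's reported error rates:

```python
qubits, depth = 49, 40             # supremacy benchmark from Google's quote
e1, e2, er = 0.001, 0.006, 0.01    # Bristlecone's reported error rates

one_qubit_gates = qubits * depth           # ASSUMED: 1 gate per qubit per layer
two_qubit_gates = (qubits // 2) * depth    # ASSUMED: 24 two-qubit gates per layer
fidelity = ((1 - e1) ** one_qubit_gates
            * (1 - e2) ** two_qubit_gates
            * (1 - er) ** qubits)          # one readout per qubit at the end
print(f"{fidelity:.2e}")  # prints a small fraction of a percent
```

Even at these record error rates, the surviving signal is tiny, which is why pushing two-qubit errors well below 0.5% matters so much for the supremacy demonstration.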

Bitcoin

Russian Nuclear Scientists Arrested For 'Bitcoin Mining Plot' (bbc.com) 84

Russian security officers have arrested several scientists working at a top-secret Russian nuclear warhead facility for allegedly mining crypto-currencies, BBC reported Friday, citing local media. From the report: The suspects had tried to use one of Russia's most powerful supercomputers to mine Bitcoins, media reports say. The Federal Nuclear Centre in Sarov, western Russia, is a restricted area. The centre's press service said: "There has been an unsanctioned attempt to use computer facilities for private purposes including so-called mining." The supercomputer was not supposed to be connected to the internet -- to prevent intrusion -- and once the scientists attempted to do so, the nuclear centre's security department was alerted. They were handed over to the Federal Security Service (FSB), the Russian news service Mash says. "As far as we are aware, a criminal case has been launched against them," the press service told Interfax news agency.
Networking

There's A Cluster of 750 Raspberry Pis at Los Alamos National Lab (insidehpc.com) 128

Slashdot reader overheardinpdx shares a video from the SC17 supercomputing conference where Bruce Tulloch from BitScope "describes a low-cost Raspberry Pi cluster that Los Alamos National Lab is using to simulate large-scale supercomputers." Slashdot reader mspohr describes them as "five rack-mount Bitscope Cluster Modules, each with 150 Raspberry Pi boards with integrated network switches." With each of the 750 chips packing four cores, it offers a 3,000-core highly parallelizable platform that emulates an ARM-based supercomputer, allowing researchers to test development code without requiring a power-hungry machine at significant cost to the taxpayer. The full 750-node cluster, drawing 2-3 W per processor, runs at 1000W idle, 3000W under typical load, and 4000W at peak (with the switches), and is substantially cheaper than a real supercomputer, if also computationally a lot slower. After development on the Pi clusters, frameworks can then be ported to the larger-scale supercomputers available at Los Alamos National Lab, such as Trinity and Crossroads.
BitScope's Tulloch points out the cluster is fully integrated with the network switching infrastructure at Los Alamos National Lab, and applauds the Raspberry Pi cluster as an "affordable, scalable, highly parallel testbed for high-performance-computing system-software developers."
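The core-count and power figures quoted above check out with simple arithmetic:

```python
boards, cores_per_board = 750, 4   # from the article
typical_watts = 3000               # typical load, from the article

total_cores = boards * cores_per_board
print(total_cores)                 # → 3000 cores in the full cluster
print(typical_watts / total_cores) # → 1.0 watt per core at typical load
```

One watt per core is what makes the cluster viable as an always-on development testbed, compared with the megawatt-class machines it stands in for.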
China

All 500 of the World's Top 500 Supercomputers Are Running Linux (zdnet.com) 288

Freshly Exhumed shares a report from ZDNet: Linux rules supercomputing. This day has been coming since 1998, when Linux first appeared on the TOP500 Supercomputer list. Today, it finally happened: All 500 of the world's fastest supercomputers are running Linux. The last two non-Linux systems, a pair of Chinese IBM POWER computers running AIX, dropped off the November 2017 TOP500 Supercomputer list. When the first TOP500 supercomputer list was compiled in June 1993, Linux was barely more than a toy. It hadn't even adopted Tux as its mascot yet. It didn't take long for Linux to start its march on supercomputing.

From when it first appeared on the TOP500 in 1998, Linux was on its way to the top. Before Linux took the lead, Unix was supercomputing's top operating system. From 2003 onward, the TOP500 was on its way to Linux domination. By 2004, Linux had taken the lead for good. This happened for two reasons: First, since most of the world's top supercomputers are research machines built for specialized tasks, each machine is a standalone project with unique characteristics and optimization requirements. To save costs, no one wants to develop a custom operating system for each of these systems. With Linux, however, research teams can easily modify and optimize its open-source code for their one-off designs.
The semiannual TOP500 Supercomputer List was released yesterday. It also shows that China now claims 202 systems within the TOP500, while the United States claims 143 systems.
China

China Overtakes US In Latest Top 500 Supercomputer List (enterprisecloudnews.com) 110

An anonymous reader quotes a report from Enterprise Cloud News: The release of the semiannual Top 500 Supercomputer List is a chance to gauge the who's who of countries that are pushing the boundaries of high-performance computing. The most recent list, released Monday, shows that China is now in a class by itself. China now claims 202 systems within the Top 500, while the United States -- once the dominant player -- tumbles to second place with 143 systems represented on the list. Only a few months ago, the U.S. had 169 systems within the Top 500 compared to China's 160. The growth of China and the decline of the United States within the Top 500 has prompted the U.S. Department of Energy to dole out $258 million in grants to several tech companies to develop exascale systems, the next great leap in HPC. These systems can handle a billion billion calculations a second, or 1 exaflop. However, even as these physical machines grow more and more powerful, a good portion of supercomputing power is moving to the cloud, where it can be accessed by more researchers and scientists, making the technology more democratic.
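The "billion billion" phrasing is easy to misread, so a quick unit check:

```python
exaflop = 1e9 * 1e9    # "a billion billion calculations a second"
petaflop = 1e15

print(exaflop)             # → 1e+18 operations per second
print(exaflop / petaflop)  # → 1000.0 petaflops per exaflop
```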
IBM

IBM Raises the Bar with a 50-Qubit Quantum Computer (technologyreview.com) 69

IBM said on Friday it has created a prototype 50 qubit quantum computer as it further increases the pressure on Google in the battle to commercialize quantum computing technology. The company is also making a 20-qubit system available through its cloud computing platform, it said. From a report: The announcement does not mean quantum computing is ready for common use. The system IBM has developed is still extremely finicky and challenging to use, as are those being built by others. In both the 50- and the 20-qubit systems, the quantum state is preserved for 90 microseconds -- a record for the industry, but still an extremely short period of time. Nonetheless, 50 qubits is a significant landmark in progress toward practical quantum computers. Other systems built so far have had limited capabilities and could perform only calculations that could also be done on a conventional supercomputer. A 50-qubit machine can do things that are extremely difficult to simulate without quantum technology. Whereas normal computers store information as either a 1 or a 0, quantum computers exploit two phenomena -- entanglement and superposition -- to process information differently.
China

China Arms Upgraded Tianhe-2A Hybrid Supercomputer (nextplatform.com) 23

New submitter kipperstem77 shares an excerpt from a report via The Next Platform: According to James Lin, vice director of the Center for High Performance Computing (HPC) at Shanghai Jiao Tong University, who divulged the plans last year, the National University of Defense Technology (NUDT) is building one of the three pre-exascale machines [that China is currently investing in] -- in this case a kicker to the Tianhe-1A CPU-GPU hybrid that was deployed in 2010 and that put China on the HPC map. This pre-exascale system will be installed at the National Supercomputer Center in Tianjin, not the one in Guangzhou, according to Lin. The machine is expected to use ARM processors, and very likely Matrix2000 DSP accelerators as well, though this has not been confirmed. The second pre-exascale machine will be an upgrade to the TaihuLight system using a future Shenwei processor, but it will be installed at the National Supercomputing Center in Jinan. The third pre-exascale machine being funded by China is being architected in conjunction with AMD, using licensed server processor technology; everyone now thinks it is going to be based on Epyc processors, possibly with Radeon Instinct GPU coprocessors. The Next Platform has a slide embedded in its report "showing the comparison between Tianhe-2, which was the fastest supercomputer in the world for two years, and Tianhe-2A, which will be vying for the top spot when the next list comes out." Every part of this system shows improvements.
AI

IBM Pitched Its Watson Supercomputer as a Revolution in Cancer Care. It's Nowhere Close (statnews.com) 108

IBM began selling Watson to recommend the best cancer treatments to doctors around the world three years ago. But is it really doing its job? Not so much. An investigation by Stat found that the supercomputer isn't living up to the lofty expectations IBM created for it. It is still struggling with the basic step of learning about different forms of cancer. Only a few dozen hospitals have adopted the system, which is a long way from IBM's goal of establishing dominance in a multibillion-dollar market. And at foreign hospitals, physicians complained its advice is biased toward American patients and methods of care. From the report: The interviews suggest that IBM, in its rush to bolster flagging revenue, unleashed a product without fully assessing the challenges of deploying it in hospitals globally. While it has emphatically marketed Watson for cancer care, IBM hasn't published any scientific papers demonstrating how the technology affects physicians and patients. As a result, its flaws are getting exposed on the front lines of care by doctors and researchers who say that the system, while promising in some respects, remains undeveloped. [...] Perhaps the most stunning overreach is in the company's claim that Watson for Oncology, through artificial intelligence, can sift through reams of data to generate new insights and identify, as an IBM sales rep put it, "even new approaches" to cancer care. STAT found that the system doesn't create new knowledge and is artificially intelligent only in the most rudimentary sense of the term.
AI

Leading Chinese Bitcoin Miner Wants To Cash In On AI (qz.com) 23

hackingbear writes: Bitmain, the most influential company in the bitcoin economy by the sheer amount of processing power, or hash rate, that it controls, plans to unleash its bitcoin mining ASIC technology on AI applications. The company designed a new deep learning processor, Sophon, named after an alien-made, proton-sized supercomputer in China's seminal science-fiction novel, The Three-Body Problem. The idea is to etch into silicon some of the most common deep learning algorithms, thus greatly boosting efficiency. Users will be able to apply their own datasets and build their own models on these ASICs, allowing the resulting neural networks to generate results and learn from those results at a far quicker pace. The company hopes that thousands of Bitmain Sophon units soon could be training neural networks in vast data centers around the world.
NASA

SpaceX Successfully Launches, Recovers Falcon 9 For CRS-12 (techcrunch.com) 71

Another SpaceX rocket has been successfully launched from NASA's Kennedy Space Center today, carrying a Dragon capsule loaded with over 6,400 pounds of cargo destined for the International Space Station. This marks an even dozen for ISS resupply missions launched by SpaceX under contract to NASA. TechCrunch reports: The rocket successfully launched from NASA's Kennedy Space Center at 12:31 PM EDT, and Dragon deployed from the second stage as planned. Dragon will rendezvous with the ISS on August 16 for capture by the station's Canadarm 2 robotic appendage, after which it'll be attached to the station. After roughly a month, it'll leave the ISS with around 3,000 pounds of returned cargo on board and splash down in the Pacific Ocean for recovery. There's another reason this launch was significant, aside from its experimental payload (which included a supercomputer designed to help humans travel to Mars): SpaceX has announced it will only use re-used Dragon capsules for all future CRS missions, meaning this is the last time a brand new Dragon will be used to resupply the ISS, if all goes to plan. Today's launch also included an attempt to recover the Falcon 9 first stage for re-use at SpaceX's land-based LZ-1 landing pad. The Falcon 9 first stage returned to Earth as planned, and touched down at Cape Canaveral roughly 9 minutes after launch.
ISS

SpaceX Will Deliver The First Supercomputer To The ISS (hpe.com) 98

Slashdot reader #16,185, Esther Schindler writes: "By NASA's rules, not just any computer can go into space. Their components must be radiation hardened, especially the CPUs," reports HPE Insights. "Otherwise, they tend to fail due to the effects of ionizing radiation. The customized processors undergo years of design work and then more years of testing before they are certified for spaceflight." As a result, the ISS runs the station using two sets of three Command and Control Multiplexer DeMultiplexer computers whose processors are 20MHz Intel 80386SX CPUs, right out of 1988. "The traditional way to radiation-harden a spacecraft computer is to add redundancy to its circuits or by using insulating substrates instead of the usual semiconductor wafers on chips. That's expensive and time consuming. HPE scientists believe that simply slowing down a system in adverse conditions can avoid glitches and keep the computer running."

So, assuming the August 15 SpaceX Falcon 9 rocket launch goes well, there will be a supercomputer headed into space -- using off-the-shelf hardware. Let's see if the idea pans out. "We may discover a set of parameters with which a supercomputer can successfully run for at least a year without errors," says Dr. Mark R. Fernandez, the mission's co-principal investigator for software and SGI's HPC technology officer. "Alternately, one or more components of the system will fail, in which case we will then do the typical failure analysis on Earth. That will let us learn what to change to make the systems more reliable in the future."

The article points out that the New Horizons spacecraft that just flew past Pluto has a 12MHz Mongoose-V CPU, based on the MIPS R3000 CPU. "You may remember its much faster ancestor: the chip that took you on adventures in the original Sony PlayStation, circa 1994."
Space

Can Primordial Black Holes Alone Account For Dark Matter? 135

thomst writes: Slashdot stories have reported extensively on the LIGO experiments' initial detection of gravitational waves emanating from collisions of primordial black holes, beginning, on February 11, 2016, with the first (and most widely-reported) such detection. Other Slashdot articles have chronicled the second LIGO detection event and the third one. There's even been a Slashdot report on the Synthetic Universe supercomputer model that provided support for the conclusion that the first detection event was, indeed, of a collision between two primordial black holes, rather than the more familiar stellar-remnant kind that results from more recent supernovae of large-mass stars.

What interests me is the possibility that black holes of all kinds -- and particularly primordial black holes -- are so commonplace that they may be all that's required to explain the effects of "dark matter." Dark matter, which, according to current models, makes up some 26% of the mass of our Universe, has been firmly established as real, both by calculation of the gravity necessary to hold spiral galaxies like our own together, and by direct observation of gravitational lensing effects produced by the "empty" space between recently-collided galaxies. There's no question that it exists. What is unknown, at this point, is what exactly it consists of.

The leading candidate has, for decades, been something called WIMPs (Weakly Interacting Massive Particles): a theoretical notion that there are atomic-scale particles that interact with "normal" baryonic matter only via gravity and the weak nuclear force. The problem with WIMPs is that, thus far, not a single one has been detected, despite years of searching for evidence of their existence via multiple multi-billion-dollar detectors.

With the recent publication of a study of black hole populations in our galaxy (article paywalled, more layman-friendly press release at Phys.org) that indicates there may be as many as 100 million stellar-remnant-type black holes in the Milky Way alone, the question arises, "Is the number of primordial and stellar-remnant black holes in our Universe sufficient to account for the calculated mass of dark matter, without having to invoke WIMPs at all?"

I don't personally have the mathematical knowledge to even begin to answer that question, but I'm curious to find out what the professional cosmologists here think of the idea.
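A rough back-of-envelope check on the submitter's question is actually possible from the numbers quoted. Two of the inputs below are assumptions not found in the submission: a typical stellar-remnant black hole mass of ~10 solar masses, and a Milky Way dark matter halo of ~1e12 solar masses:

```python
stellar_bhs = 1e8       # "as many as 100 million" (from the cited study)
mass_per_bh = 10.0      # ASSUMED: typical stellar-remnant mass, solar masses
dark_halo_mass = 1e12   # ASSUMED: Milky Way dark matter halo, solar masses

fraction = stellar_bhs * mass_per_bh / dark_halo_mass
print(f"{fraction:.1%}")  # → 0.1% of the halo mass
```

Under these assumptions, stellar remnants alone fall short by roughly three orders of magnitude, so the question turns entirely on how abundant primordial black holes are -- something these numbers do not constrain.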
United States

Swiss Supercomputer Edges US Out of Top Spot (bbc.com) 64

There have only been two times in the last 24 years when the U.S. has been edged out of the top spot among the world's most powerful supercomputers. Now is one of those times. "An upgrade to a Swiss supercomputer has bumped the U.S. Department of Energy's Cray XK7 to number four on the list rating these machines," reports the BBC. "The only other time the U.S. fell out of the top three was in 1996." The top two slots are occupied by Chinese supercomputers. From the report: The U.S. machine has been supplanted by Switzerland's Piz Daint system, which is installed at the country's national supercomputer center. The upgrade boosted its performance from 9.8 petaflops to 19.6. The machine is named after a peak in the Grisons region of Switzerland. One petaflop is equal to one thousand trillion operations per second. A "flop" (floating-point operation) can be thought of as a step in a calculation. The performance improvement meant it surpassed the 17.6-petaflop capacity of the DoE machine, located at the Oak Ridge National Laboratory in Tennessee. The U.S. is well represented lower down the list: half of the machines in the top 10 are currently based in North America. And the Oak Ridge National Laboratory looks set to return to the top three later this year, when its Summit supercomputer comes online. This is expected to have a peak performance of more than 100 petaflops.
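The quoted performance numbers line up neatly:

```python
before_pf, after_pf = 9.8, 19.6   # Piz Daint, pre- and post-upgrade
doe_pf = 17.6                     # Cray XK7 at Oak Ridge

print(after_pf / before_pf)  # → 2.0: the upgrade exactly doubled performance
print(after_pf > doe_pf)     # → True: enough to pass the DoE machine
```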
AMD

Six Companies Awarded $258 Million From US Government To Build Exascale Supercomputers (digitaltrends.com) 40

The U.S. Department of Energy will be investing $258 million to help six leading technology firms -- AMD, Cray Inc., Hewlett Packard Enterprise, IBM, Intel, and Nvidia -- research and build exascale supercomputers. Digital Trends reports: The funding will be allocated to them over the course of a three-year period, with each company providing 40 percent of the overall project cost, contributing to an overall investment of $430 million in the project. "Continued U.S. leadership in high performance computing is essential to our security, prosperity, and economic competitiveness as a nation," U.S. Secretary of Energy Rick Perry said. "These awards will enable leading U.S. technology firms to marshal their formidable skills, expertise, and resources in the global race for the next stage in supercomputing -- exascale-capable systems." The funding will finance research and development in three key areas: hardware technology, software technology, and application development. There are hopes that one of the companies involved in the initiative will be able to deliver an exascale-capable supercomputer by 2021.
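The two dollar figures in the story are consistent: if each company covers 40 percent of its project cost, the DOE's $258 million is the remaining 60 percent of the total.

```python
doe_award = 258e6        # DOE funding (from the article)
company_share = 0.40     # each company covers 40% of its project cost

total = doe_award / (1 - company_share)
print(f"${total / 1e6:.0f} million")  # → $430 million, matching the article
```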
The Internet

NYU Accidentally Exposed Military Code-breaking Computer Project To Entire Internet (theintercept.com) 75

An anonymous reader writes: A confidential computer project designed to break military codes was accidentally made public by New York University engineers. An anonymous digital security researcher identified files related to the project while hunting for things on the internet that shouldn't be there, The Intercept reported. He used Shodan, a search engine for internet-connected devices, to locate the project. It is the product of a joint initiative by the Department of Defense, IBM, and NYU's Institute for Mathematics and Advanced Supercomputing, which is headed by the world-renowned Chudnovsky brothers, David and Gregory. Information on an exposed backup drive described the supercomputer, called WindsorGreen, as a system capable of cracking passwords.
NASA

NASA Runs Competition To Help Make Old Fortran Code Faster (bbc.com) 205

NASA is seeking help from coders to speed up the software it uses to design experimental aircraft. From a report on BBC: It is running a competition that will share $55,000 between the top two people who can make its FUN3D software run up to 10,000 times faster. The FUN3D code is used to model how air flows around simulated aircraft in a supercomputer. The software was developed in the 1980s and is written in an older computer programming language called Fortran. "This is the ultimate 'geek' dream assignment," said Doug Rohn, head of NASA's transformative aeronautics concepts program, which makes heavy use of the FUN3D code. In a statement, Mr Rohn said the software is used on the agency's Pleiades supercomputer to test early designs of futuristic aircraft. The software suite tests them using computational fluid dynamics, which makes heavy use of complicated mathematical formulae and data structures to see how well the designs work.
Canada

'Breakthrough' LI-RAM Material Can Store Data With Light (ctvnews.ca) 104

A Vancouver researcher has patented a new material that uses light instead of electricity to store data. An anonymous reader writes: LI-RAM -- that's light induced magnetoresistive random-access memory -- promises supercomputer speeds for your cellphones and laptops, according to Natia Frank, the materials scientist at the University of Victoria who developed the new material as part of an international effort to reduce the heat and power consumption of modern processors. She envisions a world of LI-RAM mobile devices which are faster, thinner, and able to hold much more data -- all while consuming less power and producing less heat.

And best of all, they'd last twice as long on a single charge (while producing almost no heat), according to a report on CTV News, which describes this as "a breakthrough material" that will not only make smartphones faster and more durable, but also more energy-efficient. The University of Victoria calculates that 10% of the world's electricity is consumed by "information communications technology," so LI-RAM phones could conceivably cut that figure in half.

They also report that the researcher is "working with international electronics manufacturers to optimize and commercialize the technology, and says it could be available on the market in the next 10 years."
