Hardware

SpiNNaker Powers Up World's Largest Supercomputer That Emulates a Human Brain 164

The world's largest neuromorphic supercomputer, the Spiking Neural Network Architecture (SpiNNaker), was just switched on for the first time yesterday, boasting one million processor cores and the ability to perform 200 trillion actions per second. HotHardware reports: SpiNNaker has been twenty years and nearly $19.5 million in the making. The project was originally supported by the Engineering and Physical Sciences Research Council (EPSRC), but has been most recently funded by the European Human Brain Project. The supercomputer was designed and built by the University of Manchester's School of Computer Science. Construction began in 2006 and the supercomputer was finally turned on yesterday.

SpiNNaker is not the first supercomputer to incorporate one million processor cores, but it stands apart because it is designed to mimic the human brain. Most computers send information from one point to another over a standard network. SpiNNaker instead sends many small packets of information to thousands of destinations at once, much as neurons pass chemical and electrical signals through the brain. SpiNNaker uses electronic circuits to imitate neurons. So far it has been used to mimic the processing of isolated brain networks, such as regions of the cortex, and to control SpOmnibot, a robot that processes visual information and navigates toward its targets.
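
To illustrate the communication pattern being described, here is a minimal Python sketch (ours, not SpiNNaker's actual software stack; the neuron model and fan-out are illustrative) of a leaky integrate-and-fire neuron whose spike events are multicast to many target neurons:

```python
# Minimal sketch, not SpiNNaker's real stack: a leaky integrate-and-fire
# neuron whose spikes are fanned out as tiny packets to many targets,
# the communication pattern the article describes.
import random

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold   # firing threshold
        self.leak = leak             # per-step decay of membrane potential

    def step(self, input_current):
        """Integrate input, apply leak, and return True if the neuron spikes."""
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0     # reset after firing
            return True
        return False

def multicast(source_id, targets):
    """Deliver a tiny spike packet to every target, the way a router
    fans one event out to thousands of destinations."""
    return [(source_id, t) for t in targets]

neuron = LIFNeuron()
targets = list(range(1000))          # hypothetical fan-out of 1,000 synapses
for t in range(20):
    if neuron.step(random.uniform(0.0, 0.3)):
        packets = multicast(source_id=42, targets=targets)
        print(f"step {t}: spike fanned out to {len(packets)} targets")
```

The point is that each event is tiny; the hard part, and the part SpiNNaker's hardware is built around, is routing enormous numbers of these small packets to thousands of destinations simultaneously.
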
The Internet

'You Can See Almost Everything.' Antarctica Just Became the Best-Mapped Continent on Earth (fortune.com) 110

Antarctica has become the best-mapped continent on Earth with a new high-resolution terrain map showing the ice-covered landmass in unprecedented detail. From a report: According to the scientists at Ohio State University and the University of Minnesota who created the imagery, Antarctica is now the best-mapped continent on Earth. The Reference Elevation Model of Antarctica (REMA) was constructed using hundreds of thousands of satellite images taken between 2009 and 2017, Earther reports. A supercomputer assembled the massive amounts of data, including the elevation of the land over time, and created REMA, an immensely detailed topographical map, with a file size over 150 terabytes. The new map has a resolution of 2 to 8 meters, compared to the usual 1,000 meters, says an Ohio State press release. According to The New York Times, the detail of this new map is the equivalent of being able to see down to a car, or smaller, when before you could only see the whole of Central Park. Scientists now know the elevation of every point of Antarctica, with an error margin of just a few feet.
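
For a rough sense of the jump in detail (a back-of-envelope figure of ours, not one from the researchers): going from a 1,000-meter grid to a 2-meter grid means each old grid cell is resolved into up to

$$\frac{(1000\ \mathrm{m})^2}{(2\ \mathrm{m})^2} = \frac{10^6\ \mathrm{m}^2}{4\ \mathrm{m}^2} = 250{,}000$$

new cells, which helps explain why the file size runs to more than 150 terabytes.
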
Programming

Is Julia the Next Big Programming Language? MIT Thinks So, as Version 1.0 Lands (techrepublic.com) 386

Julia, the MIT-created programming language for developers "who want it all", hit its milestone 1.0 release this month -- with MIT highlighting its rapid adoption in the six short years since its launch. From a report: Released in 2012, Julia is designed to combine the speed of C with the usability of Python, the dynamism of Ruby, the mathematical prowess of MATLAB, and the statistical chops of R. "The release of Julia 1.0 signals that Julia is now ready to change the technical world by combining the high-level productivity and ease of use of Python and R with the lightning-fast speed of C++," says MIT professor Alan Edelman. The breadth of Julia's capabilities and its ability to spread workloads across hundreds of thousands of processing cores have led to its use for everything from machine learning to large-scale supercomputer simulation. MIT says Julia is the only high-level dynamic programming language in the "petaflop club," having been used to simulate 188 million stars, galaxies, and other astronomical objects on Cori, the world's 10th-most powerful supercomputer. The simulation ran in just 14.6 minutes, using 650,000 Intel Knights Landing Xeon Phi cores to handle 1.5 petaflops (quadrillion floating-point operations per second).
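
As a back-of-envelope check on those figures (our arithmetic, not MIT's): spreading 1.5 petaflops across 650,000 cores works out to

$$\frac{1.5\times10^{15}\ \mathrm{flop/s}}{6.5\times10^{5}\ \mathrm{cores}} \approx 2.3\times10^{9}\ \mathrm{flop/s\ per\ core},$$

i.e. roughly 2.3 gigaflops sustained per Knights Landing core.
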
Education

University of Texas is Getting a $60 Million Supercomputer (cnet.com) 88

The University of Texas at Austin will soon be home to one of the most powerful supercomputers in the world. From a report: The National Science Foundation awarded a $60 million grant to the school's Texas Advanced Computing Center, UT Austin and NSF said Wednesday. The supercomputer, named Frontera, is set to become operational in 2019, roughly a year from now, and will be "among the most powerful in the world," according to a statement. To be exact, it will be the fifth most powerful in the world, the third most powerful in the US, and the most powerful at a university.
AI

IBM Watson Reportedly Recommended Cancer Treatments That Were 'Unsafe and Incorrect' 103

An anonymous reader quotes a report from Gizmodo: Internal company documents from IBM show that medical experts working with the company's Watson supercomputer found "multiple examples of unsafe and incorrect treatment recommendations" when using the software, according to a report from Stat News. According to Stat, those documents provided strong criticism of the Watson for Oncology system, and stated that the "often inaccurate" suggestions made by the product bring up "serious questions about the process for building content and the underlying technology." One example in the documents is the case of a 65-year-old man diagnosed with lung cancer, who also seemed to have severe bleeding. Watson reportedly suggested the man be administered both chemotherapy and the drug "Bevacizumab." But the drug can lead to "severe or fatal hemorrhage," according to a warning on the medication, and therefore shouldn't be given to people with severe bleeding, as Stat points out. A Memorial Sloan Kettering (MSK) Cancer Center spokesperson told Stat that they believed this recommendation was not given to a real patient, and was just a part of system testing.
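To make the safety failure concrete, here is a minimal, hypothetical sketch of the kind of contraindication filter a treatment recommender would be expected to apply. It is illustrative only, says nothing about how Watson for Oncology is actually implemented, and the drug and condition entries are stand-ins.

```python
# Hypothetical sketch of a contraindication filter, illustrating the safety
# check the Stat example implies; not how Watson for Oncology actually works.

# Map of drugs to conditions that rule them out (illustrative entries only).
CONTRAINDICATIONS = {
    "bevacizumab": {"severe bleeding"},   # carries a severe/fatal hemorrhage warning
    "cisplatin": {"severe renal impairment"},
}

def safe_recommendations(candidates, patient_conditions):
    """Drop any candidate treatment that is contraindicated for this patient."""
    safe = []
    for drug in candidates:
        blocked = CONTRAINDICATIONS.get(drug, set()) & set(patient_conditions)
        if blocked:
            print(f"excluded {drug}: contraindicated with {', '.join(blocked)}")
        else:
            safe.append(drug)
    return safe

# The 65-year-old lung cancer patient from the report: severe bleeding present.
print(safe_recommendations(["chemotherapy", "bevacizumab"], ["severe bleeding"]))
# excludes bevacizumab, leaving only chemotherapy
```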

According to the report, the documents blame the problems on the training provided by IBM engineers and doctors at MSK, which partnered with IBM in 2012 to train Watson to "think" more like a doctor. The documents state that -- instead of feeding real patient data into the software -- the doctors were reportedly feeding Watson hypothetical patient data, or "synthetic" case data. This would mean it's possible that when other hospitals used the MSK-trained Watson for Oncology, doctors were receiving treatment recommendations guided by MSK doctors' treatment preferences, rather than an AI interpretation of actual patient data. And the results seem to have been less than desirable for some doctors.
Communications

China's Quantum Radar Could Detect Stealth Planes, Missiles (popsci.com) 194

hackingbear shares a report from Popular Science: China Electronics Technology Group Corporation (CETC), China's foremost military electronics company, announced that its groundbreaking quantum radar has achieved the capability of tracking high-altitude objects, likely by increasing the coherence time of entangled photons. CETC envisions that its quantum radar will be used in the stratosphere to track objects in "the upper atmosphere and beyond" (including space). The quantum radar can identify the position, radar cross section, speed, and direction of a target, and even "observe" its composition, for example differentiating an actual nuclear warhead from inflatable decoys. [...] Importantly, attempts to spoof the quantum radar would be easily noticed, since any attempt to alter or duplicate the entangled photons would be detected by the radar. The news is an important illustration of a larger trend of Chinese advancement in the new, crucial area of quantum research. Other notable projects in China's quantum technology include the Micius satellite, and advances by Alibaba and the University of Science and Technology of China, which set a world record by entangling 18 photons (a quantum supercomputer would require about 50 entangled photons). China arguably leads the world in quantum technologies.
Japan

Japan's Fujitsu and RIKEN Have Dropped the SPARC Processor in Favor of an ARM Design Chip Scaled Up For Supercomputer Performance (ieee.org) 40

Japan's computer giant Fujitsu and RIKEN, the country's largest research institute, have begun field-testing a prototype CPU for a next-generation supercomputer they believe will take the country back to the leading position in global rankings of supercomputing might. From a report: The next-generation machine, dubbed the Post-K supercomputer, follows the two collaborators' development of the 8-petaflops K supercomputer that commenced operations for RIKEN in 2012, and which has since been upgraded to 11 petaflops in application processing speed. Now the aim is to "create the world's highest performing supercomputer," with "up to one hundred times the application execution performance of the K computer," Fujitsu declared in a press release on 21 June. The plan is to install the souped-up machine at the government-affiliated RIKEN around 2021. If the partners achieve those execution speeds, that would place the Post-K machine in exascale territory (one exaflops being a billion billion floating-point operations a second). To do this, they have replaced the SPARC64 VIIIfx CPU powering the K computer with an Armv8-A CPU implementing the 512-bit Scalable Vector Extension (SVE), an architecture enhanced for supercomputer use that both Fujitsu and RIKEN had a hand in developing. The new design uses CPUs with 48 compute cores plus 2 assistant cores for the computational nodes, and 48 compute cores plus 4 assistant cores for the combined I/O and computational nodes. The system structure uses 1 CPU per node, and 384 nodes make up one rack.
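
From the figures above, one rack works out to (assistant cores excluded)

$$384\ \mathrm{nodes/rack} \times 48\ \mathrm{compute\ cores/node} = 18{,}432\ \mathrm{compute\ cores\ per\ rack}.$$
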
Operating Systems

Finally, It's the Year of the Linux... Supercomputer (zdnet.com) 171

Beeftopia writes: From ZDNet: "The latest TOP500 Supercomputer list is out. What's not surprising is that Linux runs on every last one of the world's fastest supercomputers. Linux has dominated supercomputing for years. But, Linux only took over supercomputing lock, stock, and barrel in November 2017. That was the first time all of the TOP500 machines were running Linux. Before that IBM AIX, a Unix variant, was hanging on for dear life low on the list."

An interesting architectural note: "GPUs, not CPUs, now power most of supercomputers' speed."

IT

HPE Announces World's Largest ARM-based Supercomputer (zdnet.com) 57

The race to exascale speed is getting a little more interesting with the introduction of HPE's Astra -- what will be the world's largest ARM-based supercomputer. From a report: HPE is building Astra for Sandia National Laboratories and the US Department of Energy's National Nuclear Security Administration (NNSA). The NNSA will use the supercomputer to run advanced modeling and simulation workloads for things like national security, energy, science and health care.

HPE is involved in building other ARM-based supercomputing installations, but when Astra is delivered later this year, "it will hands down be the world's largest ARM-based supercomputer ever built," Mike Vildibill, VP of the Advanced Technologies Group at HPE, told ZDNet. The HPC system comprises 5,184 ARM-based processors -- the ThunderX2, built by Cavium. Each processor has 28 cores and runs at 2 GHz. Astra will deliver over 2.3 petaflops of theoretical peak performance, which should put it well within the top 100 supercomputers ever built -- a milestone for an ARM-based machine, Vildibill said.
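
The quoted peak is roughly consistent with the part count if each core retires on the order of 8 double-precision floating-point operations per cycle (an assumption of ours about the ThunderX2's FP pipelines, not a figure from the article):

$$5{,}184\ \mathrm{CPUs} \times 28\ \mathrm{cores} \times 2\times10^{9}\ \mathrm{Hz} \times 8\ \mathrm{flop/cycle} \approx 2.3\times10^{15}\ \mathrm{flop/s} = 2.3\ \mathrm{petaflops}.$$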

Hardware

US Once Again Boasts the World's Fastest Supercomputer (zdnet.com) 85

The US Department of Energy on Friday unveiled Summit, a supercomputer capable of performing 200 quadrillion calculations per second, or 200 petaflops. Its performance should put it at the top of the list of the world's fastest supercomputers, which is currently dominated by China. From a report (thanks to reader cb_abq for the tip): Summit, housed at the Oak Ridge National Laboratory (ORNL), was built for AI. IBM designed a new heterogeneous architecture for Summit, which combines IBM POWER9 CPUs with Nvidia GPUs. It has approximately 4,600 nodes, with six Nvidia Volta Tensor Core GPUs per node -- more than 27,000 GPUs in total. The last US supercomputer to top the list of the world's fastest was Titan, in 2012. ORNL, which houses Titan as well, says Summit will deliver more than five times the computational performance of Titan's 18,688 nodes.
AI

NVIDIA Unveils 2 Petaflop DGX-2 AI Supercomputer With 32GB Tesla V100, NVSwitch Tech 41

bigwophh writes from a report via HotHardware: NVIDIA CEO Jensen Huang took to the stage at GTC today to unveil a number of GPU-powered innovations for machine learning, including a new AI supercomputer and an updated version of the company's powerful Tesla V100 GPU that now sports a hefty 32GB of on-board HBM2 memory. A follow-on to last year's DGX-1 AI supercomputer, the new NVIDIA DGX-2 can be equipped with double the number of Tesla V100 processing modules for double the GPU horsepower. The DGX-2 can also offer four times the available memory space, thanks to the updated Tesla V100's larger 32GB of memory. NVIDIA's new NVSwitch technology is a full-crossbar GPU interconnect fabric that allows NVIDIA's platform to scale up to 16 GPUs and use their memory space contiguously, whereas the previous DGX-1 platform was limited to eight GPUs and their associated memory. NVIDIA claims NVSwitch is five times faster than the fastest PCI Express switch and offers an aggregate 2.4TB per second of bandwidth. A new Quadro card was also announced: called the Quadro GV100, it, too, is powered by Volta. The Quadro GV100 packs 32GB of memory and supports NVIDIA's recently announced RTX real-time ray tracing technology.
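
The memory claim follows from the part counts, assuming the original DGX-1 shipped with 16GB Tesla V100 modules:

$$16 \times 32\ \mathrm{GB} = 512\ \mathrm{GB} \quad\text{versus}\quad 8 \times 16\ \mathrm{GB} = 128\ \mathrm{GB},$$

a factor of four more GPU memory, addressable as one contiguous space over NVSwitch.
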
Google

Google Unveils 72-Qubit Quantum Computer With Low Error Rates (tomshardware.com) 76

An anonymous reader quotes a report from Tom's Hardware: Google announced a 72-qubit universal quantum computer that promises the same low error rates the company saw in its first 9-qubit quantum computer. Google believes that this quantum computer, called Bristlecone, will be able to bring us to an age of quantum supremacy. In a recent announcement, Google said: "If a quantum processor can be operated with low enough error, it would be able to outperform a classical supercomputer on a well-defined computer science problem, an achievement known as quantum supremacy. These random circuits must be large in both number of qubits as well as computational length (depth). Although no one has achieved this goal yet, we calculate quantum supremacy can be comfortably demonstrated with 49 qubits, a circuit depth exceeding 40, and a two-qubit error below 0.5%. We believe the experimental demonstration of a quantum processor outperforming a supercomputer would be a watershed moment for our field, and remains one of our key objectives."

According to Google, quantum computers need error rates below roughly 1%, coupled with close to 100 qubits. Google seems to have achieved this so far with the 72-qubit Bristlecone and its 1% error rate for readout, 0.1% for single-qubit gates, and 0.6% for two-qubit gates. Quantum computers will begin to become highly useful in solving real-world problems when we can achieve error rates of 0.1-1% coupled with hundreds of thousands to millions of qubits. According to Google, an ideal quantum computer would have at least hundreds of millions of qubits and an error rate lower than 0.01%. That may take several decades to achieve, even if we assume a "Moore's Law" of some kind for quantum computers (which so far seems to hold, given the progress of both Google and IBM in the past few years, as well as D-Wave).
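
A very rough back-of-envelope (our gate count is illustrative, not Google's) shows why the two-qubit error rate is the figure that matters: a depth-40 circuit on 49 qubits applies on the order of a thousand two-qubit gates, and if each succeeds with probability $1 - 0.005$, the whole circuit runs without error with probability of only about

$$(1 - 0.005)^{1000} \approx e^{-5} \approx 0.7\%,$$

small, but still distinguishable from noise given enough repetitions, which is why errors need to stay at or below roughly half a percent.
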

Bitcoin

Russian Nuclear Scientists Arrested For 'Bitcoin Mining Plot' (bbc.com) 84

Russian security officers have arrested several scientists working at a top-secret Russian nuclear warhead facility for allegedly mining crypto-currencies, BBC reported Friday, citing local media. From the report: The suspects had tried to use one of Russia's most powerful supercomputers to mine Bitcoins, media reports say. The Federal Nuclear Centre in Sarov, western Russia, is a restricted area. The centre's press service said: "There has been an unsanctioned attempt to use computer facilities for private purposes including so-called mining." The supercomputer was not supposed to be connected to the internet -- to prevent intrusion -- and once the scientists attempted to do so, the nuclear centre's security department was alerted. They were handed over to the Federal Security Service (FSB), the Russian news service Mash says. "As far as we are aware, a criminal case has been launched against them," the press service told Interfax news agency.
Networking

There's A Cluster of 750 Raspberry Pis at Los Alamos National Lab (insidehpc.com) 128

Slashdot reader overheardinpdx shares a video from the SC17 supercomputing conference where Bruce Tulloch from BitScope "describes a low-cost Raspberry Pi cluster that Los Alamos National Lab is using to simulate large-scale supercomputers." Slashdot reader mspohr describes them as "five rack-mount Bitscope Cluster Modules, each with 150 Raspberry Pi boards with integrated network switches." With each of the 750 chips packing four cores, it offers a 3,000-core highly parallelizable platform that emulates an ARM-based supercomputer, allowing researchers to test development code without requiring a power-hungry machine at significant cost to the taxpayer. The full 750-node cluster, at 2-3 W per processor, draws 1,000 W idle, 3,000 W under typical load, and 4,000 W at peak (including the switches), and is substantially cheaper, if also computationally a lot slower. After development using the Pi clusters, frameworks can then be ported to the larger-scale supercomputers available at Los Alamos National Lab, such as Trinity and Crossroads.
BitScope's Tulloch points out that the cluster is fully integrated with the network switching infrastructure at Los Alamos National Lab, and hails the Raspberry Pi cluster as an "affordable, scalable, highly parallel testbed for high-performance-computing system-software developers."
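
The headline numbers above are easy to check against the article's own figures:

$$750\ \mathrm{boards} \times 4\ \mathrm{cores} = 3{,}000\ \mathrm{cores}, \qquad 750 \times (2\text{--}3\ \mathrm{W}) \approx 1{,}500\text{--}2{,}250\ \mathrm{W}$$

for the Pi boards alone, which is broadly consistent with the 3,000 W typical draw once the network switches and other overhead are included.
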
China

All 500 of the World's Top 500 Supercomputers Are Running Linux (zdnet.com) 288

Freshly Exhumed shares a report from ZDNet: Linux rules supercomputing. This day has been coming since 1998, when Linux first appeared on the TOP500 Supercomputer list. Today, it finally happened: All 500 of the world's fastest supercomputers are running Linux. The last two non-Linux systems, a pair of Chinese IBM POWER computers running AIX, dropped off the November 2017 TOP500 Supercomputer list. When the first TOP500 supercomputer list was compiled in June 1993, Linux was barely more than a toy. It hadn't even adopted Tux as its mascot yet. It didn't take long for Linux to start its march on supercomputing.

From when it first appeared on the TOP500 in 1998, Linux was on its way to the top. Before Linux took the lead, Unix was supercomputing's top operating system. From 2003 onward, the TOP500 moved steadily toward Linux domination, and by 2004 Linux had taken the lead for good. A major reason is that most of the world's top supercomputers are research machines built for specialized tasks: each machine is a standalone project with unique characteristics and optimization requirements, and no one wants to bear the cost of developing a custom operating system for each of them. With Linux, research teams can easily modify and optimize its open-source code for their one-off designs.
The semiannual TOP500 Supercomputer List was released yesterday. It also shows that China now claims 202 systems within the TOP500, while the United States claims 143 systems.
China

China Overtakes US In Latest Top 500 Supercomputer List (enterprisecloudnews.com) 110

An anonymous reader quotes a report from Enterprise Cloud News: The release of the semiannual Top 500 Supercomputer List is a chance to gauge the who's who of countries that are pushing the boundaries of high-performance computing. The most recent list, released Monday, shows that China is now in a class by itself. China now claims 202 systems within the Top 500, while the United States -- once the dominant player -- tumbles to second place with 143 systems represented on the list. Only a few months ago, the U.S. had 169 systems within the Top 500 compared to China's 160. The growth of China and the decline of the United States within the Top 500 has prompted the U.S. Department of Energy to dole out $258 million in grants to several tech companies to develop exascale systems, the next great leap in HPC. These systems can handle a billion billion calculations a second, or 1 exaflop. However, even as these physical machines grow more and more powerful, a good portion of supercomputing power is moving to the cloud, where it can be accessed by more researchers and scientists, making the technology more democratic.
IBM

IBM Raises the Bar with a 50-Qubit Quantum Computer (technologyreview.com) 69

IBM said on Friday it has created a prototype 50-qubit quantum computer as it further increases the pressure on Google in the battle to commercialize quantum computing technology. The company is also making a 20-qubit system available through its cloud computing platform, it said. From a report: The announcement does not mean quantum computing is ready for common use. The system IBM has developed is still extremely finicky and challenging to use, as are those being built by others. In both the 50- and the 20-qubit systems, the quantum state is preserved for 90 microseconds -- a record for the industry, but still an extremely short period of time. Nonetheless, 50 qubits is a significant landmark in progress toward practical quantum computers. Other systems built so far have had limited capabilities and could perform only calculations that could also be done on a conventional supercomputer. A 50-qubit machine can do things that are extremely difficult to simulate without quantum technology. Whereas normal computers store information as either a 1 or a 0, quantum computers exploit two phenomena -- entanglement and superposition -- to process information differently.
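
A back-of-envelope calculation shows why roughly 50 qubits is where brute-force classical simulation becomes impractical: simulating n qubits exactly means storing 2^n complex amplitudes, and at 16 bytes per double-precision complex number a 50-qubit state vector needs

$$2^{50} \times 16\ \mathrm{bytes} \approx 1.8\times10^{16}\ \mathrm{bytes} \approx 18\ \mathrm{petabytes}$$

of memory, far beyond any single conventional machine.
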
China

China Arms Upgraded Tianhe-2A Hybrid Supercomputer (nextplatform.com) 23

New submitter kipperstem77 shares an excerpt from a report via The Next Platform: According to James Lin, vice director for the Center of High Performance Computing (HPC) at Shanghai Jiao Tong University, who divulged the plans last year, the National University of Defense Technology (NUDT) is building one of the three pre-exascale machines [that China is currently investing in], in this case a kicker to the Tianhe-1A CPU-GPU hybrid that was deployed in 2010 and that put China on the HPC map. This exascale system will be installed at the National Supercomputer Center in Tianjin, not the one in Guangzhou, according to Lin. This machine is expected to use ARM processors, and we think it will very likely use Matrix2000 DSP accelerators, too, but this has not been confirmed. The second pre-exascale machine will be an upgrade to the TaihuLight system using a future Shenwei processor, but it will be installed at the National Supercomputing Center in Jinan. And the third pre-exascale machine being funded by China is being architected in conjunction with AMD, with licensed server processor technology; everyone now thinks it will be based on Epyc processors, possibly with Radeon Instinct GPU coprocessors. The Next Platform has a slide embedded in its report "showing the comparison between Tianhe-2, which was the fastest supercomputer in the world for two years, and Tianhe-2A, which will be vying for the top spot when the next list comes out." Every part of this system shows improvements.
AI

IBM Pitched Its Watson Supercomputer as a Revolution in Cancer Care. It's Nowhere Close (statnews.com) 108

IBM began selling Watson to recommend the best cancer treatments to doctors around the world three years ago. But is it really doing its job? Not so much. An investigation by Stat found that the supercomputer isn't living up to the lofty expectations IBM created for it. It is still struggling with the basic step of learning about different forms of cancer. Only a few dozen hospitals have adopted the system, which is a long way from IBM's goal of establishing dominance in a multibillion-dollar market. And at foreign hospitals, physicians complained its advice is biased toward American patients and methods of care. From the report: The interviews suggest that IBM, in its rush to bolster flagging revenue, unleashed a product without fully assessing the challenges of deploying it in hospitals globally. While it has emphatically marketed Watson for cancer care, IBM hasn't published any scientific papers demonstrating how the technology affects physicians and patients. As a result, its flaws are getting exposed on the front lines of care by doctors and researchers who say that the system, while promising in some respects, remains undeveloped. [...] Perhaps the most stunning overreach is in the company's claim that Watson for Oncology, through artificial intelligence, can sift through reams of data to generate new insights and identify, as an IBM sales rep put it, "even new approaches" to cancer care. STAT found that the system doesn't create new knowledge and is artificially intelligent only in the most rudimentary sense of the term.
AI

Leading Chinese Bitcoin Miner Wants To Cash In On AI (qz.com) 23

hackingbear writes: Bitmain, the most influential company in the bitcoin economy by the sheer amount of processing power, or hash rate, that it controls, plans to bring its bitcoin-mining ASIC technology to AI applications. The company has designed a new deep learning processor, Sophon, named after an alien-made, proton-sized supercomputer in China's seminal science-fiction novel, The Three-Body Problem. The idea is to etch some of the most common deep learning algorithms into silicon, greatly boosting efficiency. Users will be able to apply their own datasets and build their own models on these ASICs, allowing the resulting neural networks to generate results and learn from those results at a far quicker pace. The company hopes that thousands of Bitmain Sophon units could soon be training neural networks in vast data centers around the world.
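
As a rough illustration of what "etching deep learning algorithms into silicon" buys, the core operation such accelerators fix in hardware is a low-precision multiply-accumulate. The sketch below is ordinary NumPy, nothing Bitmain-specific, and the simple symmetric scaling scheme is our own choice for illustration; it shows the kind of 8-bit quantized matrix multiply an inference ASIC parallelizes:

```python
# Illustrative only: an 8-bit quantized matrix multiply, the core operation a
# deep-learning ASIC typically bakes into silicon. Not Bitmain's actual design.
import numpy as np

def quantize(x, scale):
    """Map float values to int8 using a simple symmetric scale."""
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

def int8_matmul(a_f, b_f):
    """Quantize both operands, multiply-accumulate in integers, rescale to float."""
    a_scale = np.abs(a_f).max() / 127.0
    b_scale = np.abs(b_f).max() / 127.0
    a_q = quantize(a_f, a_scale).astype(np.int32)
    b_q = quantize(b_f, b_scale).astype(np.int32)
    acc = a_q @ b_q                       # integer multiply-accumulate
    return acc.astype(np.float64) * a_scale * b_scale

rng = np.random.default_rng(0)
a, b = rng.standard_normal((4, 8)), rng.standard_normal((8, 3))
print(np.max(np.abs(int8_matmul(a, b) - a @ b)))   # small quantization error
```

Doing the multiply-accumulate in 8-bit integers rather than 32-bit floats is what lets a fixed-function chip pack far more of these operations into the same silicon area and power budget.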
