China

All 500 of the World's Top 500 Supercomputers Are Running Linux (zdnet.com) 287

Freshly Exhumed shares a report from ZDNet: Linux rules supercomputing. This day has been coming since 1998, when Linux first appeared on the TOP500 Supercomputer list. Today, it finally happened: All 500 of the world's fastest supercomputers are running Linux. The last two non-Linux systems, a pair of Chinese IBM POWER computers running AIX, dropped off the November 2017 TOP500 Supercomputer list. When the first TOP500 supercomputer list was compiled in June 1993, Linux was barely more than a toy. It hadn't even adopted Tux as its mascot yet. It didn't take long for Linux to start its march on supercomputing.

From the time it first appeared on the TOP500 in 1998, Linux was on its way to the top. Before Linux took the lead, Unix was supercomputing's top operating system. From 2003 onward, the TOP500 was on its way to Linux domination, and by 2004 Linux had taken the lead for good. This happened for two reasons: First, since most of the world's top supercomputers are research machines built for specialized tasks, each machine is a standalone project with unique characteristics and optimization requirements. To save costs, no one wants to develop a custom operating system for each of these systems. With Linux, however, research teams can easily modify and optimize Linux's open-source code for their one-off designs.
The semiannual TOP500 Supercomputer List was released yesterday. It also shows that China now claims 202 systems within the TOP500, while the United States claims 143 systems.
China

China Overtakes US In Latest Top 500 Supercomputer List (enterprisecloudnews.com) 109

An anonymous reader quotes a report from Enterprise Cloud News: The release of the semiannual Top 500 Supercomputer List is a chance to gauge the who's who of countries that are pushing the boundaries of high-performance computing. The most recent list, released Monday, shows that China is now in a class by itself. China now claims 202 systems within the Top 500, while the United States -- once the dominant player -- tumbles to second place with 143 systems represented on the list. Only a few months ago, the U.S. had 169 systems within the Top 500 compared to China's 160. The growth of China and the decline of the United States within the Top 500 have prompted the U.S. Department of Energy to dole out $258 million in grants to several tech companies to develop exascale systems, the next great leap in HPC. These systems can handle a billion billion calculations a second, or 1 exaflop. However, even as these physical machines grow more and more powerful, a good portion of supercomputing power is moving to the cloud, where it can be accessed by more researchers and scientists, making the technology more democratic.
IBM

IBM Raises the Bar with a 50-Qubit Quantum Computer (technologyreview.com) 69

IBM said on Friday it has created a prototype 50-qubit quantum computer as it further increases the pressure on Google in the battle to commercialize quantum computing technology. The company is also making a 20-qubit system available through its cloud computing platform, it said. From a report: The announcement does not mean quantum computing is ready for common use. The system IBM has developed is still extremely finicky and challenging to use, as are those being built by others. In both the 50- and the 20-qubit systems, the quantum state is preserved for 90 microseconds -- a record for the industry, but still an extremely short period of time. Nonetheless, 50 qubits is a significant landmark in progress toward practical quantum computers. Other systems built so far have had limited capabilities and could perform only calculations that could also be done on a conventional supercomputer. A 50-qubit machine can do things that are extremely difficult to simulate without quantum technology. Whereas normal computers store information as either a 1 or a 0, quantum computers exploit two phenomena -- entanglement and superposition -- to process information differently.
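For readers who want the bookkeeping behind those claims, here is a minimal sketch (not IBM's hardware or software, just generic state-vector arithmetic in NumPy) of what superposition and entanglement look like, and why 50 qubits is hard to simulate classically:

    import numpy as np

    # Toy illustration (not IBM's system): a qubit's state is a complex vector,
    # superposition means both amplitudes can be nonzero at once, and
    # entanglement produces joint states that can't be split into per-qubit parts.
    zero = np.array([1, 0], dtype=complex)
    one = np.array([0, 1], dtype=complex)

    plus = (zero + one) / np.sqrt(2)                               # superposition of 0 and 1
    bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)  # entangled two-qubit pair
    print("2-qubit Bell state amplitudes:", bell)

    # Why 50 qubits is a landmark: simulating n qubits classically takes 2**n
    # complex amplitudes. At 16 bytes each, 50 qubits already needs ~18 petabytes.
    n = 50
    print(f"{n} qubits -> {2**n:.3e} amplitudes, ~{2**n * 16 / 1e15:.0f} PB of memory")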
China

China Arms Upgraded Tianhe-2A Hybrid Supercomputer (nextplatform.com) 23

New submitter kipperstem77 shares an excerpt from a report via The Next Platform: The National University of Defense Technology (NUDT), according to James Lin, vice director for the Center of High Performance Computing (HPC) at Shanghai Jiao Tong University, who divulged the plans last year, is building one of the three pre-exascale machines [that China is currently investing in], in this case a kicker to the Tianhe-1A CPU-GPU hybrid that was deployed in 2010 and that put China on the HPC map. This exascale system will be installed at the National Supercomputer Center in Tianjin, not the one in Guangzhou, according to Lin. This machine is expected to use ARM processors, and we think it will very likely use Matrix2000 DSP accelerators, too, but this has not been confirmed. The second pre-exascale machine will be an upgrade to the TaihuLight system using a future Shenwei processor, but it will be installed at the National Supercomputing Center in Jinan. And the third pre-exascale machine being funded by China is being architected in conjunction with AMD using licensed server processor technology; everyone now thinks it is going to be based on Epyc processors, possibly with Radeon Instinct GPU coprocessors. The Next Platform has a slide embedded in its report "showing the comparison between Tianhe-2, which was the fastest supercomputer in the world for two years, and Tianhe-2A, which will be vying for the top spot when the next list comes out." Every part of this system shows improvements.
AI

IBM Pitched Its Watson Supercomputer as a Revolution in Cancer Care. It's Nowhere Close (statnews.com) 108

Three years ago, IBM began selling Watson to recommend the best cancer treatments to doctors around the world. But is it really doing its job? Not so much. An investigation by STAT found that the supercomputer isn't living up to the lofty expectations IBM created for it. It is still struggling with the basic step of learning about different forms of cancer. Only a few dozen hospitals have adopted the system, which is a long way from IBM's goal of establishing dominance in a multibillion-dollar market. And at foreign hospitals, physicians complained its advice is biased toward American patients and methods of care. From the report: The interviews suggest that IBM, in its rush to bolster flagging revenue, unleashed a product without fully assessing the challenges of deploying it in hospitals globally. While it has emphatically marketed Watson for cancer care, IBM hasn't published any scientific papers demonstrating how the technology affects physicians and patients. As a result, its flaws are getting exposed on the front lines of care by doctors and researchers who say that the system, while promising in some respects, remains undeveloped. [...] Perhaps the most stunning overreach is in the company's claim that Watson for Oncology, through artificial intelligence, can sift through reams of data to generate new insights and identify, as an IBM sales rep put it, "even new approaches" to cancer care. STAT found that the system doesn't create new knowledge and is artificially intelligent only in the most rudimentary sense of the term.
AI

Leading Chinese Bitcoin Miner Wants To Cash In On AI (qz.com) 23

hackingbear writes: Bitmain, the most influential company in the bitcoin economy by the sheer amount of processing power, or hash rate, that it controls, plans to bring its bitcoin mining ASIC technology to AI applications. The company designed a new deep learning processor, Sophon, named after an alien-made, proton-sized supercomputer in China's seminal science-fiction novel, The Three-Body Problem. The idea is to etch into silicon some of the most common deep learning algorithms, thus greatly boosting efficiency. Users will be able to apply their own datasets and build their own models on these ASICs, allowing the resulting neural networks to generate results and learn from those results at a far quicker pace. The company hopes that thousands of Bitmain Sophon units soon could be training neural networks in vast data centers around the world.
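Bitmain has not published Sophon's internals, but the kind of operation such deep-learning ASICs typically freeze into silicon is a low-precision matrix multiply. The sketch below, with an assumed 8-bit quantization scheme, shows the idea in plain NumPy:

    import numpy as np

    # Illustrative only: the core operation most deep-learning accelerators fix in
    # hardware is a low-precision matrix multiply. Weights and activations are
    # quantized to 8-bit integers, multiplied, accumulated in wider integers,
    # then rescaled back to floating point.

    def quantize(x, scale):
        """Map float values to int8 using a per-tensor scale (an assumed scheme)."""
        return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

    def int8_matmul(a_q, b_q, scale_a, scale_b):
        """Multiply int8 matrices, accumulating in int32, then return floats."""
        acc = a_q.astype(np.int32) @ b_q.astype(np.int32)
        return acc.astype(np.float32) * (scale_a * scale_b)

    rng = np.random.default_rng(0)
    activations = rng.standard_normal((4, 64)).astype(np.float32)
    weights = rng.standard_normal((64, 16)).astype(np.float32)

    sa = np.abs(activations).max() / 127
    sb = np.abs(weights).max() / 127
    approx = int8_matmul(quantize(activations, sa), quantize(weights, sb), sa, sb)
    exact = activations @ weights
    print("max abs error from 8-bit quantization:", np.abs(approx - exact).max())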
NASA

SpaceX Successfully Launches, Recovers Falcon 9 For CRS-12 (techcrunch.com) 71

Another SpaceX rocket has been successfully launched from NASA's Kennedy Space Center today, carrying a Dragon capsule loaded with over 6,400 pounds of cargo destined for the International Space Station. This marks an even dozen for ISS resupply missions launched by SpaceX under contract to NASA. TechCrunch reports: The rocket successfully launched from NASA's Kennedy Space Center at 12:31 PM EDT, and Dragon deployed from the second stage as planned. Dragon will rendezvous with the ISS on August 16 for capture by the station's Canadarm 2 robotic appendage, after which it'll be attached to the station. After roughly a month, it'll return to Earth after leaving the ISS with around 3,000 pounds of returned cargo on board, and splash down in the Pacific Ocean for recovery. There's another reason this launch was significant, aside from its experimental payload (which included a supercomputer designed to help humans travel to Mars): SpaceX will only use re-used Dragon capsules for all future CRS missions, the company has announced, meaning this is the last time a brand new Dragon will be used to resupply the ISS, if all goes to plan. Today's launch also included an attempt to recover the Falcon 9 first stage for re-use at SpaceX's land-based LZ-1 landing pad. The Falcon 9 first stage returned to Earth as planned, and touched down at Cape Canaveral roughly 9 minutes after launch.
ISS

SpaceX Will Deliver The First Supercomputer To The ISS (hpe.com) 98

Slashdot reader #16,185, Esther Schindler writes: "By NASA's rules, not just any computer can go into space. Their components must be radiation hardened, especially the CPUs," reports HPE Insights. "Otherwise, they tend to fail due to the effects of ionizing radiation. The customized processors undergo years of design work and then more years of testing before they are certified for spaceflight." As a result, the ISS runs the station using two sets of three Command and Control Multiplexer DeMultiplexer computers whose processors are 20MHz Intel 80386SX CPUs, right out of 1988. "The traditional way to radiation-harden a spacecraft computer is to add redundancy to its circuits or by using insulating substrates instead of the usual semiconductor wafers on chips. That's expensive and time consuming. HPE scientists believe that simply slowing down a system in adverse conditions can avoid glitches and keep the computer running."

So, assuming the August 15 SpaceX Falcon 9 rocket launch goes well, there will be a supercomputer headed into space -- using off-the-shelf hardware. Let's see if the idea pans out. "We may discover a set of parameters with which a supercomputer can successfully run for at least a year without errors," says Dr. Mark R. Fernandez, the mission's co-principal investigator for software and SGI's HPC technology officer. "Alternately, one or more components of the system will fail, in which case we will then do the typical failure analysis on Earth. That will let us learn what to change to make the systems more reliable in the future."
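HPE has not detailed how its approach works in practice; the following is only a rough sketch of the general "slow down instead of hardening" idea, with placeholder functions standing in for whatever error counters and frequency controls the real system exposes:

    import time

    # Hypothetical sketch of "software hardening": instead of rad-hard circuits,
    # watch error indicators and slow the machine down when conditions degrade.
    # read_error_count() and set_cpu_frequency() are placeholders, not a real API.

    NORMAL_HZ = 2_000_000_000     # assumed nominal clock
    SAFE_HZ = 800_000_000         # assumed reduced clock for high-radiation periods
    ERROR_THRESHOLD = 5           # correctable errors per interval before throttling

    def read_error_count():
        """Placeholder: e.g. correctable ECC errors reported since the last poll."""
        return 0

    def set_cpu_frequency(hz):
        """Placeholder: ask the platform to change its operating frequency."""
        print(f"clock set to {hz / 1e9:.1f} GHz")

    def supervise(poll_seconds=10):
        while True:
            errors = read_error_count()
            # Throttle when the error rate rises; restore speed when it subsides.
            set_cpu_frequency(SAFE_HZ if errors > ERROR_THRESHOLD else NORMAL_HZ)
            time.sleep(poll_seconds)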

The article points out that the New Horizons spacecraft that just flew past Pluto has a 12MHz Mongoose-V CPU, based on the MIPS R3000 CPU. "You may remember its much faster ancestor: the chip that took you on adventures in the original Sony PlayStation, circa 1994."
Space

Can Primordial Black Holes Alone Account For Dark Matter? 135

thomst writes: Slashdot stories have reported extensively on the LIGO experiments' initial detection of gravitational waves emanating from collisions of primordial black holes, beginning, on February 11, 2016, with the first (and most widely-reported) such detection. Other Slashdot articles have chronicled the second LIGO detection event and the third one. There's even been a Slashdot report on the Synthetic Universe supercomputer model that provided support for the conclusion that the first detection event was, indeed, of a collision between two primordial black holes, rather than the more familiar stellar remnant kind that result from more recent supernovae of large-mass stars.

What interests me is the possibility that black holes of all kinds -- and particularly primordial black holes -- are so commonplace that they may be all that's required to explain the effects of "dark matter." Dark matter, which, according to current models, makes up some 26% of the mass of our Universe, has been firmly established as real, both by calculation of the gravity necessary to hold spiral galaxies like our own together, and by direct observation of gravitational lensing effects produced by the "empty" space between recently-collided galaxies. There's no question that it exists. What is unknown, at this point, is what exactly it consists of.

The leading candidate has, for decades, been something called WIMPs (Weakly-Interacting Massive Particles), a theoretical notion that there are atomic-scale particles that interact with "normal" baryonic matter only via gravity and the weak nuclear force. The problem with WIMPs is that, thus far, not a single one has been detected, despite years of searching for evidence that they exist via multiple, multi-billion-dollar detectors.

With the recent publication of a study of black hole populations in our galaxy (article paywalled, more layman-friendly press release at Phys.org) that indicates there may be as many as 100 million stellar-remnant-type black holes in the Milky Way alone, the question arises, "Is the number of primordial and stellar-remnant black holes in our Universe sufficient to account for the calculated mass of dark matter, without having to invoke WIMPs at all?"

I don't personally have the mathematical knowledge to even begin to answer that question, but I'm curious to find out what the professional cosmologists here think of the idea.
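A very rough back-of-the-envelope comparison suggests why the primordial population is the crux of the question. The black-hole count below comes from the study mentioned above; the average black-hole mass and the Milky Way's dark-matter halo mass are round-number assumptions, not figures from the story:

    # Rough back-of-the-envelope check, not a cosmological calculation.
    # The 1e8 count is the study's estimate; the ~10 solar-mass average and the
    # ~1e12 solar-mass dark-matter halo are assumed round numbers.

    N_BLACK_HOLES = 1e8     # stellar-remnant black holes in the Milky Way (study estimate)
    AVG_BH_MASS = 10.0      # solar masses per black hole (assumed)
    DM_HALO_MASS = 1e12     # Milky Way dark-matter halo, solar masses (assumed)

    bh_mass_total = N_BLACK_HOLES * AVG_BH_MASS
    fraction = bh_mass_total / DM_HALO_MASS
    print(f"Stellar-remnant black holes supply roughly {fraction:.1%} of the halo mass")
    # -> roughly 0.1%, which is why the answer hinges on how many *primordial*
    #    black holes (not counted in that study) might also be out there.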
United States

Swiss Supercomputer Edges US Out of Top Spot (bbc.com) 64

There have only been two times in the last 24 years when the U.S. has been edged out of the top three of the world's most powerful supercomputers. Now is one of those times. "An upgrade to a Swiss supercomputer has bumped the U.S. Department of Energy's Cray XK7 to number four on the list rating these machines," reports the BBC. "The only other time the U.S. fell out of the top three was in 1996." The top two slots are occupied by Chinese supercomputers. From the report: The U.S. machine has been supplanted by Switzerland's Piz Daint system, which is installed at the country's national supercomputer center. The upgrade boosted its performance from 9.8 petaflops to 19.6. The machine is named after a peak in the Grison region of Switzerland. One petaflop is equal to one thousand trillion operations per second. A "flop" (floating point operation) can be thought of as a step in a calculation. The performance improvement meant it surpassed the 17.6 petaflop capacity of the DoE machine, located at the Oak Ridge National Laboratory in Tennessee. The U.S. is well represented lower down in the list, as currently half of all the machines in the top 10 of the list are based in North America. And the Oak Ridge National Laboratory looks set to return to the top three later this year, when its Summit supercomputer comes online. This is expected to have a peak performance of more than 100 petaflops.
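For readers keeping score, the flop arithmetic from the story works out as follows (a simple unit conversion, nothing more):

    # The petaflop arithmetic from the story, spelled out.
    PETAFLOP = 1e15                 # floating-point operations per second
    piz_daint = 19.6 * PETAFLOP     # Swiss machine after its upgrade
    cray_xk7 = 17.6 * PETAFLOP      # DoE machine at Oak Ridge

    print(f"Piz Daint: {piz_daint:.3e} ops/sec")
    print(f"Cray XK7:  {cray_xk7:.3e} ops/sec")
    print(f"Piz Daint is {piz_daint / cray_xk7:.2f}x faster on this metric")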
AMD

Six Companies Awarded $258 Million From US Government To Build Exascale Supercomputers (digitaltrends.com) 40

The U.S. Department of Energy will be investing $258 million to help six leading technology firms -- AMD, Cray Inc., Hewlett Packard Enterprise, IBM, Intel, and Nvidia -- research and build exascale supercomputers. Digital Trends reports: The funding will be allocated to them over the course of a three-year period, with each company providing 40 percent of the overall project cost, contributing to an overall investment of $430 million in the project. "Continued U.S. leadership in high performance computing is essential to our security, prosperity, and economic competitiveness as a nation," U.S. Secretary of Energy Rick Perry said. "These awards will enable leading U.S. technology firms to marshal their formidable skills, expertise, and resources in the global race for the next stage in supercomputing -- exascale-capable systems." The funding will finance research and development in three key areas: hardware technology, software technology, and application development. There are hopes that one of the companies involved in the initiative will be able to deliver an exascale-capable supercomputer by 2021.
The Internet

NYU Accidentally Exposed Military Code-breaking Computer Project To Entire Internet (theintercept.com) 75

An anonymous reader writes: A confidential computer project designed to break military codes was accidentally made public by New York University engineers. An anonymous digital security researcher identified files related to the project while hunting for things on the internet that shouldn't be there, The Intercept reported. He used a program called Shodan, a search engine for internet-connected devices, to locate the project. It is the product of a joint initiative by NYU's Institute for Mathematics and Advanced Supercomputing, headed by the world-renowned Chudnovsky brothers, David and Gregory, the Department of Defense, and IBM. Information on an exposed backup drive described the supercomputer, called WindsorGreen, as a system capable of cracking passwords.
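The Intercept does not say what query turned up the drive; the snippet below is only a generic example of how the official Shodan Python library is typically used by researchers, with a placeholder API key and search term:

    import shodan

    # Generic sketch of querying Shodan for internet-exposed services. The actual
    # query that surfaced the NYU backup drive was not published; 'apache' below
    # is just the stock example term.
    API_KEY = "YOUR_SHODAN_API_KEY"   # placeholder
    api = shodan.Shodan(API_KEY)

    try:
        results = api.search("apache")                  # hypothetical query
        print(f"{results['total']} hosts matched")
        for match in results["matches"][:5]:
            print(match["ip_str"], match["port"], match["data"][:60])
    except shodan.APIError as err:
        print("Shodan query failed:", err)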
NASA

NASA Runs Competition To Help Make Old Fortran Code Faster (bbc.com) 205

NASA is seeking help from coders to speed up the software it uses to design experimental aircraft. From a report on BBC: It is running a competition that will share $55,000 between the top two people who can make its FUN3D software run up to 10,000 times faster. The FUN3D code is used to model how air flows around simulated aircraft in a supercomputer. The software was developed in the 1980s and is written in an older computer programming language called Fortran. "This is the ultimate 'geek' dream assignment," said Doug Rohn, head of NASA's transformative aeronautics concepts program that makes heavy use of the FUN3D code. In a statement, Mr Rohn said the software is used on the agency's Pleiades supercomputer to test early designs of futuristic aircraft. The software suite tests them using computational fluid dynamics, which relies heavily on complicated mathematical formulae and data structures to see how well the designs work.
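FUN3D itself is export-restricted Fortran, so as a stand-in, here is a toy Python example of the stencil-style arithmetic CFD codes spend most of their time in, and of the kind of restructuring (replacing an interpreted inner loop with whole-array operations) that speedup work tends to involve:

    import numpy as np

    # Toy illustration only, not FUN3D: CFD solvers spend their time in stencil
    # updates like this Jacobi sweep for a 2-D Laplace problem. Speeding such
    # kernels up (vectorization, memory layout, parallelism) is the flavor of
    # work NASA's contest asks for.

    def jacobi_step_loops(u):
        """Straightforward nested-loop version (slow in pure Python)."""
        new = u.copy()
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                new[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
        return new

    def jacobi_step_vectorized(u):
        """Same update expressed as whole-array operations."""
        new = u.copy()
        new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])
        return new

    grid = np.zeros((256, 256))
    grid[0, :] = 1.0   # a fixed boundary condition
    assert np.allclose(jacobi_step_loops(grid), jacobi_step_vectorized(grid))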
Canada

'Breakthrough' LI-RAM Material Can Store Data With Light (ctvnews.ca) 104

A University of Victoria researcher has patented a new material that uses light instead of electricity to store data. An anonymous reader writes: LI-RAM -- that's light-induced magnetoresistive random-access memory -- promises supercomputer speeds for your cellphones and laptops, according to Natia Frank, the materials scientist at the University of Victoria who developed the new material as part of an international effort to reduce the heat and power consumption of modern processors. She envisions a world of LI-RAM mobile devices which are faster, thinner, and able to hold much more data -- all while consuming less power and producing less heat.

And best of all, they'd last twice as long on a single charge (while producing almost no heat), according to a report on CTV News, which describes this as "a breakthrough material" that will not only make smartphones faster and more durable, but also more energy-efficient. The University of Victoria calculates that 10% of the world's electricity is consumed by "information communications technology," so LI-RAM could conceivably cut that figure in half.

They also report that the researcher is "working with international electronics manufacturers to optimize and commercialize the technology, and says it could be available on the market in the next 10 years."
AI

Japan Unveils Next-Generation, Pascal-Based AI Supercomputer (nextplatform.com) 121

The Tokyo Institute of Technology has announced plans to launch Japan's "fastest AI supercomputer" this summer. The supercomputer is called Tsubame 3.0 and will use Nvidia's latest Pascal-based Tesla P100 GPU accelerators to double its performance over its predecessor, the Tsubame 2.5. Slashdot reader kipperstem77 shares an excerpt from a report via The Next Platform: With all of those CPUs and GPUs, Tsubame 3.0 will have 12.15 petaflops of peak double precision performance, and is rated at 24.3 petaflops single precision and, importantly, is rated at 47.2 petaflops at the half precision that is important for neural networks employed in deep learning applications. When added to the existing Tsubame 2.5 machine and the experimental immersion-cooled Tsubame-KFC system, TiTech will have a total of 6,720 GPUs to bring to bear on workloads, adding up to a total of 64.3 aggregate petaflops at half precision. (This is interesting to us because that means Nvidia has worked with TiTech to get half precision working on Kepler GPUs, which did not formally support half precision.)
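A quick, generic illustration (not TiTech's or Nvidia's code) of why half precision keeps coming up in those numbers: float16 values take half the memory and bandwidth of float32, at a precision cost that neural-network workloads usually tolerate:

    import numpy as np

    # float16 uses 2 bytes per value versus 4 (float32) or 8 (float64), so
    # hardware with native half-precision support can move and multiply roughly
    # twice as many values per second as float32, with coarser rounding.
    x64 = np.linspace(0, 1, 1_000_000, dtype=np.float64)
    x32 = x64.astype(np.float32)
    x16 = x64.astype(np.float16)

    for arr in (x64, x32, x16):
        err = np.abs(arr.astype(np.float64) - x64).max()
        print(f"{arr.dtype}: {arr.nbytes / 1e6:.0f} MB, max rounding error {err:.1e}")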
AI

World's Largest Hedge Fund To Replace Managers With Artificial Intelligence (theguardian.com) 209

An anonymous reader quotes a report from The Guardian: The world's largest hedge fund is building a piece of software to automate the day-to-day management of the firm, including hiring, firing and other strategic decision-making. Bridgewater Associates has a team of software engineers working on the project at the request of billionaire founder Ray Dalio, who wants to ensure the company can run according to his vision even when he's not there, the Wall Street Journal reported. The firm, which manages $160 billion, created the team of programmers specializing in analytics and artificial intelligence, dubbed the Systematized Intelligence Lab, in early 2015. The unit is headed up by David Ferrucci, who previously led IBM's development of Watson, the supercomputer that beat humans at Jeopardy! in 2011. The company is already highly data-driven, with meetings recorded and staff asked to grade each other throughout the day using a ratings system called "dots." The Systematized Intelligence Lab has built a tool that incorporates these ratings into "Baseball Cards" that show employees' strengths and weaknesses. Another app, dubbed The Contract, gets staff to set goals they want to achieve and then tracks how effectively they follow through. These tools are early applications of PriOS, the over-arching management software that Dalio wants to make three-quarters of all management decisions within five years. The kinds of decisions PriOS could make include finding the right staff for particular job openings and ranking opposing perspectives from multiple team members when there's a disagreement about how to proceed. The machine will make the decisions, according to a set of principles laid out by Dalio about the company vision.
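The report doesn't describe PriOS's internals, so the following is purely a toy sketch of one way "ranking opposing perspectives" with per-person ratings could work; every name and weight below is invented:

    # Toy sketch only: one simple scheme consistent with the "dots" ratings idea
    # is a credibility-weighted vote. All options, names, and weights are made up.
    votes = {
        "proceed with the trade": [("analyst_a", 0.9), ("analyst_b", 0.5)],
        "wait for more data":     [("analyst_c", 0.7), ("analyst_d", 0.6)],
    }

    def weighted_score(supporters):
        """Sum each supporter's credibility weight for the option they back."""
        return sum(weight for _, weight in supporters)

    ranked = sorted(votes.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
    for option, supporters in ranked:
        print(f"{weighted_score(supporters):.1f}  {option}")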
IBM

IBM On Track To Get More Than 7,000 US Patents In 2016 (venturebeat.com) 34

IBM wants to put the patent war in perspective. Big Blue said that it is poised to get the most U.S. patents of any tech company for the 24th year in a row. From a report on VentureBeat: In 2015, IBM received 7,355 patents, down slightly from 7,534 in 2014. A spokesperson for IBM said the company is on track to receive well over 7,000 patents in 2016. In 2016, IBM is also hitting another interesting milestone, with more than 1,000 patents for artificial intelligence and cognitive computing. IBM has been at it for more than a century, and it is seeking patents in key strategic areas -- such as AI and cognitive computing. In fact, one-third of IBM's researchers are dedicated to cognitive computing. IBM CEO Ginni Rometty said during the World of Watson conference in October that the company expects to reach more than 1 billion consumers via Watson by the end of 2017. (Watson is the supercomputer that beat the world's best Jeopardy player in 2011.)
IBM

Erich Bloch, Who Helped Develop IBM Mainframe, Dies At 91 (google.com) 40

shadowknot writes: The New York Times is reporting (Warning: may be paywalled; alternate source) that Erich Bloch who helped to develop the IBM Mainframe has died at the age of 91 as a result of complications from Alzheimer's disease. From the article: "In the 1950s, he developed the first ferrite-core memory storage units to be used in computers commercially and worked on the IBM 7030, known as Stretch, the first transistorized supercomputer. 'Asked what job each of us had, my answer was very simple and very direct,' Mr. Bloch said in 2002. 'Getting that sucker working.' Mr. Bloch's role was to oversee the development of Solid Logic Technology -- half-inch ceramic modules for the microelectronic circuitry that provided the System/360 with superior power, speed and memory, all of which would become fundamental to computing."
Japan

Japan Eyes World's Fastest-Known Supercomputer, To Spend Over $150M On It (reuters.com) 35

Japan plans to build the world's fastest-known supercomputer in a bid to arm the country's manufacturers with a platform for research that could help them develop and improve driverless cars, robotics and medical diagnostics. From a Reuters report: The Ministry of Economy, Trade and Industry will spend 19.5 billion yen ($173 million) on the previously unreported project, a budget breakdown shows, as part of a government policy to get back Japan's mojo in the world of technology. The country has lost its edge in many electronic fields amid intensifying competition from South Korea and China, home to the world's current best-performing machine. In a move that is expected to vault Japan to the top of the supercomputing heap, its engineers will be tasked with building a machine that can make 130 quadrillion calculations per second -- or 130 petaflops in scientific parlance -- as early as next year, sources involved in the project told Reuters. At that speed, Japan's computer would be ahead of China's Sunway Taihulight that is capable of 93 petaflops. "As far as we know, there is nothing out there that is as fast," said Satoshi Sekiguchi, a director general at Japan's National Institute of Advanced Industrial Science and Technology, where the computer will be built.
China

China's New Policing Computer Is Frontend Cattle Prod, Backend Supercomputer (computerworld.com) 69

Earlier this year, we learned about China's first "intelligent security robot," which was said to include an "electrically charged riot control tool." We now know what this robot is up to, and what its deployed unit looks like. Reader dcblogs writes: China recently deployed what it calls a "security robot" in a Shenzhen airport. It's named AnBot and patrols around the clock. It is a cone-shaped robot that includes a cattle prod. The U.S.-China Economic and Security Review Commission, which looked at autonomous system deployments in a report last week, said AnBot, which has facial recognition capability, is designed to be linked with China's latest supercomputers. AnBot may seem like a 'Saturday Night Live' prop, but it's far from it. The back end of this "intelligent security robot" is linked to China's Tianhe-2 supercomputer, where it has access to cloud services. AnBot conducts patrols, recognizes threats and has multiple cameras that use facial recognition. These cloud services give the robots petascale processing power, well beyond onboard processing capabilities in the robot. The supercomputer connection is there "to enhance the intelligent learning capabilities and human-machine interface of these devices," said the U.S.-China Economic and Security Review Commission.
