Japan

ARM-Based Japanese Supercomputer is Now the Fastest in the World (theverge.com) 72

A Japanese supercomputer has taken the top spot in the biannual Top500 supercomputer speed ranking. Fugaku, a computer in Kobe co-developed by Riken and Fujitsu, makes use of Fujitsu's 48-core A64FX system-on-chip. It's the first time a computer based on ARM processors has topped the list. From a report: Fugaku turned in a Top500 HPL result of 415.5 petaflops, 2.8 times as fast as IBM's Summit, the nearest competitor. Fugaku also attained top spots in other rankings that test computers on different workloads, including Graph 500, HPL-AI, and HPCG. No previous supercomputer has ever led all four rankings at once. While the fastest-supercomputer rankings normally bounce between American- and Chinese-made systems, Fugaku is the first Japanese system to lead the Top500 since its predecessor, Riken's K computer, did so nine years ago. Overall, there are 226 Chinese supercomputers on the list, 114 from America, and 30 from Japan. US-based systems contribute the most aggregate performance, with 644 petaflops.
AI

Trillions of Words Analyzed, OpenAI Sets Loose AI Language Colossus (bloomberg.com) 29

Over the past few months, OpenAI has vacuumed an incredible amount of data into its artificial intelligence language systems. It sucked up Wikipedia, a huge swath of the rest of the internet and tons of books. This mass of text -- trillions of words -- was then analyzed and manipulated by a supercomputer to create what the research group bills as a major AI breakthrough and the heart of its first commercial product, which came out on Thursday. From a report: The product name -- OpenAI calls it "the API" -- might not be magical, but the things it can accomplish do seem to border on wizardry at times. The software can perform a broad set of language tasks, including translating between languages, writing news stories and poems, and answering everyday questions. Ask it, for example, if you should keep reading a story, and you might be told, "Definitely. The twists and turns keep coming." OpenAI wants to build the most flexible, general-purpose AI language system of all time. Typically, companies and researchers will tune their AI systems to handle one, limited task. The API, by contrast, can crank away at a broad set of jobs and, in many cases, at levels comparable with specialized systems.

While the product is in a limited test phase right now, it will be released broadly as something that other companies can use at the heart of their own offerings such as customer support chat systems, education products or games, OpenAI Chief Executive Officer Sam Altman said. [...] The API product builds on years of research in which OpenAI has compiled ever larger text databases with which to feed its AI algorithms and neural networks. At its core, OpenAI API looks over all the examples of language it has seen and then uses those examples to predict, say, what word should come next in a sentence or how best to answer a particular question. "It almost gets to the point where it assimilates all of human knowledge because it has seen everything before," said Eli Chen, CEO of startup Veriph.ai, who tried out an earlier version of OpenAI's product. "Very few other companies would be able to afford what it costs to build this type of huge model."
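At its core, that "predict the next word" idea can be illustrated with a toy sketch. The snippet below is purely illustrative and assumes nothing about OpenAI's actual model or API: it builds a tiny bigram table from a handful of sentences and uses it to guess a likely next word, the same principle the API applies at the scale of trillions of words and billions of parameters.

```python
# A minimal sketch of next-word prediction, assuming nothing about OpenAI's
# actual model: a bigram counter stands in for a neural network trained on
# trillions of words. The principle -- predict the next token from context --
# is the same; the scale and architecture are what make the API remarkable.
from collections import Counter, defaultdict

corpus = (
    "the twists and turns keep coming . "
    "the story keeps you reading . "
    "the twists keep you guessing ."
).split()

# Count which word follows which.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the continuation most often seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("twists"))  # e.g. 'and'
print(predict_next("keep"))    # e.g. 'coming'
```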

AI

MIT's Tiny Artificial Brain Chip Could Bring Supercomputer Smarts To Mobile Devices (techcrunch.com) 15

An anonymous reader quotes a report from TechCrunch: Researchers at MIT have published a paper describing a new type of artificial brain synapse that offers performance improvements over existing versions and can be combined by the tens of thousands on a chip physically smaller than a single piece of confetti. The results could help create devices that handle complex AI computing locally, while remaining small and power-efficient, without having to connect to a data center. The research team created what are known as "memristors" -- essentially simulated brain synapses built from silicon, but also using alloys of silver and copper in their construction. The result was a chip that could effectively "remember" and recall images in very high detail, repeatedly, with much crisper and more detailed "remembered" images than other types of simulated brain circuits that have come before. What the team ultimately wants to do is recreate large, complex artificial neural networks -- currently implemented in software and requiring significant GPU computing power to run -- as dedicated hardware, so that they can be localized in small devices, potentially including your phone or a camera.

Unlike traditional transistors, which form the basis of modern computers and can switch between only two states (0 or 1), memristors offer a gradient of values, much more like your brain, the original analog computer. They can also "remember" these states, so they can easily recreate the same signal for the same received current multiple times over. What the researchers did here was borrow a concept from metallurgy: when metallurgists want to change the properties of a metal, they combine it with another that has the desired property, creating an alloy. Similarly, the researchers found an element they could combine with the silver they use as the memristor's positive electrode, to make it better able to consistently and reliably transfer ions along even a very thin conduction channel. That's what enabled the team to create super-small chips containing tens of thousands of memristors that can nonetheless not only reliably recreate images from "memory," but also perform inference tasks -- like sharpening or blurring the original image on command -- better than previous memristors created by other scientists.
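The binary-versus-analog contrast the article draws can be made concrete with a small simulation. The class below is a toy model under assumed parameters (it is not MIT's silver/copper device physics): an element whose conductance drifts along a continuum with applied current and is retained between reads, next to a transistor-like switch that only reports on or off.

```python
# Illustrative only: a toy "memristive" weight whose conductance moves along a
# continuum and persists between operations, versus a binary switch.
# The update rule and constants are assumptions for demonstration, not a
# model of MIT's silver/copper-alloy devices.

class ToyMemristor:
    def __init__(self, conductance=0.5, rate=0.05):
        self.conductance = conductance   # analog state in [0, 1]
        self.rate = rate                 # how strongly current nudges the state

    def apply_current(self, current):
        """Positive current raises conductance, negative current lowers it."""
        self.conductance = min(1.0, max(0.0, self.conductance + self.rate * current))
        return self.conductance

    def read(self, voltage=1.0):
        """Output depends on the remembered analog state (Ohm's-law style)."""
        return voltage * self.conductance


binary_switch = 0          # a transistor-like element: only 0 or 1
m = ToyMemristor()

for pulse in (+1, +1, -0.5, +2):
    m.apply_current(pulse)
    binary_switch = 1 if pulse > 0 else 0
    print(f"memristor state: {m.conductance:.2f}   binary state: {binary_switch}")
# The memristor ends somewhere between 0 and 1 and "remembers" that value;
# the binary switch only ever reports its last on/off setting.
```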

Earth

Supercomputer Simulates the Impact of the Asteroid That Wiped Out Dinosaurs (zdnet.com) 61

An anonymous reader quotes a report from ZDNet: Some 66 million years ago, an asteroid hit the Earth on the eastern coast of modern Mexico, resulting in up to three quarters of the plant and animal species living on the planet going extinct -- including the dinosaurs. Now, a team of researchers equipped with a supercomputer has managed to simulate the entire event, shedding light on the reasons the impact led to a mass extinction of life. The simulations were carried out by scientists at Imperial College London, using high-performance computing (HPC) facilities provided by Hewlett Packard Enterprise. The research focused on establishing as precise an impact angle and trajectory as possible, which in turn can help determine precisely how the asteroid's hit affected the surrounding environment.

Various impact angles and speeds were considered, and 3D simulations for each were fed into the supercomputer. These simulations were then compared with the geophysical features that have been observed in the 110-mile wide Chicxulub crater, located in Mexico's Yucatan Peninsula, where the impact happened. The simulations that turned out to be the most consistent with the structure of the Chicxulub crater showed an impact angle of about 60 degrees. Such a strike had the strength of about ten billion Hiroshima bombs, and this particular angle meant that rocks and sediments were ejected almost symmetrically. This, in turn, caused a greater amount of climate-changing gases to be released, including billions of tonnes of sulphur that blocked the sun. The rest is history: firestorms, hurricanes, tsunamis and earthquakes rocked the planet, and most species disappeared from the surface of the Earth.
The 60-degree angle constituted "the worst-case scenario for the lethality of the impact" because it maximized the ejection of rock and, therefore, the production of gases, the scientists wrote.
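The "ten billion Hiroshima bombs" figure can be sanity-checked with back-of-envelope arithmetic; the only assumption below is the commonly cited ~15-kiloton Hiroshima yield, not a number from the study itself.

```python
# Back-of-envelope check of the "ten billion Hiroshima bombs" comparison.
# Assumes the commonly cited ~15 kiloton yield for Hiroshima; the resulting
# energy (~6e23 J) is in the range usually quoted for the Chicxulub impact.
TNT_JOULES_PER_KILOTON = 4.184e12
hiroshima_joules = 15 * TNT_JOULES_PER_KILOTON        # ~6.3e13 J
impact_joules = 10e9 * hiroshima_joules               # ten billion such bombs
print(f"{impact_joules:.1e} J")                       # ~6.3e23 J
```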

"The researchers carried out almost 300 3D simulations before they were able to reach their conclusions, which was processed by the HPE Apollo 6000 Gen10 supercomputer located at the University of Leicester," adds ZDNet. "The 14,000-cores system, powered by Intel's Skylake chips, is supported by a 6TB server to accommodate large, in-memory calculations."
Security

Supercomputers Breached Across Europe To Mine Cryptocurrency (zdnet.com) 43

An anonymous reader quotes ZDNet: Multiple supercomputers across Europe have been infected this week with cryptocurrency mining malware and have shut down to investigate the intrusions. Security incidents have been reported in the UK, Germany, and Switzerland, while a similar intrusion is rumored to have also happened at a high-performance computing center located in Spain.

Cado Security, a US-based cyber-security firm, said the attackers appear to have gained access to the supercomputer clusters via compromised SSH credentials... Once attackers gained access to a supercomputing node, they appear to have used an exploit for the CVE-2019-15666 vulnerability to gain root access and then deployed an application that mined the Monero cryptocurrency.

AI

NVIDIA Ampere A100 GPU For AI Unveiled, Largest 7nm Chip Ever Produced (hothardware.com) 35

MojoKid writes: NVIDIA CEO Jensen Huang unveiled the company's new Ampere A100 GPU architecture for machine learning and HPC markets today. Jensen claims the 54B-transistor A100 is the biggest, most powerful GPU NVIDIA has ever made, and it's also the largest chip ever produced on a 7nm semiconductor process. There are a total of 6,912 FP32 CUDA cores, 432 Tensor cores, and 108 SMs (Streaming Multiprocessors) in the A100, paired with 40GB of HBM2e memory offering a maximum memory bandwidth of 1.6TB/sec. FP32 compute comes in at a staggering 19.5 TFLOPs, compared to 16.4 TFLOPs for NVIDIA's previous-gen Tesla V100. In addition, its Tensor Cores support a new TF32 precision that allows for a 20x uplift in AI performance gen-over-gen. When it comes to FP64 performance, these Tensor Cores also provide a 2.5x boost versus their predecessor, Volta. Additional features include Multi-Instance GPU, aka MIG, which allows an A100 GPU to be sliced into as many as seven discrete instances, so it can be provisioned for multiple specialized workloads. Multiple A100 GPUs will also make their way into NVIDIA's third-generation DGX AI supercomputer, which packs a whopping 5 PFLOPs of AI performance. According to NVIDIA, its Ampere-based A100 GPU and DGX AI systems are already in full production and shipping to customers now. Gamers are of course looking forward to what the company has in store with Ampere for the enthusiast PC market, as expectations for its rumored GeForce RTX 30 family are incredibly high.
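As a rough plausibility check (the two-FLOPs-per-core-per-clock figure is the standard assumption for fused multiply-add, not a number from the article), the quoted core count and FP32 throughput imply a boost clock of about 1.41 GHz, which lines up with NVIDIA's published spec:

```python
# Rough consistency check of the A100's quoted FP32 throughput.
# Assumes the standard 2 FLOPs per CUDA core per clock (one fused multiply-add);
# the implied ~1.41 GHz matches NVIDIA's published boost clock.
cuda_cores = 6912
fp32_tflops = 19.5
implied_clock_ghz = (fp32_tflops * 1e12) / (cuda_cores * 2) / 1e9
print(f"implied boost clock: {implied_clock_ghz:.2f} GHz")   # ~1.41 GHz
```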
Supercomputing

NVIDIA Is Contributing Its AI Smarts To Help Fight COVID-19 (engadget.com) 12

NVIDIA is lending its background in AI and supercomputer throughput optimization to the COVID-19 High Performance Computing Consortium, a group that plans to support researchers by giving them time on 30 supercomputers offering a combined 400 petaflops of performance. Engadget reports: NVIDIA will add to this by providing expertise in AI, biology and large-scale computing optimizations. The company likened the Consortium's efforts to the Moon race. Ideally, this will speed up work for scientists whose modelling and other demanding tasks would otherwise take a long time. NVIDIA already has a number of contributions to coronavirus research, including the 27,000 GPUs inside the Summit supercomputer and those inside many of the computers in the crowdsourced Folding@Home project. This is still a significant step forward, though, and might prove lifesaving if it leads to a vaccine or more effective containment.
Technology

Scientists Turn To Tech To Prevent Second Wave of Locusts in East Africa (theguardian.com) 37

Scientists monitoring the movements of the worst locust outbreak in Kenya in 70 years are hopeful that, with a new tracking program, they will be able to prevent a second surge of the crop-ravaging insects. From a report: The UN has described the locust outbreak in the Horn of Africa, and the widespread breeding of the insects in Kenya, Ethiopia and Somalia that has followed, as "extremely alarming." The UN's Food and Agriculture Organization has warned that an imminent second hatch of the insects could threaten the food security of 25 million people across the region as it enters the cropping season. Kenneth Mwangi, a satellite information scientist at the Intergovernmental Authority on Development's climate prediction and applications centre in Nairobi, said researchers were running a supercomputer model to predict breeding areas that may have been missed by ground monitoring. These areas could become sources of new swarms if not sprayed.

"The model will be able to tell us the areas in which hoppers are emerging," said Mwangi. "We will also get ground information. These areas can become a source of an upsurge, or a new generation of hoppers. It becomes very difficult and expensive to control, which is why we are looking to prevent an upsurge. The focus will be on stopping hoppers becoming adults, as that leads to another cycle of infestation. We want to avoid that. We want to advise governments early, before an upsurge happens." So far, the supercomputer, funded by $45 million of UK aid as part of its Weather and Climate Information Services for Africa programme, has successfully forecast the movement of locusts using data such as wind speed and direction, temperature, and humidity. The model has achieved 90% accuracy in forecasting the future locations of the swarms, Mwangi said.

AI

Defeated Chess Champ Garry Kasparov Has Made Peace With AI (wired.com) 106

Last week, Garry Kasparov, perhaps the greatest chess player in history, returned to the scene of his famous IBM supercomputer Deep Blue defeat -- the ballroom of a New York hotel -- for a debate with AI experts organized by the Association for the Advancement of Artificial Intelligence. He met with WIRED senior writer Will Knight there to discuss chess, AI, and a strategy for staying a step ahead of machines. From the report: WIRED: What was it like to return to the venue where you lost to Deep Blue?
Garry Kasparov: I've made my peace with it. At the end of the day, the match was not a curse but a blessing, because I was a part of something very important. Twenty-two years ago, I would have thought differently. But things happen. We all make mistakes. We lose. What's important is how we deal with our mistakes, with negative experience. 1997 was an unpleasant experience, but it helped me understand the future of human-machine collaboration. We thought we were unbeatable, at chess, Go, shogi. All these games, they have been gradually pushed to the side [by increasingly powerful AI programs]. But it doesn't mean that life is over. We have to find out how we can turn it to our advantage. I always say I was the first knowledge worker whose job was threatened by a machine. But that helps me to communicate a message back to the public. Because, you know, nobody can suspect me of being pro-computers.

What message do you want to give people about the impact of AI?
I think it's important that people recognize the element of inevitability. When I hear outcry that AI is rushing in and destroying our lives, that it's so fast, I say no, no, it's too slow. Every technology destroys jobs before creating jobs. When you look at the statistics, only 4 percent of jobs in the US require human creativity. That means 96 percent of jobs, I call them zombie jobs. They're dead, they just don't know it. For several decades we have been training people to act like computers, and now we are complaining that these jobs are in danger. Of course they are. We have to look for opportunities to create jobs that will emphasize our strengths. Technology is the main reason why so many of us are still alive to complain about technology. It's a coin with two sides. I think it's important that, instead of complaining, we look at how we can move forward faster. When these jobs start disappearing, we need new industries, we need to build foundations that will help. Maybe it's universal basic income, but we need to create a financial cushion for those who are left behind. Right now it's a very defensive reaction, whether it comes from the general public or from big CEOs who are looking at AI and saying it can improve the bottom line but it's a black box. I think we're still struggling to understand how AI will fit in.
Further reading: Fast-and-Loose Culture of Esports is Upending Once Staid World of Chess; and Kramnik and AlphaZero: How To Rethink Chess.
United Kingdom

UK To Spend $1.6 Billion on World's Best Climate Supercomputer (bloomberg.com) 126

The U.K. said it will spend 1.2 billion pounds ($1.6 billion) on developing the most powerful weather and climate supercomputer in the world. From a report: The program aims to improve weather and climate modeling by the government forecaster, the Met Office, Business Secretary Alok Sharma said in a statement Monday. The machine will replace the U.K.'s existing supercomputer, which is already one of the 50 most powerful in the world. "Come rain or shine, our significant investment for a new supercomputer will further speed up weather predictions, helping people be more prepared for weather disruption from planning travel journeys to deploying flood defenses," said Sharma, who will preside over the annual round of United Nations climate talks in Glasgow, Scotland, in November. With Britain hosting the year-end climate summit, Prime Minister Boris Johnson is seeking to showcase the U.K.'s leadership in both studying the climate and reducing global greenhouse gas emissions. His government plans to use data generated by the new computer to inform policy as it seeks to spearhead the fight against climate change.
Technology

Toshiba Touts Algorithm That's Faster Than a Supercomputer (bloomberg.com) 35

It's a tantalizing prospect for traders whose success often hinges on microseconds: a desktop PC algorithm that crunches market data faster than today's most advanced supercomputers. Japan's Toshiba says it has the technology to make such rapid-fire calculations a reality -- not quite quantum computing, but perhaps the next best thing. From a report: The claim is being met with a mix of intrigue and skepticism at financial firms in Tokyo and around the world. Toshiba's "Simulated Bifurcation Algorithm" is designed to harness the principles behind quantum computers without requiring the use of such machines, which currently have limited applications, can cost millions of dollars to build, and must be kept at temperatures near absolute zero. Toshiba says its technology, which may also have uses outside finance, runs on PCs made from off-the-shelf components.

"You can just plug it into a server and run it at room temperature," Kosuke Tatsumura, a senior research scientist at Toshiba's Computer & Network Systems Laboratory, said in an interview. The Tokyo-based conglomerate, while best known for its consumer electronics and nuclear reactors, has long conducted research into advanced technologies. Toshiba has said it needs a partner to adopt the algorithm for real-world use, and financial firms have taken notice as they grapple for an edge in markets increasingly dominated by machines. Banks, brokerages and asset managers have all been experimenting with quantum computing, although viable applications are generally considered to be some time away.

Robotics

Scientists Use Stem Cells From Frogs To Build First Living Robots (theguardian.com) 37

Cy Guy writes: Having not learned the lessons of Jurassic Park and the Terminator, scientists from the University of Vermont and Tufts have created "reconfigurable organisms" using stem cells from frogs. But don't worry, the research was funded by the Department of Defense, so I'm sure nothing could possibly go wrong this time. "The robots, which are less than 1mm long, are designed by an 'evolutionary algorithm' that runs on a supercomputer," reports The Guardian. "The program starts by generating random 3D configurations of 500 to 1,000 skin and heart cells. Each design is then tested in a virtual environment, to see, for example, how far it moves when the heart cells are set beating. The best performers are used to spawn more designs, which themselves are then put through their paces."

"Because heart cells spontaneously contract and relax, they behave like miniature engines that drive the robots along until their energy reserves run out," the report adds. "The cells have enough fuel inside them for the robots to survive for a week to 10 days before keeling over."

The findings have been published in the Proceedings of the National Academy of Sciences.
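The "evolutionary algorithm" loop the Guardian describes -- generate random designs, score them in a virtual environment, and let the best performers spawn the next round -- can be illustrated with a toy version. Everything below (the stand-in fitness function, mutation scheme and population sizes) is an assumption for illustration, not the Vermont/Tufts pipeline:

```python
# Toy evolutionary algorithm in the spirit of the xenobot design loop:
# random candidate "designs" are scored by a stand-in simulation, and the
# best performers spawn mutated offspring. Purely illustrative.
import random

DESIGN_SIZE = 30          # stand-in for a 3D arrangement of skin/heart cells
POPULATION = 40
GENERATIONS = 50

def random_design():
    # 1 = "heart cell" (provides propulsion), 0 = "skin cell" (structure)
    return [random.randint(0, 1) for _ in range(DESIGN_SIZE)]

def fitness(design):
    # Fake "virtual environment": reward propulsion cells, but penalise
    # designs that are almost all heart cells and lack structure.
    heart = sum(design)
    return heart - 0.08 * heart ** 2

def mutate(design):
    child = design[:]
    i = random.randrange(DESIGN_SIZE)
    child[i] = 1 - child[i]       # flip one cell type
    return child

population = [random_design() for _ in range(POPULATION)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POPULATION // 4]          # best performers
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POPULATION - len(survivors))]

best = max(population, key=fitness)
print("best design:", best, "fitness:", round(fitness(best), 2))
```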
Television

The BBC's 1992 TV Show About VR, 3D TVs With Glasses, and Holographic 3D Screens (youtu.be) 54

dryriver writes: 27 years ago, the BBC's "Tomorrow's World" show broadcast this little gem of a program [currently available on YouTube]. After showing old red-cyan anaglyph movies, Victorian stereoscopes, lenticular-printed holograms and a monochrome laser hologram projected into a sheet of glass, the presenter shows off a stereoscopic 3D CRT computer display with active shutter glasses. The program then takes us to a laboratory at the Massachusetts Institute of Technology, where a supercomputer is feeding 3D wireframe graphics into the world's first glasses-free holographic 3D display prototype using a tellurium dioxide crystal. One of the researchers at the lab predicts that "years from now, advances in LCD technology may make this kind of display cheap enough to use in the home."

A presenter then shows a bulky plastic VR headset larger than an Oculus Rift and explains how VR will let you experience completely computer-generated worlds as if you are there. The presenter notes that 1992 VR headsets may be "too bulky" for the average user, and shows a mockup of much smaller VR glasses about the size of Magic Leap's AR glasses, noting that "these are already in development." What is astonishing about watching this 27-year-old TV broadcast is a) the realization that much of today's stereo 3D tech was already around in some form or another in the early 1990s; b) that VR headsets took an incredibly long time to reach the consumer and are still too bulky; and c) that almost three decades later, MIT's prototype holographic glasses-free 3D display technology never made its way into consumer hands or households.

Hardware

Russia Joins Race To Make Quantum Dreams a Reality (nature.com) 18

Russia has launched an effort to build a working quantum computer, in a bid to catch up to other countries in the race for practical quantum technologies. From a report: The government will inject around 50 billion roubles (US$790 million) over the next 5 years into basic and applied quantum research carried out at leading Russian laboratories, the country's deputy prime minister, Maxim Akimov, announced on 6 December at a technology forum in Sochi. The windfall is part of a 258-billion-rouble programme for research and development in digital technologies, which the Kremlin has deemed vital for modernizing and diversifying the Russian economy. "This is a real boost," says Aleksey Fedorov, a quantum physicist at the Russian Quantum Center (RQC), a private research facility in Skolkovo near Moscow. "If things work out as planned, this initiative will be a major step towards bringing Russian quantum science to a world-class standard."

[...] The race is on to create quantum computers that outperform classical machines in specific tasks. Prototypes developed by Google and IBM, headquartered in Mountain View, California, and Armonk, New York, respectively, are approaching the limit of classical computer simulation. In October, scientists at Google announced that a quantum processor working on a specific calculation had achieved such a quantum advantage. Russia is far from this milestone. "We're 5 to 10 years behind," says Fedorov. "But there's a lot of potential here, and we follow very closely what's happening abroad." Poor funding has excluded Russian quantum scientists from competing with Google, says Ilya Besedin, an engineer at the National University of Science and Technology in Moscow.

PlayStation (Games)

The Rise and Fall of the PlayStation Supercomputers (theverge.com) 50

"On the 25th anniversary of the original Sony PlayStation, The Verge shares the story of the PlayStation supercomputers," writes Slashdot reader jimminy_cricket. From the report: Dozens of PlayStation 3s sit in a refrigerated shipping container on the University of Massachusetts Dartmouth's campus, sucking up energy and investigating astrophysics. It's a popular stop for tours trying to sell the school to prospective first-year students and their parents, and it's one of the few living legacies of a weird science chapter in PlayStation's history. Those squat boxes, hulking on entertainment systems or dust-covered in the back of a closet, were once coveted by researchers who used the consoles to build supercomputers. With the racks of machines, the scientists were suddenly capable of contemplating the physics of black holes, processing drone footage, or winning cryptography contests. It only lasted a few years before tech moved on, becoming smaller and more efficient. But for that short moment, some of the most powerful computers in the world could be hacked together with code, wire, and gaming consoles. "The game consoles entered the supercomputing scene in 2002 when Sony released a kit called Linux for the PlayStation 2," reports The Verge. Craig Steffen, senior research scientist at the National Center for Supercomputing Applications, and his group hooked up between 60 and 70 PlayStation 2s, wrote some code, and built out a library.

"The PS3 entered the scene in late 2006 with powerful hardware and an easier way to load Linux onto the devices," the report adds. "Researchers would still need to link the systems together, but suddenly, it was possible for them to imagine linking together all of those devices into something that was a game-changer instead of just a proof-of-concept prototype."
Intel

Intel Unveils 7nm Ponte Vecchio GPU Architecture For Supercomputers and AI (hothardware.com) 28

MojoKid writes: Intel has unveiled its first discrete GPU solution that will hit the market in 2020, code-named Ponte Vecchio. Based on 7nm silicon manufacturing and a stacked chiplet design using Intel's Foveros tech, Ponte Vecchio will target HPC markets for supercomputers and AI training in the datacenter. According to HotHardware, Ponte Vecchio will employ a combination of both its Foveros 3D packaging and EMIB (Embedded Multi-die Interconnect Bridge) technologies, along with High Bandwidth Memory (HBM) and Compute Express Link (CXL), which will operate over the newly ratified PCIe 5.0 interface and serve as Ponte Vecchio's high-speed switch fabric connecting all GPU resources. Intel is billing Ponte Vecchio as its first exascale GPU, proving its mettle in the U.S. Department of Energy's (DOE) Aurora supercomputer. Aurora will employ a topology of six Ponte Vecchio GPUs and two Intel Xeon Scalable processors based on Intel's next-generation Sapphire Rapids architecture, along with Optane DC Persistent Memory, on a single blade. The new supercomputer is scheduled to arrive sometime in 2021.
United States

The World's Fastest Supercomputers Hit Higher Speeds Than Ever With Linux (zdnet.com) 124

An anonymous reader quotes a report from ZDNet: In the latest Top 500 supercomputer ratings, the average speed of these Linux-powered racers is now an astonishing 1.14 petaflops. The fastest of the fast machines haven't changed since the June 2019 Top 500 supercomputer list. Leading the way is Oak Ridge National Laboratory's Summit system, which holds top honors with an HPL result of 148.6 petaflops. This is an IBM-built supercomputer using Power9 CPUs and NVIDIA Tesla V100 GPUs. In a rather distant second place is another IBM machine: Lawrence Livermore National Laboratory's Sierra system. It uses the same chips, but it "only" hit a speed of 94.6 petaflops.

Close behind at No. 3 is the Sunway TaihuLight supercomputer, with an HPL mark of 93.0 petaflops. TaihuLight was developed by China's National Research Center of Parallel Computer Engineering and Technology (NRCPC) and is installed at the National Supercomputing Center in Wuxi. It is powered exclusively by Sunway's SW26010 processors. TaihuLight is followed by the Tianhe-2A (Milky Way-2A), a system developed by China's National University of Defense Technology (NUDT) and deployed at the National Supercomputer Center in Guangzhou, China. Powered by Intel Xeon CPUs and Matrix-2000 accelerators, it has a top speed of 61.4 petaflops. Coming in at No. 5 is the Dell-built Frontera, a Dell C6420 system powered by Intel Xeon Platinum processors. It speeds along at 23.5 petaflops and lives at the Texas Advanced Computing Center of the University of Texas. The most powerful new supercomputer on the list is the Rensselaer Polytechnic Institute Center for Computational Innovations (CCI)'s AiMOS. It made the list in the 25th position with 8.0 petaflops. The IBM-built system, like Summit and Sierra, is powered by Power9 CPUs and NVIDIA V100 GPUs.
In closing, ZDNet's Steven J. Vaughan-Nichols writes: "Regardless of the hardware, all 500 of the world's fastest supercomputers have one thing in common: They all run Linux."
Advertising

Does Linux Have a Marketing Problem? (hackaday.com) 263

On Hackaday's hosting site Hackaday.io, an electrical engineer with a background in semiconductor physics argues that Linux's small market share is due to a lack of marketing: Not only does [Linux] have dominance when raw computing ability is needed, either in a supercomputer or a webserver, but it must have some ability to effectively work as a personal computer as well, otherwise Android wouldn't be so popular on smartphones and tablets. From there it follows that the only reason that Microsoft and Apple dominate the desktop world is because they have a marketing group behind their products, which provides customers with a comfortable customer service layer between themselves and the engineers and programmers at those companies, and also drowns out the message that Linux even exists in the personal computing realm...

Part of the problem, too, is that Linux and most of its associated software is free and open source. What is often a strength when it comes to the quality of software and its flexibility and customizability becomes a weakness when there's no revenue coming in to actually fund a marketing group that could address this core communications issue between potential future users and the creators of the software. Canonical, Red Hat, SUSE and others have all had varying successes, but this illustrates another problem: the splintered nature of open-source software causes fragmentation not just of the software itself but of the resources behind it. Imagine if there were hundreds of different versions of macOS that all Apple users had to learn about and then decide which one was best for their needs...

I have been using Linux exclusively since I ditched XP for 5.10 Breezy Badger and would love to live in a world where I'm not forced into the corporate hellscape of a Windows environment every day for no other reason than most people already know how to use Windows. With a cohesive marketing strategy, I think this could become a reality, but it won't happen through passionate essays on "free as in freedom" or the proper way to pronounce "GNU" or the benefits of using Gentoo instead of Arch. It'll only come if someone can unify all the splintered groups around a cohesive, simple message and market it to the public.

Google

Quantum Supremacy From Google? Not So Fast, Says IBM. (technologyreview.com) 80

IBM is disputing the much-vaunted claim that Google has hit a new milestone. From a report: A month ago, news broke that Google had reportedly achieved "quantum supremacy": it had gotten a quantum computer to run a calculation that would take a classical computer an unfeasibly long time. While the calculation itself -- essentially, a very specific technique for outputting random numbers -- is about as useful as the Wright brothers' 12-second first flight, it would be a milestone of similar significance, marking the dawn of an entirely new era of computing. But in a blog post published this week, IBM disputes Google's claim. The task that Google says might take the world's fastest classical supercomputer 10,000 years can actually, says IBM, be done in just days.

As John Preskill, the Caltech physicist who coined the term "quantum supremacy," wrote in an article for Quanta magazine, Google specifically chose a very narrow task that a quantum computer would be good at and a classical computer is bad at. "This quantum computation has very little structure, which makes it harder for the classical computer to keep up, but also means that the answer is not very informative," he wrote. Google's research paper hasn't been published, but a draft was leaked online last month. In it, researchers say they got a machine with 53 quantum bits, or qubits, to do the calculation in 200 seconds. They also estimated that it would take the world's most powerful supercomputer, the Summit machine at Oak Ridge National Laboratory, 10,000 years to repeat it with equal "fidelity," or the same level of uncertainty as the inherently uncertain quantum system.
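Part of why the classical comparison is so demanding is simple arithmetic: a full state vector for 53 qubits holds 2^53 complex amplitudes, tens of petabytes before any computation starts. (The bytes-per-amplitude figure below is an assumption; published estimates vary with the precision used.)

```python
# Why 53 qubits strains classical simulation: the full state vector alone runs
# to tens of petabytes. Assumes 8-byte (single-precision complex) amplitudes;
# double precision doubles the figure.
amplitudes = 2 ** 53
bytes_per_amplitude = 8
petabytes = amplitudes * bytes_per_amplitude / 1e15
print(f"{petabytes:.0f} PB")   # ~72 PB for the state vector alone
```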

Oracle

Oracle's New Supercomputer Has 1,060 Raspberry Pis (tomshardware.com) 71

An anonymous reader quotes Tom's Hardware: One Raspberry Pi can make a nice web server, but what happens if you put more than 1,000 of them together? At Oracle's OpenWorld convention on Monday, the company showed off a Raspberry Pi Supercomputer that combines 1,060 Raspberry Pis into one powerful cluster.

According to ServeTheHome, which first reported the story, the supercomputer features scores of racks with 21 Raspberry Pi 3 B+ boards each. To make everything run well together, the system runs on Oracle Autonomous Linux... Every unit connects to a single rebranded Supermicro 1U Xeon server, which functions as a central storage server for the whole supercomputer. The Oracle team also created custom, 3D printed brackets to help support all the Pis and connecting components...

ServeTheHome asked Oracle why it chose to create a cluster of Raspberry Pis instead of using a virtualized Arm server and one company rep said simply that "...a big cluster is cool."
