Supercomputing

Japan's Fugaku Retains Title As World's Fastest Supercomputer (datacenterdynamics.com) 13

According to a report from Nikkei Asia (paywalled), "The Japanese-made Fugaku captured its fourth consecutive title as the world's fastest supercomputer on Tuesday, although a rival from the U.S. or China is poised to steal the crown as soon as next year." From a report: But while Fugaku is the world's most powerful public supercomputer, at 442 petaflops, China is believed to secretly operate two exascale (1,000 petaflops) supercomputers, which were launched earlier this year. The top 10 list has not changed much since the last report six months ago, with only one new addition -- a Microsoft Azure system called Voyager-EUS2. Voyager, featuring AMD Epyc CPUs and Nvidia A100 GPUs, achieved 30.05 petaflops, making it the tenth most powerful supercomputer in the world.

The other systems remained in the same positions -- after Japan's Arm-based Fugaku comes the US Summit system, an IBM Power and Nvidia GPU supercomputer capable of 148 petaflops. The similarly architected 94 petaflops US Sierra system is next. Then comes what is officially China's most powerful supercomputer, the 93 petaflops Sunway TaihuLight, which features Sunway chips. The Biden administration sanctioned the company behind those chips earlier this year.
Software

'If Apple Keeps Letting Its Software Slip, the Next Big Thing Won't Matter' (macworld.com) 116

If Apple can't improve the reliability of its software, the next big thing won't matter, argues Dan Moren in an opinion piece for Macworld. From the report: Uneven distribution: As sci-fi writer William Gibson famously said, "the future is already here -- it's just not evenly distributed." While Gibson's comment resonates mostly on a socio-economic level that is borne out by Apple's not inexpensive technology, it's also embodied geographically by the company's work: if you're interested, you can see which Apple features are available in which regions. Many of these, of course, are due to restrictions and laws in specific regions or places where, say, Apple has not prioritized language localization. But some of them are cases where features have been rolled out only slowly to certain places. [...] It's surely less exciting for Apple to think about rolling out these (in some cases years old) features, especially those that might require a large degree of legwork, to various places than it is for the company to demonstrate its latest shiny feature, but it also means that sometimes these features don't make it to many, if not most, of the users of its devices. Uneven distribution, indeed.

To error is machine: It's happened to pretty much any Apple device user: You go to use a feature and it just doesn't work. Sometimes there's no explanation as to why; other times, there's just a cryptic error message that provides no help at all. [...]

Shooting trouble: Sometimes what we're dealing with in the aforementioned situations are what we call "edge cases." Apple engineers surely do their best to test their features with a variety of hardware, in different places, with different settings. [...] Nobody expects Apple to catch everything, but the question remains: when these problems do arise, what do we do about them? One thing Apple could improve is how easily users can report the issues they encounter. Too often, I see missives posted on Apple discussion boards that encourage people to get in touch with Apple support... which often means a lengthy reiteration of the old troubleshooting canards. While these can sometimes solve problems, if not actually explain them, it's not a process that most consumers are likely to go through. And when those steps don't resolve the issues, users are often left with a virtual shrug.

Likewise, while Apple does provide a place to send feedback about products, it's explicitly not a way to report problems. Making it easier for users to report bugs and unexpected behavior would go a long way to helping owners of Apple products feel like they're not simply shouting their frustrations into a void (aka Twitter). If Apple can't improve the reliability of its software [...] it at least owes it to its users to create more robust resources for helping them help themselves. Because there's nothing more frustrating than not understanding why a miraculous device that can contact people around the world instantaneously, run incredibly powerful games, and crunch data faster than a supercomputer of yesteryear sometimes can't do something as simple as export a video of a vacation.
While Moren focuses primarily on unfinished features to help make his case, "there is also a huge problem with things being touched for no reason and making them worse," says HN reader makecheck. "When handed what must be a mountain of bugs and unfinished items, why the hell did they prioritize things like breaking notifications and Safari tabs, for instance? They're in a position where engineering resources desperately need to be closing gaps, not creating huge new ones."

An example of this would be the current UX of notifications. "A notification comes up, I hover and wait for the cross to appear and click it," writes noneeeed. "But then some time later I unlock my machine or something happens and apparently all my notifications are still there for some reason and I have to clear them again, only this time they are in groups and I have to clear multiple groups."

"Don't get me started on the new iOS podcast app," adds another reader.
China

Have Scientists Disproven Google's Quantum Supremacy Claim? (scmp.com) 35

Slashdot reader AltMachine writes: In October 2019, Google said its Sycamore processor was the first to achieve quantum supremacy by completing a task in three minutes and 20 seconds that would have taken the best classical supercomputer, IBM's Summit, 10,000 years. That claim — particularly how Google scientists arrived at the "10,000 years" conclusion — has been questioned by some researchers, but the counterclaim itself was not definitive.

Now though, in a paper to be submitted to a scientific journal for peer review, scientists at the Institute of Theoretical Physics under the Chinese Academy of Sciences said their algorithm on classical computers completed the simulation for the Sycamore quantum circuits [possibly paywalled; alternative source of the same article] "in about 15 hours using 512 graphics processing units (GPUs)" at a higher fidelity than Sycamore's. Further, the team said "if our simulation of the quantum supremacy circuits can be implemented in an upcoming exaflop supercomputer with high efficiency, in principle, the overall simulation time can be reduced to a few dozens of seconds, which is faster than Google's hardware experiments".

China already unveiled, in December 2020, a photonic quantum computer that solved a Gaussian boson sampling problem in 200 seconds, a task estimated to take a classical computer 600 million years. So if Sycamore's claim is disproven, China would become the first country to have achieved quantum supremacy.

Supercomputing

China's New Quantum Computer Has 1 Million Times the Power of Google's (interestingengineering.com) 143

Physicists in China claim they've constructed two quantum computers with performance speeds that outrival competitors in the U.S., debuting a superconducting machine in addition to an even speedier one that uses light photons to obtain unprecedented results, according to recent studies published in the peer-reviewed journals Physical Review Letters and Science Bulletin. Interesting Engineering reports: The photonic machine, called Jiuzhang 2, can complete in a single millisecond a task that the fastest conventional computer in the world would take a mind-numbing 30 trillion years to finish. The breakthrough was revealed in an interview with the research team broadcast on China's state-owned CCTV on Tuesday, which could make the news suspect -- but with two peer-reviewed papers, it's important to take this seriously. Pan Jianwei, lead researcher of the studies, said that Zuchongzhi 2, a 66-qubit programmable superconducting quantum computer, is an incredible 10 million times faster than Google's 55-qubit Sycamore, making China's new machine the fastest in the world, and the first to beat Google's in two years.

The Zuchongzhi 2 is an improved version of a previous machine, completed three months ago. The Jiuzhang 2, the light-based machine, has fewer applications but can run at blinding speed -- 100 sextillion times faster than the biggest conventional computers of today. In case you missed it, that's a one with 23 zeroes behind it. But while the features of these new machines hint at a computing revolution, they won't hit the marketplace anytime soon. As things stand, the two machines can only operate in pristine environments, and only for hyper-specific tasks. And even with special care, they still make lots of errors. "In the next step we hope to achieve quantum error correction with four to five years of hard work," said Professor Pan of the University of Science and Technology of China, in Hefei, which is in the southeastern province of Anhui.

Supercomputing

Scientists Develop the Next Generation of Reservoir Computing (phys.org) 48

An anonymous reader quotes a report from Phys.Org: A relatively new type of computing that mimics the way the human brain works was already transforming how scientists tackle some of the most difficult information processing problems. Now, researchers have found a way to make what is called reservoir computing work between 33 and a million times faster, with significantly fewer computing resources and less data input needed. In fact, in one test of this next-generation reservoir computing, researchers solved a complex computing problem in less than a second on a desktop computer. Using today's state-of-the-art technology, the same problem requires a supercomputer and still takes much longer to solve, said Daniel Gauthier, lead author of the study and professor of physics at The Ohio State University. The study was published today in the journal Nature Communications.

Reservoir computing is a machine learning algorithm developed in the early 2000s and used to solve the "hardest of the hard" computing problems, such as forecasting the evolution of dynamical systems that change over time, Gauthier said. Previous research has shown that reservoir computing is well-suited for learning dynamical systems and can provide accurate forecasts about how they will behave in the future, Gauthier said. It does that through the use of an artificial neural network, somewhat like a human brain. Scientists feed data on a dynamical system into a "reservoir" of randomly connected artificial neurons in a network. The network produces useful output that the scientists can interpret and feed back into the network, building a more and more accurate forecast of how the system will evolve in the future. The larger and more complex the system and the more accurate the scientists want the forecast to be, the bigger the network of artificial neurons has to be and the more computing resources and time needed to complete the task.
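For readers who want to see the moving parts, below is a minimal echo-state-network sketch of the reservoir idea described above: a fixed, random reservoir driven by the input signal, with only a linear readout trained by ridge regression. The network size, spectral-radius scaling, ridge penalty, and the toy sine-wave signal are illustrative assumptions, not values from the study.

```python
# Minimal echo-state-network sketch of reservoir computing (illustrative only;
# sizes, scaling, and the toy signal are assumptions, not the study's setup).
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_reservoir = 1, 300

W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))   # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))   # fixed random recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))                # keep spectral radius below 1

def run_reservoir(u_seq):
    """Drive the fixed reservoir with an input sequence and collect its states."""
    x = np.zeros(n_reservoir)
    states = []
    for u in u_seq:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

# Toy stand-in for a dynamical signal; train only the linear readout (ridge regression).
u = np.sin(0.1 * np.arange(2000))
X, y = run_reservoir(u[:-1]), u[1:]                      # reservoir states -> next value
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y)

pred = X @ W_out                                         # one-step-ahead forecast
print("mean squared forecast error:", np.mean((pred - y) ** 2))
```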

In this study, Gauthier and his colleagues [...] found that the whole reservoir computing system could be greatly simplified, dramatically reducing the need for computing resources and saving significant time. They tested their concept on a forecasting task involving a weather system developed by Edward Lorenz, whose work led to our understanding of the butterfly effect. Their next-generation reservoir computing was a clear winner over today's state-of-the-art on this Lorenz forecasting task. In one relatively simple simulation done on a desktop computer, the new system was 33 to 163 times faster than the current model. But when the aim was high accuracy in the forecast, the next-generation reservoir computing was about 1 million times faster. And the new-generation computing achieved the same accuracy with the equivalent of just 28 neurons, compared to the 4,000 needed by the current-generation model, Gauthier said. An important reason for the speed-up is that the "brain" behind this next generation of reservoir computing needs a lot less warmup and training compared to the current generation to produce the same results. Warmup is training data that needs to be added as input into the reservoir computer to prepare it for its actual task.
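The Lorenz benchmark mentioned above is presumably the classic three-variable Lorenz-63 system, the origin of the butterfly effect. The short sketch below just integrates it to produce the kind of chaotic trajectory such a forecaster is trained and tested on; the parameters are the standard textbook values, assumed here since the summary doesn't give the paper's exact setup.

```python
# Generate a Lorenz-63 trajectory -- the kind of chaotic data the forecasting
# benchmark uses. Standard textbook parameters; an assumption, not necessarily
# the paper's exact configuration.
import numpy as np

def lorenz_trajectory(n_steps=10000, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    xyz = np.array([1.0, 1.0, 1.0])
    out = np.empty((n_steps, 3))
    for i in range(n_steps):
        x, y, z = xyz
        dxyz = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
        xyz = xyz + dt * dxyz        # simple Euler step; a higher-order integrator is more accurate
        out[i] = xyz
    return out

data = lorenz_trajectory()
print(data[:3])                      # first few (x, y, z) points of the trajectory
```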

Operating Systems

Happy Birthday, Linux: From a Bedroom Project To Billions of Devices in 30 Years (theregister.com) 122

On August 25, 1991, Linus Torvalds, then a student at the University of Helsinki in Finland, sent a message to the comp.os.minix newsgroup soliciting feature suggestions for a free Unix-like operating system he was developing as a hobby. Thirty years later, that software, now known as Linux, is everywhere. From a report: It dominates the supercomputer world, with 100 per cent market share. According to Google, the Linux kernel is at the heart of more than three billion active devices running Android, the most-used operating system in the world. Linux also powers the vast majority of web-facing servers Netcraft surveyed. It is even used more than Microsoft Windows on Microsoft's own Azure cloud. And then there are the embedded electronics and Internet-of-Things spaces, and other areas.

Linux has failed to gain traction on the mainstream desktop, where it has a market share of about 2.38 per cent, or 3.59 per cent if you include ChromeOS, compared to Windows (73.04 per cent) and macOS (15.43 per cent). But the importance of Linux has more to do with the triumph of an idea: of free, open-source software. "It cannot be overstated how critical Linux is to today's internet ecosystem," Kees Cook, security and Linux kernel engineer at Google, told The Register via email. "Linux currently runs on everything from the smartphone we rely on every day to the International Space Station. To rely on the internet is to rely on Linux." The next 30 years of Linux, Cook contends, will require the tech industry to work together on security and to provide more resources for maintenance and testing.

Intel

45 Teraflops: Intel Unveils Details of Its 100-Billion Transistor AI Chip (siliconangle.com) 16

At its annual Architecture Day semiconductor event Thursday, Intel revealed new details about its powerful Ponte Vecchio chip for data centers, reports SiliconANGLE: Intel is looking to take on Nvidia Corp. in the AI silicon market with Ponte Vecchio, which the company describes as its most complex system-on-chip or SOC to date. Ponte Vecchio features some 100 billion transistors, nearly twice as many as Nvidia's flagship A100 data center graphics processing unit. The chip's 100 billion transistors are divided among no fewer than 47 individual processing modules made using five different manufacturing processes. Normally, an SOC's processing modules are arranged side by side in a flat two-dimensional design. Ponte Vecchio, however, stacks the modules on one another in a vertical, three-dimensional structure created using Intel's Foveros technology.

The bulk of Ponte Vecchio's processing power comes from a set of modules aptly called the Compute Tiles. Each Compute Tile has eight Xe cores, GPU cores specifically optimized to run AI workloads. Every Xe core, in turn, consists of eight vector engines and eight matrix engines, processing modules specifically built to run the narrow set of mathematical operations that AI models use to turn data into insights... Intel shared early performance data about the chip in conjunction with the release of the technical details. According to the company, early Ponte Vecchio silicon has demonstrated performance of more than 45 teraflops, or about 45 trillion operations per second.
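As a rough illustration of that "narrow set" of operations (a sketch of our own, not Intel's kernels): the bulk of neural-network work reduces to dense multiply-accumulate arithmetic, which is exactly what vector and matrix engines are built to stream through.

```python
# One neural-network layer is a matrix multiply plus bias and a nonlinearity --
# the multiply-accumulate pattern that matrix engines accelerate. Shapes here
# are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 512))     # a batch of 32 input vectors
W = rng.standard_normal((512, 1024))   # layer weights
b = rng.standard_normal(1024)          # bias

y = np.maximum(x @ W + b, 0.0)         # matmul + bias + ReLU
print("multiply-accumulates in this one layer:", x.shape[0] * W.shape[0] * W.shape[1])
```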

The article adds that it achieved those speeds while processing 32-bit single-precision floating-point values — and that at least one customer has already signed up to use Ponte Vecchio. The Argonne National Laboratory will include Ponte Vecchio chips in its upcoming $500 million Aurora supercomputer. Aurora will provide one exaflop of performance when it becomes fully operational, the equivalent of a quintillion calculations per second.
AI

What Does It Take to Build the World's Largest Computer Chip? (newyorker.com) 23

The New Yorker looks at Cerebras, a startup which has raised nearly half a billion dollars to build massive plate-sized chips targeted at AI applications — the largest computer chip in the world. In the end, said Cerebras's co-founder Andrew Feldman, the mega-chip design offers several advantages. Cores communicate faster when they're on the same chip: instead of being spread around a room, the computer's brain is now in a single skull. Big chips handle memory better, too. Typically, a small chip that's ready to process a file must first fetch it from a shared memory chip located elsewhere on its circuit board; only the most frequently used data might be cached closer to home...

A typical, large computer chip might draw three hundred and fifty watts of power, but Cerebras's giant chip draws fifteen kilowatts — enough to run a small house. "Nobody ever delivered that much power to a chip," Feldman said. "Nobody ever had to cool a chip like that." In the end, three-quarters of the CS-1, the computer that Cerebras built around its WSE-1 chip, is dedicated to preventing the motherboard from melting. Most computers use fans to blow cool air over their processors, but the CS-1 uses water, which conducts heat better; connected to piping and sitting atop the silicon is a water-cooled plate, made of a custom copper alloy that won't expand too much when warmed, and polished to perfection so as not to scratch the chip. On most chips, data and power flow in through wires at the edges, in roughly the same way that they arrive at a suburban house; for the more metropolitan Wafer-Scale Engines, they needed to come in perpendicularly, from below. The engineers had to invent a new connecting material that could withstand the heat and stress of the mega-chip environment. "That took us more than a year," Feldman said...

[I]n a rack in a data center, it takes up the same space as fifteen of the pizza-box-size machines powered by G.P.U.s. Custom-built machine-learning software works to assign tasks to the chip in the most efficient way possible, and even distributes work in order to prevent cold spots, so that the wafer doesn't crack.... According to Cerebras, the CS-1 is being used in several world-class labs — including the Lawrence Livermore National Laboratory, the Pittsburgh Supercomputing Center, and E.P.C.C., the supercomputing centre at the University of Edinburgh — as well as by pharmaceutical companies, industrial firms, and "military and intelligence customers." Earlier this year, in a blog post, an engineer at the pharmaceutical company AstraZeneca wrote that it had used a CS-1 to train a neural network that could extract information from research papers; the computer performed in two days what would take "a large cluster of G.P.U.s" two weeks.

The U.S. National Energy Technology Laboratory reported that its CS-1 solved a system of equations more than two hundred times faster than its supercomputer, while using "a fraction" of the power consumption. "To our knowledge, this is the first ever system capable of faster-than real-time simulation of millions of cells in realistic fluid-dynamics models," the researchers wrote. They concluded that, because of scaling inefficiencies, there could be no version of their supercomputer big enough to beat the CS-1.... Bronis de Supinski, the C.T.O. for Livermore Computing, told me that, in initial tests, the CS-1 had run neural networks about five times as fast per transistor as a cluster of G.P.U.s, and had accelerated network training even more.

It all suggests one possible work-around for Moore's Law: optimizing chips for specific applications. "For now," Feldman tells the New Yorker, "progress will come through specialization."
AI

Tesla Unveils Dojo Supercomputer: World's New Most Powerful AI Training Machine (electrek.co) 32

New submitter Darth Technoid shares a report from Electrek: At its AI Day, Tesla unveiled its Dojo supercomputer technology while flexing its growing in-house chip design talent. The automaker claims to have developed the fastest AI training machine in the world. For years now, Tesla has been teasing the development of a new supercomputer in-house optimized for neural net video training. Tesla is handling an insane amount of video data from its fleet of over 1 million vehicles, which it uses to train its neural nets.

The automaker found itself unsatisfied with current hardware options to train its computer vision neural nets and believed it could do better internally. Over the last two years, CEO Elon Musk has been teasing the development of Tesla's own supercomputer called "Dojo." Last year, he even teased that Tesla's Dojo would have a capacity of over an exaflop, which is one quintillion (10^18) floating-point operations per second, or 1,000 petaFLOPS. That would potentially make Dojo the most powerful supercomputer in the world.

Ganesh Venkataramanan, Tesla's senior director of Autopilot hardware and the leader of the Dojo project, led the presentation. The engineer started by unveiling Dojo's D1 chip, which uses 7-nanometer technology and delivers breakthrough bandwidth and compute performance. Tesla designed the chip to "seamlessly connect without any glue to each other," and the automaker took advantage of that by connecting 500,000 nodes together. It adds the interface, power, and thermal management, and the result is what it calls a training tile: 9 PFlops of compute and 36 TB per second of bandwidth in a format of less than 1 cubic foot. Tesla still has to assemble those training tiles into a compute cluster in order to truly build the first Dojo supercomputer. It hasn't put that system together yet, but CEO Elon Musk claimed that it will be operational next year.

Open Source

Libre-SOC's Open Hardware 180nm ASIC Submitted To IMEC for Fabrication (openpowerfoundation.org) 38

"We're building a chip. A fast chip. A safe chip. A trusted chip," explains the web page at Libre-SOC.org. "A chip with lots of peripherals. And it's VPU. And it's a 3D GPU... Oh and here, have the source code."

And now there's big news, reports long-time Slashdot reader lkcl: Libre-SOC's entirely Libre 180nm ASIC, which can be replicated down to symbolic level GDS-II with no NDAs of any kind, has been submitted to IMEC for fabrication.

It is the first wholly independent Power ISA ASIC outside of IBM to go to silicon in 12 years. Microwatt went to Skywater 130nm in March; however, it was developed by IBM as an exceptionally well-made reference design, which Libre-SOC used for verification.

Whilst it would seem that Libre-SOC is jumping on the chip-shortage era's innovation bandwagon, Libre-SOC has actually been in development for over three and a half years so far. It even pre-dates the OpenLane initiative, and has the same objectives: fully automated HDL to GDS-II, full transparency and auditability with Libre VLSI tools Coriolis2 and Libre Cell Libraries from Chips4Makers.

With €400,000 in funding from the NLNet Foundation [a long-standing non-profit supporting privacy, security, and the "open internet"], plus an application to NGI Pointer under consideration, the next steps are to continue development of Draft Cray-style Vectors (SVP64) to the already supercomputer-level Power ISA, under the watchful eye of the upcoming OpenPOWER ISA Workgroup.

United Kingdom

UK Supercomputer Cambridge-1 To Hunt For Medical Breakthroughs 23

The UK's most powerful supercomputer, which its creators hope will make the process of preventing, diagnosing and treating disease better, faster and cheaper, is operational. The Guardian reports: Christened Cambridge-1, the supercomputer represents a $100m investment by US-based computing company Nvidia. The idea capitalizes on artificial intelligence (AI) -- which combines big data with computer science to facilitate problem-solving -- in healthcare. [...] Cambridge-1's first projects will be with AstraZeneca, GSK, Guy's and St Thomas' NHS foundation trust, King's College London and Oxford Nanopore. They will seek to develop a deeper understanding of diseases such as dementia, design new drugs, and improve the accuracy of finding disease-causing variations in human genomes.

A key way the supercomputer can help, said Dr Kim Branson, global head of artificial intelligence and machine learning at GSK, is in patient care. In the field of immuno-oncology, for instance, existing medicines harness the patient's own immune system to fight cancer. But it isn't always apparent which patients will gain the most benefit from these drugs -- some of that information is hidden in the imaging of the tumors and in numerical clues found in blood. Cambridge-1 can be key to helping fuse these different datasets, and building large models to help determine the best course of treatment for patients, Branson said.
Supercomputing

World's Fastest AI Supercomputer Built from 6,159 NVIDIA A100 Tensor Core GPUs (nvidia.com) 57

Slashdot reader 4wdloop shared this report from NVIDIA's blog, joking that maybe this is where all NVIDIA's chips are going: It will help piece together a 3D map of the universe, probe subatomic interactions for green energy sources and much more. Perlmutter, officially dedicated Thursday at the National Energy Research Scientific Computing Center (NERSC), is a supercomputer that will deliver nearly four exaflops of AI performance for more than 7,000 researchers. That makes Perlmutter the fastest system on the planet on the 16- and 32-bit mixed-precision math AI uses. And that performance doesn't even include a second phase coming later this year to the system based at Lawrence Berkeley National Lab.

More than two dozen applications are getting ready to be among the first to ride the 6,159 NVIDIA A100 Tensor Core GPUs in Perlmutter, the largest A100-powered system in the world. They aim to advance science in astrophysics, climate science and more. In one project, the supercomputer will help assemble the largest 3D map of the visible universe to date. It will process data from the Dark Energy Spectroscopic Instrument (DESI), a kind of cosmic camera that can capture as many as 5,000 galaxies in a single exposure. Researchers need the speed of Perlmutter's GPUs to capture dozens of exposures from one night to know where to point DESI the next night. Preparing a year's worth of the data for publication would take weeks or months on prior systems, but Perlmutter should help them accomplish the task in as little as a few days.
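A quick back-of-the-envelope check of the "nearly four exaflops" figure, using NVIDIA's published A100 peak of 624 TFLOPS for FP16 Tensor Core math with structured sparsity (an assumed basis for the claim; the article doesn't state the per-GPU number):

```python
# Rough sanity check of the headline AI-performance figure. The 624 TFLOPS per
# GPU is NVIDIA's sparse FP16 Tensor Core peak for the A100 -- an assumption,
# not a number given in the article.
gpus = 6159
tflops_per_gpu = 624
print(f"{gpus * tflops_per_gpu / 1e6:.2f} exaflops")   # ~3.84, i.e. "nearly four exaflops"
```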

"I'm really happy with the 20x speedups we've gotten on GPUs in our preparatory work," said Rollin Thomas, a data architect at NERSC who's helping researchers get their code ready for Perlmutter. DESI's map aims to shed light on dark energy, the mysterious physics behind the accelerating expansion of the universe.

A similar spirit fuels many projects that will run on NERSC's new supercomputer. For example, work in materials science aims to discover atomic interactions that could point the way to better batteries and biofuels. Traditional supercomputers can barely handle the math required to generate simulations of a few atoms over a few nanoseconds with programs such as Quantum Espresso. But by combining their highly accurate simulations with machine learning, scientists can study more atoms over longer stretches of time. "In the past it was impossible to do fully atomistic simulations of big systems like battery interfaces, but now scientists plan to use Perlmutter to do just that," said Brandon Cook, an applications performance specialist at NERSC who's helping researchers launch such projects. That's where Tensor Cores in the A100 play a unique role. They accelerate both the double-precision floating point math for simulations and the mixed-precision calculations required for deep learning.
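For the curious, the "mixed-precision calculations required for deep learning" typically look like the pattern below -- a minimal sketch using PyTorch's automatic mixed precision. The tiny model and random data are placeholders, not anything from the Perlmutter workloads.

```python
# Minimal mixed-precision training loop (illustrative placeholder model and data).
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Sequential(torch.nn.Linear(64, 128), torch.nn.ReLU(),
                            torch.nn.Linear(128, 1)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

x = torch.randn(256, 64, device=device)
y = torch.randn(256, 1, device=device)

for step in range(100):
    optimizer.zero_grad()
    # Matrix multiplies inside autocast run in reduced precision on Tensor Cores
    # (when a GPU is present); sensitive reductions stay in FP32 for stability.
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        loss = torch.nn.functional.mse_loss(model(x), y)
    scaler.scale(loss).backward()   # loss scaling guards against FP16 underflow
    scaler.step(optimizer)
    scaler.update()
```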

Australia

Ancient Australian 'Superhighways' Suggested By Massive Supercomputing Study (sciencemag.org) 56

sciencehabit shares a report from Science Magazine: When humans first set foot in Australia more than 65,000 years ago, they faced the perilous task of navigating a landscape they'd never seen. Now, researchers have used supercomputers to simulate 125 billion possible travel routes and reconstruct the most likely "superhighways" these ancient immigrants used as they spread across the continent. The project offers new insight into how landmarks and water supplies shape human migrations, and provides archaeologists with clues for where to look for undiscovered ancient settlements.

It took weeks to run the complex simulations on a supercomputer operated by the U.S. government. But the number crunching ultimately revealed a network of "optimal superhighways" that had the most attractive combinations of easy walking, water, and landmarks. Optimal road map in hand, the researchers faced a fundamental question, says lead author Stefani Crabtree, an archaeologist at Utah State University, Logan, and the Santa Fe Institute: Was there any evidence that real people had once used these computer-identified corridors? To find out, the researchers compared their routes to the locations of the roughly three dozen archaeological sites in Australia known to be at least 35,000 years old. Many sites sat on or near the superhighways. Some corridors also coincided with ancient trade routes known from indigenous oral histories, or aligned with genetic and linguistic studies used to trace early human migrations. "I think all of us were surprised by the goodness of the fit," says archaeologist Sean Ulm of James Cook University, Cairns.
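To make the method concrete, here is a toy sketch of the kind of least-cost routing such an analysis relies on: Dijkstra's algorithm over a grid whose step costs penalize rugged terrain and reward cells with water or visible landmarks. The grid, the weights, and the cost formula are invented for illustration and are not the study's actual continent-scale model.

```python
# Toy least-cost routing: Dijkstra on a random grid where hard terrain costs
# more and water or landmarks cost less. Purely illustrative assumptions.
import heapq
import random

random.seed(1)
N = 60
terrain  = [[random.random() for _ in range(N)] for _ in range(N)]       # 0 easy .. 1 rugged
water    = [[random.random() < 0.05 for _ in range(N)] for _ in range(N)]
landmark = [[random.random() < 0.02 for _ in range(N)] for _ in range(N)]

def step_cost(r, c):
    """Cost of entering cell (r, c): rugged terrain costs more, water/landmarks less."""
    cost = 1.0 + 4.0 * terrain[r][c]
    if water[r][c]:
        cost *= 0.3
    if landmark[r][c]:
        cost *= 0.6
    return cost

def cheapest_route(start, goal):
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < N and 0 <= nc < N:
                nd = d + step_cost(nr, nc)
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], goal                 # reconstruct the cheapest path
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

route = cheapest_route((0, 0), (N - 1, N - 1))
print("route length (cells):", len(route))
```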

The map has also highlighted little-studied migration corridors that could yield future archaeological discoveries. For example, some early superhighways sat on coastal lands that are now submerged, giving marine researchers a guide for exploration. Even more intriguing, the authors and others say, are major routes that cut across several arid areas in Australia's center and in the northeastern state of Queensland. Those paths challenge a "long-standing view that the earliest people avoided the deserts," Ulm says. The Queensland highway, in particular, presents "an excellent focus point" for future archaeological surveys, says archaeologist Shimona Kealy of the Australian National University.
The study has been published in the journal Nature Human Behaviour.
Microsoft

Met Office and Microsoft To Build Climate Supercomputer (bbc.com) 27

The Met Office is working with Microsoft to build a weather forecasting supercomputer in the UK. From a report: They say it will provide more accurate weather forecasting and a better understanding of climate change. The UK government said in February 2020 it would invest $1.6bn in the project. It is expected to be one of the top 25 supercomputers in the world when it is up and running in the summer of 2022. Microsoft plans to update it over the next decade as computing improves. "This partnership is an impressive public investment in the basic and applied sciences of weather and climate," said Morgan O'Neill, assistant professor at Stanford University, who is independent of the project. "Such a major investment in a state-of-the-art weather and climate prediction system by the UK is great news globally, and I look forward to the scientific advances that will follow." The Met Office said the technology would increase its understanding of the weather -- and would allow people to better plan activities, prepare for inclement weather and get a better understanding of climate change.
Social Networks

MyPillow CEO Mike Lindell Is Trying To Launch a Social Media Site, and It's Already Resulted In a Legal Threat (thedailybeast.com) 229

An anonymous reader quotes a report from The Daily Beast: MyPillow founder and staunch Trump ally Mike Lindell plans to launch a social network of his own in the next few weeks, creating a haven for the kind of pro-Trump conspiracy theories that have been banned on more prominent social-media sites. On Lindell's "Vocl" social media platform, users will be free to claim that a supercomputer stole the election from Donald Trump, or that vaccines are a tool of the devil. Any new social media network faces serious challenges. But Vocl must grapple with a daunting problem before it even launches: a website called "Vocal," spelled with an "A," already exists.

On Thursday, lawyers for Vocal's publicly traded parent company, Creatd, Inc., warned Lindell, in a letter reviewed by The Daily Beast, to change his social media network's name and surrender ownership of the Vocl.com domain name. If Lindell refuses to change the name, he could face a lawsuit. While Lindell has promised to turn Vocl into a "cross between Twitter and YouTube," Vocal is a publishing platform similar to Medium where writers can post and monetize articles. "It is clear that you are acting with bad faith and with intent to profit from Creatd's mark," the letter reads, claiming Lindell's Vocl would "tarnish" the Vocal brand. Creatd owns the trademark for using "Vocal" in a number of ways related to social networking, including creating "virtual communities" and "online networking services." Along with surrendering ownership of the Vocl.com domain name, Creatd wants Lindell to destroy any products with Vocl branding and never use the name again. "Creatd is prepared to take all steps necessary to protect Creatd's valuable intellectual property rights, without further notice to you," the letter reads.
On Friday morning, the MyPillow CEO said: "It has nothing to do with their trademark. I haven't even launched yet. But it has nothing to do with us." He claims Vocl is also an acronym that stands for "Victory of Christ's Love."

Early Friday afternoon, Lindell told The Daily Beast to say, "We looked into it, and we believe it would be confusing, so we are going to announce a different name and URL by Monday."
Japan

Japan's Fugaku Supercomputer Goes Fully Live To Aid COVID-19 Research (japantimes.co.jp) 19

Japan's Fugaku supercomputer, the world's fastest in terms of computing speed, went into full operation this week, earlier than initially scheduled, in the hope that it can be used for research related to the novel coronavirus. From a report: The supercomputer, named after an alternative word for Mount Fuji, became partially operational in April last year to visualize how droplets that could carry the virus spread from the mouth and to help explore possible treatments for COVID-19. "I hope Fugaku will be cherished by the people as it can do what its predecessor K couldn't, including artificial intelligence (applications) and big data analytics," said Hiroshi Matsumoto, president of the Riken research institute that developed the machine, in a ceremony held at the Riken Center for Computational Science in Kobe, where it is installed. Fugaku, which can perform over 442 quadrillion computations per second, was originally scheduled to start operating fully in the fiscal year from April. It will eventually be used in fields such as climate and artificial intelligence applications, and will be used in more than 100 projects, according to state-sponsored Riken.
Transportation

SoftBank Expects Mass Production of Driverless Cars in Two Years (reuters.com) 38

SoftBank Group Chief Executive Masayoshi Son said on Friday he expects mass production of self-driving vehicles to start in two years. From a report: While in the first year the production of units won't be in millions, in the next several years the cost per mile in fully autonomous cars will become very cheap, Son said, speaking at a virtual meeting of the World Economic Forum. "The AI is driving for you. The automobile will become a real supercomputer with four wheels." SoftBank has a stake in self-driving car maker Cruise, which is majority owned by General Motors and has been testing self-driving cars in California. It has also funded the autonomous driving business of China's Didi Chuxing.
Science

Simulating 800,000 Years of California Earthquake History To Pinpoint Risks (utexas.edu) 19

aarondubrow shares a report from the Texas Advanced Computing Center: A new study in the Bulletin of the Seismological Society of America presents results from a new earthquake simulator, RSQSim, that simulates hundreds of thousands of years of seismic history in California. Coupled with another code, CyberShake, the framework can calculate the amount of shaking that would occur for each quake. [The framework makes use of two of the most powerful supercomputers on the planet: Frontera, at the Texas Advanced Computing Center, and Summit, at Oak Ridge National Laboratory].

The new approach improves [seismologists'] ability to pinpoint how big an earthquake might occur at a given location, allowing building code developers, architects, and structural engineers to design more resilient buildings that can survive earthquakes at a specific site.

Hardware

Light-Based Quantum Computer Exceeds Fastest Classical Supercomputers (scientificamerican.com) 60

An anonymous reader quotes a report from Scientific American: For the first time, a quantum computer made from photons -- particles of light -- has outperformed even the fastest classical supercomputers. Physicists led by Chao-Yang Lu and Jian-Wei Pan of the University of Science and Technology of China (USTC) in Shanghai performed a technique called Gaussian boson sampling with their quantum computer, named Jiuzhang. The result, reported in the journal Science, was 76 detected photons -- far above and beyond the previous record of five detected photons and the capabilities of classical supercomputers.

Unlike a traditional computer built from silicon processors, Jiuzhang is an elaborate tabletop setup of lasers, mirrors, prisms and photon detectors. It is not a universal computer that could one day send e-mails or store files, but it does demonstrate the potential of quantum computing. Last year, Google captured headlines when its quantum computer Sycamore took roughly three minutes to do what would take a supercomputer three days (or 10,000 years, depending on your estimation method). In their paper, the USTC team estimates that it would take the Sunway TaihuLight, the third most powerful supercomputer in the world, a staggering 2.5 billion years to perform the same calculation as Jiuzhang. [...] This latest demonstration of quantum computing's potential from the USTC group is critical because it differs dramatically from Google's approach. Sycamore uses superconducting loops of metal to form qubits; in Jiuzhang, the photons themselves are the qubits. Independent corroboration that quantum computing principles can lead to primacy even on totally different hardware "gives us confidence that in the long term, eventually, useful quantum simulators and a fault-tolerant quantum computer will become feasible," Lu says.

... [T]he USTC setup is dauntingly complicated. Jiuzhang begins with a laser that is split so it strikes 25 crystals made of potassium titanyl phosphate. After each crystal is hit, it reliably spits out two photons in opposite directions. The photons are then sent through 100 inputs, where they race through a track made of 300 prisms and 75 mirrors. Finally, the photons land in 100 slots where they are detected. Averaging over 200 seconds of runs, the USTC group detected about 43 photons per run. But in one run, they observed 76 photons -- more than enough to justify their quantum primacy claim. It is difficult to estimate just how much time would be needed for a supercomputer to solve a distribution with 76 detected photons -- in large part because it is not exactly feasible to spend 2.5 billion years running a supercomputer to directly check it. Instead, the researchers extrapolate from the time it takes to classically calculate for smaller numbers of detected photons. At best, solving for 50 photons, the researchers claim, would take a supercomputer two days, which is far slower than the 200-second run time of Jiuzhang.
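Part of why the classical comparison is so lopsided: the probability of each detected-photon pattern in Gaussian boson sampling involves the hafnian of a matrix built from the optical network, and exact hafnian evaluation blows up roughly factorially with photon number. The brute-force sketch below (an illustration of the scaling, not the optimized algorithms used for the supercomputer estimates) sums over every perfect matching of the matrix indices.

```python
# Brute-force hafnian via perfect matchings: (n-1)!! terms for an n x n matrix.
# Purely illustrative of why classical cost explodes with photon number.
def hafnian(A):
    """Hafnian of a symmetric matrix of even dimension."""
    n = len(A)
    if n == 0:
        return 1.0
    rest = list(range(1, n))
    total = 0.0
    for k, j in enumerate(rest):            # pair index 0 with every other index
        others = rest[:k] + rest[k + 1:]
        sub = [[A[p][q] for q in others] for p in others]
        total += A[0][j] * hafnian(sub)     # recurse on the remaining indices
    return total

# 12 modes already means 11!! = 10,395 matchings; at 76 detected photons, direct
# evaluation is hopeless, which is why the classical runtimes are extrapolations.
M = [[0.0 if i == j else 1.0 for j in range(12)] for i in range(12)]
print(hafnian(M))                            # 10395.0 perfect matchings of K_12
```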

Graphics

Cerebras' Wafer-Size Chip Is 10,000 Times Faster Than a GPU (venturebeat.com) 123

An anonymous reader quotes a report from VentureBeat: Cerebras Systems and the federal Department of Energy's National Energy Technology Laboratory today announced that the company's CS-1 system is more than 10,000 times faster than a graphics processing unit (GPU). On a practical level, this means AI neural networks that previously took months to train can now train in minutes on the Cerebras system.

Cerebras makes the world's largest computer chip, the WSE. Chipmakers normally slice a wafer from a 12-inch-diameter ingot of silicon to process in a chip factory. Once processed, the wafer is sliced into hundreds of separate chips that can be used in electronic hardware. But Cerebras, started by SeaMicro founder Andrew Feldman, takes that wafer and makes a single, massive chip out of it. Each piece of the chip, dubbed a core, is interconnected in a sophisticated way to other cores. The interconnections are designed to keep all the cores functioning at high speeds so the transistors can work together as one. [...] A single Cerebras CS-1 is 26 inches tall, fits in one-third of a rack, and is powered by the industry's only wafer-scale processing engine, Cerebras' WSE. It combines memory performance with massive bandwidth, low latency interprocessor communication, and an architecture optimized for high bandwidth computing.

Cerebras's CS-1 system uses the WSE wafer-size chip, which has 1.2 trillion transistors, the basic on-off electronic switches that are the building blocks of silicon chips. Intel's first 4004 processor in 1971 had 2,300 transistors, and the Nvidia A100 80GB chip, announced yesterday, has 54 billion transistors. Feldman said in an interview with VentureBeat that the CS-1 was also 200 times faster than the Joule Supercomputer, which is No. 82 on a list of the top 500 supercomputers in the world. [...] In this demo, the Joule Supercomputer used 16,384 cores, and the Cerebras computer was 200 times faster, according to energy lab director Brian Anderson. The Cerebras system costs several million dollars and uses 20 kilowatts of power.
