AI

What Does It Take to Build the World's Largest Computer Chip? (newyorker.com) 23

The New Yorker looks at Cerebras, a startup that has raised nearly half a billion dollars to build a massive plate-sized chip targeted at AI applications — the largest computer chip in the world. In the end, said Cerebras's co-founder Andrew Feldman, the mega-chip design offers several advantages. Cores communicate faster when they're on the same chip: instead of being spread around a room, the computer's brain is now in a single skull. Big chips handle memory better, too. Typically, a small chip that's ready to process a file must first fetch it from a shared memory chip located elsewhere on its circuit board; only the most frequently used data might be cached closer to home...

A typical, large computer chip might draw three hundred and fifty watts of power, but Cerebras's giant chip draws fifteen kilowatts — enough to run a small house. "Nobody ever delivered that much power to a chip," Feldman said. "Nobody ever had to cool a chip like that." In the end, three-quarters of the CS-1, the computer that Cerebras built around its WSE-1 chip, is dedicated to preventing the motherboard from melting. Most computers use fans to blow cool air over their processors, but the CS-1 uses water, which conducts heat better; connected to piping and sitting atop the silicon is a water-cooled plate, made of a custom copper alloy that won't expand too much when warmed, and polished to perfection so as not to scratch the chip. On most chips, data and power flow in through wires at the edges, in roughly the same way that they arrive at a suburban house; for the more metropolitan Wafer-Scale Engines, they needed to come in perpendicularly, from below. The engineers had to invent a new connecting material that could withstand the heat and stress of the mega-chip environment. "That took us more than a year," Feldman said...

[I]n a rack in a data center, it takes up the same space as fifteen of the pizza-box-size machines powered by G.P.U.s. Custom-built machine-learning software works to assign tasks to the chip in the most efficient way possible, and even distributes work in order to prevent cold spots, so that the wafer doesn't crack.... According to Cerebras, the CS-1 is being used in several world-class labs — including the Lawrence Livermore National Laboratory, the Pittsburgh Supercomputing Center, and E.P.C.C., the supercomputing centre at the University of Edinburgh — as well as by pharmaceutical companies, industrial firms, and "military and intelligence customers." Earlier this year, in a blog post, an engineer at the pharmaceutical company AstraZeneca wrote that it had used a CS-1 to train a neural network that could extract information from research papers; the computer performed in two days what would take "a large cluster of G.P.U.s" two weeks.

The U.S. National Energy Technology Laboratory reported that its CS-1 solved a system of equations more than two hundred times faster than its supercomputer, while using "a fraction" of the power consumption. "To our knowledge, this is the first ever system capable of faster-than real-time simulation of millions of cells in realistic fluid-dynamics models," the researchers wrote. They concluded that, because of scaling inefficiencies, there could be no version of their supercomputer big enough to beat the CS-1.... Bronis de Supinski, the C.T.O. for Livermore Computing, told me that, in initial tests, the CS-1 had run neural networks about five times as fast per transistor as a cluster of G.P.U.s, and had accelerated network training even more.

It all suggests one possible work-around for Moore's Law: optimizing chips for specific applications. "For now," Feldman tells the New Yorker, "progress will come through specialization."
AI

Tesla Unveils Dojo Supercomputer: World's New Most Powerful AI Training Machine (electrek.co) 32

New submitter Darth Technoid shares a report from Electrek: At its AI Day, Tesla unveiled its Dojo supercomputer technology while flexing its growing in-house chip design talent. The automaker claims to have developed the fastest AI training machine in the world. For years now, Tesla has been teasing the development of a new in-house supercomputer optimized for neural net video training. Tesla is handling an insane amount of video data from its fleet of over 1 million vehicles, which it uses to train its neural nets.

The automaker found itself unsatisfied with current hardware options to train its computer vision neural nets and believed it could do better internally. Over the last two years, CEO Elon Musk has been teasing the development of Tesla's own supercomputer called "Dojo." Last year, he even teased that Tesla's Dojo would have a capacity of over an exaflop, which is one quintillion (10^18) floating-point operations per second, or 1,000 petaFLOPS. That could potentially make Dojo the most powerful supercomputer in the world.

Ganesh Venkataramanan, Tesla's senior director of Autopilot hardware and the leader of the Dojo project, led the presentation. The engineer started by unveiling Dojo's D1 chip, which uses 7-nanometer technology and delivers breakthrough bandwidth and compute performance. Tesla designed the chip to "seamlessly connect without any glue to each other," and the automaker took advantage of that by connecting 500,000 nodes together. Tesla adds the interface, power, and thermal management, and the result is what it calls a training tile: 9 petaflops of compute and 36 TB per second of bandwidth in a package of less than one cubic foot. Tesla still has to combine those training tiles into a compute cluster in order to truly build the first Dojo supercomputer. It hasn't put that system together yet, but CEO Elon Musk claimed it will be operational next year.
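As a back-of-the-envelope check on those figures (an illustration only, not Tesla's announced cluster design), a few lines of Python show how many 9-petaflop training tiles it would take to reach the teased exaflop capacity:

```python
# Rough unit arithmetic from the figures quoted above; the real Dojo
# cluster layout has not been detailed, so treat this as illustration only.
PETA = 1e15
EXA = 1e18

tile_flops = 9 * PETA        # one training tile: 9 petaflops
target_flops = 1 * EXA       # Musk's teased capacity: over an exaflop

print(f"1 exaflop = {EXA / PETA:.0f} petaflops")
print(f"Tiles needed for ~1 exaflop: {target_flops / tile_flops:.0f}")  # ~111 tiles
```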

Open Source

Libre-SOC's Open Hardware 180nm ASIC Submitted To IMEC for Fabrication (openpowerfoundation.org) 38

"We're building a chip. A fast chip. A safe chip. A trusted chip," explains the web page at Libre-SOC.org. "A chip with lots of peripherals. And it's VPU. And it's a 3D GPU... Oh and here, have the source code."

And now there's big news, reports long-time Slashdot reader lkcl: Libre-SOC's entirely Libre 180nm ASIC, which can be replicated down to symbolic level GDS-II with no NDAs of any kind, has been submitted to IMEC for fabrication.

It is the first wholly independent Power ISA ASIC outside of IBM to go to silicon in 12 years. Microwatt went to Skywater 130nm in March; however, it was developed by IBM as an exceptionally well-made reference design, which Libre-SOC used for verification.

Whilst it would seem that Libre-SOC is jumping on the chip-shortage era's innovation bandwagon, Libre-SOC has actually been in development for over three and a half years so far. It even pre-dates the OpenLane initiative, and has the same objectives: fully automated HDL to GDS-II, full transparency and auditability with Libre VLSI tools Coriolis2 and Libre Cell Libraries from Chips4Makers.

With €400,000 in funding from the NLNet Foundation [a long-standing non-profit supporting privacy, security, and the "open internet"], plus an application to NGI Pointer under consideration, the next steps are to continue development of Draft Cray-style Vectors (SVP64) to the already supercomputer-level Power ISA, under the watchful eye of the upcoming OpenPOWER ISA Workgroup.

United Kingdom

UK Supercomputer Cambridge-1 To Hunt For Medical Breakthroughs 23

The UK's most powerful supercomputer, which its creators hope will make the process of preventing, diagnosing and treating disease better, faster and cheaper, is operational. The Guardian reports: Christened Cambridge-1, the supercomputer represents a $100m investment by US-based computing company Nvidia. The idea capitalizes on artificial intelligence (AI) -- which combines big data with computer science to facilitate problem-solving -- in healthcare. [...] Cambridge-1's first projects will be with AstraZeneca, GSK, Guy's and St Thomas' NHS foundation trust, King's College London and Oxford Nanopore. They will seek to develop a deeper understanding of diseases such as dementia, design new drugs, and improve the accuracy of finding disease-causing variations in human genomes.

A key way the supercomputer can help, said Dr Kim Branson, global head of artificial intelligence and machine learning at GSK, is in patient care. In the field of immuno-oncology, for instance, existing medicines harness the patient's own immune system to fight cancer. But it isn't always apparent which patients will gain the most benefit from these drugs -- some of that information is hidden in the imaging of the tumors and in numerical clues found in blood. Cambridge-1 can be key to helping fuse these different datasets, and building large models to help determine the best course of treatment for patients, Branson said.
Supercomputing

World's Fastest AI Supercomputer Built from 6,159 NVIDIA A100 Tensor Core GPUs (nvidia.com) 57

Slashdot reader 4wdloop shared this report from NVIDIA's blog, joking that maybe this is where all NVIDIA's chips are going: It will help piece together a 3D map of the universe, probe subatomic interactions for green energy sources and much more. Perlmutter, officially dedicated Thursday at the National Energy Research Scientific Computing Center (NERSC), is a supercomputer that will deliver nearly four exaflops of AI performance for more than 7,000 researchers. That makes Perlmutter the fastest system on the planet on the 16- and 32-bit mixed-precision math AI uses. And that performance doesn't even include a second phase coming later this year to the system based at Lawrence Berkeley National Lab.

More than two dozen applications are getting ready to be among the first to ride the 6,159 NVIDIA A100 Tensor Core GPUs in Perlmutter, the largest A100-powered system in the world. They aim to advance science in astrophysics, climate science and more. In one project, the supercomputer will help assemble the largest 3D map of the visible universe to date. It will process data from the Dark Energy Spectroscopic Instrument (DESI), a kind of cosmic camera that can capture as many as 5,000 galaxies in a single exposure. Researchers need the speed of Perlmutter's GPUs to capture dozens of exposures from one night to know where to point DESI the next night. Preparing a year's worth of the data for publication would take weeks or months on prior systems, but Perlmutter should help them accomplish the task in as little as a few days.
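For a sense of scale, dividing the headline number by the GPU count gives the implied per-GPU throughput. This is simple arithmetic on the figures quoted above; the article does not say whether the "nearly four exaflops" figure assumes sparsity or a particular precision:

```python
# Implied per-GPU AI throughput from the figures quoted above.
ai_exaflops = 4.0      # "nearly four exaflops of AI performance"
num_gpus = 6159        # NVIDIA A100 Tensor Core GPUs in phase one

per_gpu_teraflops = ai_exaflops * 1e18 / num_gpus / 1e12
print(f"~{per_gpu_teraflops:.0f} teraflops of mixed-precision AI compute per A100")
# roughly 650 teraflops per GPU implied by the headline figure
```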

"I'm really happy with the 20x speedups we've gotten on GPUs in our preparatory work," said Rollin Thomas, a data architect at NERSC who's helping researchers get their code ready for Perlmutter. DESI's map aims to shed light on dark energy, the mysterious physics behind the accelerating expansion of the universe.

A similar spirit fuels many projects that will run on NERSC's new supercomputer. For example, work in materials science aims to discover atomic interactions that could point the way to better batteries and biofuels. Traditional supercomputers can barely handle the math required to generate simulations of a few atoms over a few nanoseconds with programs such as Quantum Espresso. But by combining their highly accurate simulations with machine learning, scientists can study more atoms over longer stretches of time. "In the past it was impossible to do fully atomistic simulations of big systems like battery interfaces, but now scientists plan to use Perlmutter to do just that," said Brandon Cook, an applications performance specialist at NERSC who's helping researchers launch such projects. That's where Tensor Cores in the A100 play a unique role. They accelerate both the double-precision floating point math for simulations and the mixed-precision calculations required for deep learning.
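The mixed-precision pattern referred to here — 16-bit storage with 32-bit arithmetic — can be sketched in a few lines of NumPy. This is the general technique, not NERSC's or Quantum Espresso's actual code:

```python
import numpy as np

# Inputs stored at half precision keep memory and bandwidth needs low.
a = np.random.rand(256, 256).astype(np.float16)
b = np.random.rand(256, 256).astype(np.float16)

# Doing the multiply-accumulate at 32 bits (roughly what tensor cores do
# with 16-bit inputs) keeps the result close to a full 64-bit reference.
c_mixed = a.astype(np.float32) @ b.astype(np.float32)
c_ref = a.astype(np.float64) @ b.astype(np.float64)

print("max relative error vs float64:",
      float(np.max(np.abs(c_mixed - c_ref) / np.abs(c_ref))))
```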

Australia

Ancient Australian 'Superhighways' Suggested By Massive Supercomputing Study (sciencemag.org) 56

sciencehabit shares a report from Science Magazine: When humans first set foot in Australia more than 65,000 years ago, they faced the perilous task of navigating a landscape they'd never seen. Now, researchers have used supercomputers to simulate 125 billion possible travel routes and reconstruct the most likely "superhighways" these ancient immigrants used as they spread across the continent. The project offers new insight into how landmarks and water supplies shape human migrations, and provides archaeologists with clues for where to look for undiscovered ancient settlements.

It took weeks to run the complex simulations on a supercomputer operated by the U.S. government. But the number crunching ultimately revealed a network of "optimal superhighways" that had the most attractive combinations of easy walking, water, and landmarks. Optimal road map in hand, the researchers faced a fundamental question, says lead author Stefani Crabtree, an archaeologist at Utah State University, Logan, and the Santa Fe Institute: Was there any evidence that real people had once used these computer-identified corridors? To find out, the researchers compared their routes to the locations of the roughly three dozen archaeological sites in Australia known to be at least 35,000 years old. Many sites sat on or near the superhighways. Some corridors also coincided with ancient trade routes known from indigenous oral histories, or aligned with genetic and linguistic studies used to trace early human migrations. "I think all of us were surprised by the goodness of the fit," says archaeologist Sean Ulm of James Cook University, Cairns.
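The underlying computation is a least-cost-path problem: lay a cost surface over a gridded landscape (steep climbs and distance from water cost more) and search for the cheapest routes between points. The published model is far richer, but a minimal sketch of the idea, with invented cost weights, might look like this:

```python
import heapq
import numpy as np

# Toy landscape: elevation and distance-to-water on a small grid. The cost
# weights are invented for illustration; the study combined walking effort,
# water availability, and landmark visibility on a continent-scale grid.
rng = np.random.default_rng(0)
elevation = rng.random((40, 40)) * 100.0    # metres
water_dist = rng.random((40, 40)) * 10.0    # km to nearest water

def step_cost(a, b):
    """Cost of moving between neighbouring cells a -> b."""
    climb = max(0.0, elevation[b] - elevation[a])   # uphill is expensive
    thirst = water_dist[b]                          # far from water is expensive
    return 1.0 + 0.1 * climb + 0.5 * thirst

def least_cost(start, goal):
    """Dijkstra over the 4-connected grid; returns the cheapest route cost."""
    rows, cols = elevation.shape
    best = {start: 0.0}
    frontier = [(0.0, start)]
    while frontier:
        cost, cell = heapq.heappop(frontier)
        if cell == goal:
            return cost
        if cost > best.get(cell, float("inf")):
            continue
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols:
                new_cost = cost + step_cost(cell, nxt)
                if new_cost < best.get(nxt, float("inf")):
                    best[nxt] = new_cost
                    heapq.heappush(frontier, (new_cost, nxt))
    return float("inf")

print(f"cheapest corner-to-corner route cost: {least_cost((0, 0), (39, 39)):.1f}")
```

Repeating a search like this between every pair of start and end points on a continent-sized grid is what pushes the route count into the billions and the job onto a supercomputer.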

The map has also highlighted little-studied migration corridors that could yield future archaeological discoveries. For example, some early superhighways sat on coastal lands that are now submerged, giving marine researchers a guide for exploration. Even more intriguing, the authors and others say, are major routes that cut across several arid areas in Australia's center and in the northeastern state of Queensland. Those paths challenge a "long-standing view that the earliest people avoided the deserts," Ulm says. The Queensland highway, in particular, presents "an excellent focus point" for future archaeological surveys, says archaeologist Shimona Kealy of the Australian National University.
The study has been published in the journal Nature Human Behaviour.
Microsoft

Met Office and Microsoft To Build Climate Supercomputer (bbc.com) 27

The Met Office is working with Microsoft to build a weather forecasting supercomputer in the UK. From a report: They say it will provide more accurate weather forecasting and a better understanding of climate change. The UK government said in February 2020 it would invest $1.6bn in the project. It is expected to be one of the top 25 supercomputers in the world when it is up and running in the summer of 2022. Microsoft plans to update it over the next decade as computing improves. "This partnership is an impressive public investment in the basic and applied sciences of weather and climate," said Morgan O'Neill, assistant professor at Stanford University, who is independent of the project. "Such a major investment in a state-of-the-art weather and climate prediction system by the UK is great news globally, and I look forward to the scientific advances that will follow." The Met Office said the technology would increase its understanding of the weather and allow people to better plan activities, prepare for inclement weather and gain a better understanding of climate change.
Social Networks

MyPillow CEO Mike Lindell Is Trying To Launch a Social Media Site, and It's Already Resulted In a Legal Threat (thedailybeast.com) 229

An anonymous reader quotes a report from The Daily Beast: MyPillow founder and staunch Trump ally Mike Lindell plans to launch a social network of his own in the next few weeks, creating a haven for the kind of pro-Trump conspiracy theories that have been banned on more prominent social-media sites. On Lindell's "Vocl" social media platform, users will be free to claim that a supercomputer stole the election from Donald Trump, or that vaccines are a tool of the devil. Any new social media network faces serious challenges. But Vocl must grapple with a daunting problem before it even launches: a website called "Vocal," spelled with an "A," already exists.

On Thursday, lawyers for Vocal's publicly traded parent company, Creatd, Inc., warned Lindell, in a letter reviewed by The Daily Beast, to change his social media network's name and surrender ownership of the Vocl.com domain name. If Lindell refuses to change the name, he could face a lawsuit. While Lindell has promised to turn Vocl into a "cross between Twitter and YouTube," Vocal is a publishing platform similar to Medium where writers can post and monetize articles. "It is clear that you are acting with bad faith and with intent to profit from Creatd's mark," the letter reads, claiming Lindell's Vocl would "tarnish" the Vocal brand. Creatd owns the trademark for using "Vocal" in a number of ways related to social networking, including creating "virtual communities" and "online networking services." Along with surrendering ownership of the Vocl.com domain name, Creatd wants Lindell to destroy any products with Vocl branding and never use the name again. "Creatd is prepared to take all steps necessary to protect Creatd's valuable intellectual property rights, without further notice to you," the letter reads.
On Friday morning, the MyPillow CEO said: "It has nothing to do with their trademark. I haven't even launched yet. But it has nothing to do with us." He claims Vocl is also an acronym that stands for "Victory of Christ's Love."

Early Friday afternoon, Lindell told The Daily Beast: "We looked into it, and we believe it would be confusing, so we are going to announce a different name and URL by Monday."
Japan

Japan's Fugaku Supercomputer Goes Fully Live To Aid COVID-19 Research (japantimes.co.jp) 19

Japan's Fugaku supercomputer, the world's fastest in terms of computing speed, went into full operation this week, earlier than initially scheduled, in the hope that it can be used for research related to the novel coronavirus. From a report: The supercomputer, named after an alternative word for Mount Fuji, became partially operational in April last year to visualize how droplets that could carry the virus spread from the mouth and to help explore possible treatments for COVID-19. "I hope Fugaku will be cherished by the people as it can do what its predecessor K couldn't, including artificial intelligence (applications) and big data analytics," said Hiroshi Matsumoto, president of the Riken research institute that developed the machine, in a ceremony held at the Riken Center for Computational Science in Kobe, where it is installed. Fugaku, which can perform over 442 quadrillion computations per second, was originally scheduled to start operating fully in the fiscal year from April. It will eventually be used in fields such as climate and artificial intelligence applications, and will be used in more than 100 projects, according to state-sponsored Riken.
Transportation

SoftBank Expects Mass Production of Driverless Cars in Two Years (reuters.com) 38

SoftBank Group Chief Executive Masayoshi Son said on Friday he expects mass production of self-driving vehicles to start in two years. From a report: While production won't reach millions of units in the first year, the cost per mile of fully autonomous cars will become very cheap over the next several years, Son said, speaking at a virtual meeting of the World Economic Forum. "The AI is driving for you. The automobile will become a real supercomputer with four wheels." SoftBank has a stake in self-driving car maker Cruise, which is majority owned by General Motors and has been testing self-driving cars in California. It has also funded the autonomous driving business of China's Didi Chuxing.
Science

Simulating 800,000 Years of California Earthquake History To Pinpoint Risks (utexas.edu) 19

aarondubrow shares a report from the Texas Advanced Computing Center: A new study in the Bulletin of the Seismological Society of America presents results from a new earthquake simulator, RSQSim, that simulates hundreds of thousands of years of seismic history in California. Coupled with another code, CyberShake, the framework can calculate the amount of shaking that would occur for each quake. [The framework makes use of two of the most powerful supercomputers on the planet: Frontera, at the Texas Advanced Computing Center, and Summit, at Oak Ridge National Laboratory].

The new approach improves [seismologists'] ability to pinpoint how large an earthquake might occur at a given location, allowing building code developers, architects, and structural engineers to design more resilient buildings that can survive earthquakes at a specific site.

Hardware

Light-Based Quantum Computer Exceeds Fastest Classical Supercomputers (scientificamerican.com) 60

An anonymous reader quotes a report from Scientific American: For the first time, a quantum computer made from photons -- particles of light -- has outperformed even the fastest classical supercomputers. Physicists led by Chao-Yang Lu and Jian-Wei Pan of the University of Science and Technology of China (USTC) in Shanghai performed a technique called Gaussian boson sampling with their quantum computer, named Jiuzhang. The result, reported in the journal Science, was 76 detected photons -- far above and beyond the previous record of five detected photons and the capabilities of classical supercomputers.

Unlike a traditional computer built from silicon processors, Jiuzhang is an elaborate tabletop setup of lasers, mirrors, prisms and photon detectors. It is not a universal computer that could one day send e-mails or store files, but it does demonstrate the potential of quantum computing. Last year, Google captured headlines when its quantum computer Sycamore took roughly three minutes to do what would take a supercomputer three days (or 10,000 years, depending on your estimation method). In their paper, the USTC team estimates that it would take the Sunway TaihuLight, the third most powerful supercomputer in the world, a staggering 2.5 billion years to perform the same calculation as Jiuzhang. [...] This latest demonstration of quantum computing's potential from the USTC group is critical because it differs dramatically from Google's approach. Sycamore uses superconducting loops of metal to form qubits; in Jiuzhang, the photons themselves are the qubits. Independent corroboration that quantum computing principles can lead to primacy even on totally different hardware "gives us confidence that in the long term, eventually, useful quantum simulators and a fault-tolerant quantum computer will become feasible," Lu says.

... [T]he USTC setup is dauntingly complicated. Jiuzhang begins with a laser that is split so it strikes 25 crystals made of potassium titanyl phosphate. After each crystal is hit, it reliably spits out two photons in opposite directions. The photons are then sent through 100 inputs, where they race through a track made of 300 prisms and 75 mirrors. Finally, the photons land in 100 slots where they are detected. Averaging over 200 seconds of runs, the USTC group detected about 43 photons per run. But in one run, they observed 76 photons -- more than enough to justify their quantum primacy claim. It is difficult to estimate just how much time would be needed for a supercomputer to solve a distribution with 76 detected photons -- in large part because it is not exactly feasible to spend 2.5 billion years running a supercomputer to directly check it. Instead, the researchers extrapolate from the time it takes to classically calculate for smaller numbers of detected photons. At best, solving for 50 photons, the researchers claim, would take a supercomputer two days, which is far slower than the 200-second run time of Jiuzhang.
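The extrapolation rests on classical simulation cost growing steeply (roughly exponentially) with the number of detected photons. Taking the two figures quoted above at face value, a few lines of Python back out the growth rate they imply — a sketch of the reasoning, not the USTC team's actual fit:

```python
# Two figures quoted above: ~2 days classically at 50 detected photons,
# ~2.5 billion years at 76. Assuming exponential growth in photon count,
# those two points imply a per-photon cost growth factor.
t50 = 2 * 24 * 3600                    # seconds, 50 photons
t76 = 2.5e9 * 365.25 * 24 * 3600       # seconds, 76 photons

growth = (t76 / t50) ** (1 / (76 - 50))
print(f"implied classical cost growth per extra photon: ~{growth:.2f}x")  # ~2.8x
print(f"classical-to-quantum ratio at 76 photons: ~{t76 / 200:.1e}")      # vs 200 s run
```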

Graphics

Cerebras' Wafer-Size Chip Is 10,000 Times Faster Than a GPU (venturebeat.com) 123

An anonymous reader quotes a report from VentureBeat: Cerebras Systems and the federal Department of Energy's National Energy Technology Laboratory today announced that the company's CS-1 system is more than 10,000 times faster than a graphics processing unit (GPU). On a practical level, this means AI neural networks that previously took months to train can now train in minutes on the Cerebras system.

Cerebras makes the world's largest computer chip, the WSE. Chipmakers normally slice a wafer from a 12-inch-diameter ingot of silicon to process in a chip factory. Once processed, the wafer is sliced into hundreds of separate chips that can be used in electronic hardware. But Cerebras, started by SeaMicro founder Andrew Feldman, takes that wafer and makes a single, massive chip out of it. Each piece of the chip, dubbed a core, is interconnected in a sophisticated way to other cores. The interconnections are designed to keep all the cores functioning at high speeds so the transistors can work together as one. [...] A single Cerebras CS-1 is 26 inches tall, fits in one-third of a rack, and is powered by the industry's only wafer-scale processing engine, Cerebras' WSE. It combines memory performance with massive bandwidth, low latency interprocessor communication, and an architecture optimized for high bandwidth computing.

Cerebras's CS-1 system uses the WSE wafer-size chip, which has 1.2 trillion transistors, the basic on-off electronic switches that are the building blocks of silicon chips. Intel's first 4004 processor in 1971 had 2,300 transistors, and the Nvidia A100 80GB chip, announced yesterday, has 54 billion transistors. Feldman said in an interview with VentureBeat that the CS-1 was also 200 times faster than the Joule Supercomputer, which is No. 82 on a list of the top 500 supercomputers in the world. [...] In this demo, the Joule Supercomputer used 16,384 cores, and the Cerebras computer was 200 times faster, according to energy lab director Brian Anderson. Cerebras costs several million dollars and uses 20 kilowatts of power.
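Those transistor counts also allow a quick Moore's Law sanity check, using nothing but the figures quoted above and rounding the years:

```python
import math

# Transistor counts quoted above.
intel_4004 = 2_300          # 1971
nvidia_a100 = 54e9          # 2020
cerebras_wse = 1.2e12       # wafer-scale engine

print(f"WSE vs A100: ~{cerebras_wse / nvidia_a100:.0f}x more transistors")

doublings = math.log2(cerebras_wse / intel_4004)
years = 2020 - 1971
print(f"~{doublings:.0f} doublings in ~{years} years "
      f"(one every ~{years / doublings:.1f} years)")   # close to Moore's Law pace
```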

Japan

Japan's ARM-Based Supercomputer Leads World In Top500 List; Exascale Expected In 2021 (techtarget.com) 25

dcblogs writes: Japan's Fugaku ARM-based supercomputer is the world's most powerful in the latest Top500 list, setting a world record of 442 petaflops. But this was otherwise an unremarkable year for supercomputers, with a "flattening performance curve," said Jack Dongarra, one of the academics behind the twice-a-year ranking and director of the Innovative Computing Laboratory at the University of Tennessee. This is a result of Moore's Law slowing down as well as a slowdown in the replacement of older systems, he said. But the U.S. is set to deliver an exascale system -- 1,000 petaflops -- next year, and China is expected to do the same. Meanwhile, the EU has a 550 petaflop system in development in Finland. "On the Top500 list, the second-ranked system was IBM Power Systems at nearly 149 petaflops using its Power9 CPUs and Nvidia Tesla GPUs. It is at the Oak Ridge National Lab in Tennessee," adds TechTarget.

"Third place went to Sierra supercomputer, which also uses Power9 and Nvidia GPUs, at about 95 petaflops. It is at Lawrence Livermore National Laboratory in Livermore, Calif."
Medicine

Folding@Home Exascale Supercomputer Finds Potential Targets For COVID-19 Cure (networkworld.com) 38

An anonymous reader quotes a report from Network World: The Folding@home project has shared new results of its efforts to simulate proteins from the SARS-CoV-2 virus to better understand how they function and how to stop them. Folding@home is a distributed computing effort that uses small clients to run simulations for biomedical research when users' PCs are idle. The clients operate independently of each other to perform their own unique simulation and send in the results to the F@h servers. In its SARS-CoV-2 simulations, F@h first targeted the spike, a cone-shaped appendage on the surface of the virus consisting of three proteins. The spike must open to attach itself to a human cell to infiltrate and replicate. F@h's mission was to simulate this opening process to gain unique insight into what the open state looks like and find a way to inhibit the connection between the spike and human cells.
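The distributed-computing pattern described here — pull an independent work unit, simulate it locally while the machine is idle, send the result back — is straightforward to sketch. The server URL and payload fields below are hypothetical placeholders, not Folding@home's real protocol:

```python
import time
import requests

WORK_SERVER = "https://example.org/work"   # hypothetical endpoint, for illustration

def run_client(is_idle, simulate):
    """Minimal work-unit loop in the style of a distributed-computing client.

    is_idle:  callable returning True when the PC is idle
    simulate: callable that runs one work unit and returns its result
    """
    while True:
        if not is_idle():
            time.sleep(60)          # back off while the user needs the machine
            continue
        # Each client pulls its own independent work unit...
        unit = requests.get(f"{WORK_SERVER}/assign").json()
        # ...runs the simulation locally...
        result = simulate(unit)
        # ...and returns it, so many short independent runs can be
        # stitched together into one aggregate dataset on the server side.
        requests.post(f"{WORK_SERVER}/return",
                      json={"id": unit["id"], "result": result})
```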

And it did so. In a newly published paper, the Folding@home team said it was able to simulate an "unprecedented" 0.1 seconds of the viral proteome. They captured dramatic opening of the spike complex, as well as shape-shifting in other proteins that revealed more than 50 "cryptic" pockets that expand targeting options for the design of antivirals. [...] The model derived from the F@h simulations shows that the spike opens up and exposes buried surfaces. These surfaces are necessary for infecting a human cell and can also be targeted with antibodies or antivirals that bind to the surface to neutralize the virus and prevent it from infecting someone.
"And the tech sector played a big role in helping the find," adds the anonymous Slashdot reader. "Microsoft, Nvidia, AMD, Intel, AWS, Oracle, and Cisco all helped with hardware and cloud services. Pure Storage donated a one petabyte all-flash storage array. Linus Tech Tips, a hobbyist YouTube channel for home system builders with 12 million followers, set up a 100TB server to take the load off."
HP

Hewlett Packard Enterprise Will Build a $160 Million Supercomputer in Finland (venturebeat.com) 9

Hewlett Packard Enterprise (HPE) today announced it has been awarded over $160 million to build a supercomputer called LUMI in Finland. LUMI will be funded by the European Joint Undertaking EuroHPC, a joint supercomputing collaboration between national governments and the European Union. From a report: The supercomputer will have a theoretical peak performance of more than 550 petaflops and is expected to best the RIKEN Center for Computational Science's top-performing Fugaku petascale computer, which reached 415.5 petaflops in June 2020.
United Kingdom

Nvidia Pledges To Build Britain's Largest Supercomputer Following $40 Billion Bid For Arm (cnbc.com) 29

U.S. chipmaker Nvidia pledged Monday to build a $52 million supercomputer in Cambridge, England, weeks after announcing it intends to buy British rival Arm for $40 billion. CNBC reports: The supercomputer -- named "Cambridge-1" and intended for artificial intelligence (AI) research in health care -- is being unveiled by Nvidia founder and Chief Executive Jensen Huang at the company's GTC 2020 conference on Monday. "Tackling the world's most pressing challenges in health care requires massively powerful computing resources to harness the capabilities of AI," Huang will say in his keynote. "The Cambridge-1 supercomputer will serve as a hub of innovation for the U.K., and further the groundbreaking work being done by the nation's researchers in critical healthcare and drug discovery."

Expected to launch by the end of the year, the Cambridge-1 machine will be the 29th most powerful computer in the world and the most powerful in Britain, Nvidia said. Researchers at GSK, AstraZeneca, Guy's and St Thomas' NHS (National Health Service) Foundation Trust, King's College London and Oxford Nanopore will be able to use the supercomputer to try to solve medical challenges, including those presented by the coronavirus. Nvidia said Cambridge-1 will have 400 petaflops of "AI performance" and that it will rank in the top three most energy-efficient supercomputers in the world. A petaflop is a measure of a computer's processing speed.

Science

Face Shields Ineffective at Trapping Aerosols, Says Japanese Supercomputer (theguardian.com) 112

Plastic face shields are almost totally ineffective at trapping respiratory aerosols, according to modelling in Japan, casting doubt on their effectiveness in preventing the spread of coronavirus. From a report: A simulation using Fugaku, the world's fastest supercomputer, found that almost 100% of airborne droplets of less than 5 micrometres in size escaped through plastic visors of the kind often used by people working in service industries. One micrometre is one millionth of a metre. In addition, about half of larger droplets measuring 50 micrometres found their way into the air, according to Riken, a government-backed research institute in the western city of Kobe.

This week, senior scientists in Britain criticised the government for stressing the importance of hand-washing while placing insufficient emphasis on aerosol transmission and ventilation, factors that Japanese authorities have outlined in public health advice throughout the pandemic. As some countries have attempted to open up their economies, face shields are becoming a common sight in sectors that emphasise contact with the public, such as shops and beauty salons. Makoto Tsubokura, team leader at Riken's centre for computational science, said the simulation combined air flow with the reproduction of tens of thousands of droplets of different sizes, from under 1 micrometre to several hundred micrometres.
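One way to see why droplet size matters so much is Stokes' law, under which settling speed scales with the square of the droplet radius; tiny aerosols effectively ride the airflow around a visor instead of falling out of it. The Fugaku simulation is a full CFD model, but the basic physics can be sketched with textbook constants (not Riken's parameters):

```python
# Stokes settling velocity: v = 2 * r^2 * (rho_droplet - rho_air) * g / (9 * mu)
g = 9.81            # m/s^2
rho_droplet = 1000  # kg/m^3 (water)
rho_air = 1.2       # kg/m^3
mu_air = 1.8e-5     # Pa*s, dynamic viscosity of air

def settling_velocity(diameter_um):
    r = diameter_um * 1e-6 / 2
    return 2 * r**2 * (rho_droplet - rho_air) * g / (9 * mu_air)

for d in (5, 50):
    v = settling_velocity(d)
    print(f"{d} micrometre droplet: ~{v * 1000:.2f} mm/s, "
          f"~{1.5 / v:.0f} s to fall 1.5 m")
# A 5-micrometre droplet takes roughly half an hour to fall head height,
# so it follows the air currents that slip around the open edges of a shield.
```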

Power

Researchers Use Supercomputer to Design New Molecule That Captures Solar Energy (liu.se) 36

A reader shares some news from Sweden's Linköping University: The Earth receives many times more energy from the sun than we humans can use. This energy is absorbed by solar energy facilities, but one of the challenges of solar energy is to store it efficiently, such that the energy is available when the sun is not shining. This led scientists at Linköping University to investigate the possibility of capturing and storing solar energy in a new molecule.

"Our molecule can take on two different forms: a parent form that can absorb energy from sunlight, and an alternative form in which the structure of the parent form has been changed and become much more energy-rich, while remaining stable. This makes it possible to store the energy in sunlight in the molecule efficiently", says Bo Durbeej, professor of computational physics in the Department of Physics, Chemistry and Biology at LinkÃping University, and leader of the study...

It's common in research that experiments are done first and theoretical work subsequently confirms the experimental results, but in this case the procedure was reversed. Bo Durbeej and his group work in theoretical chemistry, and conduct calculations and simulations of chemical reactions. This involves advanced computer simulations, which are performed on supercomputers at the National Supercomputer Centre, NSC, in Linköping. The calculations showed that the molecule the researchers had developed would undergo the chemical reaction they required, and that it would take place extremely fast, within 200 femtoseconds. Their colleagues at the Research Centre for Natural Sciences in Hungary were then able to build the molecule, and perform experiments that confirmed the theoretical prediction...

"Most chemical reactions start in a condition where a molecule has high energy and subsequently passes to one with a low energy. Here, we do the opposite — a molecule that has low energy becomes one with high energy. We would expect this to be difficult, but we have shown that it is possible for such a reaction to take place both rapidly and efficiently", says Bo Durbeej.

The researchers will now examine how the stored energy can be released from the energy-rich form of the molecule in the best way...

Medicine

A Supercomputer Analyzed COVID-19, and an Interesting New Hypothesis Has Emerged (medium.com) 251

Thelasko shares a report from Medium: Earlier this summer, the Summit supercomputer at Oak Ridge National Lab in Tennessee set about crunching data on more than 40,000 genes from 17,000 genetic samples in an effort to better understand Covid-19. Summit is the second-fastest computer in the world, but the process -- which involved analyzing 2.5 billion genetic combinations -- still took more than a week. When Summit was done, researchers analyzed the results. It was, in the words of Dr. Daniel Jacobson, lead researcher and chief scientist for computational systems biology at Oak Ridge, a 'eureka moment.' The computer had revealed a new theory about how Covid-19 impacts the body: the bradykinin hypothesis. The hypothesis provides a model that explains many aspects of Covid-19, including some of its most bizarre symptoms. It also suggests 10-plus potential treatments, many of which are already FDA approved. Jacobson's group published their results in a paper in the journal eLife in early July.

According to the team's findings, a Covid-19 infection generally begins when the virus enters the body through ACE2 receptors in the nose. (The receptors, which the virus is known to target, are abundant there.) The virus then proceeds through the body, entering cells in other places where ACE2 is also present: the intestines, kidneys, and heart. This likely accounts for at least some of the disease's cardiac and GI symptoms. But once Covid-19 has established itself in the body, things start to get really interesting. According to Jacobson's group, the data Summit analyzed shows that Covid-19 isn't content to simply infect cells that already express lots of ACE2 receptors. Instead, it actively hijacks the body's own systems, tricking it into upregulating ACE2 receptors in places where they're usually expressed at low or medium levels, including the lungs.

The renin-angiotensin system (RAS) controls many aspects of the circulatory system, including the body's levels of a chemical called bradykinin, which normally helps to regulate blood pressure. According to the team's analysis, when the virus tweaks the RAS, it causes the body's mechanisms for regulating bradykinin to go haywire. Bradykinin receptors are resensitized, and the body also stops effectively breaking down bradykinin. (ACE normally degrades bradykinin, but when the virus downregulates it, it can't do this as effectively.) The end result, the researchers say, is to release a bradykinin storm -- a massive, runaway buildup of bradykinin in the body. According to the bradykinin hypothesis, it's this storm that is ultimately responsible for many of Covid-19's deadly effects.
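The evidence behind statements like "upregulates ACE2" and "stops effectively breaking down bradykinin" is differential gene expression: compare how strongly a gene is expressed in infected samples versus controls and look at the fold change. The read counts below are invented purely to show the calculation, not values from the study:

```python
import math

# Invented, normalized read counts (infected vs. control), for illustration only.
expression = {
    "ACE2": {"control": 10.0, "infected": 200.0},   # upregulated in this toy example
    "ACE":  {"control": 120.0, "infected": 15.0},   # downregulated
}

for gene, counts in expression.items():
    log2_fc = math.log2(counts["infected"] / counts["control"])
    direction = "up" if log2_fc > 0 else "down"
    print(f"{gene}: log2 fold change = {log2_fc:+.2f} ({direction}regulated)")
```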
Several drugs target aspects of the RAS and are already FDA approved, including danazol, stanozolol, and ecallantide, which reduce bradykinin production and could potentially stop a deadly bradykinin storm.

Interestingly, the researchers suggest vitamin D as a potentially useful Covid-19 drug. "The vitamin is involved in the RAS system and could prove helpful by reducing levels of another compound, known as REN," the report says. "Again, this could stop potentially deadly bradykinin storms from forming." Other compounds could treat symptoms associated with bradykinin storms, such as Hymecromone and timbetasin.
