Science

Face Shields Ineffective at Trapping Aerosols, Says Japanese Supercomputer (theguardian.com) 112

Plastic face shields are almost totally ineffective at trapping respiratory aerosols, according to modelling in Japan, casting doubt on their effectiveness in preventing the spread of coronavirus. From a report: A simulation using Fugaku, the world's fastest supercomputer, found that almost 100% of airborne droplets of less than 5 micrometres in size escaped through plastic visors of the kind often used by people working in service industries. One micrometre is one millionth of a metre. In addition, about half of larger droplets measuring 50 micrometres found their way into the air, according to Riken, a government-backed research institute in the western city of Kobe.

This week, senior scientists in Britain criticised the government for stressing the importance of hand-washing while placing insufficient emphasis on aerosol transmission and ventilation, factors that Japanese authorities have outlined in public health advice throughout the pandemic. As some countries have attempted to open up their economies, face shields are becoming a common sight in sectors that emphasise contact with the public, such as shops and beauty salons. Makoto Tsubokura, team leader at Riken's centre for computational science, said the simulation combined air flow with the reproduction of tens of thousands of droplets of different sizes, from under 1 micrometre to several hundred micrometres.
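For a rough feel for why droplet size matters so much here, a toy Stokes-number estimate is sketched below in Python. Every parameter is an illustrative assumption, not a value from the Riken study, which resolved the full airflow on Fugaku; small droplets with low Stokes numbers tend to follow the exhaled air around a visor, while large ones travel ballistically into it.

```python
# Toy Stokes-number estimate of whether a droplet follows exhaled airflow
# around a face shield or impacts it. All values are assumptions for
# illustration; the Riken/Fugaku study simulated the actual airflow.

RHO_DROPLET = 1000.0   # kg/m^3, water
MU_AIR = 1.8e-5        # Pa*s, air viscosity
U_EXHALE = 3.0         # m/s, assumed exhalation speed
GAP = 0.03             # m, assumed face-to-visor gap

def stokes_number(diameter_um: float) -> float:
    d = diameter_um * 1e-6
    return RHO_DROPLET * d**2 * U_EXHALE / (18 * MU_AIR * GAP)

for d_um in (1, 5, 50, 300):
    stk = stokes_number(d_um)
    fate = ("follows airflow (likely escapes)" if stk < 0.1
            else "partly impacts shield" if stk < 1
            else "ballistic (likely blocked)")
    print(f"{d_um:>4} um  Stk={stk:8.3f}  {fate}")
```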

Power

Researchers Use Supercomputer to Design New Molecule That Captures Solar Energy (liu.se) 36

A reader shares some news from Sweden's Linköping University: The Earth receives many times more energy from the sun than we humans can use. This energy is absorbed by solar energy facilities, but one of the challenges of solar energy is to store it efficiently, such that the energy is available when the sun is not shining. This led scientists at Linköping University to investigate the possibility of capturing and storing solar energy in a new molecule.

"Our molecule can take on two different forms: a parent form that can absorb energy from sunlight, and an alternative form in which the structure of the parent form has been changed and become much more energy-rich, while remaining stable. This makes it possible to store the energy in sunlight in the molecule efficiently", says Bo Durbeej, professor of computational physics in the Department of Physics, Chemistry and Biology at LinkÃping University, and leader of the study...

It's common in research that experiments are done first and theoretical work subsequently confirms the experimental results, but in this case the procedure was reversed. Bo Durbeej and his group work in theoretical chemistry, and conduct calculations and simulations of chemical reactions. This involves advanced computer simulations, which are performed on supercomputers at the National Supercomputer Centre, NSC, in Linköping. The calculations showed that the molecule the researchers had developed would undergo the chemical reaction they required, and that it would take place extremely fast, within 200 femtoseconds. Their colleagues at the Research Centre for Natural Sciences in Hungary were then able to build the molecule, and perform experiments that confirmed the theoretical prediction...

"Most chemical reactions start in a condition where a molecule has high energy and subsequently passes to one with a low energy. Here, we do the opposite — a molecule that has low energy becomes one with high energy. We would expect this to be difficult, but we have shown that it is possible for such a reaction to take place both rapidly and efficiently", says Bo Durbeej.

The researchers will now examine how the stored energy can be released from the energy-rich form of the molecule in the best way...

Medicine

A Supercomputer Analyzed COVID-19, and an Interesting New Hypothesis Has Emerged (medium.com) 251

Thelasko shares a report from Medium: Earlier this summer, the Summit supercomputer at Oak Ridge National Lab in Tennessee set about crunching data on more than 40,000 genes from 17,000 genetic samples in an effort to better understand Covid-19. Summit is the second-fastest computer in the world, but the process -- which involved analyzing 2.5 billion genetic combinations -- still took more than a week. When Summit was done, researchers analyzed the results. It was, in the words of Dr. Daniel Jacobson, lead researcher and chief scientist for computational systems biology at Oak Ridge, a 'eureka moment.' The computer had revealed a new theory about how Covid-19 impacts the body: the bradykinin hypothesis. The hypothesis provides a model that explains many aspects of Covid-19, including some of its most bizarre symptoms. It also suggests 10-plus potential treatments, many of which are already FDA approved. Jacobson's group published their results in a paper in the journal eLife in early July.
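The study's core computation was an all-pairs analysis across gene expression samples. A minimal sketch of that general kind of co-expression calculation is shown below; the dimensions and threshold are tiny made-up toys, not the study's actual data or pipeline, which ran at a vastly larger scale on Summit.

```python
import numpy as np

# Toy all-pairs gene co-expression analysis (illustrative only).
rng = np.random.default_rng(0)
n_genes, n_samples = 200, 50                    # toy sizes, not the study's
expression = rng.lognormal(size=(n_genes, n_samples))

# Pearson correlation of every gene's expression profile against every other.
corr = np.corrcoef(expression)                  # shape: (n_genes, n_genes)

# Flag strongly co-expressed pairs (threshold chosen arbitrarily here).
i, j = np.triu_indices(n_genes, k=1)
strong = np.abs(corr[i, j]) > 0.5
print(f"{i.size} gene pairs examined, {strong.sum()} above |r| > 0.5")
```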

According to the team's findings, a Covid-19 infection generally begins when the virus enters the body through ACE2 receptors in the nose. (The receptors, which the virus is known to target, are abundant there.) The virus then proceeds through the body, entering cells in other places where ACE2 is also present: the intestines, kidneys, and heart. This likely accounts for at least some of the disease's cardiac and GI symptoms. But once Covid-19 has established itself in the body, things start to get really interesting. According to Jacobson's group, the data Summit analyzed shows that Covid-19 isn't content to simply infect cells that already express lots of ACE2 receptors. Instead, it actively hijacks the body's own systems, tricking it into upregulating ACE2 receptors in places where they're usually expressed at low or medium levels, including the lungs.

The renin-angiotensin system (RAS) controls many aspects of the circulatory system, including the body's levels of a chemical called bradykinin, which normally helps to regulate blood pressure. According to the team's analysis, when the virus tweaks the RAS, it causes the body's mechanisms for regulating bradykinin to go haywire. Bradykinin receptors are resensitized, and the body also stops effectively breaking down bradykinin. (ACE normally degrades bradykinin, but when the virus downregulates it, it can't do this as effectively.) The end result, the researchers say, is to release a bradykinin storm -- a massive, runaway buildup of bradykinin in the body. According to the bradykinin hypothesis, it's this storm that is ultimately responsible for many of Covid-19's deadly effects.
Several drugs target aspects of the RAS and are already FDA approved, including danazol, stanozolol, and ecallantide, which reduce bradykinin production and could potentially stop a deadly bradykinin storm.

Interestingly, the researchers suggest vitamin D as a potentially useful Covid-19 drug. "The vitamin is involved in the RAS system and could prove helpful by reducing levels of another compound, known as REN," the report says. "Again, this could stop potentially deadly bradykinin storms from forming." Other compounds could treat symptoms associated with bradykinin storms, such as Hymecromone and timbetasin.
Supercomputing

ARM Not Just For Macs: Might Make Weather Forecasting Cheaper Too (nag.com) 41

An anonymous reader writes: The fact that Apple is moving away from Intel to ARM has been making a lot of headlines recently — but that's not the only new place where ARM CPUs have been making a splash.

ARM has also been turning heads in High Performance Computing (HPC), and an ARM-based system is now the world's most powerful supercomputer (Fugaku). AWS recently made its 2nd-generation ARM Graviton chips available, allowing everyone to test HPC workloads on ARM silicon. A company called The Numerical Algorithms Group recently published a small benchmark study that compared weather simulations on Intel, AMD and ARM instances on AWS and reported that although the ARM silicon is slowest, it is also the cheapest for this benchmark.

The benchmark test concludes the ARM processor provides "a very cost-efficient solution...and performance is competitive to other, more traditional HPC processors."
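The underlying arithmetic is simple: cost per run is wall-clock time multiplied by the instance's hourly price, so a slower chip can still come out cheapest. The sketch below uses made-up runtimes and prices, not NAG's measured results, purely to show the calculation.

```python
# Rough cost-per-run comparison in the spirit of the NAG write-up.
# Runtimes and hourly prices are invented placeholders, not the study's data.
instances = {
    # name: (wall-clock hours per simulation, on-demand USD per hour)  [assumed]
    "x86 instance A": (1.0, 3.0),
    "x86 instance B": (1.1, 2.5),
    "ARM Graviton2":  (1.3, 1.5),
}

for name, (hours, price) in sorted(instances.items(),
                                   key=lambda kv: kv[1][0] * kv[1][1]):
    print(f"{name:15s}  {hours:.1f} h  ${hours * price:.2f} per run")
```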
Japan

Japan's Longest-Serving PM, Shinzo Abe, Resigns For Health Reasons (apnews.com) 22

Late last night, it was rumored that Japan's longest-serving prime minister, Shinzo Abe, would step down due to his struggle with ulcerative colitis. Abe confirmed the reports this morning, telling reporters that it was "gut wrenching" to leave many of his goals unfinished. He also apologized for stepping down during the pandemic. The Associated Press reports: Abe has had ulcerative colitis since he was a teenager and has said the condition was controlled with treatment. Concerns about his health began this summer and grew this month when he visited a Tokyo hospital two weeks in a row for unspecified health checkups. He is now on a new treatment that requires IV injections, he said. While there is some improvement, there is no guarantee that it will cure his condition and so he decided to step down after treatment Monday, he said.

"It is gut wrenching to have to leave my job before accomplishing my goals," Abe said Friday, mentioning his failure to resolve the issue of Japanese abducted years ago by North Korea, a territorial dispute with Russia and a revision of Japan's war-renouncing constitution. He said his health problem was under control until earlier this year but was found to have worsened in June when he had an annual checkup. "Faced with the illness and treatment, as well as the pain of lacking physical strength ... I decided I should not stay on as prime minister when I'm no longer capable of living up to the people's expectations with confidence," Abe said at a news conference.
Slashdot reader shanen writes: [...] In theory, [Shinzo Abe] was the supreme leader of one of the most important countries in the technological world. In practice, not so much?

At a minimum, the New Akiba is far different from the Akihabara of yore, but maybe it's just a chronological coincidence? They are making quite pretty COVID-19 sneeze pictures with the new Japanese supercomputer. I have to admit that either Abe hasn't accomplished that much or he's pretty bad at tooting his own horn. I would be surprised if anyone could articulate what Abe actually stood for even after all these years in the spotlight.

Perhaps the funny part is that Abe was apparently just clinging to power to set a new endurance record as Prime Minister. He passed the old number one just a few days ago. But looking forward, I'm actually more interested in trigger effects. My current speculation is that Kishida will snag the ring and he's liable to come out much stronger against China. Xi was already annoyed and I am still expecting stock market turmoil in October, but this may make it worse.
Further reading: Japan's Longest-Serving PM, Shinzo Abe, Quits In Bid To 'Escape' Potential Prosecution
Intel

Intel Slips, and a High-Profile Supercomputer Is Delayed (nytimes.com) 77

The chip maker was selected for an Energy Department project meant to show American tech independence. But problems at Intel have thrown a wrench into the effort. From a report: When it selected Intel to help build a $500 million supercomputer last year, the Energy Department bet that computer chips made in the United States could help counter a technology challenge from China. Officials at the department's Argonne National Laboratory predicted that the machine, called Aurora and scheduled to be installed at facilities near Chicago in 2021, would be the first U.S. system to reach a technical pinnacle known as exascale computing. Intel pledged to supply three kinds of chips for the system from its factories in Oregon, Arizona and New Mexico. But a technology delay by the Silicon Valley giant has thrown a wrench into that plan, the latest sign of headwinds facing government and industry efforts to reverse America's dependence on foreign-made semiconductors. It was also an indication of the challenges ahead for U.S. hopes to regain a lead in critical semiconductor manufacturing technology.

Intel, which supplies electronic brains for most personal computers and web services, has long driven miniaturization advances that make electronic devices smaller, faster and cheaper. But Robert Swan, its chief executive, warned last month that the next production advance would be 12 months late and suggested that some chips for Aurora might be made outside Intel factories. Intel's problems make it close to impossible that Aurora will be installed on schedule, researchers and analysts said. And shifting a key component to foreign factories would undermine company and government hopes of an all-American design. "That is part of the story they were trying to sell," said Jack Dongarra, a computer scientist at the University of Tennessee who tracks supercomputer installations around the world. "Now they stumbled."

Mars

Will More Powerful Processors Super-Charge NASA's Mars Rovers? (utexas.edu) 27

The Texas Advanced Computing Center talks to Masahiro (Hiro) Ono, who leads the Robotic Surface Mobility Group at NASA's Jet Propulsion Laboratory, which has led all the Mars rover missions, and who is also one of the researchers who developed the software that allows the current rover to operate: The Perseverance rover, which launched this summer, computes using RAD 750s — radiation-hardened single-board computers manufactured by BAE Systems Electronics. Future missions, however, would potentially use new high-performance, multi-core radiation-hardened processors designed through the High Performance Spaceflight Computing project. (Qualcomm's Snapdragon processor is also being tested for missions.) These chips will provide about one hundred times the computational capacity of current flight processors using the same amount of power. "All of the autonomy that you see on our latest Mars rover is largely human-in-the-loop" — meaning it requires human interaction to operate, according to Chris Mattmann, the deputy chief technology and innovation officer at JPL. "Part of the reason for that is the limits of the processors that are running on them. One of the core missions for these new chips is to do deep learning and machine learning, like we do terrestrially, on board. What are the killer apps given that new computing environment...?"

Training machine learning models on the Maverick2 supercomputer at the Texas Advanced Computing Center (TACC), as well as on Amazon Web Services and JPL clusters, Ono, Mattmann and their team have been developing two novel capabilities for future Mars rovers, which they call Drive-By Science and Energy-Optimal Autonomous Navigation.... "We'd like future rovers to have a human-like ability to see and understand terrain," Ono said. "For rovers, energy is very important. There's no paved highway on Mars. The drivability varies substantially based on the terrain — for instance beach versus bedrock. That is not currently considered. Coming up with a path with all of these constraints is complicated, but that's the level of computation that we can handle with the HPSC or Snapdragon chips. But to do so we're going to need to change the paradigm a little bit."

Ono explains that new paradigm as commanding by policy, a middle ground between the human-dictated: "Go from A to B and do C," and the purely autonomous: "Go do science."

Commanding by policy involves pre-planning for a range of scenarios, and then allowing the rover to determine what conditions it is encountering and what it should do. "We use a supercomputer on the ground, where we have infinite computational resources like those at TACC, to develop a plan where a policy is: if X, then do this; if Y, then do that," Ono explained. "We'll basically make a huge to-do list and send gigabytes of data to the rover, compressing it in huge tables. Then we'll use the increased power of the rover to de-compress the policy and execute it." The pre-planned list is generated using machine learning-derived optimizations. The on-board chip can then use those plans to perform inference: taking the inputs from its environment and plugging them into the pre-trained model. The inference tasks are computationally much easier and can be computed on a chip like those that may accompany future rovers to Mars.
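A minimal sketch of the idea is shown below, with invented states and actions rather than JPL's actual interface: the expensive planning happens on the ground, the resulting table is uplinked, and the rover only performs a cheap lookup on board.

```python
# Sketch of "commanding by policy": ground computes a state -> action table,
# the rover does cheap on-board lookups. States/actions are hypothetical.
from typing import Dict, Tuple

State = Tuple[str, str]        # (terrain class, battery level), discretized
Action = str

def build_policy_on_ground() -> Dict[State, Action]:
    """Stand-in for the expensive supercomputer planning/optimization step."""
    return {
        ("bedrock", "high"): "drive_direct",
        ("bedrock", "low"):  "drive_direct_slow",
        ("sand",    "high"): "detour_north",
        ("sand",    "low"):  "stop_and_recharge",
    }

def rover_step(policy: Dict[State, Action], observed: State) -> Action:
    """Cheap on-board inference: classify conditions, look up the action."""
    return policy.get(observed, "halt_and_ask_ground")  # safe fallback

policy = build_policy_on_ground()            # done on Earth, then uplinked
print(rover_step(policy, ("sand", "low")))   # -> stop_and_recharge
```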

"The rover has the flexibility of changing the plan on board instead of just sticking to a sequence of pre-planned options," Ono said. "This is important in case something bad happens or it finds something interesting...." The efforts to develop a new AI-based paradigm for future autonomous missions can be applied not just to rovers but to any autonomous space mission, from orbiters to fly-bys to interstellar probes, Ono says.

Medicine

Fastest US Supercomputer Enlisted in Fight Against Coronavirus (bloomberg.com) 46

The fastest supercomputer in the U.S. is being put to work in the search for a vaccine against the coronavirus and treatments for those infected by it. From a report: The Summit, housed in the U.S. Energy Department's Oak Ridge National Laboratory in Tennessee, is capable of 200,000 trillion calculations per second. It is being used to analyze health data as part of the Covid-19 Insights Partnership announced Tuesday by the agency as well as the departments of Veterans Affairs and Health and Human Services. "Summit's unmatched capacity to analyze massive integrated datasets and divine insights will help researchers identify and advance potential treatments and enhance outcomes for Covid-19 patients with unprecedented speed," the agencies said in a statement. The Energy Department said earlier this year that its computers were being used to help the Centers for Disease Control and Prevention and the World Health Organization conduct modeling on the virus.
Earth

From Rocks To Icebergs, the Natural World Tends To Break Into Cubes 34

sciencehabit shares a report from Science Magazine: Researchers have found that when everything from icebergs to rocks breaks apart, their pieces tend to resemble cubes. The finding suggests a universal rule of fragmentation at scales ranging from the microscopic to the planetary. The scientists started their study "fragmenting" an abstract cube in a computer simulation by slicing it with 50 two-dimensional planes inserted at random angles. The planes cut the cube into 600,000 fragments, which were, on average, cubic themselves -- meaning that, on average, the fragments had six sides that were quadrangles, although any individual fragment need not be a cube. The result led the researchers to suspect that cubes might be a common feature of fragmentation.

The researchers tried to confirm this hunch using real-world measurements. They headed to an outcrop of the mineral dolomite on the mountain Harmashatarhegy in Budapest, Hungary, and counted the number of vertices in cracks in the stone face. Most of these cracks formed squarish shapes, matching the face of a cube, regardless of whether they had been weathered naturally or had been created by humans dynamiting the mountain. Finally, the team created more-powerful supercomputer simulations modeling the breakup of 3D materials under idealized conditions -- like a rock being pulled equally in all directions. Such cases formed polyhedral pieces that were, in an average sense, cubes.
The researchers reported their findings in Proceedings of the National Academy of Sciences.
Supercomputing

A Volunteer Supercomputer Team is Hunting for Covid Clues (defenseone.com) 91

The world's fastest computer is now part of "a vast supercomputer-powered search for new findings pertaining to the novel coronavirus' spread" and "how to effectively treat and mitigate it," according to an emerging tech journalist at Nextgov.

It's part of a consortium currently facilitating over 65 active research projects, for which "Dozens of national and international members are volunteering free compute time...providing at least 485 petaflops of capacity and steadily growing, to more rapidly generate new solutions against COVID-19."

"What started as a simple concept has grown to span three continents with over 40 supercomputer providers," Dario Gil, director of IBM Research and consortium co-chair, told Nextgov last week. "In the face of a global pandemic like COVID-19, hopefully a once-in-a-lifetime event, the speed at which researchers can drive discovery is a critical factor in the search for a cure and it is essential that we combine forces...."

[I]ts resources have been used to sort through billions of molecules to identify promising compounds that can be manufactured quickly and tested for potency to target the novel coronavirus, produce large data sets to study variations in patient responses, perform airflow simulations on a new device that will allow doctors to use one ventilator to support multiple patients — and more. The complex systems are powering calculations, simulations and results in a matter of days that several scientists have noted would take a matter of months on traditional computers.

The Undersecretary for Science at America's Energy Department said "What's really interesting about this from an organizational point of view is that it's basically a volunteer organization."

The article identifies some of the notable participants:
  • IBM was part of the joint launch with America's Office of Science and Technology Policy and its Energy Department.
  • The chief of NASA's Advanced Supercomputing says they're "making the full reserve portion of NASA supercomputing resources available to researchers working on the COVID-19 response, along with providing our expertise and support to port and run their applications on NASA systems."
  • Amazon Web Services "saw a clear opportunity to bring the benefits of cloud... to bear in the race for treatments and a vaccine," according to a company executive.
  • Japan's Fugaku — "which surpassed leading U.S. machines on the Top 500 list of global supercomputers in late June" — also joined the consortium in June.

Other consortium members:

  • Google Cloud
  • Microsoft
  • Massachusetts Institute of Technology
  • Rensselaer Polytechnic Institute
  • The National Science Foundation
  • Argonne, Lawrence Livermore, Los Alamos, Oak Ridge and Sandia National laboratories.
  • National Center for Atmospheric Research's Wyoming Supercomputing Center
  • AMD
  • NVIDIA
  • Dell Technologies. ("The company is now donating cycles from the Zenith supercomputer and other resources.")

Math

Mathematician Ronald Graham Dies At 84 (ams.org) 14

The American Mathematical Society has announced the passing of Ronald Graham, "one of the principal architects of the rapid development worldwide of discrete mathematics in recent years." He died July 6th at the age of 84. From the report: Graham published more than 350 papers and books with many collaborators, including more than 90 with his wife, Fan Chung, and more than 30 with Paul Erdos. In addition to writing articles with Paul Erdos, Graham had a room in his house reserved for Erdos's frequent visits, he administered the cash prizes that Erdos created for various problems, and he created the Erdos number, which is the collaboration distance between a mathematician and Erdos. He also created Graham's number in a 1971 paper on Ramsey theory written with Bruce Rothschild, which was for a time the largest number used in a proof.

Graham received his PhD from the University of California, Berkeley in 1962 under the direction of D.H. Lehmer. He worked at Bell Laboratories until 1999, starting as director of information sciences and ending his tenure there as chief scientist. Graham then joined the faculty at the University of California, San Diego and later became chief scientist at the California Institute for Telecommunications and Information Technology, a joint operation between the university and the University of California, Irvine. [...] Graham was an AMS member since 1961. For more information, see his "special page," these video interviews by the Simons Foundation, an audio interview about the mathematics of juggling, and his page at the MacTutor website.
Graham's most recent appearance on Slashdot was in 2016, when a trio of researchers used a supercomputer to generate the largest math proof ever at 200 terabytes in size. The math problem was named the Boolean Pythagorean Triples problem and was first proposed back in the 1980s by mathematician Ronald Graham.
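For small n the Boolean Pythagorean Triples question can be checked directly; the naive enumeration below only illustrates the problem statement (can {1..n} be split into two sets with no monochromatic Pythagorean triple?) and is nothing like the SAT-solver machinery behind the 200-terabyte proof.

```python
from itertools import combinations, product

def pythagorean_triples(n):
    """All (a, b, c) with a < b < c <= n and a^2 + b^2 = c^2."""
    return [(a, b, c)
            for a, b in combinations(range(1, n + 1), 2)
            for c in range(b + 1, n + 1)
            if a * a + b * b == c * c]

def two_colorable(n):
    """Brute force: can {1..n} be 2-colored with no monochromatic triple?
    Only numbers that appear in some triple are actually constrained."""
    triples = pythagorean_triples(n)
    members = sorted({x for t in triples for x in t})
    for colors in product((0, 1), repeat=len(members)):
        col = dict(zip(members, colors))
        if all(len({col[a], col[b], col[c]}) == 2 for a, b, c in triples):
            return True
    return False

for n in (10, 20):
    print(n, two_colorable(n))   # True for small n; the 2016 proof showed the
                                 # answer stays True up to 7824 and fails at 7825
```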
Japan

ARM-Based Japanese Supercomputer is Now the Fastest in the World (theverge.com) 72

A Japanese supercomputer has taken the top spot in the biannual Top500 supercomputer speed ranking. Fugaku, a computer in Kobe co-developed by Riken and Fujitsu, makes use of Fujitsu's 48-core A64FX system-on-chip. It's the first time a computer based on ARM processors has topped the list. From a report: Fugaku turned in a Top500 HPL result of 415.5 petaflops, 2.8 times as fast as IBM's Summit, the nearest competitor. Fugaku also attained top spots in other rankings that test computers on different workloads, including Graph 500, HPL-AI, and HPCG. No previous supercomputer has ever led all four rankings at once. While fastest supercomputer rankings normally bounce between American- and Chinese-made systems, this is the first Japanese system to rank first on the Top500 since Fugaku's predecessor, Riken's K computer, did so nine years ago. Overall there are 226 Chinese supercomputers on the list, 114 from America, and 30 from Japan. US-based systems contribute the most aggregate performance with 644 petaflops.
AI

Trillions of Words Analyzed, OpenAI Sets Loose AI Language Colossus (bloomberg.com) 29

Over the past few months, OpenAI has vacuumed an incredible amount of data into its artificial intelligence language systems. It sucked up Wikipedia, a huge swath of the rest of the internet and tons of books. This mass of text -- trillions of words -- was then analyzed and manipulated by a supercomputer to create what the research group bills as a major AI breakthrough and the heart of its first commercial product, which came out on Thursday. From a report: The product name -- OpenAI calls it "the API" -- might not be magical, but the things it can accomplish do seem to border on wizardry at times. The software can perform a broad set of language tasks, including translating between languages, writing news stories and poems and answering everyday questions. Ask it, for example, if you should keep reading a story, and you might be told, "Definitely. The twists and turns keep coming." OpenAI wants to build the most flexible, general purpose AI language system of all time. Typically, companies and researchers will tune their AI systems to handle one, limited task. The API, by contrast, can crank away at a broad set of jobs and, in many cases, at levels comparable with specialized systems.

While the product is in a limited test phase right now, it will be released broadly as something that other companies can use at the heart of their own offerings such as customer support chat systems, education products or games, OpenAI Chief Executive Officer Sam Altman said. [...] The API product builds on years of research in which OpenAI has compiled ever larger text databases with which to feed its AI algorithms and neural networks. At its core, OpenAI API looks over all the examples of language it has seen and then uses those examples to predict, say, what word should come next in a sentence or how best to answer a particular question. "It almost gets to the point where it assimilates all of human knowledge because it has seen everything before," said Eli Chen, CEO of startup Veriph.ai, who tried out an earlier version of OpenAI's product. "Very few other companies would be able to afford what it costs to build this type of huge model."
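As a much simpler stand-in for that predict-the-next-word objective, the toy below counts bigram frequencies and returns the most common continuation. OpenAI's system is a very large transformer, not a bigram counter; this only illustrates the idea of predicting the next word from previously seen text.

```python
from collections import Counter, defaultdict

# Toy next-word predictor trained on a tiny made-up corpus.
corpus = (
    "the twists and turns keep coming . "
    "the twists keep the reader guessing . "
    "the story keeps the twists coming ."
).split()

nxt = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    nxt[w1][w2] += 1                      # count how often w2 follows w1

def predict_next(word: str) -> str:
    """Most frequent continuation of `word` in the training text."""
    return nxt[word].most_common(1)[0][0] if nxt[word] else "<unknown>"

print(predict_next("twists"))   # whichever continuation was seen most often
```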

AI

MIT's Tiny Artificial Brain Chip Could Bring Supercomputer Smarts To Mobile Devices (techcrunch.com) 15

An anonymous reader quotes a report from TechCrunch: Researchers at MIT have published a new paper that describes a new type of artificial brain synapse that offers performance improvements versus other existing versions, and which can be combined in volumes of tens of thousands on a chip that's smaller physically than a single piece of confetti. The results could help create devices that can handle complex AI computing locally, while remaining small and power-efficient, and without having to connect to a data center. The research team created what are known as "memristors" -- essentially simulated brain synapses created using silicon, but also using alloys of silver and copper in their construction. The result was a chip that could effectively "remember" and recall images in very high detail, repeatedly, with much crisper and more detailed "remembered" images than in other types of simulated brain circuits that have come before. What the team wants to ultimately do is recreate large, complex artificial neural networks that are currently based in software that require significant GPU computing power to run -- but as dedicated hardware, so that it can be localized in small devices, including potentially your phone, or a camera.

Unlike traditional transistors, which can switch between only two states (0 or 1) and which form the basis of modern computers, memristors offer a gradient of values, much more like your brain, the original analog computer. They also can "remember" these states so they can easily recreate the same signal for the same received current multiple times over. What the researchers did here was borrow a concept from metallurgy: When metallurgists want to change the properties of a metal, they combine it with another that has that desired property, to create an alloy. Similarly, the researchers here found an element they could combine with the silver they use as the memristor's positive electrode, in order to make it better able to consistently and reliably transfer ions along even a very thin conduction channel. That's what enabled the team to create super small chips that contain tens of thousands of memristors that can nonetheless not only reliably recreate images from "memory," but also perform inference tasks like improving the detail of, or blurring, the original image on command, better than other, previous memristors created by other scientists.
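One reason multi-level analog devices matter for inference: a memristor crossbar can store a weight matrix as conductances and compute a matrix-vector product in a single analog step (Ohm's law plus Kirchhoff's current law). The sketch below uses made-up values and an assumed number of conductance levels, purely to show the principle.

```python
import numpy as np

# Toy memristor-crossbar inference: weights stored as quantized conductances G,
# applying input voltages v yields output currents i = G @ v in one analog step.
rng = np.random.default_rng(1)

weights = rng.uniform(-1, 1, size=(4, 8))        # trained weights (toy)
g_levels = 16                                     # assumed analog resolution
quantized = np.round((weights + 1) / 2 * (g_levels - 1)) / (g_levels - 1) * 2 - 1

v_in = rng.uniform(0, 1, size=8)                 # input "voltages"
i_out_ideal = weights @ v_in                     # exact digital result
i_out_analog = quantized @ v_in                  # what the crossbar computes

print("ideal :", np.round(i_out_ideal, 3))
print("analog:", np.round(i_out_analog, 3))
```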

Earth

Supercomputer Simulates the Impact of the Asteroid That Wiped Out Dinosaurs (zdnet.com) 61

An anonymous reader quotes a report from ZDNet: Some 66 million years ago, an asteroid hit the Earth on the eastern coast of modern Mexico, resulting in up to three quarters of plant and animal species living on the planet going extinct -- including the dinosaurs. Now, a team of researchers equipped with a supercomputer have managed to simulate the entire event, shedding light on the reasons that the impact led to a mass extinction of life. The simulations were carried out by scientists at Imperial College in London, using high performance computing (HPC) facilities provided by Hewlett Packard Enterprise. The research focused on establishing as precise an impact angle and trajectory as possible, which in turn can help determine precisely how the asteroid's hit affected the surrounding environment.

Various impact angles and speeds were considered, and 3D simulations for each were fed into the supercomputer. These simulations were then compared with the geophysical features that have been observed in the 110-mile wide Chicxulub crater, located in Mexico's Yucatan Peninsula, where the impact happened. The simulations that turned out to be the most consistent with the structure of the Chicxulub crater showed an impact angle of about 60 degrees. Such a strike had the strength of about ten billion Hiroshima bombs, and this particular angle meant that rocks and sediments were ejected almost symmetrically. This, in turn, caused a greater amount of climate-changing gases to be released, including billions of tonnes of sulphur that blocked the sun. The rest is history: firestorms, hurricanes, tsunamis and earthquakes rocked the planet, and most species disappeared from the surface of the Earth.
The 60-degree angle constituted "the worst-case scenario for the lethality of the impact" because it maximized the ejection of rock and therefore, the production of gases, the scientists wrote.
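The "ten billion Hiroshima bombs" figure is easy to sanity-check with back-of-the-envelope kinetic energy. The impactor parameters below are rough assumptions, not the values used in the Imperial College simulations.

```python
# Back-of-the-envelope check on "about ten billion Hiroshima bombs".
import math

diameter_m = 12_000          # assumed impactor diameter, ~12 km
density = 2_600              # kg/m^3, assumed rocky composition
velocity = 20_000            # m/s, assumed impact speed

mass = density * (4 / 3) * math.pi * (diameter_m / 2) ** 3
energy_j = 0.5 * mass * velocity ** 2

hiroshima_j = 15e3 * 4.184e9   # ~15 kilotons of TNT, 4.184e9 J per ton
print(f"kinetic energy ~ {energy_j:.2e} J "
      f"~ {energy_j / hiroshima_j:.1e} Hiroshima bombs")
```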

"The researchers carried out almost 300 3D simulations before they were able to reach their conclusions, which was processed by the HPE Apollo 6000 Gen10 supercomputer located at the University of Leicester," adds ZDNet. "The 14,000-cores system, powered by Intel's Skylake chips, is supported by a 6TB server to accommodate large, in-memory calculations."
Security

Supercomputers Breached Across Europe To Mine Cryptocurrency (zdnet.com) 43

An anonymous reader quotes ZDNet: Multiple supercomputers across Europe have been infected this week with cryptocurrency mining malware and have shut down to investigate the intrusions. Security incidents have been reported in the UK, Germany, and Switzerland, while a similar intrusion is rumored to have also happened at a high-performance computing center located in Spain.

Cado Security, a US-based cyber-security firm, said the attackers appear to have gained access to the supercomputer clusters via compromised SSH credentials... Once attackers gained access to a supercomputing node, they appear to have used an exploit for the CVE-2019-15666 vulnerability to gain root access and then deployed an application that mined the Monero cryptocurrency.

AI

NVIDIA Ampere A100 GPU For AI Unveiled, Largest 7nm Chip Ever Produced (hothardware.com) 35

MojoKid writes: NVIDIA CEO Jensen Huang unveiled the company's new Ampere A100 GPU architecture for machine learning and HPC markets today. Jensen claims the 54B transistor A100 is the biggest, most powerful GPU NVIDIA has ever made, and it's also the largest chip ever produced on a 7nm semiconductor process. There are a total of 6,912 FP32 CUDA cores, 432 Tensor cores, and 108 SMs (Streaming Multiprocessors) in the A100, paired with 40GB of HBM2e memory with maximum memory bandwidth of 1.6TB/sec. FP32 compute comes in at a staggering 19.5 TFLOPs, compared to 16.4 TFLOPs for NVIDIA's previous gen Tesla V100. In addition, its Tensor Cores support the new TF32 precision, which allows for a 20x uplift in AI performance gen-over-gen. When it comes to FP64 performance, these Tensor Cores also provide a 2.5x performance boost, versus its predecessor, Volta. Additional features include Multi-Instance GPU, aka MIG, which allows an A100 GPU to be sliced up into up to seven discrete instances, so it can be provisioned for multiple discrete specialized workloads. Multiple A100 GPUs will also make their way into NVIDIA's third-generation DGX AI supercomputer that packs a whopping 5 PFLOPs of AI performance. According to NVIDIA, its Ampere-based A100 GPU and DGX AI systems are already in full production and shipping to customers now. Gamers are of course looking forward to what the company has in store with Ampere for the enthusiast PC market, as expectations for its rumored GeForce RTX 30 family are incredibly high.
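The 19.5 TFLOPs figure follows from simple arithmetic: CUDA cores times two operations per clock (a fused multiply-add) times the boost clock. The 1.41 GHz boost clock is NVIDIA's publicly listed figure and is assumed here.

```python
# Quick sanity check on the A100's 19.5 TFLOPs peak FP32 figure.
cuda_cores = 6912
boost_clock_hz = 1.41e9           # assumed: NVIDIA's listed boost clock
flops = cuda_cores * 2 * boost_clock_hz   # 2 ops per clock via FMA
print(f"{flops / 1e12:.1f} TFLOPs")       # ~19.5
```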
Supercomputing

NVIDIA Is Contributing Its AI Smarts To Help Fight COVID-19 (engadget.com) 12

NVIDIA is contributing its background in AI and in optimizing supercomputer throughput to the COVID-19 High Performance Computing Consortium, which plans to support researchers by giving them time on 30 supercomputers offering a combined 400 petaflops of performance. Engadget reports: NVIDIA will add to this by providing expertise in AI, biology and large-scale computing optimizations. The company likened the Consortium's efforts to the Moon race. Ideally, this will speed up work for scientists who need modelling and other demanding tasks that would otherwise take a long time. NVIDIA has a number of existing contributions to coronavirus research, including the 27,000 GPUs inside the Summit supercomputer and those inside many of the computers from the crowdsourced Folding@Home project. This is still a significant step forward, though, and might prove lifesaving if it leads to a vaccine or more effective containment.
Technology

Scientists Turn To Tech To Prevent Second Wave of Locusts in East Africa (theguardian.com) 37

Scientists monitoring the movements of the worst locust outbreak in Kenya in 70 years are hopeful that with a new tracking program they will be able to prevent a second surge of the crop-ravaging insects. From a report: The UN has described the locust outbreak in the Horn of Africa, and the widespread breeding of the insects in Kenya, Ethiopia and Somalia that has followed, as "extremely alarming." The UN's Food and Agriculture Organization has warned that an imminent second hatch of the insects could threaten the food security of 25 million people across the region as it enters the cropping season. Kenneth Mwangi, a satellite information scientist at the Intergovernmental Authority on Development's climate prediction and applications centre in Nairobi, said researchers were running a supercomputer model to predict breeding areas that may have been missed by ground monitoring. These areas could become sources of new swarms if not sprayed.

"The model will be able to tell us the areas in which hoppers are emerging," said Mwangi. "We will also get ground information. These areas can become a source of an upsurge, or a new generation of hoppers. It becomes very difficult and expensive to control, which is why we are looking to prevent an upsurge. The focus will be on stopping hoppers becoming adults, as that leads to another cycle of infestation. We want to avoid that. We want to advise governments early, before an upsurge happens." So far, the supercomputer, funded by $45 million of UK aid as part of its Weather and Climate Information Services for Africa programme, has successfully forecast the movement of locusts using data such as wind speed and direction, temperature, and humidity. The model has achieved 90% accuracy in forecasting the future locations of the swarms, Mwangi said.

AI

Defeated Chess Champ Garry Kasparov Has Made Peace With AI (wired.com) 106

Last week, Garry Kasparov, perhaps the greatest chess player in history, returned to the scene of his famous IBM supercomputer Deep Blue defeat -- the ballroom of a New York hotel -- for a debate with AI experts organized by the Association for the Advancement of Artificial Intelligence. He met with WIRED senior writer Will Knight there to discuss chess, AI, and a strategy for staying a step ahead of machines. From the report: WIRED: What was it like to return to the venue where you lost to Deep Blue?
Garry Kasparov: I've made my peace with it. At the end of the day, the match was not a curse but a blessing, because I was a part of something very important. Twenty-two years ago, I would have thought differently. But things happen. We all make mistakes. We lose. What's important is how we deal with our mistakes, with negative experience. 1997 was an unpleasant experience, but it helped me understand the future of human-machine collaboration. We thought we were unbeatable, at chess, Go, shogi. All these games, they have been gradually pushed to the side [by increasingly powerful AI programs]. But it doesn't mean that life is over. We have to find out how we can turn it to our advantage. I always say I was the first knowledge worker whose job was threatened by a machine. But that helps me to communicate a message back to the public. Because, you know, nobody can suspect me of being pro-computers.

What message do you want to give people about the impact of AI?
I think it's important that people recognize the element of inevitability. When I hear outcry that AI is rushing in and destroying our lives, that it's so fast, I say no, no, it's too slow. Every technology destroys jobs before creating jobs. When you look at the statistics, only 4 percent of jobs in the US require human creativity. That means 96 percent of jobs, I call them zombie jobs. They're dead, they just don't know it. For several decades we have been training people to act like computers, and now we are complaining that these jobs are in danger. Of course they are. We have to look for opportunities to create jobs that will emphasize our strengths. Technology is the main reason why so many of us are still alive to complain about technology. It's a coin with two sides. I think it's important that, instead of complaining, we look at how we can move forward faster. When these jobs start disappearing, we need new industries, we need to build foundations that will help. Maybe it's universal basic income, but we need to create a financial cushion for those who are left behind. Right now it's a very defensive reaction, whether it comes from the general public or from big CEOs who are looking at AI and saying it can improve the bottom line but it's a black box. I think we're still struggling to understand how AI will fit in.
Further reading: Fast-and-Loose Culture of Esports is Upending Once Staid World of Chess; and Kramnik and AlphaZero: How To Rethink Chess.
