Earth

Supercomputer Re-Creates One of the Most Famous Pictures of Earth

sciencehabit shares a report from Science Magazine: Fifty years ago today, astronauts aboard Apollo 17, NASA's last crewed mission to the Moon, took an iconic photograph of our planet. The image became known as the Blue Marble -- the first fully illuminated picture of Earth, in color, taken by a person. Now, scientists have re-created that image during a test run of a cutting-edge digital climate model. The model can simulate climatic phenomena, such as storms and ocean eddies, at 1-kilometer resolution, as much as 100 times sharper than typical global simulations.

To re-create the swirling winds of the Blue Marble -- including a cyclone over the Indian Ocean -- the researchers fed weather records from 1972 into the supercomputer-powered software. The resulting world captured distinctive features of the region, such as upwelling waters off the coast of Namibia and long, reedlike cloud coverage. Experts say the stunt highlights the growing sophistication of high-resolution climate models. Those are expected to form the core of the European Union's Destination Earth project, which aims to create a 'digital twin' of Earth to better forecast extreme weather and guide preparation plans.
Intel

Intel's Take on the Next Wave of Moore's Law (ieee.org)

The next wave of Moore's Law will rely on a developing concept called system technology co-optimization, Ann B. Kelleher, general manager of technology development at Intel, told IEEE Spectrum in an interview ahead of her plenary talk at the 2022 IEEE Electron Device Meeting. From a report: "Moore's Law is about increasing the integration of functions," says Kelleher. "As we look forward into the next 10 to 20 years, there's a pipeline full of innovation" that will continue the cadence of improved products every two years. That path includes the usual continued improvements in semiconductor processes and design, but system technology co-optimization (STCO) will make the biggest difference. Kelleher calls it an "outside-in" manner of development. It starts with the workload a product needs to support and its software, then works down to system architecture, then what type of silicon must be within a package, and finally down to the semiconductor manufacturing process. "With system technology co-optimization, it means all the pieces are optimized together so that you're getting your best answer for the end product," she says.

STCO is an option now in large part because advanced packaging, such as 3D integration, is allowing the high-bandwidth connection of chiplets -- small, functional chips -- inside a single package. This means that what would once be functions on a single chip can be disaggregated onto dedicated chiplets, which can each then be made using the optimal semiconductor process technology. For example, Kelleher points out in her plenary that high-performance computing demands a large amount of cache memory per processor core, but chipmakers' ability to shrink SRAM is not proceeding at the same pace as the scaling down of logic. So it makes sense to build SRAM caches and compute cores as separate chiplets using different process technology and then stitch them together using 3D integration. A key example of STCO in action, says Kelleher, is the Ponte Vecchio processor at the heart of the Aurora supercomputer. It's composed of 47 active chiplets (as well as 8 blanks for thermal conduction). These are stitched together using both advanced horizontal connections (2.5D packaging tech) and 3D stacking. "It brings together silicon from different fabs and enables them to come together so that the system is able to perform against the workload that it's designed for," she says.

Cloud

Microsoft, Nvidia Partner To Build a Massive AI Supercomputer in the Cloud (zdnet.com)

Nvidia and Microsoft announced Wednesday a multi-year collaboration to build an AI supercomputer in the cloud, adding tens of thousands of Nvidia GPUs to Microsoft Azure. ZDNet: The new agreement makes Azure the first public cloud to incorporate Nvidia's full AI stack -- its GPUs, networking, and AI software. By beefing up Azure's infrastructure with Nvidia's full AI suite, more enterprises will be able to train, deploy, and scale AI -- including large, state-of-the-art models. "AI technology advances as well as industry adoption are accelerating," Manuvir Das, Nvidia's VP of enterprise computing, said in a statement. "The breakthrough of foundation models has triggered a tidal wave of research, fostered new startups, and enabled new enterprise applications."
Intel

Intel Takes on AMD and Nvidia With Mad 'Max' Chips For HPC (theregister.com)

Intel's latest plan to ward off rivals from high-performance computing workloads involves a CPU with large stacks of high-bandwidth memory and new kinds of accelerators, plus its long-awaited datacenter GPU that will go head-to-head against Nvidia's most powerful chips. From a report: After multiple delays, the x86 giant on Wednesday formally introduced the new Xeon CPU family formerly known as Sapphire Rapids HBM and its new datacenter GPU better known as Ponte Vecchio. Now you will know them as the Intel Xeon CPU Max Series and the Intel Data Center GPU Max Series, respectively, which were among the bevy of details shared by Intel today, including performance comparisons. These chips, set to arrive in early 2023 alongside the vanilla 4th generation Xeon Scalable CPUs, have been a source of curiosity within the HPC community for years because they will power the US Department of Energy's long-delayed Aurora supercomputer, which is expected to become the country's second exascale supercomputer and, consequently, one of the world's fastest.

In a briefing with journalists, Jeff McVeigh, the head of Intel's Super Compute Group, said the Max name represents the company's desire to maximize the bandwidth, compute and other capabilities for a wide range of HPC applications, whose primary users include governments, research labs, and corporations. McVeigh did admit that Intel has fumbled in how long it took the company to commercialize these chips, but he tried to spin the blunders into a higher purpose. "We're always going to be pushing the envelope. Sometimes that causes us to maybe not achieve it, but we're doing that in service of helping our developers, helping the ecosystem to help solve [the world's] biggest challenges," he said. [...] The Xeon Max Series will pack up to 56 performance cores, based on the same Golden Cove microarchitecture as the 12th-Gen Core CPUs Intel debuted last year. Like the vanilla Sapphire Rapids chips coming next year, these chips will support DDR5, PCIe 5.0 and Compute Express Link (CXL) 1.1, which will enable memory to be directly attached to the CPU over PCIe 5.0.

Communications

European Observatory NOEMA Reaches Full Capacity With Twelve Antennas (phys.org)

The NOEMA radio telescope, located on the Plateau de Bure in the French Alps, is now equipped with twelve antennas, making it the most powerful radio telescope of its kind in the northern hemisphere. Phys.Org reports: Eight years after the inauguration of the first NOEMA antenna in 2014, the large-scale European project is now complete. Thanks to its twelve 15-meter antennas, which can be moved along a specially developed rail system to separations of up to 1.7 kilometers, NOEMA is a unique instrument for astronomical research. The telescope is equipped with highly sensitive receiving systems that operate close to the quantum limit. During observations, the observatory's twelve antennas act as a single telescope -- a technique called interferometry. After all the antennas have been pointed at the same region of space, the signals they receive are combined with the help of a supercomputer. The combined resolution then corresponds to that of a huge telescope whose diameter is equal to the distance between the outermost antennas.

The respective arrangement of the antennas can extend over distances from a few hundred meters to 1.7 kilometers. The network thus functions like a camera with a variable lens. The further apart the antennas are, the more powerful the zoom: the maximum spatial resolution of NOEMA is so high that it would be able to detect a mobile phone at a distance of over 500 kilometers. NOEMA is one of the few radio observatories worldwide that can simultaneously detect and measure a large number of signatures -- i.e., "fingerprints" of molecules and atoms. Thanks to these so-called multi-line observations, combined with high sensitivity, NOEMA is a unique instrument for investigating the complexity of cold matter in interstellar space as well as the building blocks of the universe. With NOEMA, over 5,000 researchers from all over the world study the composition and dynamics of galaxies as well as the birth and death of stars, comets in our solar system or the environment of black holes. The observatory captures light from cosmic objects that has traveled to Earth for more than 13 billion years.
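As a rough plausibility check on that phone-at-500-km claim (an illustration, not from the article), an interferometer's diffraction-limited resolution is about θ ≈ λ/D, where λ is the observing wavelength and D the longest baseline. A minimal sketch, assuming a 1.3 mm observing wavelength (NOEMA works in the millimeter band; the exact band varies by observation):

```python
import math

# Diffraction limit of an interferometer: theta ~ lambda / D.
wavelength = 1.3e-3   # meters; assumed millimeter-band observing wavelength
baseline = 1.7e3      # meters; maximum antenna separation from the article

theta = wavelength / baseline          # angular resolution in radians
arcsec = math.degrees(theta) * 3600    # same figure in arcseconds

distance = 500e3                       # the article's 500 km example
resolved_size = theta * distance       # smallest resolvable object at that range

print(f"angular resolution: ~{arcsec:.2f} arcsec")
print(f"resolvable size at 500 km: ~{resolved_size:.2f} m")  # ~0.4 m, phone-scale
```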
NOEMA has "observed the most distant known galaxy, which formed shortly after the Big Bang," notes the report. It also "measured the temperature of the cosmic background radiation at a very early stage of the universe, a scientific first that should make it possible to trace the effects of dark energy driving the universe apart."
Transportation

Tesla Now Has 160,000 Customers Running Its Full Self Driving Beta (theverge.com)

One piece of news from Tesla's AI Day presentation on Friday that was overshadowed by the company's humanoid "Optimus" robot and Dojo supercomputer was the progress on Tesla's Full Self Driving software. According to Autopilot director Ashok Elluswamy, "there are now 160,000 customers running the beta software, compared to 2,000 from this time last year," reports The Verge. From the report: In total, Tesla says there have been 35 software releases of FSD. In a Q&A at the end of the presentation, Musk made another prediction -- he's made a few before -- that the technology would be ready for a worldwide rollout by the end of this year but acknowledged the regulatory and testing hurdles that remained before that happens. Afterward, Tesla's tech lead for Autopilot motion planning, Paril Jain, showed how FSD has improved in specific interactions and can make "human-like" decisions. For example, when a Tesla makes a left turn into an intersection, it can choose a trajectory that doesn't make close calls with obstacles like people crossing the street.

Every Tesla on the road can provide data to build the models that FSD uses, and according to Tesla engineering manager Phil Duan, Tesla will now start building and processing detailed 3D structures from that data. Duan said the cars are also improving decision-making in different environmental situations, like night, fog, and rain. Tesla trains the company's AI software on its supercomputer, then feeds the results to customers' vehicles via over-the-air software updates. To do this, it processes video feeds from Tesla's fleet of over 1 million camera-equipped vehicles on the road today and has a simulator built in Unreal Engine that is used to improve Autopilot.

Supercomputing

Tesla Unveils New Dojo Supercomputer So Powerful It Tripped the Power Grid (electrek.co)

An anonymous reader quotes a report from Electrek: Tesla has unveiled the latest version of its Dojo supercomputer and it's apparently so powerful that it tripped the power grid in Palo Alto. Dojo is Tesla's own custom supercomputer platform built from the ground up for AI machine learning and more specifically for video training using the video data coming from its fleet of vehicles. [...] Last year, at Tesla's AI Day, the company unveiled its Dojo supercomputer, but the company was still ramping up its effort at the time. It only had its first chip and training tiles, and it was still working on building a full Dojo cabinet and cluster or "Exapod." Now Tesla has unveiled the progress made with the Dojo program over the last year during its AI Day 2022 last night.

The company confirmed that it managed to go from a chip and tile to now a system tray and a full cabinet. Tesla claims it can replace 6 GPU boxes with a single Dojo tile, which it says costs less than one GPU box. There are 6 of those tiles per tray. Tesla says that a single tray is the equivalent of "3 to 4 fully-loaded supercomputer racks." The company is integrating its host interface directly on the system tray to create a big full host assembly. Tesla can fit two of these system trays with host assemblies into a single Dojo cabinet. That's pretty much where Tesla is right now, as the automaker is still developing and testing the infrastructure needed to put a few cabinets together to create the first "Dojo Exapod."

Bill Chang, Tesla's Principal System Engineer for Dojo, said: "We knew that we had to reexamine every aspect of the data center infrastructure in order to support our unprecedented cooling and power density." They had to develop their own high-powered cooling and power system to power the Dojo cabinets. Chang said that Tesla tripped their local electric grid's substation when testing the infrastructure earlier this year: "Earlier this year, we started load testing our power and cooling infrastructure and we were able to push it over 2 MW before we tripped our substation and got a call from the city." Tesla released the main specs of a Dojo Exapod: 1.1 EFLOP, 1.3 TB SRAM, and 13 TB high-bandwidth DRAM.
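Those building blocks imply a consistent per-tile figure; a back-of-the-envelope sketch (the 10-cabinets-per-Exapod count is an assumption Tesla has mentioned elsewhere, not a number from this report):

```python
# Sanity-checking the reported Dojo building blocks.
# ASSUMPTION: 10 cabinets per Exapod (cited by Tesla elsewhere, not above).
tiles_per_tray = 6
trays_per_cabinet = 2
cabinets_per_exapod = 10  # assumed

tiles = tiles_per_tray * trays_per_cabinet * cabinets_per_exapod  # 120 tiles
exapod_eflops = 1.1       # from the reported Exapod specs

pflops_per_tile = exapod_eflops * 1e3 / tiles  # exaflops -> petaflops: 1e3
print(f"{tiles} tiles per Exapod -> ~{pflops_per_tile:.1f} PFLOPS per tile")
```

Under that assumption, the reported 1.1 EFLOP spec works out to roughly 9 PFLOPS per tile.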

AI

Banned US AI Chips in High Demand at Chinese State Institutes (reuters.com)

High-profile universities and state-run research institutes in China have been relying on a U.S. computing chip to power their artificial intelligence (AI) technology -- a chip whose export to the country Washington has now restricted, a Reuters review showed. From the report: U.S. chip designer Nvidia last week said U.S. government officials have ordered it to stop exporting its A100 and H100 chips to China. Local peer Advanced Micro Devices also said new licence requirements now prevent export to China of its advanced AI chip MI250. The development signalled a major escalation of a U.S. campaign to stymie China's technological capability as tension bubbles over the fate of Taiwan, where chips for Nvidia and almost every other major chip firm are manufactured.

China views Taiwan as a rogue province and has not ruled out force to bring the democratically governed island under its control. Responding to the restrictions, China branded them a futile attempt to impose a technology blockade on a rival. A Reuters review of more than a dozen publicly available government tenders over the past two years indicated that among some of China's most strategically important research institutes, there is high demand - and need - for Nvidia's signature A100 chips. Tsinghua University, China's highest-ranked higher education institution globally, spent over $400,000 last October on two Nvidia AI supercomputers, each powered by four A100 chips, one of the tenders showed. In the same month, the Institute of Computing Technology, part of top research group, the Chinese Academy of Sciences (CAS), spent around $250,000 on A100 chips. The school of artificial intelligence at a CAS university in July this year also spent about $200,000 on high-tech equipment including a server partly powered by A100 chips. In November, the cybersecurity college of Guangdong-based Jinan University spent over $93,000 on an Nvidia AI supercomputer, while its school of intelligent systems science and engineering spent almost $100,000 on eight A100 chips just last month. Less well-known institutes and universities supported by municipal and provincial governments, such as in Shandong, Henan and Chongqing, also bought A100 chips, the tenders showed.

Science

Can We Make Computer Chips Act More Like Brain Cells? (scientificamerican.com)

Long-time Slashdot reader swell shared Scientific American's report on the quest for neuromorphic chips: The human brain is an amazing computing machine. Weighing only three pounds or so, it can process information a thousand times faster than the fastest supercomputer, store a thousand times more information than a powerful laptop, and do it all using no more energy than a 20-watt lightbulb. Researchers are trying to replicate this success using soft, flexible organic materials that can operate like biological neurons and someday might even be able to interconnect with them. Eventually, soft "neuromorphic" computer chips could be implanted directly into the brain, allowing people to control an artificial arm or a computer monitor simply by thinking about it.

Like real neurons — but unlike conventional computer chips — these new devices can send and receive both chemical and electrical signals. "Your brain works with chemicals, with neurotransmitters like dopamine and serotonin. Our materials are able to interact electrochemically with them," says Alberto Salleo, a materials scientist at Stanford University who wrote about the potential for organic neuromorphic devices in the 2021 Annual Review of Materials Research. Salleo and other researchers have created electronic devices using these soft organic materials that can act like transistors (which amplify and switch electrical signals) and memory cells (which store information) and other basic electronic components.

The work grows out of an increasing interest in neuromorphic computer circuits that mimic how human neural connections, or synapses, work. These circuits, whether made of silicon, metal or organic materials, work less like those in digital computers and more like the networks of neurons in the human brain.... An individual neuron receives signals from many other neurons, and all these signals together add up to affect the electrical state of the receiving neuron. In effect, each neuron serves as both a calculating device — integrating the value of all the signals it has received — and a memory device: storing the value of all of those combined signals as an infinitely variable analog value, rather than the zero-or-one of digital computers.
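That integrate-and-remember behavior is commonly abstracted as a leaky integrate-and-fire neuron. A minimal sketch of the general model (an illustration, not a simulation of any device from the article):

```python
import numpy as np

# Leaky integrate-and-fire neuron: incoming current charges a membrane
# potential that slowly leaks away; the stored analog value is the neuron's
# "memory," and crossing a threshold emits a spike (the "calculation").
def simulate(input_current, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
    v, spike_times, trace = 0.0, [], []
    for step, i_in in enumerate(input_current):
        v += (dt / tau) * (-v + i_in)  # leaky integration of the input
        if v >= v_thresh:              # threshold crossed: fire and reset
            spike_times.append(step * dt)
            v = v_reset
        trace.append(v)
    return np.array(trace), spike_times

current = np.concatenate([np.full(200, 1.5), np.zeros(100)])  # step stimulus
trace, spikes = simulate(current)
print(f"{len(spikes)} spikes; potential decays to {trace[-1]:.3f} after input stops")
```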

Intel

Why Stacking Chips Like Pancakes Could Mean a Huge Leap for Laptops (cnet.com)

For decades, you could test a computer chip's mettle by how small and tightly packed its electronic circuitry was. Now Intel believes another dimension is as big a deal: how artfully a group of such chips can be packaged into a single, more powerful processor. From a report: At the Hot Chips conference Monday, Intel Chief Executive Pat Gelsinger will shine a spotlight on the company's packaging prowess. It's a crucial element to two new processors: Meteor Lake, a next-generation Core processor family member that'll power PCs in 2023, and Ponte Vecchio, the brains of what's expected to be the world's fastest supercomputer, Aurora.

"Meteor Lake will be a huge technical innovation," thanks to how it packages, said Real World Tech analyst David Kanter. For decades, staying on the cutting edge of chip progress meant miniaturizing chip circuitry. Chipmakers make that circuitry with a process called photolithography, using patterns of light to etch tiny on-off switches called transistors onto silicon wafers. The smaller the transistors, the more designers can add for new features like accelerators for graphics or artificial intelligence chores. Now Intel believes building these chiplets into a package will bring the same processing power boost as the traditional photolithography technique.

Google

Google's Quantum Supremacy Challenged By Ordinary Computers, For Now (newscientist.com)

Google has been challenged by an algorithm that could solve a problem faster than its Sycamore quantum computer, which it used in 2019 to claim the first example of "quantum supremacy" -- the point at which a quantum computer can complete a task that would be impossible for ordinary computers. Google concedes that its 2019 record won't stand, but says that quantum computers will win out in the end. From a report: Sycamore achieved quantum supremacy in a task that involves verifying that a sample of numbers output by a quantum circuit have a truly random distribution, which it was able to complete in 3 minutes and 20 seconds. The Google team said that even the world's most powerful supercomputer at the time, IBM's Summit, would take 10,000 years to achieve the same result. Now, Pan Zhang at the Chinese Academy of Sciences in Beijing and his colleagues have created an improved algorithm for a non-quantum computer that can solve the random sampling problem much faster, challenging Google's claim that a quantum computer is the only practical way to do it. The researchers found that they could skip some of the calculations without affecting the final output, which dramatically reduces the computational requirements compared with the previous best algorithms. The researchers ran their algorithm on a cluster of 512 GPUs, completing the task in around 15 hours. While this is significantly longer than Sycamore, they say it shows that a classical computer approach remains practical.
Supercomputing

Are the World's Most Powerful Supercomputers Operating In Secret? (msn.com)

"A new supercomputer called Frontier has been widely touted as the world's first exascale machine — but was it really?"

That's the question that long-time Slashdot reader MattSparkes explores in a new article at New Scientist... Although Frontier, which was built by the Oak Ridge National Laboratory in Tennessee, topped what is generally seen as the definitive list of supercomputers, others may already have achieved the milestone in secret....

The definitive list of supercomputers is the Top500, which is based on a single measurement: how fast a machine can solve vast numbers of equations by running software called the LINPACK benchmark. This gives a value in floating-point operations per second, or FLOPS. But even Jack Dongarra at Top500 admits that not all supercomputers are listed; a machine will only feature if its owner runs the benchmark and submits a result. "If they don't send it in it doesn't get entered," he says. "I can't force them."
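To make the measurement concrete: the benchmark times a large dense linear solve and divides a conventional operation count, (2/3)n^3 + 2n^2, by the elapsed time. A desktop-scale sketch of the same idea (illustrative; the real HPL code is a distributed-memory implementation):

```python
import time
import numpy as np

# Miniature LINPACK-style measurement: time a dense solve of Ax = b and
# convert to FLOPS using HPL's conventional operation count.
n = 4096
rng = np.random.default_rng(0)
A = rng.random((n, n))
b = rng.random(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)            # LU factorization plus triangular solves
elapsed = time.perf_counter() - start

flops = (2 / 3) * n**3 + 2 * n**2    # standard FLOP count for the solve
print(f"~{flops / elapsed / 1e9:.1f} GFLOPS on this machine")
```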

Some owners prefer not to release a benchmark figure, or even publicly reveal a machine's existence. Simon McIntosh-Smith at the University of Bristol, UK points out that not only do intelligence agencies and certain companies have an incentive to keep their machines secret, but some purely academic machines like Blue Waters, operated by the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, are also just never entered.... Dongarra says that the consensus among supercomputer experts is that China has had at least two exascale machines running since 2021, known as OceanLight and Tianhe-3, and is working on an even larger third called Sugon. Scientific papers on unconnected research have revealed evidence of these machines when describing calculations carried out on them.

McIntosh-Smith also believes that intelligence agencies would rank well, if allowed. "Certainly in the [US], some of the security forces have things that would put them at the top," he says. "There are definitely groups who obviously wouldn't want this on the list."

United States

US Retakes First Place From Japan on Top500 Supercomputer Ranking (engadget.com)

The United States is on top of the supercomputing world in the Top500 ranking of the most powerful systems. From a report: The Frontier system from Oak Ridge National Laboratory (ORNL) running on AMD EPYC CPUs took first place from last year's champ, Japan's Arm-based A64FX-powered Fugaku system. It's still in the integration and testing process at the ORNL in Tennessee, but will eventually be operated by the US Air Force and US Department of Energy. Frontier, powered by Hewlett Packard Enterprise's (HPE) Cray EX platform, was the top machine by a wide margin, too. It's the first (known) true exascale system, hitting a peak 1.1 exaflops on the LINPACK benchmark. Fugaku, meanwhile, managed less than half that at 442 petaflops, which was still enough to keep it in first place for the previous two years. Frontier was also the most efficient supercomputer. Delivering 52.23 gigaflops per watt, it beat out Japan's MN-3 system to grab first place on the Green500 list. "The fact that the world's fastest machine is also the most energy efficient is just simply amazing," ORNL lab director Thomas Zacharia said at a press conference.
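Those two headline figures together imply Frontier's power draw during the benchmark run; a quick check using the article's rounded numbers:

```python
# Power implied by the Top500 score and the Green500 efficiency figure.
linpack_eflops = 1.1      # Frontier's reported LINPACK result
gflops_per_watt = 52.23   # Frontier's reported Green500 efficiency

watts = linpack_eflops * 1e9 / gflops_per_watt  # exaflops -> gigaflops: 1e9
print(f"implied draw: ~{watts / 1e6:.1f} MW during the run")  # ~21 MW
```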
Supercomputing

Russia Cobbles Together Supercomputing Platform To Wean Off Foreign Suppliers (theregister.com)

Russia is adapting to a world where it no longer has access to many technologies abroad with the development of a new supercomputer platform that can use foreign x86 processors such as Intel's in combination with the country's homegrown Elbrus processors. The Register reports: The new supercomputer reference system, dubbed "RSK Tornado," was developed on behalf of the Russian government by HPC system integrator RSC Group, according to an English translation of a Russian-language press release published March 30. RSC said it created RSK Tornado as a "unified interoperable" platform to "accelerate the pace of import substitution" for HPC systems, data processing centers and data storage systems in Russia. In other words, the HPC system architecture is meant to help Russia quickly adjust to the fact that major chip companies such as Intel, AMD and TSMC -- plus several other technology vendors, like Dell and Lenovo -- have suspended product shipments to the country as a result of sanctions by the US and other countries in reaction to Russia's invasion of Ukraine.

RSK Tornado supports up to 104 servers in a rack, with the idea being to support foreign x86 processors (should they become available) as well as Russia's Elbrus processors, which debuted in 2015. The hope appears to be the ability for Russian developers to port HPC, AI and big data applications from x86 architectures to the Elbrus architecture, which, in theory, will make it easier for Russia to rely on its own supply chain and better cope with continued sanctions from abroad. RSK Tornado systems software is RSC proprietary and is currently used to orchestrate supercomputer resources at the Interdepartmental Supercomputer Center of the Russian Academy of Sciences, St Petersburg Polytechnic University and the Joint Institute for Nuclear Research. RSC claims to have also developed its own liquid-cooling system for supercomputers and data storage systems, the latter of which can use Elbrus CPUs too.

Supercomputing

'Quantum Computing Has a Hype Problem' (technologyreview.com)

"A reputed expert in the quantum computing field puts it in black and white: as of today, quantum computing is a paper tiger, and nobody knows when (if ever) it will become commercially practical," writes Slashdot reader OneHundredAndTen. "In the meantime, the hype continues."

In an opinion piece for MIT Technology Review, Sankar Das Sarma, a "pro-quantum-computing" physicist who has "published more than 100 technical papers on the subject," says he's disturbed by some of the quantum computing hype he sees today, "particularly when it comes to claims about how it will be commercialized." Here's an excerpt from his article: Established applications for quantum computers do exist. The best known is Peter Shor's 1994 theoretical demonstration that a quantum computer can solve the hard problem of finding the prime factors of large numbers exponentially faster than all classical schemes. Prime factorization is at the heart of breaking the universally used RSA-based cryptography, so Shor's factorization scheme immediately attracted the attention of national governments everywhere, leading to considerable quantum-computing research funding. The only problem? Actually making a quantum computer that could do it. That depends on implementing an idea pioneered by Shor and others called quantum-error correction, a process to compensate for the fact that quantum states disappear quickly because of environmental noise (a phenomenon called "decoherence"). In 1994, scientists thought that such error correction would be easy because physics allows it. But in practice, it is extremely difficult.

The most advanced quantum computers today have dozens of decohering (or "noisy") physical qubits. Building a quantum computer that could crack RSA codes out of such components would require many millions if not billions of qubits. Only tens of thousands of these would be used for computation -- so-called logical qubits; the rest would be needed for error correction, compensating for decoherence. The qubit systems we have today are a tremendous scientific achievement, but they take us no closer to having a quantum computer that can solve a problem that anybody cares about. It is akin to trying to make today's best smartphones using vacuum tubes from the early 1900s. You can put 100 tubes together and establish the principle that if you could somehow get 10 billion of them to work together in a coherent, seamless manner, you could achieve all kinds of miracles. What, however, is missing is the breakthrough of integrated circuits and CPUs leading to smartphones -- it took 60 years of very difficult engineering to go from the invention of transistors to the smartphone with no new physics involved in the process.
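A sketch of the overhead arithmetic behind those numbers. The physical-per-logical ratio is an assumption on my part -- roughly 1,000 is a commonly cited surface-code overhead, and the true figure depends on hardware error rates:

```python
# Error-correction overhead, made concrete. ASSUMPTION: ~1,000 physical
# qubits per logical qubit (a commonly cited surface-code estimate).
logical_qubits = 20_000        # "tens of thousands ... used for computation"
physical_per_logical = 1_000   # assumed overhead for error correction

physical_needed = logical_qubits * physical_per_logical
todays_qubits = 100            # today's machines: dozens of noisy qubits
print(f"~{physical_needed:,} physical qubits needed, "
      f"~{physical_needed // todays_qubits:,}x today's largest devices")
```

Even under this conservative assumption the requirement lands in the tens of millions of physical qubits, consistent with the article's "many millions if not billions."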

China

How China Built an Exascale Supercomputer Out of Old 14nm Tech (nextplatform.com)

Slashdot reader katydid77 shares a report from the supercomputing site The Next Platform: If you need any proof that it doesn't take the most advanced chip manufacturing processes to create an exascale-class supercomputer, you need look no further than the Sunway "OceanLight" system housed at the National Supercomputing Center in Wuxi, China. Some of the architectural details of the OceanLight supercomputer came to our attention as part of a paper published by Alibaba Group, Tsinghua University, DAMO Academy, Zhejiang Lab, and Beijing Academy of Artificial Intelligence, which is running a pretrained machine learning model called BaGuaLu across more than 37 million cores and 14.5 trillion parameters (presumably with FP32 single precision), and has the capability to scale to 174 trillion parameters (approaching what is called "brain scale," where the number of parameters starts to approach the number of synapses in the human brain)....

Add it all up, and the 105 cabinet system tested on the BaGuaLu training model, with its 107,250 SW26010-Pro processors, had a peak theoretical performance of 1.51 exaflops. We like base 2 numbers and think that the OceanLight system probably scales to 160 cabinets, which would be 163,840 nodes and just under 2.3 exaflops of peak FP64 and FP32 performance. If it is only 120 cabinets (also a base 2 number), OceanLight will come in at 1.72 exaflops peak. But these rack scales are, once again, just hunches. If the 160 cabinet scale is the maximum for OceanLight, then China could best the performance of the 1.5 exaflops "Frontier" supercomputer being tuned up at Oak Ridge National Laboratories today and also extend beyond the peak theoretical performance of the 2 exaflops "Aurora" supercomputer coming to Argonne National Laboratory later this year — and maybe even further than the "El Capitan" supercomputer going into Lawrence Livermore National Laboratory in 2023 and expected to be around 2.2 exaflops to 2.3 exaflops according to the scuttlebutt.
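The article's scaling estimates follow directly from the reported numbers; a short sketch (the 1,024 nodes-per-cabinet figure is implied by the 160-cabinet, 163,840-node guess):

```python
# Reproducing The Next Platform's scaling hunches for OceanLight.
tested_nodes = 107_250    # SW26010-Pro chips in the 105-cabinet test
tested_eflops = 1.51      # peak theoretical performance of that configuration

tflops_per_node = tested_eflops * 1e6 / tested_nodes  # ~14.1 TFLOPS per chip

nodes_per_cabinet = 1024  # implied by the 160-cabinet, 163,840-node estimate
for cabinets in (120, 160):
    nodes = cabinets * nodes_per_cabinet
    peak = nodes * tflops_per_node / 1e6
    print(f"{cabinets} cabinets: {nodes:,} nodes, ~{peak:.2f} exaflops peak")
# Matches the article's ~1.72 and ~2.3 exaflops figures to within rounding.
```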

We would love to see the thermals and costs of OceanLight. The SW26010-Pro chip could burn very hot, to be sure, and run up the electric bill for power and cooling, but if SMIC [China's largest foundry] can get good yield on 14 nanometer processes, the chip could be a lot less expensive to make than, say, a massive GPU accelerator from Nvidia, AMD, or Intel. (It's hard to say.) Regardless, having indigenous parts matters more than power efficiency for China right now, and into its future, and we said as much last summer when contemplating China's long road to IT independence. Imagine what China can do with a shrink to 7 nanometer processes when SMIC delivers them — apparently not even using extreme ultraviolet (EUV) light — many years hence....

The bottom line is that the National Research Center of Parallel Computer Engineering and Technology (known as NRCPC), working with SMIC, has had an exascale machine in the field for a year already. (There are two, in fact.) Can the United States say that right now? No it can't.

AI

How AI Can Make Weather Forecasting Better and Cheaper (bloomberg.com)

An anonymous reader quotes a report from Bloomberg: In early February a black box crammed with computer processors took a flight from California to Uganda. The squat, 4-foot-high box resembled a giant stereo amp. Once settled into place in Kampala, its job was to predict the weather better than anything the nation had used before. The California startup that shipped the device, Atmo AI, plans by this summer to swap it out for a grander invention: a sleek, metallic supercomputer standing 8 feet tall and packing in 20 times more power. "It's meant to be the iPhone of global meteorology," says Alexander Levy, Atmo's co-founder and chief executive officer. That's a nod to Apple's design cred and market strategy: In many countries, consumers who'd never owned desktop computers bought smartphones in droves. Similarly, Atmo says, countries without the pricey supercomputers and data centers needed to make state-of-the-art weather forecasts -- effectively, every nation that's not a global superpower -- will pay for its cheaper device instead.

For its first customer, though, the Uganda National Meteorological Authority (UNMA), Atmo is sending its beta version, the plain black box. Prizing function over form seems wise for the urgent problem at hand. In recent years, Uganda has had landslides, floods, and a Biblical plague of locusts that devastated farms. The locusts came after sporadic drought and rain, stunning officials who didn't anticipate the swarms. "It became an eye-opener for us," says David Elweru, UNMA's acting executive director. Many nations facing such ravages lack the most modern tools to plan for the changing climate. Atmo says artificial intelligence programs are the answer. "Response begins with predictions," Levy says. "If we expect countries to react to events only after they've happened, we're dooming people to disaster and suffering." It's a novel approach. Meteorology poses considerable challenges for AI systems, and only a few weather authorities have experimented with it. Most countries haven't had the resources to try.

Ugandan officials signed a multi-year deal with Atmo but declined to share the terms. The UNMA picked the startup partly because its device was "way, way cheaper" than alternatives, according to Stephen Kaboyo, an investor advising Atmo in Uganda. Kaboyo spoke by phone in February, Kampala's dry season, as rain pelted the city. "We haven't seen this before," he said of the weather. "Who knows what is going to happen in the next three seasons?" [...] Atmo reports that its early tests have doubled the accuracy scores of baseline forecasts in Southeast Asia, where the startup is pursuing contracts. Initial tests on the ground in Uganda correctly predicted rainfall when other systems didn't, according to UNMA officials.

Supercomputing

Can Russia Bootstrap High-Performance Computing Clusters with Native Tech? (theregister.com)

"The events of recent days have taken us away from the stable and predictable development of mankind," argue two Moscow-based technology professors in Communications of the ACM, citing anticipated shortages of high-performance processors. But fortunately, they have a partial workarond...

One of the professors — Andrei Sukhov of HSE University in Moscow — explained their idea to the Register: In a timely piece Sukhov explains how Russian computer science teams are looking at building the next generation of clusters using older clustering technologies and a slew of open-source software for managing everything from code portability to parallelization as well as standards including PCIe 3.0, USB 4, and even existing Russian knock-off buses inspired by Infiniband (Angara ES8430).... While all the pieces might be in place, there is still the need to manufacture new boards, a problem Sukhov said can be routed around by using wireless protocols as the switching mechanism between processors, even though the network latency hit will be subpar, making it difficult to do any true tightly coupled, low-latency HPC simulations (which come in handy in areas like nuclear weapons simulations, as just one example).

"Given that the available mobile systems-on-chip are on the order of 100 Gflops, performance of several teraflops for small clusters of high-performance systems-on-chip is quite achievable," Sukhov added. "The use of standard open operating systems, such as Linux, will greatly facilitate the use of custom applications and allow such systems to run in the near future. It is possible that such clusters can be heterogeneous, including different systems-on-chip for different tasks (or, for example, FPGAs to create specialized on-the-fly configurable accelerators for specific tasks)...."

As he told The Register in a short exchange following the article, "Naturally, it will be impossible to make a new supercomputer in Russia in the coming years. Nevertheless, it is quite possible to close all the current needs in computing and data processing using the approach we have proposed. Especially if we apply hardware acceleration to tasks, depending on their type," he adds.... "During this implementation, software solutions and new protocols for data exchange, as well as computing technologies, will be worked out."

As for Russia's existing supercomputers, "no special problems are foreseen," Sukhov added. "These supercomputers are based on Linux and can continue to operate without the support of the companies that supplied the hardware and software."

Thanks to Slashdot reader katydid77 for sharing the article.
Technology

Climate Scientists Encounter Limits of Computer Models, Bedeviling Policy (wsj.com)

magzteel shares a report: For almost five years, an international consortium of scientists was chasing clouds, determined to solve a problem that bedeviled climate-change forecasts for a generation: How do these wisps of water vapor affect global warming? They reworked 2.1 million lines of supercomputer code used to explore the future of climate change, adding more-intricate equations for clouds and hundreds of other improvements. They tested the equations, debugged them and tested again. The scientists would find that even the best tools at hand can't model climates with the sureness the world needs as rising temperatures impact almost every region. When they ran the updated simulation in 2018, the conclusion jolted them: Earth's atmosphere was much more sensitive to greenhouse gases than decades of previous models had predicted, and future temperatures could be much higher than feared -- perhaps even beyond hope of practical remedy. "We thought this was really strange," said Gokhan Danabasoglu, chief scientist for the climate-model project at the Mesa Laboratory of the National Center for Atmospheric Research, or NCAR, in Boulder. "If that number was correct, that was really bad news." At least 20 older, simpler global-climate models disagreed with the new one at NCAR, an open-source model called the Community Earth System Model 2, or CESM2, funded mainly by the U.S. National Science Foundation and arguably the world's most influential climate program. Then, one by one, a dozen climate-modeling groups around the world produced similar forecasts. "It was not just us," Dr. Danabasoglu said.

The scientists soon concluded their new calculations had been thrown off kilter by the physics of clouds in a warming world, which may amplify or damp climate change. "The old way is just wrong, we know that," said Andrew Gettelman, a physicist at NCAR who specializes in clouds and helped develop the CESM2 model. "I think our higher sensitivity is wrong too. It's probably a consequence of other things we did by making clouds better and more realistic. You solve one problem and create another." Since then the CESM2 scientists have been reworking their climate-change algorithms using a deluge of new information about the effects of rising temperatures to better understand the physics at work. They have abandoned their most extreme calculations of climate sensitivity, but their more recent projections of future global warming are still dire -- and still in flux. As world leaders consider how to limit greenhouse gases, they depend heavily on what computer climate models predict. But as algorithms and the computers they run on become more powerful -- able to crunch far more data and do better simulations -- that very complexity has left climate scientists grappling with mismatches among competing computer models.

The Media

Are TED Talks Just Propaganda For the Technocracy? (thedriftmag.com)

"People are still paying between $5,000 and $50,000 to attend the annual flagship TED conference. In 2021," notes The Drift magazine, noting last year's event was held in Monterey, California. "Amid wildfires and the Delta surge, its theme was 'the case for optimism.'"

The magazine makes the case that over the last decade TED talks have been "endlessly re-articulating tech's promises without any serious critical reflection." And they start with how Bill Gates told an audience in 2015 that "we can be ready for the next epidemic." Gates's popular and well-shared TED talk — viewed millions of times — didn't alter the course of history. Neither did any of the other "ideas worth spreading" (the organization's tagline) presented at the TED conference that year — including Monica Lewinsky's massively viral speech about how to stop online bullying through compassion and empathy, or a Google engineer's talk about how driverless cars would make roads smarter and safer in the near future. In fact, seven years after TED 2015, it feels like we are living in a reality that is the exact opposite of the future envisioned that year.....

At the start of the pandemic, I noticed people sharing Gates's 2015 talk. The general sentiment was one of remorse and lamentation: the tech-prophet had predicted the future for us! If only we had heeded his warning! I wasn't so sure. It seems to me that Gates's prediction and proposed solution are at least part of what landed us here. I don't mean to suggest that Gates's TED talk is somehow directly responsible for the lack of global preparedness for Covid. But it embodies a certain story about "the future" that TED talks have been telling for the past two decades — one that has contributed to our unending present crisis.

The story goes like this: there are problems in the world that make the future a scary prospect. Fortunately, though, there are solutions to each of these problems, and the solutions have been formulated by extremely smart, tech-adjacent people. For their ideas to become realities, they merely need to be articulated and spread as widely as possible. And the best way to spread ideas is through stories.... In other words, in the TED episteme, the function of a story isn't to transform via metaphor or indirection, but to actually manifest a new world. Stories about the future create the future. Or as Chris Anderson, TED's longtime curator, puts it, "We live in an era where the best way to make a dent on the world... may be simply to stand up and say something." And yet, TED's archive is a graveyard of ideas. It is a seemingly endless index of stories about the future — the future of science, the future of the environment, the future of work, the future of love and sex, the future of what it means to be human — that never materialized. By this measure alone, TED, and its attendant ways of thinking, should have been abandoned.

But the article also notes that TED's philosophy became "a magnet for narcissistic, recognition-seeking characters and their Theranos-like projects." (In 2014 Elizabeth Holmes herself spoke at a medical-themed TED conference.) And since 2009 the TEDx franchise lets licensees use the brand platform to stage independent events — which is how at a 2010 TEDx event, Randy Powell gave his infamous talk about vortex-based mathematics which he said would "create inexhaustible free energy, end all diseases, produce all food, travel anywhere in the universe, build the ultimate supercomputer and artificial intelligence, and make obsolete all existing technology."

Yet these are all just symptoms of a larger problem, the article ultimately argues. "As the most visible and influential public speaking platform of the first two decades of the twenty-first century, it has been deeply implicated in broadcasting and championing the Silicon Valley version of the future. TED is probably best understood as the propaganda arm of an ascendant technocracy."
