Transportation

Tesla Now Has 160,000 Customers Running Its Full Self Driving Beta (theverge.com) 134

One piece of news from Tesla's AI Day presentation on Friday that was overshadowed by the company's humanoid "Optimus" robot and Dojo supercomputer was the improvement to Tesla's Full Self Driving software. According to Autopilot director Ashok Elluswamy, "there are now 160,000 customers running the beta software, compared to 2,000 from this time last year," reports The Verge. From the report: In total, Tesla says there have been 35 software releases of FSD. In a Q&A at the end of the presentation, Musk made another prediction -- he's made a few before -- that the technology would be ready for a worldwide rollout by the end of this year, but he acknowledged the regulatory and testing hurdles that remain before that happens. Afterward, Tesla's tech lead for Autopilot motion planning, Paril Jain, showed how FSD has improved in specific interactions and can make "human-like" decisions. For example, when a Tesla makes a left turn at an intersection, it can choose a trajectory that avoids close calls with obstacles such as pedestrians crossing the street.

Every Tesla can provide datasets to build the models that FSD uses, and according to Tesla engineering manager Phil Duan, the company will now start building and processing detailed 3D structures from that data. The cars are also improving decision-making in different environmental conditions, like night, fog, and rain. Tesla trains its AI software on its supercomputer, then feeds the results to customers' vehicles via over-the-air software updates. To do this, it processes video feeds from Tesla's fleet of over 1 million camera-equipped vehicles on the road today, and it uses a simulator built in Unreal Engine to improve Autopilot.

Supercomputing

Tesla Unveils New Dojo Supercomputer So Powerful It Tripped the Power Grid (electrek.co) 106

An anonymous reader quotes a report from Electrek: Tesla has unveiled the latest version of its Dojo supercomputer, and it's apparently so powerful that it tripped the power grid in Palo Alto. Dojo is Tesla's own custom supercomputer platform built from the ground up for AI machine learning -- more specifically, for video training using the video data coming from its fleet of vehicles. [...] Last year, at Tesla's AI Day, the company unveiled its Dojo supercomputer, but it was still ramping up the effort at the time. It only had its first chip and training tiles, and it was still working on building a full Dojo cabinet and cluster, or "Exapod." Now Tesla has unveiled the progress made with the Dojo program over the last year during its AI Day 2022 last night.

The company confirmed that it managed to go from a chip and tile to a system tray and a full cabinet. Tesla claims a single Dojo tile can replace 6 GPU boxes while costing less than one GPU box, and there are 6 of those tiles per tray. Tesla says a single tray is the equivalent of "3 to 4 fully-loaded supercomputer racks." The company is integrating its host interface directly on the system tray to create a full host assembly, and it can fit two of these system trays with host assemblies into a single Dojo cabinet. That's pretty much where Tesla is right now, as the automaker is still developing and testing the infrastructure needed to put a few cabinets together to create the first "Dojo Exapod."
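Tesla's stated packaging hierarchy implies a simple tally. A back-of-envelope sketch in Python (the GPU-box equivalence is Tesla's own claim, not an independent benchmark):

```python
# Tallying Tesla's stated Dojo packaging hierarchy. All figures come
# from the presentation as reported above; the GPU-box equivalence is
# Tesla's own claim, not an independent benchmark.
gpu_boxes_per_tile = 6   # Tesla: one Dojo tile replaces 6 GPU boxes
tiles_per_tray = 6
trays_per_cabinet = 2

tiles_per_cabinet = tiles_per_tray * trays_per_cabinet
gpu_box_equiv = tiles_per_cabinet * gpu_boxes_per_tile
print(tiles_per_cabinet, gpu_box_equiv)  # 12 tiles, 72 GPU-box equivalents per cabinet
```

By that count, a single Dojo cabinet would stand in for 72 of the GPU boxes Tesla is comparing against.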

Bill Chang, Tesla's Principal System Engineer for Dojo, said: "We knew that we had to reexamine every aspect of the data center infrastructure in order to support our unprecedented cooling and power density." Tesla had to develop its own high-powered cooling and power system to run the Dojo cabinets. Chang said that Tesla tripped the local electric grid's substation when testing the infrastructure earlier this year: "Earlier this year, we started load testing our power and cooling infrastructure and we were able to push it over 2 MW before we tripped our substation and got a call from the city." Tesla released the main specs of a Dojo Exapod: 1.1 exaflops of compute, 1.3 TB of SRAM, and 13 TB of high-bandwidth DRAM.

AI

Banned US AI Chips in High Demand at Chinese State Institutes (reuters.com) 44

High-profile universities and state-run research institutes in China have been relying on a U.S. computing chip to power their artificial intelligence (AI) technology -- a chip whose export to the country Washington has now restricted, a Reuters review showed. From the report: U.S. chip designer Nvidia last week said U.S. government officials have ordered it to stop exporting its A100 and H100 chips to China. U.S. peer Advanced Micro Devices also said new licence requirements now prevent export to China of its advanced AI chip, the MI250. The development signalled a major escalation of a U.S. campaign to stymie China's technological capability as tension bubbles over the fate of Taiwan, where chips for Nvidia and almost every other major chip firm are manufactured.

China views Taiwan as a rogue province and has not ruled out force to bring the democratically governed island under its control. Responding to the restrictions, China branded them a futile attempt to impose a technology blockade on a rival. A Reuters review of more than a dozen publicly available government tenders over the past two years indicated that among some of China's most strategically important research institutes, there is high demand - and need - for Nvidia's signature A100 chips. Tsinghua University, China's highest-ranked higher education institution globally, spent over $400,000 last October on two Nvidia AI supercomputers, each powered by four A100 chips, one of the tenders showed. In the same month, the Institute of Computing Technology, part of top research group, the Chinese Academy of Sciences (CAS), spent around $250,000 on A100 chips. The school of artificial intelligence at a CAS university in July this year also spent about $200,000 on high-tech equipment including a server partly powered by A100 chips. In November, the cybersecurity college of Guangdong-based Jinan University spent over $93,000 on an Nvidia AI supercomputer, while its school of intelligent systems science and engineering spent almost $100,000 on eight A100 chips just last month. Less well-known institutes and universities supported by municipal and provincial governments, such as in Shandong, Henan and Chongqing, also bought A100 chips, the tenders showed.

Science

Can We Make Computer Chips Act More Like Brain Cells? (scientificamerican.com) 58

Long-time Slashdot reader swell shared Scientific American's report on the quest for neuromorphic chips: The human brain is an amazing computing machine. Weighing only three pounds or so, it can process information a thousand times faster than the fastest supercomputer, store a thousand times more information than a powerful laptop, and do it all using no more energy than a 20-watt lightbulb. Researchers are trying to replicate this success using soft, flexible organic materials that can operate like biological neurons and someday might even be able to interconnect with them. Eventually, soft "neuromorphic" computer chips could be implanted directly into the brain, allowing people to control an artificial arm or a computer monitor simply by thinking about it.

Like real neurons — but unlike conventional computer chips — these new devices can send and receive both chemical and electrical signals. "Your brain works with chemicals, with neurotransmitters like dopamine and serotonin. Our materials are able to interact electrochemically with them," says Alberto Salleo, a materials scientist at Stanford University who wrote about the potential for organic neuromorphic devices in the 2021 Annual Review of Materials Research. Salleo and other researchers have created electronic devices using these soft organic materials that can act like transistors (which amplify and switch electrical signals) and memory cells (which store information) and other basic electronic components.

The work grows out of an increasing interest in neuromorphic computer circuits that mimic how human neural connections, or synapses, work. These circuits, whether made of silicon, metal or organic materials, work less like those in digital computers and more like the networks of neurons in the human brain.... An individual neuron receives signals from many other neurons, and all these signals together add up to affect the electrical state of the receiving neuron. In effect, each neuron serves as both a calculating device — integrating the value of all the signals it has received — and a memory device: storing the value of all of those combined signals as an infinitely variable analog value, rather than the zero-or-one of digital computers.
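The integrate-and-fire behavior described above can be sketched in a few lines of Python. This is a generic leaky integrate-and-fire model with made-up constants, meant only to illustrate the neuron-as-integrator-and-memory idea, not the behavior of any particular neuromorphic device:

```python
# Minimal leaky integrate-and-fire neuron: each incoming signal nudges a
# continuously variable membrane potential (the "analog memory"), which
# decays over time; crossing a threshold emits a spike and resets.
# All constants are illustrative, not taken from any particular chip.

def simulate(inputs, weights, leak=0.9, threshold=1.0):
    v = 0.0          # membrane potential: an analog value, not a 0/1 bit
    spikes = []
    for step_inputs in inputs:
        v *= leak                                  # passive decay
        v += sum(w * x for w, x in zip(weights, step_inputs))
        if v >= threshold:
            spikes.append(True)
            v = 0.0                                # reset after firing
        else:
            spikes.append(False)
    return spikes

# Three presynaptic neurons; the second time step drives the cell past threshold.
out = simulate(inputs=[(0.2, 0.1, 0.0), (0.5, 0.4, 0.3)],
               weights=(1.0, 1.0, 1.0))
print(out)  # [False, True]
```

The key contrast with a digital gate is the variable `v`: between spikes it holds a graded value that depends on the entire recent input history, which is exactly the combined calculating-and-memory role the article describes.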

Intel

Why Stacking Chips Like Pancakes Could Mean a Huge Leap for Laptops (cnet.com) 46

For decades, you could test a computer chip's mettle by how small and tightly packed its electronic circuitry was. Now Intel believes another dimension is as big a deal: how artfully a group of such chips can be packaged into a single, more powerful processor. From a report: At the Hot Chips conference Monday, Intel Chief Executive Pat Gelsinger will shine a spotlight on the company's packaging prowess. It's a crucial element to two new processors: Meteor Lake, a next-generation Core processor family member that'll power PCs in 2023, and Ponte Vecchio, the brains of what's expected to be the world's fastest supercomputer, Aurora.

"Meteor Lake will be a huge technical innovation," thanks to how it's packaged, said Real World Tech analyst David Kanter. For decades, staying on the cutting edge of chip progress meant miniaturizing chip circuitry. Chipmakers make that circuitry with a process called photolithography, using patterns of light to etch tiny on-off switches called transistors onto silicon wafers. The smaller the transistors, the more designers can add for new features like accelerators for graphics or artificial intelligence chores. Now Intel believes that packaging multiple smaller "chiplets" together can deliver the same processing power boost that traditionally came from shrinking the circuitry itself.

Google

Google's Quantum Supremacy Challenged By Ordinary Computers, For Now (newscientist.com) 18

Google has been challenged by an algorithm that could solve a problem faster than its Sycamore quantum computer, which it used in 2019 to claim the first example of "quantum supremacy" -- the point at which a quantum computer can complete a task that would be impossible for ordinary computers. Google concedes that its 2019 record won't stand, but says that quantum computers will win out in the end. From a report: Sycamore achieved quantum supremacy in a task that involves verifying that a sample of numbers output by a quantum circuit has a truly random distribution, which it was able to complete in 3 minutes and 20 seconds. The Google team said that even the world's most powerful supercomputer at the time, IBM's Summit, would take 10,000 years to achieve the same result. Now, Pan Zhang at the Chinese Academy of Sciences in Beijing and his colleagues have created an improved algorithm for a non-quantum computer that can solve the random sampling problem much faster, challenging Google's claim that a quantum computer is the only practical way to do it. The researchers found that they could skip some of the calculations without affecting the final output, which dramatically reduces the computational requirements compared with the previous best algorithms. The researchers ran their algorithm on a cluster of 512 GPUs, completing the task in around 15 hours. While this is significantly longer than Sycamore's time, they say it shows that a classical computer approach remains practical.
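For scale, the reported runtimes work out as follows (a rough comparison only, since the two systems solved the task on entirely different hardware):

```python
# Comparing the reported runtimes: Sycamore's 3 minutes 20 seconds
# versus roughly 15 hours on the 512-GPU classical cluster (both
# figures as given in the report; the hardware is not comparable).
sycamore_s = 3 * 60 + 20    # 200 seconds
classical_s = 15 * 3600     # 54,000 seconds

speedup = classical_s / sycamore_s
print(round(speedup))  # the quantum run was ~270x faster
```

So the classical result narrows the gap from a claimed 10,000 years to a factor of a few hundred, which is what makes Zhang's algorithm a credible challenge to the 2019 supremacy claim.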

Supercomputing

Are the World's Most Powerful Supercomputers Operating In Secret? (msn.com) 42

"A new supercomputer called Frontier has been widely touted as the world's first exascale machine — but was it really?"

That's the question that long-time Slashdot reader MattSparkes explores in a new article at New Scientist... Although Frontier, which was built by the Oak Ridge National Laboratory in Tennessee, topped what is generally seen as the definitive list of supercomputers, others may already have achieved the milestone in secret....

The definitive list of supercomputers is the Top500, which is based on a single measurement: how fast a machine can solve vast numbers of equations by running software called the LINPACK benchmark. This gives a value in floating-point operations per second, or FLOPS. But even Jack Dongarra at Top500 admits that not all supercomputers are listed: a machine will only feature if its owner runs the benchmark and submits a result. "If they don't send it in it doesn't get entered," he says. "I can't force them."
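To illustrate how a LINPACK run yields a FLOPS figure: the benchmark solves a dense n-by-n linear system, which costs roughly (2/3)n^3 + 2n^2 floating-point operations, and the score is that operation count divided by wall-clock time. The problem size and runtime below are hypothetical, chosen only to show the arithmetic:

```python
# How a LINPACK score becomes a FLOPS figure: the benchmark solves a
# dense n-by-n linear system, which takes roughly (2/3)*n^3 + 2*n^2
# floating-point operations; dividing by wall-clock time gives FLOPS.
# Problem size and runtime here are hypothetical, for illustration only.

def linpack_flop_count(n):
    return (2 / 3) * n**3 + 2 * n**2

n = 10_000          # hypothetical matrix dimension
runtime_s = 66.7    # hypothetical wall-clock time in seconds

gflops = linpack_flop_count(n) / runtime_s / 1e9
print(round(gflops, 1))  # ~10.0 GFLOPS for this made-up run
```

Real Top500 submissions use enormous problem sizes tuned per machine, but the score is derived the same way.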

Some owners prefer not to release a benchmark figure, or even publicly reveal a machine's existence. Simon McIntosh-Smith at the University of Bristol, UK points out that not only do intelligence agencies and certain companies have an incentive to keep their machines secret, but some purely academic machines like Blue Waters, operated by the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, are also just never entered.... Dongarra says that the consensus among supercomputer experts is that China has had at least two exascale machines running since 2021, known as OceanLight and Tianhe-3, and is working on an even larger third called Sugon. Scientific papers on unconnected research have revealed evidence of these machines when describing calculations carried out on them.

McIntosh-Smith also believes that intelligence agencies would rank well, if allowed. "Certainly in the [US], some of the security forces have things that would put them at the top," he says. "There are definitely groups who obviously wouldn't want this on the list."

United States

US Retakes First Place From Japan on Top500 Supercomputer Ranking (engadget.com) 29

The United States is on top of the supercomputing world in the Top500 ranking of the most powerful systems. From a report: The Frontier system from Oak Ridge National Laboratory (ORNL), running on AMD EPYC CPUs, took first place from last year's champ, Japan's Arm A64FX-based Fugaku system. It's still in the integration and testing process at ORNL in Tennessee, but will eventually be operated by the US Department of Energy. Frontier, powered by Hewlett Packard Enterprise's (HPE) Cray EX platform, was the top machine by a wide margin, too. It's the first (known) true exascale system, hitting a peak 1.1 exaflops on the LINPACK benchmark. Fugaku, meanwhile, managed less than half that at 442 petaflops, which was still enough to keep it in first place for the previous two years. Frontier was also the most efficient supercomputer: delivering 52.23 gigaflops per watt, it beat out Japan's MN-3 system to grab first place on the Green500 list. "The fact that the world's fastest machine is also the most energy efficient is just simply amazing," ORNL lab director Thomas Zacharia said at a press conference.
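Those two figures also imply Frontier's approximate power draw during the benchmark run; a quick cross-check (the Green500 efficiency is measured on the benchmark itself, so this is only indicative):

```python
# Implied power draw from the two numbers in the report: a peak of
# ~1.1 exaflops at 52.23 gigaflops per watt. The Green500 efficiency is
# measured on the benchmark run, so this is only a rough cross-check.
peak_flops = 1.1e18        # 1.1 exaflops
gflops_per_watt = 52.23e9  # 52.23 gigaflops per watt

power_megawatts = peak_flops / gflops_per_watt / 1e6
print(round(power_megawatts, 1))  # ~21.1 MW
```

That is on the order of 21 megawatts, roughly the electricity demand of a small town, which is why efficiency leads the conversation alongside raw speed.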

Supercomputing

Russia Cobbles Together Supercomputing Platform To Wean Off Foreign Suppliers (theregister.com) 38

Russia is adapting to a world where it no longer has access to many technologies abroad with the development of a new supercomputer platform that can use foreign x86 processors such as Intel's in combination with the country's homegrown Elbrus processors. The Register reports: The new supercomputer reference system, dubbed "RSK Tornado," was developed on behalf of the Russian government by HPC system integrator RSC Group, according to an English translation of a Russian-language press release published March 30. RSC said it created RSK Tornado as a "unified interoperable" platform to "accelerate the pace of import substitution" for HPC systems, data processing centers and data storage systems in Russia. In other words, the HPC system architecture is meant to help Russia quickly adjust to the fact that major chip companies such as Intel, AMD and TSMC -- plus several other technology vendors, like Dell and Lenovo -- have suspended product shipments to the country as a result of sanctions by the US and other countries in reaction to Russia's invasion of Ukraine.

RSK Tornado supports up to 104 servers in a rack, with the idea being to support foreign x86 processors (should they become available) as well as Russia's Elbrus processors, which debuted in 2015. The hope appears to be that Russian developers can port HPC, AI and big data applications from x86 architectures to the Elbrus architecture, which, in theory, will make it easier for Russia to rely on its own supply chain and better cope with continued sanctions from abroad. RSK Tornado's systems software is RSC proprietary and is currently used to orchestrate supercomputer resources at the Interdepartmental Supercomputer Center of the Russian Academy of Sciences, St Petersburg Polytechnic University and the Joint Institute for Nuclear Research. RSC claims to have also developed its own liquid-cooling system for supercomputers and data storage systems, the latter of which can use Elbrus CPUs too.

Supercomputing

'Quantum Computing Has a Hype Problem' (technologyreview.com) 48

"A reputed expert in the quantum computing field puts it in black and white: as of today, quantum computing is a paper tiger, and nobody knows when (if ever) it will become commercially practical," writes Slashdot reader OneHundredAndTen. "In the meantime, the hype continues."

In an opinion piece for MIT Technology Review, Sankar Das Sarma, a "pro-quantum-computing" physicist who has "published more than 100 technical papers on the subject," says he's disturbed by some of the quantum computing hype he sees today, "particularly when it comes to claims about how it will be commercialized." Here's an excerpt from his article: Established applications for quantum computers do exist. The best known is Peter Shor's 1994 theoretical demonstration that a quantum computer can solve the hard problem of finding the prime factors of large numbers exponentially faster than all classical schemes. Prime factorization is at the heart of breaking the universally used RSA-based cryptography, so Shor's factorization scheme immediately attracted the attention of national governments everywhere, leading to considerable quantum-computing research funding. The only problem? Actually making a quantum computer that could do it. That depends on implementing an idea pioneered by Shor and others called quantum-error correction, a process to compensate for the fact that quantum states disappear quickly because of environmental noise (a phenomenon called "decoherence"). In 1994, scientists thought that such error correction would be easy because physics allows it. But in practice, it is extremely difficult.

The most advanced quantum computers today have dozens of decohering (or "noisy") physical qubits. Building a quantum computer that could crack RSA codes out of such components would require many millions if not billions of qubits. Only tens of thousands of these would be used for computation -- so-called logical qubits; the rest would be needed for error correction, compensating for decoherence. The qubit systems we have today are a tremendous scientific achievement, but they take us no closer to having a quantum computer that can solve a problem that anybody cares about. It is akin to trying to make today's best smartphones using vacuum tubes from the early 1900s. You can put 100 tubes together and establish the principle that if you could somehow get 10 billion of them to work together in a coherent, seamless manner, you could achieve all kinds of miracles. What, however, is missing is the breakthrough of integrated circuits and CPUs leading to smartphones -- it took 60 years of very difficult engineering to go from the invention of transistors to the smartphone with no new physics involved in the process.
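The qubit arithmetic in the excerpt can be made concrete. The per-logical-qubit overhead below is an assumed round figure chosen for illustration, not a number from the article:

```python
# Making the excerpt's qubit arithmetic concrete: tens of thousands of
# logical qubits, each built from many noisy physical qubits. The
# overhead of 1,000 physical qubits per logical qubit is an assumed
# round figure for illustration, not a number from the article.
logical_qubits = 20_000        # "tens of thousands" doing computation
physical_per_logical = 1_000   # assumed error-correction overhead

physical_qubits = logical_qubits * physical_per_logical
print(physical_qubits)  # 20 million physical qubits
```

Against today's machines with dozens of physical qubits, that gap of roughly six orders of magnitude is the point of the vacuum-tube analogy.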

China

How China Built an Exascale Supercomputer Out of Old 14nm Tech (nextplatform.com) 29

Slashdot reader katydid77 shares a report from the supercomputing site The Next Platform: If you need any proof that it doesn't take the most advanced chip manufacturing processes to create an exascale-class supercomputer, you need look no further than the Sunway "OceanLight" system housed at the National Supercomputing Center in Wuxi, China. Some of the architectural details of the OceanLight supercomputer came to our attention as part of a paper published by Alibaba Group, Tsinghua University, DAMO Academy, Zhejiang Lab, and Beijing Academy of Artificial Intelligence, which describes running a pretrained machine learning model called BaGuaLu across more than 37 million cores, with 14.5 trillion parameters (presumably in FP32 single precision) and the capability to scale to 174 trillion parameters -- approaching what is called "brain scale," where the number of parameters starts approaching the number of synapses in the human brain....

Add it all up, and the 105-cabinet system tested on the BaGuaLu training model, with its 107,250 SW26010-Pro processors, had a peak theoretical performance of 1.51 exaflops. We like round numbers and think that the OceanLight system probably scales to 160 cabinets, which would be 163,840 nodes and just under 2.3 exaflops of peak FP64 and FP32 performance. If it is only 120 cabinets (another round number), OceanLight will come in at 1.72 exaflops peak. But these rack scales are, once again, just hunches. If the 160-cabinet scale is the maximum for OceanLight, then China could best the performance of the 1.5 exaflops "Frontier" supercomputer being tuned up at Oak Ridge National Laboratory today and also extend beyond the peak theoretical performance of the 2 exaflops "Aurora" supercomputer coming to Argonne National Laboratory later this year -- and maybe even further than the "El Capitan" supercomputer going into Lawrence Livermore National Laboratory in 2023 and expected to be around 2.2 exaflops to 2.3 exaflops, according to the scuttlebutt.
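The extrapolation is easy to reproduce. Assuming 1,024 nodes per cabinet (an assumption implied by the 160-cabinet, 163,840-node figure above):

```python
# Reproducing The Next Platform's extrapolation: 107,250 SW26010-Pro
# nodes delivered 1.51 exaflops peak, so scale the per-node peak up to
# the hypothesized 120- and 160-cabinet configurations, assuming 1,024
# nodes per cabinet (implied by the article's 163,840-node figure).
tested_nodes = 107_250
tested_peak_ef = 1.51

per_node_ef = tested_peak_ef / tested_nodes   # ~14.1 teraflops per node
for cabinets in (120, 160):
    nodes = cabinets * 1024
    peak_ef = round(nodes * per_node_ef, 2)
    print(cabinets, nodes, peak_ef)
# yields ~1.73 EF at 120 cabinets and ~2.31 EF at 160 -- in line with
# the article's 1.72 and "just under 2.3" estimates, give or take rounding.
```

The small differences from the article's figures come down to rounding in the tested node count, but the ordering against Frontier, Aurora, and El Capitan holds either way.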

We would love to see the thermals and costs of OceanLight. The SW26010-Pro chip could burn very hot, to be sure, and run up the electric bill for power and cooling, but if SMIC [China's largest foundry] can get good yield on 14 nanometer processes, the chip could be a lot less expensive to make than, say, a massive GPU accelerator from Nvidia, AMD, or Intel. (It's hard to say.) Regardless, having indigenous parts matters more than power efficiency for China right now, and into its future, and we said as much last summer when contemplating China's long road to IT independence. Imagine what China can do with a shrink to 7 nanometer processes when SMIC delivers them — apparently not even using extreme ultraviolet (EUV) light — many years hence....

The bottom line is that the National Research Center of Parallel Computer Engineering and Technology (known as NRCPC), working with SMIC, has had an exascale machine in the field for a year already. (There are two, in fact.) Can the United States say that right now? No it can't.

AI

How AI Can Make Weather Forecasting Better and Cheaper (bloomberg.com) 23

An anonymous reader quotes a report from Bloomberg: In early February a black box crammed with computer processors took a flight from California to Uganda. The squat, 4-foot-high box resembled a giant stereo amp. Once settled into place in Kampala, its job was to predict the weather better than anything the nation had used before. The California startup that shipped the device, Atmo AI, plans by this summer to swap it out for a grander invention: a sleek, metallic supercomputer standing 8 feet tall and packing in 20 times more power. "It's meant to be the iPhone of global meteorology," says Alexander Levy, Atmo's co-founder and chief executive officer. That's a nod to Apple's design cred and market strategy: In many countries, consumers who'd never owned desktop computers bought smartphones in droves. Similarly, Atmo says, countries without the pricey supercomputers and data centers needed to make state-of-the-art weather forecasts -- effectively, every nation that's not a global superpower -- will pay for its cheaper device instead.

For its first customer, though, the Uganda National Meteorological Authority (UNMA), Atmo is sending its beta version, the plain black box. Prizing function over form seems wise for the urgent problem at hand. In recent years, Uganda has had landslides, floods, and a Biblical plague of locusts that devastated farms. The locusts came after sporadic drought and rain, stunning officials who didn't anticipate the swarms. "It became an eye-opener for us," says David Elweru, UNMA's acting executive director. Many nations facing such ravages lack the most modern tools to plan for the changing climate. Atmo says artificial intelligence programs are the answer. "Response begins with predictions," Levy says. "If we expect countries to react to events only after they've happened, we're dooming people to disaster and suffering." It's a novel approach. Meteorology poses considerable challenges for AI systems, and only a few weather authorities have experimented with it. Most countries haven't had the resources to try.

Ugandan officials signed a multi-year deal with Atmo but declined to share the terms. The UNMA picked the startup partly because its device was "way, way cheaper" than alternatives, according to Stephen Kaboyo, an investor advising Atmo in Uganda. Kaboyo spoke by phone in February, Kampala's dry season, as rain pelted the city. "We haven't seen this before," he said of the weather. "Who knows what is going to happen in the next three seasons?" [...] Atmo reports that its early tests have doubled the accuracy scores of baseline forecasts in Southeast Asia, where the startup is pursuing contracts. Initial tests on the ground in Uganda correctly predicted rainfall when other systems didn't, according to UNMA officials.

Supercomputing

Can Russia Bootstrap High-Performance Computing Clusters with Native Tech? (theregister.com) 53

"The events of recent days have taken us away from the stable and predictable development of mankind," argue two Moscow-based technology professors in Communications of the ACM, citing anticipated shortages of high-performance processors. But fortunately, they have a partial workaround...

One of the professors — Andrei Sukhov of HSE University in Moscow — explained their idea to the Register: In a timely piece Sukhov explains how Russian computer science teams are looking at building the next generation of clusters using older clustering technologies and a slew of open-source software for managing everything from code portability to parallelization as well as standards including PCIe 3.0, USB 4, and even existing Russian knock-off buses inspired by Infiniband (Angara ES8430).... While all the pieces might be in place, there is still the need to manufacture new boards, a problem Sukhov said can be routed around by using wireless protocols as the switching mechanism between processors, even though the network latency hit will be subpar, making it difficult to do any true tightly coupled, low-latency HPC simulations (which come in handy in areas like nuclear weapons simulations, as just one example).

"Given that the available mobile systems-on-chip are on the order of 100 Gflops, performance of several teraflops for small clusters of high-performance systems-on-chip is quite achievable," Sukhov added. "The use of standard open operating systems, such as Linux, will greatly facilitate the use of custom applications and allow such systems to run in the near future. It is possible that such clusters can be heterogeneous, including different systems-on-chip for different tasks (or, for example, FPGAs to create specialized on-the-fly configurable accelerators for specific tasks)...."
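Sukhov's estimate is straightforward arithmetic; the cluster size below is an assumed illustrative value, not one from the article:

```python
# Sukhov's arithmetic: mobile systems-on-chip deliver on the order of
# 100 gigaflops each, so a small cluster reaches a few teraflops. The
# cluster size is an assumed illustrative value.
soc_gflops = 100       # per mobile system-on-chip, as cited above
cluster_size = 32      # assumed number of SoCs in a small cluster

cluster_tflops = soc_gflops * cluster_size / 1000
print(cluster_tflops)  # 3.2 teraflops
```

A few teraflops is far from exascale, but as the professors argue, it may be enough to cover current domestic computing needs for loosely coupled workloads.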

As he told The Register in a short exchange following the article, "Naturally, it will be impossible to make a new supercomputer in Russia in the coming years. Nevertheless, it is quite possible to close all the current needs in computing and data processing using the approach we have proposed. Especially if we apply hardware acceleration to tasks, depending on their type," he adds.... "During this implementation, software solutions and new protocols for data exchange, as well as computing technologies, will be worked out."

As for Russia's existing supercomputers, "no special problems are foreseen," Sukhov added. "These supercomputers are based on Linux and can continue to operate without the support of the companies that supplied the hardware and software."

Thanks to Slashdot reader katydid77 for sharing the article.

Technology

Climate Scientists Encounter Limits of Computer Models, Bedeviling Policy (wsj.com) 219

magzteel shares a report: For almost five years, an international consortium of scientists was chasing clouds, determined to solve a problem that bedeviled climate-change forecasts for a generation: How do these wisps of water vapor affect global warming? They reworked 2.1 million lines of supercomputer code used to explore the future of climate change, adding more-intricate equations for clouds and hundreds of other improvements. They tested the equations, debugged them and tested again. The scientists would find that even the best tools at hand can't model climates with the sureness the world needs as rising temperatures impact almost every region. When they ran the updated simulation in 2018, the conclusion jolted them: Earth's atmosphere was much more sensitive to greenhouse gases than decades of previous models had predicted, and future temperatures could be much higher than feared -- perhaps even beyond hope of practical remedy. "We thought this was really strange," said Gokhan Danabasoglu, chief scientist for the climate-model project at the Mesa Laboratory in Boulder at the National Center for Atmospheric Research, or NCAR. "If that number was correct, that was really bad news." At least 20 older, simpler global-climate models disagreed with the new one at NCAR, an open-source model called the Community Earth System Model 2, or CESM2, funded mainly by the U.S. National Science Foundation and arguably the world's most influential climate program. Then, one by one, a dozen climate-modeling groups around the world produced similar forecasts. "It was not just us," Dr. Danabasoglu said.

The scientists soon concluded their new calculations had been thrown off kilter by the physics of clouds in a warming world, which may amplify or damp climate change. "The old way is just wrong, we know that," said Andrew Gettelman, a physicist at NCAR who specializes in clouds and helped develop the CESM2 model. "I think our higher sensitivity is wrong too. It's probably a consequence of other things we did by making clouds better and more realistic. You solve one problem and create another." Since then the CESM2 scientists have been reworking their climate-change algorithms using a deluge of new information about the effects of rising temperatures to better understand the physics at work. They have abandoned their most extreme calculations of climate sensitivity, but their more recent projections of future global warming are still dire -- and still in flux. As world leaders consider how to limit greenhouse gases, they depend heavily on what computer climate models predict. But as algorithms and the computers they run on become more powerful -- able to crunch far more data and do better simulations -- that very complexity has left climate scientists grappling with mismatches among competing computer models.

The Media

Are TED Talks Just Propaganda For the Technocracy? (thedriftmag.com) 151

"People are still paying between $5,000 and $50,000 to attend the annual flagship TED conference," notes The Drift magazine. Last year's event, held in Monterey, California amid wildfires and the Delta surge, had as its theme "the case for optimism."

The magazine makes the case that over the last decade TED talks have been "endlessly re-articulating tech's promises without any serious critical reflection." And they start with how Bill Gates told an audience in 2015 that "we can be ready for the next epidemic." Gates's popular and well-shared TED talk — viewed millions of times — didn't alter the course of history. Neither did any of the other "ideas worth spreading" (the organization's tagline) presented at the TED conference that year — including Monica Lewinsky's massively viral speech about how to stop online bullying through compassion and empathy, or a Google engineer's talk about how driverless cars would make roads smarter and safer in the near future. In fact, seven years after TED 2015, it feels like we are living in a reality that is the exact opposite of the future envisioned that year....

At the start of the pandemic, I noticed people sharing Gates's 2015 talk. The general sentiment was one of remorse and lamentation: the tech-prophet had predicted the future for us! If only we had heeded his warning! I wasn't so sure. It seems to me that Gates's prediction and proposed solution are at least part of what landed us here. I don't mean to suggest that Gates's TED talk is somehow directly responsible for the lack of global preparedness for Covid. But it embodies a certain story about "the future" that TED talks have been telling for the past two decades — one that has contributed to our unending present crisis.

The story goes like this: there are problems in the world that make the future a scary prospect. Fortunately, though, there are solutions to each of these problems, and the solutions have been formulated by extremely smart, tech-adjacent people. For their ideas to become realities, they merely need to be articulated and spread as widely as possible. And the best way to spread ideas is through stories.... In other words, in the TED episteme, the function of a story isn't to transform via metaphor or indirection, but to actually manifest a new world. Stories about the future create the future. Or as Chris Anderson, TED's longtime curator, puts it, "We live in an era where the best way to make a dent on the world... may be simply to stand up and say something." And yet, TED's archive is a graveyard of ideas. It is a seemingly endless index of stories about the future — the future of science, the future of the environment, the future of work, the future of love and sex, the future of what it means to be human — that never materialized. By this measure alone, TED, and its attendant ways of thinking, should have been abandoned.

But the article also notes that TED's philosophy became "a magnet for narcissistic, recognition-seeking characters and their Theranos-like projects." (In 2014 Elizabeth Holmes herself spoke at a medical-themed TED conference.) And since 2009 the TEDx franchise lets licensees use the brand platform to stage independent events — which is how at a 2010 TEDx event, Randy Powell gave his infamous talk about vortex-based mathematics which he said would "create inexhaustible free energy, end all diseases, produce all food, travel anywhere in the universe, build the ultimate supercomputer and artificial intelligence, and make obsolete all existing technology."

Yet these are all just symptoms of a larger problem, the article ultimately argues. "As the most visible and influential public speaking platform of the first two decades of the twenty-first century, it has been deeply implicated in broadcasting and championing the Silicon Valley version of the future. TED is probably best understood as the propaganda arm of an ascendant technocracy."

AI

Meta Unveils New AI Supercomputer (wsj.com) 48

An anonymous reader quotes a report from The Wall Street Journal: Meta said Monday that its research team built a new artificial intelligence supercomputer that the company maintains will soon be the fastest in the world. The supercomputer, the AI Research SuperCluster, was the result of nearly two years of work, often conducted remotely during the height of the pandemic, and led by the Facebook parent's AI and infrastructure teams. Several hundred people, including researchers from partners Nvidia, Penguin Computing and Pure Storage, were involved in the project, the company said.

Meta, which announced the news in a blog post Monday, said its research team currently is using the supercomputer to train AI models in natural-language processing and computer vision for research. The aim is to boost capabilities to one day train models with more than a trillion parameters on data sets as large as an exabyte, which is roughly equivalent to 36,000 years of high-quality video. "The experiences we're building for the metaverse require enormous compute power, and RSC will enable new AI models that can learn from trillions of examples, understand hundreds of languages, and more," Meta CEO Mark Zuckerberg said in a statement provided to The Wall Street Journal. Meta's AI supercomputer houses 6,080 Nvidia graphics-processing units, putting it fifth among the fastest supercomputers in the world, according to Meta.
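The exabyte-to-video equivalence can be sanity-checked with a line of arithmetic (this is my own back-of-envelope calculation, not Meta's; the implied bitrate is an assumption derived from the two figures in the report):

```python
# Back-of-envelope check: at what video bitrate does one exabyte
# hold roughly 36,000 years of footage?
SECONDS_PER_YEAR = 365.25 * 24 * 3600
EXABYTE_BITS = 1e18 * 8              # 1 EB = 10^18 bytes

years = 36_000
bitrate_bps = EXABYTE_BITS / (years * SECONDS_PER_YEAR)
print(f"{bitrate_bps / 1e6:.1f} Mbit/s")   # ~7 Mbit/s, a typical HD streaming rate
```

A rate around 7 Mbit/s is consistent with what streaming services use for 1080p video, so the "high-quality video" framing checks out.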

By mid-summer, when the AI Research SuperCluster is fully built, it will house some 16,000 GPUs, becoming the fastest AI supercomputer in the world, Meta said. The company declined to comment on the location of the facility or the cost. [...] Eventually the supercomputer will help Meta's researchers build AI models that can work across hundreds of languages, analyze text, images and video together and develop augmented reality tools, the company said. The technology also will help Meta more easily identify harmful content and will aim to help Meta researchers develop artificial-intelligence models that think like the human brain and support rich, multidimensional experiences in the metaverse. "In the metaverse, it's one hundred percent of the time, a 3-D multi-sensorial experience, and you need to create artificial-intelligence agents in that environment that are relevant to you," said Jerome Pesenti, vice president of AI at Meta.

Data Storage

University Loses 77TB of Research Data Due To Backup Error (bleepingcomputer.com) 74

An anonymous reader quotes a report from BleepingComputer: The Kyoto University in Japan has lost about 77TB of research data due to an error in the backup system of its Hewlett-Packard supercomputer. The incident occurred between December 14 and 16, 2021, and resulted in 34 million files from 14 research groups being wiped from the system and the backup file. After investigating to determine the impact of the loss, the university concluded that the work of four of the affected groups could no longer be restored. All affected users have been individually notified of the incident via email, but no details were published on the type of work that was lost.

At the moment, the backup process has been stopped. To prevent data loss from happening again, the university has scrapped the backup system and plans to apply improvements and re-introduce it in January 2022. The plan is to also keep incremental backups -- which cover files that have been changed since the last backup happened -- in addition to full backup mirrors. While the details of the type of data that was lost weren't revealed to the public, supercomputer research costs several hundred dollars per hour, so this incident must have caused distress to the affected groups. Kyoto University is considered one of Japan's most important research institutions and enjoys the second-largest scientific research investment from national grants. Its research excellence and importance are particularly distinctive in the area of chemistry, where it ranks fourth in the world, while it also contributes to biology, pharmacology, immunology, material science, and physics.

Businesses

US To Blacklist Eight More Chinese Companies, Including Drone Maker DJI (reuters.com) 115

schwit1 shares a report from the Financial Times: The US Treasury will put DJI and the other groups on its Chinese military-industrial complex companies blacklist on Thursday (Warning: source may be paywalled; alternative source), according to two people briefed on the move. US investors are barred from taking financial stakes in the 60 Chinese groups already on the blacklist. The measure marks the latest effort by President Biden to punish China for its repression of Uyghurs and other Muslim ethnic minorities in the north-western Xinjiang region.

The other Chinese companies that will be blacklisted on Thursday include Megvii, SenseTime's main rival, which last year halted plans to list in Hong Kong after it was put on a separate US blacklist, and Dawning Information Industry, a supercomputer manufacturer that operates cloud computing services in Xinjiang. Also to be added are CloudWalk Technology, a facial recognition software company, Xiamen Meiya Pico, a cyber security group that works with law enforcement, Yitu Technology, an artificial intelligence company, Leon Technology, a cloud computing company, and NetPosa Technologies, a producer of cloud-based surveillance systems. DJI and Megvii are not publicly traded, but Dawning Information, which is also known as Sugon, is listed in Shanghai, and Leon, NetPosa and Meiya Pico trade in Shenzhen. All eight companies are already on the commerce department's "entity list," which restricts US companies from exporting technology or products from America to the Chinese groups without obtaining a government license.

Classic Games (Games)

Magnus Carlsen Wins 8th World Chess Championship. What Makes Him So Great? (espn.com) 42

"On Friday, needing just one point against Ian Nepomniachtchi to defend his world champion status, Magnus Carlsen closed the match out with three games to spare, 7.5-3.5," ESPN reports. "He's been the No 1 chess player in the world for a decade now...

"In a technologically flat, AI-powered chess world where preparation among the best players can be almost equal, what really makes one guy stand out with his dominance and genius for this long...? American Grandmaster and chess commentator Robert Hess describes Carlsen as the "hardest worker you'll find" both at the board and in preparation. "He is second-to-none at evading common theoretical lines and prefers to outplay his opponents in positions where both players must rely on their understanding of the current dynamics," Hess says...

At the start of this year, news emerged of Nepomniachtchi and his team having access to a supercomputer cluster, Zhores, from the Moscow-based Skolkovo Institute of Science and Technology. He was using it for his Candidates tournament preparation, a tournament he went on to win. He gained the challenger status for the World Championship and the Zhores supercomputer reportedly continued to be a mainstay in his team. Zhores was specifically designed to solve problems in machine learning and data-based modeling, with a capacity of one petaflop (a quadrillion floating-point operations per second).... Players use computers and open-source AI engines to analyze openings, bolster preparation, scour for a bank of new ideas and to go down lines that the other is unlikely to have explored.

The tiny detail, though, is that against Carlsen it may not be enough. He is notorious for drawing opponents into obscure positions, hurling them out of preparation and into the deep end, often leading to a complex struggle. Whether you have the fastest supercomputer on your team then becomes almost irrelevant. It comes down to a battle of intuition, tactics and staying power, human to human. In such scenarios, almost always, Carlsen comes out on top. "[Nepomniachtchi] couldn't show his best chess...it's a pity for the excitement of the match," he said later. "I think that's what happens when you get into difficult situations...all the preparation doesn't necessarily help you if you can't cope in the moment...."

Soon after his win on Friday, Carlsen announced he'd be "celebrating" by playing the World Rapid and Blitz Championships in Warsaw, a fortnight from now. He presently holds both those titles...

The article also remembers what happened in 2018 when Carlsen was asked to name his favorite chess player from the past. Carlsen's answer?

"Probably myself, like, three or four years ago."

Robotics

World's First Living Robots Can Now Reproduce, Scientists Say (cnn.com) 77

The US scientists who created the first living robots say the life forms, known as xenobots, can now reproduce -- and in a way not seen in plants and animals. CNN reports: Formed from the stem cells of the African clawed frog (Xenopus laevis), from which they take their name, xenobots are less than a millimeter (0.04 inches) wide. The tiny blobs were first unveiled in 2020 after experiments showed that they could move, work together in groups and self-heal. Now the scientists who developed them at the University of Vermont, Tufts University and Harvard University's Wyss Institute for Biologically Inspired Engineering say they have discovered an entirely new form of biological reproduction different from any animal or plant known to science.

[T]hey found that the xenobots, which were initially sphere-shaped and made from around 3,000 cells, could replicate. But it happened rarely and only in specific circumstances. The xenobots used "kinematic replication" -- a process that is known to occur at the molecular level but has never been observed before at the scale of whole cells or organisms [...]. With the help of artificial intelligence, the researchers then tested billions of body shapes to make the xenobots more effective at this type of replication. The supercomputer came up with a C-shape that resembled Pac-Man, the 1980s video game. They found it was able to find tiny stem cells in a petri dish, gather hundreds of them inside its mouth, and a few days later the bundle of cells became new xenobots.

The xenobots are very early technology -- think of a 1940s computer -- and don't yet have any practical applications. However, this combination of molecular biology and artificial intelligence could potentially be used in a host of tasks in the body and the environment, according to the researchers. This may include things like collecting microplastics in the oceans, inspecting root systems and regenerative medicine. While the prospect of self-replicating biotechnology could spark concern, the researchers said that the living machines were entirely contained in a lab and easily extinguished, as they are biodegradable and regulated by ethics experts.

"Most people think of robots as made of metals and ceramics but it's not so much what a robot is made from but what it does, which is act on its own on behalf of people," said Josh Bongard, a computer science professor and robotics expert at the University of Vermont and lead author of the study, published in the Proceedings of the National Academy of Sciences. "In that way it's a robot but it's also clearly an organism made from genetically unmodified frog cells."

"The AI didn't program these machines in the way we usually think about writing code. It shaped and sculpted and came up with this Pac-Man shape," Bongard said. "The shape is, in essence, the program. The shape influences how the xenobots behave to amplify this incredibly surprising process."

Slashdot Top Deals