Intel

Why Stacking Chips Like Pancakes Could Mean a Huge Leap for Laptops (cnet.com) 46

For decades, you could test a computer chip's mettle by how small and tightly packed its electronic circuitry was. Now Intel believes another dimension is as big a deal: how artfully a group of such chips can be packaged into a single, more powerful processor. From a report: At the Hot Chips conference Monday, Intel Chief Executive Pat Gelsinger will shine a spotlight on the company's packaging prowess. It's a crucial element to two new processors: Meteor Lake, a next-generation Core processor family member that'll power PCs in 2023, and Ponte Vecchio, the brains of what's expected to be the world's fastest supercomputer, Aurora.

"Meteor Lake will be a huge technical innovation," thanks to how it packages, said Real World Tech analyst David Kanter. For decades, staying on the cutting edge of chip progress meant miniaturizing chip circuitry. Chipmakers make that circuitry with a process called photolithography, using patterns of light to etch tiny on-off switches called transistors onto silicon wafers. The smaller the transistors, the more designers can add for new features like accelerators for graphics or artificial intelligence chores. Now Intel believes building these chiplets into a package will bring the same processing power boost as the traditional photolithography technique.

Google

Google's Quantum Supremacy Challenged By Ordinary Computers, For Now (newscientist.com) 18

Google has been challenged by an algorithm that could solve a problem faster than its Sycamore quantum computer, which it used in 2019 to claim the first example of "quantum supremacy" -- the point at which a quantum computer can complete a task that would be impossible for ordinary computers. Google concedes that its 2019 record won't stand, but says that quantum computers will win out in the end. From a report: Sycamore achieved quantum supremacy in a task that involves verifying that a sample of numbers output by a quantum circuit has a truly random distribution, which it was able to complete in 3 minutes and 20 seconds. The Google team said that even the world's most powerful supercomputer at the time, IBM's Summit, would take 10,000 years to achieve the same result. Now, Pan Zhang at the Chinese Academy of Sciences in Beijing and his colleagues have created an improved algorithm for a non-quantum computer that can solve the random sampling problem much faster, challenging Google's claim that a quantum computer is the only practical way to do it. The researchers found that they could skip some of the calculations without affecting the final output, which dramatically reduces the computational requirements compared with the previous best algorithms. The researchers ran their algorithm on a cluster of 512 GPUs, completing the task in around 15 hours. While this is significantly longer than Sycamore, they say it shows that a classical computer approach remains practical.
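
The sampling task in question is typically scored with the linear cross-entropy benchmark (XEB). As a rough, illustrative sketch only -- using a small Haar-random unitary in NumPy rather than Sycamore's actual gate set or the tensor-network methods involved -- the check works roughly like this:

```python
import numpy as np

def random_unitary(dim, rng):
    # Haar-ish random unitary via QR decomposition of a complex Gaussian matrix.
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

n_qubits = 10                       # toy size; Sycamore used 53 qubits
dim = 2 ** n_qubits
rng = np.random.default_rng(0)

# Ideal output distribution of a "random circuit" acting on |0...0>.
state = random_unitary(dim, rng)[:, 0]
probs = np.abs(state) ** 2

# "Experiment": draw bitstring samples from the ideal distribution.
samples = rng.choice(dim, size=100_000, p=probs)

# Linear cross-entropy benchmark: ~1 for faithful samples, ~0 for uniform noise.
xeb = dim * probs[samples].mean() - 1
print(f"linear XEB fidelity estimate: {xeb:.3f}")
```
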
Supercomputing

Are the World's Most Powerful Supercomputers Operating In Secret? (msn.com) 42

"A new supercomputer called Frontier has been widely touted as the world's first exascale machine — but was it really?"

That's the question that long-time Slashdot reader MattSparkes explores in a new article at New Scientist... Although Frontier, which was built by the Oak Ridge National Laboratory in Tennessee, topped what is generally seen as the definitive list of supercomputers, others may already have achieved the milestone in secret....

The definitive list of supercomputers is the Top500, which is based on a single measurement: how fast a machine can solve vast numbers of equations by running software called the LINPACK benchmark. This gives a value in floating-point operations per second, or FLOPS. But even Jack Dongarra at Top500 admits that not all supercomputers are listed, and a machine only features if its owner runs the benchmark and submits a result. "If they don't send it in it doesn't get entered," he says. "I can't force them."
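
For a sense of where a FLOPS number comes from, here is a minimal sketch (not the actual HPL benchmark code) that times a dense linear solve and divides by the conventional LINPACK operation count of roughly 2/3*n^3 + 2*n^2:

```python
import time
import numpy as np

n = 4000
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)                 # LU factorization plus triangular solves
elapsed = time.perf_counter() - start

flop_count = (2 / 3) * n**3 + 2 * n**2    # conventional LINPACK operation count
print(f"~{flop_count / elapsed / 1e9:.1f} Gflops on this machine")
```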

Some owners prefer not to release a benchmark figure, or even publicly reveal a machine's existence. Simon McIntosh-Smith at the University of Bristol, UK points out that not only do intelligence agencies and certain companies have an incentive to keep their machines secret, but some purely academic machines like Blue Waters, operated by the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, are also just never entered.... Dongarra says that the consensus among supercomputer experts is that China has had at least two exascale machines running since 2021, known as OceanLight and Tianhe-3, and is working on an even larger third called Sugon. Scientific papers on unconnected research have revealed evidence of these machines when describing calculations carried out on them.

McIntosh-Smith also believes that intelligence agencies would rank well, if allowed. "Certainly in the [US], some of the security forces have things that would put them at the top," he says. "There are definitely groups who obviously wouldn't want this on the list."

United States

US Retakes First Place From Japan on Top500 Supercomputer Ranking (engadget.com) 29

The United States is on top of the supercomputing world in the Top500 ranking of the most powerful systems. From a report: The Frontier system from Oak Ridge National Laboratory (ORNL) running on AMD EPYC CPUs took first place from last year's champ, Japan's Arm-based A64FX-powered Fugaku system. It's still in the integration and testing process at ORNL in Tennessee, but will eventually be operated by the US Air Force and US Department of Energy. Frontier, powered by Hewlett Packard Enterprise's (HPE) Cray EX platform, was the top machine by a wide margin, too. It's the first (known) true exascale system, hitting a peak 1.1 exaflops on the Linpack benchmark. Fugaku, meanwhile, managed less than half that at 442 petaflops, which was still enough to keep it in first place for the previous two years. Frontier was also the most efficient supercomputer. Delivering 52.23 gigaflops per watt, it beat out Japan's MN-3 system to grab first place on the Green500 list. "The fact that the world's fastest machine is also the most energy efficient is just simply amazing," ORNL lab director Thomas Zacharia said at a press conference.
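
Taking the article's two figures at face value, the implied power envelope is simple arithmetic (a rough sketch; the Green500 measurement may come from a different run than the peak Linpack score):

```python
linpack_flops = 1.1e18    # Frontier's reported 1.1 exaflops
gflops_per_watt = 52.23   # reported Green500 efficiency
power_megawatts = linpack_flops / (gflops_per_watt * 1e9) / 1e6
print(f"implied power draw: ~{power_megawatts:.0f} MW")   # roughly 21 MW
```
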
Supercomputing

Russia Cobbles Together Supercomputing Platform To Wean Off Foreign Suppliers (theregister.com) 38

Russia is adapting to a world where it no longer has access to many technologies abroad with the development of a new supercomputer platform that can use foreign x86 processors such as Intel's, in combination with the country's homegrown Elbrus processors. The Register reports: The new supercomputer reference system, dubbed "RSK Tornado," was developed on behalf of the Russian government by HPC system integrator RSC Group, according to an English translation of a Russian-language press release published March 30. RSC said it created RSK Tornado as a "unified interoperable" platform to "accelerate the pace of import substitution" for HPC systems, data processing centers and data storage systems in Russia. In other words, the HPC system architecture is meant to help Russia quickly adjust to the fact that major chip companies such as Intel, AMD and TSMC -- plus several other technology vendors, like Dell and Lenovo -- have suspended product shipments to the country as a result of sanctions by the US and other countries in reaction to Russia's invasion of Ukraine.

RSK Tornado supports up to 104 servers in a rack, with the idea being to support foreign x86 processors (should they become available) as well as Russia's Elbrus processors, which debuted in 2015. The hope appears to be that Russian developers will be able to port HPC, AI and big data applications from x86 architectures to the Elbrus architecture, which, in theory, will make it easier for Russia to rely on its own supply chain and better cope with continued sanctions from abroad. RSK Tornado's system software is proprietary to RSC and is currently used to orchestrate supercomputer resources at the Interdepartmental Supercomputer Center of the Russian Academy of Sciences, St Petersburg Polytechnic University and the Joint Institute for Nuclear Research. RSC claims to have also developed its own liquid-cooling system for supercomputers and data storage systems, the latter of which can use Elbrus CPUs too.

Supercomputing

'Quantum Computing Has a Hype Problem' (technologyreview.com) 48

"A reputed expert in the quantum computing field puts it in black and white: as of today, quantum computing is a paper tiger, and nobody knows when (if ever) it will become commercially practical," writes Slashdot reader OneHundredAndTen. "In the meantime, the hype continues."

In an opinion piece for MIT Technology Review, Sankar Das Sarma, a "pro-quantum-computing" physicist who has "published more than 100 technical papers on the subject," says he's disturbed by some of the quantum computing hype he sees today, "particularly when it comes to claims about how it will be commercialized." Here's an excerpt from his article: Established applications for quantum computers do exist. The best known is Peter Shor's 1994 theoretical demonstration that a quantum computer can solve the hard problem of finding the prime factors of large numbers exponentially faster than all classical schemes. Prime factorization is at the heart of breaking the universally used RSA-based cryptography, so Shor's factorization scheme immediately attracted the attention of national governments everywhere, leading to considerable quantum-computing research funding. The only problem? Actually making a quantum computer that could do it. That depends on implementing an idea pioneered by Shor and others called quantum-error correction, a process to compensate for the fact that quantum states disappear quickly because of environmental noise (a phenomenon called "decoherence"). In 1994, scientists thought that such error correction would be easy because physics allows it. But in practice, it is extremely difficult.
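
To see why Shor's result drew so much attention, recall that RSA's security rests on factoring being hard: whoever factors the public modulus can rebuild the private key. A toy sketch with deliberately tiny primes (real moduli run to thousands of bits, far beyond both classical factoring and today's quantum hardware):

```python
# Toy RSA: factoring the public modulus recovers the private key.
p, q = 61, 53                      # secret primes (real RSA: ~1024+ bits each)
n, e = p * q, 17                   # public key: modulus n and exponent e
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                # private exponent (modular inverse, Python 3.8+)

ciphertext = pow(42, e, n)         # encrypt the message 42
assert pow(ciphertext, d, n) == 42 # legitimate decryption works

# An attacker who can factor n (trivial here, infeasible classically at real sizes)
# rebuilds d and decrypts without ever being handed the private key.
fp = next(k for k in range(2, n) if n % k == 0)
fq = n // fp
d_attacker = pow(e, -1, (fp - 1) * (fq - 1))
print("factoring n recovered the plaintext:", pow(ciphertext, d_attacker, n))
```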

The most advanced quantum computers today have dozens of decohering (or "noisy") physical qubits. Building a quantum computer that could crack RSA codes out of such components would require many millions if not billions of qubits. Only tens of thousands of these would be used for computation -- so-called logical qubits; the rest would be needed for error correction, compensating for decoherence. The qubit systems we have today are a tremendous scientific achievement, but they take us no closer to having a quantum computer that can solve a problem that anybody cares about. It is akin to trying to make today's best smartphones using vacuum tubes from the early 1900s. You can put 100 tubes together and establish the principle that if you could somehow get 10 billion of them to work together in a coherent, seamless manner, you could achieve all kinds of miracles. What, however, is missing is the breakthrough of integrated circuits and CPUs leading to smartphones -- it took 60 years of very difficult engineering to go from the invention of transistors to the smartphone with no new physics involved in the process.
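
The overhead being described can be put into rough numbers. Treating "tens of thousands" of logical qubits and "many millions" of physical qubits as 20,000 and 20 million -- illustrative stand-ins, not figures computed in the article -- the error-correction tax looks like this:

```python
logical_qubits = 20_000        # "tens of thousands" doing the actual computation
physical_qubits = 20_000_000   # "many millions" once error correction is included
print(f"~{physical_qubits // logical_qubits} noisy physical qubits per logical qubit")
# Today's machines have dozens of physical qubits -- five to six orders of
# magnitude short of even the low end of that requirement.
```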

China

How China Built an Exascale Supercomputer Out of Old 14nm Tech (nextplatform.com) 29

Slashdot reader katydid77 shares a report from the supercomputing site The Next Platform: If you need any proof that it doesn't take the most advanced chip manufacturing processes to create an exascale-class supercomputer, you need look no further than the Sunway "OceanLight" system housed at the National Supercomputing Center in Wuxi, China. Some of the architectural details of the OceanLight supercomputer came to our attention as part of a paper published by Alibaba Group, Tsinghua University, DAMO Academy, Zhejiang Lab, and Beijing Academy of Artificial Intelligence, which describes running a pretrained machine learning model called BaGuaLu across more than 37 million cores and 14.5 trillion parameters (presumably with FP32 single precision), with the capability to scale to 174 trillion parameters (approaching what is called "brain-scale," where the number of parameters starts to approach the number of synapses in the human brain)....

Add it all up, and the 105 cabinet system tested on the BaGuaLu training model, with its 107,250 SW26010-Pro processors, had a peak theoretical performance of 1.51 exaflops. We like base 2 numbers and think that the OceanLight system probably scales to 160 cabinets, which would be 163,840 nodes and just under 2.3 exaflops of peak FP64 and FP32 performance. If it is only 120 cabinets (also a base 2 number), OceanLight will come in at 1.72 exaflops peak. But these rack scales are, once again, just hunches. If the 160 cabinet scale is the maximum for OceanLight, then China could best the performance of the 1.5 exaflops "Frontier" supercomputer being tuned up at Oak Ridge National Laboratory today and also extend beyond the peak theoretical performance of the 2 exaflops "Aurora" supercomputer coming to Argonne National Laboratory later this year — and maybe even further than the "El Capitan" supercomputer going into Lawrence Livermore National Laboratory in 2023 and expected to be around 2.2 exaflops to 2.3 exaflops according to the scuttlebutt.
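
The cabinet-count guesses above follow from simple proportional scaling of the 105-cabinet configuration that was actually measured; a quick check of that arithmetic:

```python
measured_cabinets = 105
measured_peak_exaflops = 1.51   # peak theoretical performance of the tested system

per_cabinet = measured_peak_exaflops / measured_cabinets
for cabinets in (120, 160):
    print(f"{cabinets} cabinets -> ~{cabinets * per_cabinet:.2f} exaflops peak")
# 120 cabinets -> ~1.73 exaflops; 160 cabinets -> ~2.30 exaflops,
# in line with the article's 1.72 and "just under 2.3" estimates.
```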

We would love to see the thermals and costs of OceanLight. The SW26010-Pro chip could burn very hot, to be sure, and run up the electric bill for power and cooling, but if SMIC [China's largest foundry] can get good yield on 14 nanometer processes, the chip could be a lot less expensive to make than, say, a massive GPU accelerator from Nvidia, AMD, or Intel. (It's hard to say.) Regardless, having indigenous parts matters more than power efficiency for China right now, and into its future, and we said as much last summer when contemplating China's long road to IT independence. Imagine what China can do with a shrink to 7 nanometer processes when SMIC delivers them — apparently not even using extreme ultraviolet (EUV) light — many years hence....

The bottom line is that the National Research Center of Parallel Computer Engineering and Technology (known as NRCPC), working with SMIC, has had an exascale machine in the field for a year already. (There are two, in fact.) Can the United States say that right now? No it can't.

AI

How AI Can Make Weather Forecasting Better and Cheaper (bloomberg.com) 23

An anonymous reader quotes a report from Bloomberg: In early February a black box crammed with computer processors took a flight from California to Uganda. The squat, 4-foot-high box resembled a giant stereo amp. Once settled into place in Kampala, its job was to predict the weather better than anything the nation had used before (Warning: source may be paywalled; alternative source). The California startup that shipped the device, Atmo AI, plans by this summer to swap it out for a grander invention: a sleek, metallic supercomputer standing 8 feet tall and packing in 20 times more power. "It's meant to be the iPhone of global meteorology," says Alexander Levy, Atmo's co-founder and chief executive officer. That's a nod to Apple's design cred and market strategy: In many countries, consumers who'd never owned desktop computers bought smartphones in droves. Similarly, Atmo says, countries without the pricey supercomputers and data centers needed to make state-of-the-art weather forecasts -- effectively, every nation that's not a global superpower -- will pay for its cheaper device instead.

For its first customer, though, the Uganda National Meteorological Authority (UNMA), Atmo is sending its beta version, the plain black box. Prizing function over form seems wise for the urgent problem at hand. In recent years, Uganda has had landslides, floods, and a Biblical plague of locusts that devastated farms. The locusts came after sporadic drought and rain, stunning officials who didn't anticipate the swarms. "It became an eye-opener for us," says David Elweru, UNMA's acting executive director. Many nations facing such ravages lack the most modern tools to plan for the changing climate. Atmo says artificial intelligence programs are the answer. "Response begins with predictions," Levy says. "If we expect countries to react to events only after they've happened, we're dooming people to disaster and suffering." It's a novel approach. Meteorology poses considerable challenges for AI systems, and only a few weather authorities have experimented with it. Most countries haven't had the resources to try.

Ugandan officials signed a multi-year deal with Atmo but declined to share the terms. The UNMA picked the startup partly because its device was "way, way cheaper" than alternatives, according to Stephen Kaboyo, an investor advising Atmo in Uganda. Kaboyo spoke by phone in February, Kampala's dry season, as rain pelted the city. "We haven't seen this before," he said of the weather. "Who knows what is going to happen in the next three seasons?" [...] Atmo reports that its early tests have doubled the accuracy scores of baseline forecasts in Southeast Asia, where the startup is pursuing contracts. Initial tests on the ground in Uganda correctly predicted rainfall when other systems didn't, according to UNMA officials.

Supercomputing

Can Russia Bootstrap High-Performance Computing Clusters with Native Tech? (theregister.com) 53

"The events of recent days have taken us away from the stable and predictable development of mankind," argue two Moscow-based technology professors in Communications of the ACM, citing anticipated shortages of high-performance processors. But fortunately, they have a partial workarond...

One of the professors — Andrei Sukhov of HSE University in Moscow — explained their idea to the Register: In a timely piece Sukhov explains how Russian computer science teams are looking at building the next generation of clusters using older clustering technologies and a slew of open-source software for managing everything from code portability to parallelization as well as standards including PCIe 3.0, USB 4, and even existing Russian knock-off buses inspired by Infiniband (Angara ES8430).... While all the pieces might be in place, there is still the need to manufacture new boards, a problem Sukhov said can be routed around by using wireless protocols as the switching mechanism between processors, even though the network latency hit will be subpar, making it difficult to do any true tightly coupled, low-latency HPC simulations (which come in handy in areas like nuclear weapons simulations, as just one example).

"Given that the available mobile systems-on-chip are on the order of 100 Gflops, performance of several teraflops for small clusters of high-performance systems-on-chip is quite achievable," Sukhov added. "The use of standard open operating systems, such as Linux, will greatly facilitate the use of custom applications and allow such systems to run in the near future. It is possible that such clusters can be heterogeneous, including different systems-on-chip for different tasks (or, for example, FPGAs to create specialized on-the-fly configurable accelerators for specific tasks)...."

As he told The Register in a short exchange following the article, "Naturally, it will be impossible to make a new supercomputer in Russia in the coming years. Nevertheless, it is quite possible to close all the current needs in computing and data processing using the approach we have proposed. Especially if we apply hardware acceleration to tasks, depending on their type," he adds.... "During this implementation, software solutions and new protocols for data exchange, as well as computing technologies, will be worked out."

As for Russia's existing supercomputers, "no special problems are foreseen," Sukhov added. "These supercomputers are based on Linux and can continue to operate without the support of the companies that supplied the hardware and software."

Thanks to Slashdot reader katydid77 for sharing the article.
Technology

Climate Scientists Encounter Limits of Computer Models, Bedeviling Policy (wsj.com) 219

magzteel shares a report: For almost five years, an international consortium of scientists was chasing clouds, determined to solve a problem that bedeviled climate-change forecasts for a generation: How do these wisps of water vapor affect global warming? They reworked 2.1 million lines of supercomputer code used to explore the future of climate change, adding more-intricate equations for clouds and hundreds of other improvements. They tested the equations, debugged them and tested again. The scientists would find that even the best tools at hand can't model climates with the sureness the world needs as rising temperatures impact almost every region. When they ran the updated simulation in 2018, the conclusion jolted them: Earth's atmosphere was much more sensitive to greenhouse gases than decades of previous models had predicted, and future temperatures could be much higher than feared -- perhaps even beyond hope of practical remedy. "We thought this was really strange," said Gokhan Danabasoglu, chief scientist for the climate-model project at the Mesa Laboratory in Boulder at the National Center for Atmospheric Research, or NCAR. "If that number was correct, that was really bad news." At least 20 older, simpler global-climate models disagreed with the new one at NCAR, an open-source model called the Community Earth System Model 2, or CESM2, funded mainly by the U.S. National Science Foundation and arguably the world's most influential climate program. Then, one by one, a dozen climate-modeling groups around the world produced similar forecasts. "It was not just us," Dr. Danabasoglu said.

The scientists soon concluded their new calculations had been thrown off kilter by the physics of clouds in a warming world, which may amplify or damp climate change. "The old way is just wrong, we know that," said Andrew Gettelman, a physicist at NCAR who specializes in clouds and helped develop the CESM2 model. "I think our higher sensitivity is wrong too. It's probably a consequence of other things we did by making clouds better and more realistic. You solve one problem and create another." Since then the CESM2 scientists have been reworking their climate-change algorithms using a deluge of new information about the effects of rising temperatures to better understand the physics at work. They have abandoned their most extreme calculations of climate sensitivity, but their more recent projections of future global warming are still dire -- and still in flux. As world leaders consider how to limit greenhouse gases, they depend heavily on what computer climate models predict. But as algorithms and the computers they run on become more powerful -- able to crunch far more data and do better simulations -- that very complexity has left climate scientists grappling with mismatches among competing computer models.

The Media

Are TED Talks Just Propaganda For the Technocracy? (thedriftmag.com) 151

"People are still paying between $5,000 and $50,000 to attend the annual flagship TED conference. In 2021," notes The Drift magazine, noting last year's event was held in Monterey, California. "Amid wildfires and the Delta surge, its theme was 'the case for optimism.'"

The magazine makes the case that over the last decade TED talks have been "endlessly re-articulating tech's promises without any serious critical reflection." And they start with how Bill Gates told an audience in 2015 that "we can be ready for the next epidemic." Gates's popular and well-shared TED talk — viewed millions of times — didn't alter the course of history. Neither did any of the other "ideas worth spreading" (the organization's tagline) presented at the TED conference that year — including Monica Lewinsky's massively viral speech about how to stop online bullying through compassion and empathy, or a Google engineer's talk about how driverless cars would make roads smarter and safer in the near future. In fact, seven years after TED 2015, it feels like we are living in a reality that is the exact opposite of the future envisioned that year.....

At the start of the pandemic, I noticed people sharing Gates's 2015 talk. The general sentiment was one of remorse and lamentation: the tech-prophet had predicted the future for us! If only we had heeded his warning! I wasn't so sure. It seems to me that Gates's prediction and proposed solution are at least part of what landed us here. I don't mean to suggest that Gates's TED talk is somehow directly responsible for the lack of global preparedness for Covid. But it embodies a certain story about "the future" that TED talks have been telling for the past two decades — one that has contributed to our unending present crisis.

The story goes like this: there are problems in the world that make the future a scary prospect. Fortunately, though, there are solutions to each of these problems, and the solutions have been formulated by extremely smart, tech-adjacent people. For their ideas to become realities, they merely need to be articulated and spread as widely as possible. And the best way to spread ideas is through stories.... In other words, in the TED episteme, the function of a story isn't to transform via metaphor or indirection, but to actually manifest a new world. Stories about the future create the future. Or as Chris Anderson, TED's longtime curator, puts it, "We live in an era where the best way to make a dent on the world... may be simply to stand up and say something." And yet, TED's archive is a graveyard of ideas. It is a seemingly endless index of stories about the future — the future of science, the future of the environment, the future of work, the future of love and sex, the future of what it means to be human — that never materialized. By this measure alone, TED, and its attendant ways of thinking, should have been abandoned.

But the article also notes that TED's philosophy became "a magnet for narcissistic, recognition-seeking characters and their Theranos-like projects." (In 2014 Elizabeth Holmes herself spoke at a medical-themed TED conference.) And since 2009 the TEDx franchise lets licensees use the brand platform to stage independent events — which is how at a 2010 TEDx event, Randy Powell gave his infamous talk about vortex-based mathematics which he said would "create inexhaustible free energy, end all diseases, produce all food, travel anywhere in the universe, build the ultimate supercomputer and artificial intelligence, and make obsolete all existing technology."

Yet these are all just symptoms of a larger problem, the article ultimately argues. "As the most visible and influential public speaking platform of the first two decades of the twenty-first century, it has been deeply implicated in broadcasting and championing the Silicon Valley version of the future. TED is probably best understood as the propaganda arm of an ascendant technocracy."
AI

Meta Unveils New AI Supercomputer (wsj.com) 48

An anonymous reader quotes a report from The Wall Street Journal: Meta said Monday that its research team built a new artificial intelligence supercomputer that the company maintains will soon be the fastest in the world. The supercomputer, the AI Research SuperCluster, was the result of nearly two years of work, often conducted remotely during the height of the pandemic, and led by the Facebook parent's AI and infrastructure teams. Several hundred people, including researchers from partners Nvidia, Penguin Computing and Pure Storage, were involved in the project, the company said.

Meta, which announced the news in a blog post Monday, said its research team currently is using the supercomputer to train AI models in natural-language processing and computer vision for research. The aim is to boost capabilities to one day train models with more than a trillion parameters on data sets as large as an exabyte, which is roughly equivalent to 36,000 years of high-quality video. "The experiences we're building for the metaverse require enormous compute power, and RSC will enable new AI models that can learn from trillions of examples, understand hundreds of languages, and more," Meta CEO Mark Zuckerberg said in a statement provided to The Wall Street Journal. Meta's AI supercomputer houses 6,080 Nvidia graphics-processing units, putting it fifth among the fastest supercomputers in the world, according to Meta.
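
As a quick plausibility check of that equivalence (rough arithmetic only), an exabyte spread over 36,000 years of continuous video implies a bitrate in the neighborhood of typical high-definition streaming:

```python
exabyte_bytes = 1e18
years = 36_000
seconds = years * 365.25 * 24 * 3600
bitrate_mbps = exabyte_bytes * 8 / seconds / 1e6
print(f"implied video bitrate: ~{bitrate_mbps:.0f} Mbit/s")   # about 7 Mbit/s
```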

By mid-summer, when the AI Research SuperCluster is fully built, it will house some 16,000 GPUs, becoming the fastest AI supercomputer in the world, Meta said. The company declined to comment on the location of the facility or the cost. [...] Eventually the supercomputer will help Meta's researchers build AI models that can work across hundreds of languages, analyze text, images and video together and develop augmented reality tools, the company said. The technology also will help Meta more easily identify harmful content and will aim to help Meta researchers develop artificial-intelligence models that think like the human brain and support rich, multidimensional experiences in the metaverse. "In the metaverse, it's one hundred percent of the time, a 3-D multi-sensorial experience, and you need to create artificial-intelligence agents in that environment that are relevant to you," said Jerome Pesenti, vice president of AI at Meta.

Data Storage

University Loses 77TB of Research Data Due To Backup Error (bleepingcomputer.com) 74

An anonymous reader quotes a report from BleepingComputer: Kyoto University in Japan has lost about 77TB of research data due to an error in the backup system of its Hewlett-Packard supercomputer. The incident occurred between December 14 and 16, 2021, and resulted in 34 million files from 14 research groups being wiped from the system and the backup file. After investigating to determine the impact of the loss, the university concluded that the work of four of the affected groups could no longer be restored. All affected users have been individually notified of the incident via email, but no details were published on the type of work that was lost.

At the moment, the backup process has been stopped. To prevent data loss from happening again, the university has scrapped the backup system and plans to apply improvements and re-introduce it in January 2022. The plan is to also keep incremental backups -- which cover files that have been changed since the last backup happened -- in addition to full backup mirrors. While the details of the type of data that was lost weren't revealed to the public, supercomputer research costs several hundred US dollars per hour, so this incident must have caused distress to the affected groups. Kyoto University is considered one of Japan's most important research institutions and enjoys the second-largest scientific research investments from national grants. Its research excellence and importance are particularly distinctive in the area of chemistry, where it ranks fourth in the world, while it also contributes to biology, pharmacology, immunology, material science, and physics.
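
For readers unfamiliar with the distinction, an incremental backup copies only what has changed since the previous run instead of re-mirroring everything. A minimal, generic sketch of the idea in Python (the paths are hypothetical, and this is not Kyoto's or HPE's actual tooling):

```python
import shutil
import time
from pathlib import Path

def incremental_backup(source: Path, dest: Path, last_backup_time: float) -> int:
    """Copy only files modified since the previous backup run."""
    copied = 0
    for path in source.rglob("*"):
        if path.is_file() and path.stat().st_mtime > last_backup_time:
            target = dest / path.relative_to(source)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, target)
            copied += 1
    return copied

# Example: back up everything changed in the last 24 hours (hypothetical paths).
n = incremental_backup(Path("/data/research"), Path("/backup/incr-2021-12-16"),
                       last_backup_time=time.time() - 24 * 3600)
print(f"copied {n} changed files")
```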

Businesses

US To Blacklist Eight More Chinese Companies, Including Drone Maker DJI (reuters.com) 115

schwit1 shares a report from the Financial Times: The US Treasury will put DJI and the other groups on its Chinese military-industrial complex companies blacklist on Thursday (Warning: source may be paywalled; alternative source), according to two people briefed on the move. US investors are barred from taking financial stakes in the 60 Chinese groups already on the blacklist. The measure marks the latest effort by President Biden to punish China for its repression of Uyghurs and other Muslim ethnic minorities in the north-western Xinjiang region.

The other Chinese companies that will be blacklisted on Thursday include Megvii, SenseTime's main rival, which last year halted plans to list in Hong Kong after it was put on a separate US blacklist, and Dawning Information Industry, a supercomputer manufacturer that operates cloud computing services in Xinjiang. Also to be added are CloudWalk Technology, a facial recognition software company, Xiamen Meiya Pico, a cyber security group that works with law enforcement, Yitu Technology, an artificial intelligence company, Leon Technology, a cloud computing company, and NetPosa Technologies, a producer of cloud-based surveillance systems. DJI and Megvii are not publicly traded, but Dawning Information, which is also known as Sugon, is listed in Shanghai, and Leon, NetPosa and Meiya Pico trade in Shenzhen. All eight companies are already on the commerce department's "entity list," which restricts US companies from exporting technology or products from America to the Chinese groups without obtaining a government license.

Classic Games (Games)

Magnus Carlsen Wins 8th World Chess Championship. What Makes Him So Great? (espn.com) 42

"On Friday, needing just one point against Ian Nepomniachtchi to defend his world champion status, Magnus Carlsen closed the match out with three games to spare, 7.5-3.5," ESPN reports. "He's been the No 1 chess player in the world for a decade now...

"In a technologically flat, AI-powered chess world where preparation among the best players can be almost equal, what really makes one guy stand out with his dominance and genius for this long...? American Grandmaster and chess commentator Robert Hess describes Carlsen as the "hardest worker you'll find" both at the board and in preparation. "He is second-to-none at evading common theoretical lines and prefers to outplay his opponents in positions where both players must rely on their understanding of the current dynamics," Hess says...

At the start of this year, news emerged of Nepomniachtchi and his team having access to a supercomputer cluster, Zhores, from the Moscow-based Skolkovo Institute of Science and Technology. He was using it for his Candidates tournament preparation, a tournament he went on to win. He gained the challenger status for the World Championship and the Zhores supercomputer reportedly continued to be a mainstay in his team. Zhores was specifically designed to solve problems in machine learning and data-based modeling with a capacity of one petaflops.... Players use computers and open-source AI engines to analyze openings, bolster preparation, scour for a bank of new ideas and to go down lines that the other is unlikely to have explored.

The tiny detail, though, is that against Carlsen it may not be enough. He is notorious for drawing opponents into obscure positions, hurling them out of preparation and into the deep end, often leading to a complex struggle. Whether you have the fastest supercomputer on your team then becomes almost irrelevant. It comes down to a battle of intuition, tactics and staying power, human to human. In such scenarios, almost always, Carlsen comes out on top. "[Nepomniachtchi] couldn't show his best chess...it's a pity for the excitement of the match," he said later. "I think that's what happens when you get into difficult situations...all the preparation doesn't necessarily help you if you can't cope in the moment...."

Soon after his win on Friday, Carlsen announced he'd be "celebrating" by playing the World Rapid and Blitz Championships in Warsaw, a fortnight from now. He presently holds both those titles...

The article also remembers what happened in 2018 when Carlsen was asked to name his favorite chess player from the past. Carlsen's answer?

"Probably myself, like, three or four years ago."
Robotics

World's First Living Robots Can Now Reproduce, Scientists Say (cnn.com) 77

The US scientists who created the first living robots say the life forms, known as xenobots, can now reproduce -- and in a way not seen in plants and animals. CNN reports: Formed from the stem cells of the African clawed frog (Xenopus laevis) from which they take their name, xenobots are less than a millimeter (0.04 inches) wide. The tiny blobs were first unveiled in 2020 after experiments showed that they could move, work together in groups and self-heal. Now the scientists who developed them at the University of Vermont, Tufts University and Harvard University's Wyss Institute for Biologically Inspired Engineering said they have discovered an entirely new form of biological reproduction different from any animal or plant known to science.

[T]hey found that the xenobots, which were initially sphere-shaped and made from around 3,000 cells, could replicate. But it happened rarely and only in specific circumstances. The xenobots used "kinematic replication" -- a process that is known to occur at the molecular level but has never been observed before at the scale of whole cells or organisms [...]. With the help of artificial intelligence, the researchers then tested billions of body shapes to make the xenobots more effective at this type of replication. The supercomputer came up with a C-shape that resembled Pac-Man, the 1980s video game. They found it was able to find tiny stem cells in a petri dish, gather hundreds of them inside its mouth, and a few days later the bundle of cells became new xenobots.
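
The shape search described here is, in broad strokes, an evolutionary optimization: propose candidate body shapes in simulation, score how well each replicates, then keep and mutate the best. The sketch below illustrates only that generic loop; the fitness function is a made-up stand-in, not the team's physics simulator:

```python
import random

def replication_score(shape):
    # Placeholder fitness: stands in for a physics simulation measuring how many
    # offspring piles a body with this (made-up) parameterization produces.
    curvature, mouth_width = shape
    return -(curvature - 0.7) ** 2 - (mouth_width - 0.4) ** 2

def mutate(shape, scale=0.05):
    return tuple(max(0.0, min(1.0, p + random.gauss(0, scale))) for p in shape)

# Start from random sphere-ish candidates and evolve toward better replicators.
population = [(random.random(), random.random()) for _ in range(50)]
for generation in range(200):
    population.sort(key=replication_score, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = max(population, key=replication_score)
print(f"best candidate shape parameters: {best}")
```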

The xenobots are very early technology -- think of a 1940s computer -- and don't yet have any practical applications. However, this combination of molecular biology and artificial intelligence could potentially be used in a host of tasks in the body and the environment, according to the researchers. This may include things like collecting microplastics in the oceans, inspecting root systems and regenerative medicine. While the prospect of self-replicating biotechnology could spark concern, the researchers said that the living machines were entirely contained in a lab and easily extinguished, as they are biodegradable and regulated by ethics experts.
"Most people think of robots as made of metals and ceramics but it's not so much what a robot is made from but what it does, which is act on its own on behalf of people," said Josh Bongard, a computer science professor and robotics expert at the University of Vermont and lead author of the study, writing in the Proceedings of the National Academy of Sciences. "In that way it's a robot but it's also clearly an organism made from genetically unmodified frog cell."

"The AI didn't program these machines in the way we usually think about writing code. It shaped and sculpted and came up with this Pac-Man shape," Bongard said. "The shape is, in essence, the program. The shape influences how the xenobots behave to amplify this incredibly surprising process."
Supercomputing

Japan's Fugaku Retains Title As World's Fastest Supercomputer (datacenterdynamics.com) 13

According to a report from Nikkei Asia (paywalled), "The Japanese-made Fugaku captured its fourth consecutive title as the world's fastest supercomputer on Tuesday, although a rival from the U.S. or China is poised to steal the crown as soon as next year." From a report: But while Fugaku is the world's most powerful public supercomputer, at 442 petaflops, China is believed to secretly operate two exascale (1,000 petaflops) supercomputers, which were launched earlier this year. The top 10 list did not change much since the last report six months ago, with only one new addition -- a Microsoft Azure system called Voyager-EUS2. Voyager, featuring AMD Epyc CPUs and Nvidia A100 GPUs, achieved 30.05 petaflops, making it the tenth most powerful supercomputer in the world.

The other systems remained in the same position -- after Japan's Arm-based Fugaku comes the US Summit system, an IBM Power and Nvidia GPU supercomputer capable of 148 petaflops. The similarly-architected 94 petaflops US Sierra system is next. Then comes what is officially China's most powerful supercomputer, the 93 petaflops Sunway TaihuLight, which features Sunway chips. The Biden administration sanctioned the company earlier this year.
You can read a summary of the systems in the Top10 here.
Software

'If Apple Keeps Letting Its Software Slip, the Next Big Thing Won't Matter' (macworld.com) 116

If Apple can't improve the reliability of its software, the next big thing won't matter, argues Dan Moren in an opinion piece for Macworld. From the report: Uneven distribution: As sci-fi writer William Gibson famously said, "the future is already here -- it's just not evenly distributed." While Gibson's comment resonates mostly on a socio-economic level that is borne out by Apple's not inexpensive technology, it's also embodied geographically by the company's work: if you're interested, you can see which Apple features are available in which regions. Many of these, of course, are due to restrictions and laws in specific regions or places where, say, Apple has not prioritized language localization. But some of them are cases where features have been rolled out only slowly to certain places. [...] It's surely less exciting for Apple to think about rolling out these (in some cases years old) features, especially those which might require a large degree of legwork, to various places than it is for the company to demonstrate its latest shiny feature, but it also means that sometimes these features don't make it to many, if not most of the users of its devices. Uneven distribution, indeed.

To error is machine: It's happened to pretty much any Apple device user: You go to use a feature and it just doesn't work. Sometimes there's no explanation as to why; other times, there's just a cryptic error message that provides no help at all. [...]

Shooting trouble: Sometimes what we're dealing with in the aforementioned situations are what we call "edge cases." Apple engineers surely do their best to test their features with a variety of hardware, in different places, with different settings. [...] Nobody expects Apple to catch everything, but the question remains: when these problems do arise, what do we do about them? One thing Apple could improve is the ease for users to report issues they encounter. Too often, I see missives posted on Apple discussion boards that encourage people to get in touch with Apple support... which often means a lengthy reiteration of the old troubleshooting canards. While these can sometimes solve problems, if not actually explain them, it's not a process that most consumers are likely to go through. And when those steps don't resolve the issues, users are often left with a virtual shrug.

Likewise, while Apple does provide a place to send feedback about products, it's explicitly not a way to report problems. Making it easier for users to report bugs and unexpected behavior would go a long way to helping owners of Apple products feel like they're not simply shouting their frustrations into a void (aka Twitter). If Apple can't improve the reliability of its software [...] it at least owes it to its users to create more robust resources for helping them help themselves. Because there's nothing more frustrating than not understanding why a miraculous device that can contact people around the world instantaneously, run incredibly powerful games, and crunch data faster than a supercomputer of yesteryear sometimes can't do something as simple as export a video of a vacation.
While Moren focuses primarily on unfinished features to help make his case, "there is also a huge problem with things being touched for no reason and making them worse," says HN reader makecheck. "When handed what must be a mountain of bugs and unfinished items, why the hell did they prioritize things like breaking notifications and Safari tabs, for instance? They're in a position where engineering resources desperately need to be closing gaps, not creating huge new ones."

An example of this would be the current UX of notifications. "A notification comes up, I hover and wait for the cross to appear and click it," writes noneeeed. "But then some time later I unlock my machine or something happens and apparently all my notifications are still there for some reason and I have to clear them again, only this time they are in groups and I have to clear multiple groups."

"Don't get me started on the new iOS podcast app," adds another reader.
China

Have Scientists Disproven Google's Quantum Supremacy Claim? (scmp.com) 35

Slashdot reader AltMachine writes: In October 2019, Google said its Sycamore processor was the first to achieve quantum supremacy by completing a task in three minutes and 20 seconds that would have taken the best classical supercomputer, IBM's Summit, 10,000 years. That claim — particularly how Google scientists arrived at the "10,000 years" conclusion — has been questioned by some researchers, but the counterclaim itself was not definitive.

Now though, in a paper to be submitted to a scientific journal for peer review, scientists at the Institute of Theoretical Physics under the Chinese Academy of Sciences said their algorithm on classical computers completed the simulation for the Sycamore quantum circuits [possibly paywalled; alternative source of the same article] "in about 15 hours using 512 graphics processing units (GPUs)" at a higher fidelity than Sycamore's. Further, the team said "if our simulation of the quantum supremacy circuits can be implemented in an upcoming exaflop supercomputer with high efficiency, in principle, the overall simulation time can be reduced to a few dozens of seconds, which is faster than Google's hardware experiments".
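
Taking the stated figures at face value, the projected jump from the 512-GPU cluster to an exascale machine implies roughly a thousandfold effective speedup, which would also undercut Sycamore's 200-second hardware run (the 40-second figure below is an assumed reading of "a few dozens of seconds"):

```python
gpu_cluster_seconds = 15 * 3600   # reported 15-hour simulation on 512 GPUs
exascale_seconds = 40             # assumed value for "a few dozens of seconds"
sycamore_seconds = 3 * 60 + 20    # Sycamore's 3 minutes 20 seconds

print(f"implied speedup on an exaflop machine: ~{gpu_cluster_seconds / exascale_seconds:.0f}x")
print("faster than Sycamore's hardware run:", exascale_seconds < sycamore_seconds)
```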

Since China unveiled a photonic quantum computer in December 2020 that solved a Gaussian boson sampling problem in 200 seconds -- a task that would have taken 600 million years on a classical computer -- disproving Sycamore's claim would make China the first country to have achieved quantum supremacy.

Supercomputing

China's New Quantum Computer Has 1 Million Times the Power of Google's (interestingengineering.com) 143

Physicists in China claim they've constructed two quantum computers with performance speeds that outrival competitors in the U.S., debuting a superconducting machine, in addition to an even speedier one that uses light photons to obtain unprecedented results, according to recent studies published in the peer-reviewed journals Physical Review Letters and Science Bulletin. Interesting Engineering reports: The photonic machine, called Jiuzhang 2, can calculate in a single millisecond a task that the fastest conventional computer in the world would take a mind-numbing 30 trillion years to do. The breakthrough was revealed during an interview with the research team that was broadcast on China's state-owned CCTV on Tuesday, which could make the news suspect. But with two peer-reviewed papers, it's important to take this seriously. Pan Jianwei, lead researcher of the studies, said that Zuchongzhi 2, which is a 66-qubit programmable superconducting quantum computer, is an incredible 10 million times faster than Google's 55-qubit Sycamore, making China's new machine the fastest in the world, and the first to beat Google's in two years.

The Zuchongzhi 2 is an improved version of a previous machine, completed three months ago. The Jiuzhang 2, a different quantum computer that runs on light, has fewer applications but can run a blinding 100 sextillion times faster than the biggest conventional computers of today. In case you missed it, that's a one with 23 zeroes behind it. But while the features of these new machines hint at a computing revolution, they won't hit the marketplace anytime soon. As things stand, the two machines can only operate in pristine environments, and only for hyper-specific tasks. And even with special care, they still make lots of errors. "In the next step we hope to achieve quantum error correction with four to five years of hard work," said Professor Pan of the University of Science and Technology of China, in Hefei, which is in the eastern province of Anhui.
