Supercomputing

'Quantum Computing Has a Hype Problem' (technologyreview.com) 48

"A reputed expert in the quantum computing field puts it in black and white: as of today, quantum computing is a paper tiger, and nobody knows when (if ever) it will become commercially practical," writes Slashdot reader OneHundredAndTen. "In the meantime, the hype continues."

In an opinion piece for MIT Technology Review, Sankar Das Sarma, a "pro-quantum-computing" physicist who has "published more than 100 technical papers on the subject," says he's disturbed by some of the quantum computing hype he sees today, "particularly when it comes to claims about how it will be commercialized." Here's an excerpt from his article: Established applications for quantum computers do exist. The best known is Peter Shor's 1994 theoretical demonstration that a quantum computer can solve the hard problem of finding the prime factors of large numbers exponentially faster than all classical schemes. Prime factorization is at the heart of breaking the universally used RSA-based cryptography, so Shor's factorization scheme immediately attracted the attention of national governments everywhere, leading to considerable quantum-computing research funding. The only problem? Actually making a quantum computer that could do it. That depends on implementing an idea pioneered by Shor and others called quantum-error correction, a process to compensate for the fact that quantum states disappear quickly because of environmental noise (a phenomenon called "decoherence"). In 1994, scientists thought that such error correction would be easy because physics allows it. But in practice, it is extremely difficult.

The most advanced quantum computers today have dozens of decohering (or "noisy") physical qubits. Building a quantum computer that could crack RSA codes out of such components would require many millions if not billions of qubits. Only tens of thousands of these would be used for computation -- so-called logical qubits; the rest would be needed for error correction, compensating for decoherence. The qubit systems we have today are a tremendous scientific achievement, but they take us no closer to having a quantum computer that can solve a problem that anybody cares about. It is akin to trying to make today's best smartphones using vacuum tubes from the early 1900s. You can put 100 tubes together and establish the principle that if you could somehow get 10 billion of them to work together in a coherent, seamless manner, you could achieve all kinds of miracles. What, however, is missing is the breakthrough of integrated circuits and CPUs leading to smartphones -- it took 60 years of very difficult engineering to go from the invention of transistors to the smartphone with no new physics involved in the process.
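As a rough illustration of where those qubit counts come from (our back-of-envelope arithmetic, not Sarma's), here is a short Python sketch using the standard surface-code rule of thumb: the logical error rate scales roughly as 0.1 * (p / p_th)^((d + 1) / 2) and each logical qubit costs on the order of 2 * d^2 physical qubits. The error rate, threshold, and target budget below are illustrative assumptions.

    # Back-of-envelope estimate of error-correction overhead (illustrative parameters only).
    p, p_th = 1e-3, 1e-2          # assumed physical error rate and surface-code threshold
    target_logical_err = 1e-15    # assumed per-operation error budget for a factoring-scale circuit
    n_logical = 20_000            # "tens of thousands" of logical qubits, per the excerpt

    d = 3
    while 0.1 * (p / p_th) ** ((d + 1) / 2) > target_logical_err:
        d += 2                    # surface-code distances are odd

    physical_per_logical = 2 * d ** 2
    total_physical = n_logical * physical_per_logical
    print(f"code distance d = {d}")                                        # 27
    print(f"~{physical_per_logical} physical qubits per logical qubit")    # ~1,458
    print(f"~{total_physical / 1e6:.0f} million physical qubits in total") # ~29 million

With these (deliberately optimistic) assumptions the total already lands in the tens of millions of physical qubits, consistent with the "many millions if not billions" figure in the excerpt.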

China

How China Built an Exascale Supercomputer Out of Old 14nm Tech (nextplatform.com) 29

Slashdot reader katydid77 shares a report from the supercomputing site The Next Platform: If you need any proof that it doesn't take the most advanced chip manufacturing processes to create an exascale-class supercomputer, you need look no further than the Sunway "OceanLight" system housed at the National Supercomputing Center in Wuxi, China. Some of the architectural details of the OceanLight supercomputer came to our attention as part of a paper published by Alibaba Group, Tsinghua University, DAMO Academy, Zhejiang Lab, and Beijing Academy of Artificial Intelligence, which is running a pretrained machine learning model called BaGuaLu, across more than 37 million cores and 14.5 trillion parameters (presumably with FP32 single precision), and has the capability to scale to 174 trillion parameters (and approaching what is called "brain-scale" where the number of parameters starts approaching the number of synapses in the human brain)....

Add it all up, and the 105 cabinet system tested on the BaGuaLu training model, with its 107,250 SW26010-Pro processors, had a peak theoretical performance of 1.51 exaflops. We like base 2 numbers and think that the OceanLight system probably scales to 160 cabinets, which would be 163,840 nodes and just under 2.3 exaflops of peak FP64 and FP32 performance. If it is only 120 cabinets (also a base 2 number), OceanLight will come in at 1.72 exaflops peak. But these rack scales are, once again, just hunches. If the 160 cabinet scale is the maximum for OceanLight, then China could best the performance of the 1.5 exaflops "Frontier" supercomputer being tuned up at Oak Ridge National Laboratories today and also extend beyond the peak theoretical performance of the 2 exaflops "Aurora" supercomputer coming to Argonne National Laboratory later this year — and maybe even further than the "El Capitan" supercomputer going into Lawrence Livermore National Laboratory in 2023 and expected to be around 2.2 exaflops to 2.3 exaflops according to the scuttlebutt.
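The cabinet-scaling hunches above can be reproduced from the quoted figures alone. A quick sketch (peak numbers only, so treat it as a rough check rather than a measurement):

    # Reproduce The Next Platform's scaling arithmetic from the quoted 105-cabinet figure.
    cabinets_tested = 105
    peak_tested_ef = 1.51                                        # exaflops, peak, as quoted

    per_cabinet_pf = peak_tested_ef * 1000 / cabinets_tested     # ~14.4 petaflops per cabinet
    for cabinets in (120, 160):
        print(f"{cabinets} cabinets -> ~{cabinets * per_cabinet_pf / 1000:.2f} exaflops peak")
    # roughly 1.73 EF at 120 cabinets and 2.30 EF at 160, matching the article's estimates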

We would love to see the thermals and costs of OceanLight. The SW26010-Pro chip could burn very hot, to be sure, and run up the electric bill for power and cooling, but if SMIC [China's largest foundry] can get good yield on 14 nanometer processes, the chip could be a lot less expensive to make than, say, a massive GPU accelerator from Nvidia, AMD, or Intel. (It's hard to say.) Regardless, having indigenous parts matters more than power efficiency for China right now, and into its future, and we said as much last summer when contemplating China's long road to IT independence. Imagine what China can do with a shrink to 7 nanometer processes when SMIC delivers them — apparently not even using extreme ultraviolet (EUV) light — many years hence....

The bottom line is that the National Research Center of Parallel Computer Engineering and Technology (known as NRCPC), working with SMIC, has had an exascale machine in the field for a year already. (There are two, in fact.) Can the United States say that right now? No it can't.

AI

How AI Can Make Weather Forecasting Better and Cheaper (bloomberg.com) 23

An anonymous reader quotes a report from Bloomberg: In early February a black box crammed with computer processors took a flight from California to Uganda. The squat, 4-foot-high box resembled a giant stereo amp. Once settled into place in Kampala, its job was to predict the weather better than anything the nation had used before (Warning: source may be paywalled; alternative source). The California startup that shipped the device, Atmo AI, plans by this summer to swap it out for a grander invention: a sleek, metallic supercomputer standing 8 feet tall and packing in 20 times more power. "It's meant to be the iPhone of global meteorology," says Alexander Levy, Atmo's co-founder and chief executive officer. That's a nod to Apple's design cred and market strategy: In many countries, consumers who'd never owned desktop computers bought smartphones in droves. Similarly, Atmo says, countries without the pricey supercomputers and data centers needed to make state-of-the-art weather forecasts -- effectively, every nation that's not a global superpower -- will pay for its cheaper device instead.

For its first customer, though, the Uganda National Meteorological Authority (UNMA), Atmo is sending its beta version, the plain black box. Prizing function over form seems wise for the urgent problem at hand. In recent years, Uganda has had landslides, floods, and a Biblical plague of locusts that devastated farms. The locusts came after sporadic drought and rain, stunning officials who didn't anticipate the swarms. "It became an eye-opener for us," says David Elweru, UNMA's acting executive director. Many nations facing such ravages lack the most modern tools to plan for the changing climate. Atmo says artificial intelligence programs are the answer. "Response begins with predictions," Levy says. "If we expect countries to react to events only after they've happened, we're dooming people to disaster and suffering." It's a novel approach. Meteorology poses considerable challenges for AI systems, and only a few weather authorities have experimented with it. Most countries haven't had the resources to try.

Ugandan officials signed a multi-year deal with Atmo but declined to share the terms. The UNMA picked the startup partly because its device was "way, way cheaper" than alternatives, according to Stephen Kaboyo, an investor advising Atmo in Uganda. Kaboyo spoke by phone in February, Kampala's dry season, as rain pelted the city. "We haven't seen this before," he said of the weather. "Who knows what is going to happen in the next three seasons?" [...] Atmo reports that its early tests have doubled the accuracy scores of baseline forecasts in Southeast Asia, where the startup is pursuing contracts. Initial tests on the ground in Uganda correctly predicted rainfall when other systems didn't, according to UNMA officials.

Supercomputing

Can Russia Bootstrap High-Performance Computing Clusters with Native Tech? (theregister.com) 53

"The events of recent days have taken us away from the stable and predictable development of mankind," argue two Moscow-based technology professors in Communications of the ACM, citing anticipated shortages of high-performance processors. But fortunately, they have a partial workarond...

One of the professors — Andrei Sukhov of HSE University in Moscow — explained their idea to the Register: In a timely piece Sukhov explains how Russian computer science teams are looking at building the next generation of clusters using older clustering technologies and a slew of open-source software for managing everything from code portability to parallelization as well as standards including PCIe 3.0, USB 4, and even existing Russian knock-off buses inspired by Infiniband (Angara ES8430).... While all the pieces might be in place, there is still the need to manufacture new boards, a problem Sukhov said can be routed around by using wireless protocols as the switching mechanism between processors, even though the network latency hit will be subpar, making it difficult to do any true tightly coupled, low-latency HPC simulations (which come in handy in areas like nuclear weapons simulations, as just one example).

"Given that the available mobile systems-on-chip are on the order of 100 Gflops, performance of several teraflops for small clusters of high-performance systems-on-chip is quite achievable," Sukhov added. "The use of standard open operating systems, such as Linux, will greatly facilitate the use of custom applications and allow such systems to run in the near future. It is possible that such clusters can be heterogeneous, including different systems-on-chip for different tasks (or, for example, FPGAs to create specialized on-the-fly configurable accelerators for specific tasks)...."

As he told The Register in a short exchange following the article, "Naturally, it will be impossible to make a new supercomputer in Russia in the coming years. Nevertheless, it is quite possible to close all the current needs in computing and data processing using the approach we have proposed. Especially if we apply hardware acceleration to tasks, depending on their type," he adds.... "During this implementation, software solutions and new protocols for data exchange, as well as computing technologies, will be worked out."

As for Russia's existing supercomputers, "no special problems are foreseen," Sukhov added. "These supercomputers are based on Linux and can continue to operate without the support of the companies that supplied the hardware and software."

Thanks to Slashdot reader katydid77 for sharing the article.
Technology

Climate Scientists Encounter Limits of Computer Models, Bedeviling Policy (wsj.com) 219

magzteel shares a report: For almost five years, an international consortium of scientists was chasing clouds, determined to solve a problem that bedeviled climate-change forecasts for a generation: How do these wisps of water vapor affect global warming? They reworked 2.1 million lines of supercomputer code used to explore the future of climate change, adding more-intricate equations for clouds and hundreds of other improvements. They tested the equations, debugged them and tested again. The scientists would find that even the best tools at hand can't model climates with the sureness the world needs as rising temperatures impact almost every region. When they ran the updated simulation in 2018, the conclusion jolted them: Earth's atmosphere was much more sensitive to greenhouse gases than decades of previous models had predicted, and future temperatures could be much higher than feared -- perhaps even beyond hope of practical remedy. "We thought this was really strange," said Gokhan Danabasoglu, chief scientist for the climate-model project at the Mesa Laboratory in Boulder at the National Center for Atmospheric Research, or NCAR. "If that number was correct, that was really bad news." At least 20 older, simpler global-climate models disagreed with the new one at NCAR, an open-source model called the Community Earth System Model 2, or CESM2, funded mainly by the U.S. National Science Foundation and arguably the world's most influential climate program. Then, one by one, a dozen climate-modeling groups around the world produced similar forecasts. "It was not just us," Dr. Danabasoglu said.

The scientists soon concluded their new calculations had been thrown off kilter by the physics of clouds in a warming world, which may amplify or damp climate change. "The old way is just wrong, we know that," said Andrew Gettelman, a physicist at NCAR who specializes in clouds and helped develop the CESM2 model. "I think our higher sensitivity is wrong too. It's probably a consequence of other things we did by making clouds better and more realistic. You solve one problem and create another." Since then the CESM2 scientists have been reworking their climate-change algorithms using a deluge of new information about the effects of rising temperatures to better understand the physics at work. They have abandoned their most extreme calculations of climate sensitivity, but their more recent projections of future global warming are still dire -- and still in flux. As world leaders consider how to limit greenhouse gases, they depend heavily on what computer climate models predict. But as algorithms and the computers they run on become more powerful -- able to crunch far more data and do better simulations -- that very complexity has left climate scientists grappling with mismatches among competing computer models.

The Media

Are TED Talks Just Propaganda For the Technocracy? (thedriftmag.com) 151

"People are still paying between $5,000 and $50,000 to attend the annual flagship TED conference. In 2021," notes The Drift magazine, noting last year's event was held in Monterey, California. "Amid wildfires and the Delta surge, its theme was 'the case for optimism.'"

The magazine makes the case that over the last decade TED talks have been "endlessly re-articulating tech's promises without any serious critical reflection." And they start with how Bill Gates told an audience in 2015 that "we can be ready for the next epidemic." Gates's popular and well-shared TED talk — viewed millions of times — didn't alter the course of history. Neither did any of the other "ideas worth spreading" (the organization's tagline) presented at the TED conference that year — including Monica Lewinsky's massively viral speech about how to stop online bullying through compassion and empathy, or a Google engineer's talk about how driverless cars would make roads smarter and safer in the near future. In fact, seven years after TED 2015, it feels like we are living in a reality that is the exact opposite of the future envisioned that year.....

At the start of the pandemic, I noticed people sharing Gates's 2015 talk. The general sentiment was one of remorse and lamentation: the tech-prophet had predicted the future for us! If only we had heeded his warning! I wasn't so sure. It seems to me that Gates's prediction and proposed solution are at least part of what landed us here. I don't mean to suggest that Gates's TED talk is somehow directly responsible for the lack of global preparedness for Covid. But it embodies a certain story about "the future" that TED talks have been telling for the past two decades — one that has contributed to our unending present crisis.

The story goes like this: there are problems in the world that make the future a scary prospect. Fortunately, though, there are solutions to each of these problems, and the solutions have been formulated by extremely smart, tech-adjacent people. For their ideas to become realities, they merely need to be articulated and spread as widely as possible. And the best way to spread ideas is through stories.... In other words, in the TED episteme, the function of a story isn't to transform via metaphor or indirection, but to actually manifest a new world. Stories about the future create the future. Or as Chris Anderson, TED's longtime curator, puts it, "We live in an era where the best way to make a dent on the world... may be simply to stand up and say something." And yet, TED's archive is a graveyard of ideas. It is a seemingly endless index of stories about the future — the future of science, the future of the environment, the future of work, the future of love and sex, the future of what it means to be human — that never materialized. By this measure alone, TED, and its attendant ways of thinking, should have been abandoned.

But the article also notes that TED's philosophy became "a magnet for narcissistic, recognition-seeking characters and their Theranos-like projects." (In 2014 Elizabeth Holmes herself spoke at a medical-themed TED conference.) And since 2009 the TEDx franchise lets licensees use the brand platform to stage independent events — which is how at a 2010 TEDx event, Randy Powell gave his infamous talk about vortex-based mathematics which he said would "create inexhaustible free energy, end all diseases, produce all food, travel anywhere in the universe, build the ultimate supercomputer and artificial intelligence, and make obsolete all existing technology."

Yet these are all just symptoms of a larger problem, the article ultimately argues. "As the most visible and influential public speaking platform of the first two decades of the twenty-first century, it has been deeply implicated in broadcasting and championing the Silicon Valley version of the future. TED is probably best understood as the propaganda arm of an ascendant technocracy."
AI

Meta Unveils New AI Supercomputer (wsj.com) 48

An anonymous reader quotes a report from The Wall Street Journal: Meta said Monday that its research team built a new artificial intelligence supercomputer that the company maintains will soon be the fastest in the world. The supercomputer, the AI Research SuperCluster, was the result of nearly two years of work, often conducted remotely during the height of the pandemic, and led by the Facebook parent's AI and infrastructure teams. Several hundred people, including researchers from partners Nvidia, Penguin Computing and Pure Storage, were involved in the project, the company said.

Meta, which announced the news in a blog post Monday, said its research team currently is using the supercomputer to train AI models in natural-language processing and computer vision for research. The aim is to boost capabilities to one day train models with more than a trillion parameters on data sets as large as an exabyte, which is roughly equivalent to 36,000 years of high-quality video. "The experiences we're building for the metaverse require enormous compute power and RSC will enable new AI models that can learn from trillions of examples, understand hundreds of languages, and more," Meta CEO Mark Zuckerberg said in a statement provided to The Wall Street Journal. Meta's AI supercomputer houses 6,080 Nvidia graphics-processing units, putting it fifth among the fastest supercomputers in the world, according to Meta.
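As a rough check of the "an exabyte is roughly 36,000 years of high-quality video" equivalence (our arithmetic, not Meta's), the implied bitrate works out to a plausible streaming-video rate:

    exabyte_bytes = 1e18
    seconds = 36_000 * 365.25 * 24 * 3600          # 36,000 years in seconds
    bytes_per_second = exabyte_bytes / seconds
    print(f"~{bytes_per_second / 1e3:.0f} KB/s  (~{bytes_per_second * 8 / 1e6:.0f} Mbit/s)")
    # ~880 KB/s, about 7 Mbit/s, a reasonable bitrate for "high-quality video"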

By mid-summer, when the AI Research SuperCluster is fully built, it will house some 16,000 GPUs, becoming the fastest AI supercomputer in the world, Meta said. The company declined to comment on the location of the facility or the cost. [...] Eventually the supercomputer will help Meta's researchers build AI models that can work across hundreds of languages, analyze text, images and video together and develop augmented reality tools, the company said. The technology also will help Meta more easily identify harmful content and will aim to help Meta researchers develop artificial-intelligence models that think like the human brain and support rich, multidimensional experiences in the metaverse. "In the metaverse, it's one hundred percent of the time, a 3-D multi-sensorial experience, and you need to create artificial-intelligence agents in that environment that are relevant to you," said Jerome Pesenti, vice president of AI at Meta.

Data Storage

University Loses 77TB of Research Data Due To Backup Error (bleepingcomputer.com) 74

An anonymous reader quotes a report from BleepingComputer: The Kyoto University in Japan has lost about 77TB of research data due to an error in the backup system of its Hewlett-Packard supercomputer. The incident occurred between December 14 and 16, 2021, and resulted in 34 million files from 14 research groups being wiped from the system and the backup file. After investigating to determine the impact of the loss, the university concluded that the work of four of the affected groups could no longer be restored. All affected users have been individually notified of the incident via email, but no details were published on the type of work that was lost.

At the moment, the backup process has been stopped. To prevent data loss from happening again, the university has scrapped the backup system and plans to apply improvements and re-introduce it in January 2022. The plan is to also keep incremental backups -- which cover files that have been changed since the last backup happened -- in addition to full backup mirrors. While the details of the type of data that was lost weren't revealed to the public, supercomputer research costs several hundreds of USD per hour, so this incident must have caused distress to the affected groups. The Kyoto University is considered one of Japan's most important research institutions and enjoys the second-largest scientific research investments from national grants. Its research excellence and importance is particularly distinctive in the area of chemistry, where it ranks fourth in the world, while it also contributes to biology, pharmacology, immunology, material science, and physics.
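For readers unfamiliar with the terminology, an incremental backup simply copies whatever has changed since the previous run, alongside periodic full mirrors. The minimal Python sketch below is purely generic and illustrative (the paths and timestamp file are hypothetical) and has nothing to do with the HPE system involved in the incident.

    import os, shutil, time
    from pathlib import Path

    SRC = Path("/data/research")          # hypothetical source tree
    DST = Path("/backup/incremental")     # hypothetical backup target
    STAMP = Path("/backup/.last_backup")  # records when the last backup ran

    last_run = float(STAMP.read_text()) if STAMP.exists() else 0.0
    start = time.time()

    for root, _dirs, files in os.walk(SRC):
        for name in files:
            src = Path(root) / name
            if src.stat().st_mtime > last_run:           # changed since the last backup
                dst = DST / src.relative_to(SRC)
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dst)                   # copy, preserving timestamps

    STAMP.write_text(str(start))  # advance the marker only after the copy pass completes

One relevant design point: the timestamp marker is advanced only after the copy pass finishes, so a failed run does not silently skip files on the next attempt.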

Businesses

US To Blacklist Eight More Chinese Companies, Including Drone Maker DJI (reuters.com) 115

schwit1 shares a report from the Financial Times: The US Treasury will put DJI and the other groups on its Chinese military-industrial complex companies blacklist on Thursday (Warning: source may be paywalled; alternative source), according to two people briefed on the move. US investors are barred from taking financial stakes in the 60 Chinese groups already on the blacklist. The measure marks the latest effort by President Biden to punish China for its repression of Uyghurs and other Muslim ethnic minorities in the north-western Xinjiang region.

The other Chinese companies that will be blacklisted on Thursday include Megvii, SenseTime's main rival that last year halted plans to list in Hong Kong after it was put on a separate US blacklist, and Dawning Information Industry, a supercomputer manufacturer that operates cloud computing services in Xinjiang. Also to be added are CloudWalk Technology, a facial recognition software company, Xiamen Meiya Pico, a cyber security group that works with law enforcement, Yitu Technology, an artificial intelligence company, Leon Technology, a cloud computing company, and NetPosa Technologies, a producer of cloud-based surveillance systems. DJI and Megvii are not publicly traded, but Dawning Information, which is also known as Sugon, is listed in Shanghai, and Leon, NetPosa and Meiya Pico trade in Shenzhen. All eight companies are already on the commerce department's "entity list," which restricts US companies from exporting technology or products from America to the Chinese groups without obtaining a government license.

Classic Games (Games)

Magnus Carlsen Wins 8th World Chess Championship. What Makes Him So Great? (espn.com) 42

"On Friday, needing just one point against Ian Nepomniachtchi to defend his world champion status, Magnus Carlsen closed the match out with three games to spare, 7.5-3.5," ESPN reports. "He's been the No 1 chess player in the world for a decade now...

"In a technologically flat, AI-powered chess world where preparation among the best players can be almost equal, what really makes one guy stand out with his dominance and genius for this long...? American Grandmaster and chess commentator Robert Hess describes Carlsen as the "hardest worker you'll find" both at the board and in preparation. "He is second-to-none at evading common theoretical lines and prefers to outplay his opponents in positions where both players must rely on their understanding of the current dynamics," Hess says...

At the start of this year, news emerged of Nepomniachtchi and his team having access to a supercomputer cluster, Zhores, from the Moscow-based Skolkovo Institute of Science and Technology. He was using it for his Candidates tournament preparation, a tournament he went on to win. He gained the challenger status for the World Championship and the Zhores supercomputer reportedly continued to be a mainstay in his team. Zhores was specifically designed to solve problems in machine learning and data-based modeling with a capacity of one Petaflop per second.... Players use computers and open-source AI engines to analyze openings, bolster preparation, scour for a bank of new ideas and to go down lines that the other is unlikely to have explored.

The tiny detail though is, that against Carlsen, it may not be enough. He has the notoriety of drawing opponents into obscure positions, hurling them out of preparation and into the deep end, often leading to a complex struggle. Whether you have the fastest supercomputer on your team then becomes almost irrelevant. It comes down to a battle of intuition, tactics and staying power, human to human. In such scenarios, almost always, Carlsen comes out on top. "[Nepomniachtchi] couldn't show his best chess...it's a pity for the excitement of the match," he said later, "I think that's what happens when you get into difficult situations...all the preparation doesn't necessarily help you if you can't cope in the moment...."

Soon after his win on Friday, Carlsen announced he'd be "celebrating" by playing the World Rapid and Blitz Championships in Warsaw, a fortnight from now. He presently holds both those titles...

The article also remembers what happened in 2018 when Carlsen was asked to name his favorite chess player from the past. Carlsen's answer?

"Probably myself, like, three or four years ago."
Robotics

World's First Living Robots Can Now Reproduce, Scientists Say (cnn.com) 77

The US scientists who created the first living robots say the life forms, known as xenobots, can now reproduce -- and in a way not seen in plants and animals. CNN reports: Formed from the stem cells of the African clawed frog (Xenopus laevis) from which it takes its name, xenobots are less than a millimeter (0.04 inches) wide. The tiny blobs were first unveiled in 2020 after experiments showed that they could move, work together in groups and self-heal. Now the scientists that developed them at the University of Vermont, Tufts University and Harvard University's Wyss Institute for Biologically Inspired Engineering said they have discovered an entirely new form of biological reproduction different from any animal or plant known to science.

[T]hey found that the xenobots, which were initially sphere-shaped and made from around 3,000 cells, could replicate. But it happened rarely and only in specific circumstances. The xenobots used "kinetic replication" -- a process that is known to occur at the molecular level but has never been observed before at the scale of whole cells or organisms [...]. With the help of artificial intelligence, the researchers then tested billions of body shapes to make the xenobots more effective at this type of replication. The supercomputer came up with a C-shape that resembled Pac-Man, the 1980s video game. They found it was able to find tiny stem cells in a petri dish, gather hundreds of them inside its mouth, and a few days later the bundle of cells became new xenobots.

The xenobots are very early technology -- think of a 1940s computer -- and don't yet have any practical applications. However, this combination of molecular biology and artificial intelligence could potentially be used in a host of tasks in the body and the environment, according to the researchers. This may include things like collecting microplastics in the oceans, inspecting root systems and regenerative medicine. While the prospect of self-replicating biotechnology could spark concern, the researchers said that the living machines were entirely contained in a lab and easily extinguished, as they are biodegradable and regulated by ethics experts.
"Most people think of robots as made of metals and ceramics but it's not so much what a robot is made from but what it does, which is act on its own on behalf of people," said Josh Bongard, a computer science professor and robotics expert at the University of Vermont and lead author of the study, writing in the Proceedings of the National Academy of Sciences. "In that way it's a robot but it's also clearly an organism made from genetically unmodified frog cell."

"The AI didn't program these machines in the way we usually think about writing code. It shaped and sculpted and came up with this Pac-Man shape," Bongard said. "The shape is, in essence, the program. The shape influences how the xenobots behave to amplify this incredibly surprising process."
Supercomputing

Japan's Fugaku Retains Title As World's Fastest Supercomputer (datacenterdynamics.com) 13

According to a report from Nikkei Asia (paywalled), "The Japanese-made Fugaku captured its fourth consecutive title as the world's fastest supercomputer on Tuesday, although a rival from the U.S. or China is poised to steal the crown as soon as next year." From a report: But while Fugaku is the world's most powerful public supercomputer, at 442 petaflops, China is believed to secretly operate two exascale (1,000 petaflops) supercomputers, which were launched earlier this year. The top 10 list did not change much since the last report six months ago, with only one new addition -- a Microsoft Azure system called Voyager-EUS2. Voyager, featuring AMD Epyc CPUs and Nvidia A100 GPUs, achieved 30.05 petaflops, making it the tenth most powerful supercomputer in the world.

The other systems remained in the same position - after Japan's Arm-based Fugaku comes the US Summit system, an IBM Power and Nvidia GPU supercomputer capable of 148 petaflops. The similarly-architected 94 petaflops US Sierra system is next. Then comes what is officially China's most powerful supercomputer, the 93 petaflops Sunway TaihuLight, which features Sunway chips. The Biden administration sanctioned the company earlier this year.
You can read a summary of the systems in the Top10 here.
Software

'If Apple Keeps Letting Its Software Slip, the Next Big Thing Won't Matter' (macworld.com) 116

If Apple can't improve the reliability of its software, the next big thing won't matter, argues Dan Moren in an opinion piece for Macworld. From the report: Uneven distribution: As sci-fi writer William Gibson famously said, "the future is already here -- it's just not evenly distributed." While Gibson's comment resonates mostly on a socio-economic level that is borne out by Apple's not inexpensive technology, it's also embodied geographically by the company's work: if you're interested, you can see which Apple features are available in which regions. Many of these, of course, are due to restrictions and laws in specific regions or places where, say, Apple has not prioritized language localization. But some of them are cases where features have been rolled out only slowly to certain places. [...] It's surely less exciting for Apple to think about rolling out these (in some cases years old) features, especially those which might require a large degree of legwork, to various places than it is for the company to demonstrate its latest shiny feature, but it also means that sometimes these features don't make it to many, if not most of the users of its devices. Uneven distribution, indeed.

To error is machine: It's happened to pretty much any Apple device user: You go to use a feature and it just doesn't work. Sometimes there's no explanation as to why; other times, there's just a cryptic error message that provides no help at all. [...]

Shooting trouble: Sometimes what we're dealing with in the aforementioned situations are what we call "edge cases." Apple engineers surely do their best to test their features with a variety of hardware, in different places, with different settings. [...] Nobody expects Apple to catch everything, but the question remains: when these problems do arise, what do we do about them? One thing Apple could improve is the ease for users to report issues they encounter. Too often, I see missives posted on Apple discussion boards that encourage people to get in touch with Apple support... which often means a lengthy reiteration of the old troubleshooting canards. While these can sometimes solve problems, if not actually explain them, it's not a process that most consumers are likely to go through. And when those steps don't resolve the issues, users are often left with a virtual shrug.

Likewise, while Apple does provide a place to send feedback about products, it's explicitly not a way to report problems. Making it easier for users to report bugs and unexpected behavior would go a long way to helping owners of Apple products feel like they're not simply shouting their frustrations into a void (aka Twitter). If Apple can't improve the reliability of its software [...] it at least owes it to its users to create more robust resources for helping them help themselves. Because there's nothing more frustrating than not understanding why a miraculous device that can contact people around the world instantaneously, run incredibly powerful games, and crunch data faster than a supercomputer of yesteryear sometimes can't do something as simple as export a video of a vacation.
While Moren focuses primarily on unfinished features to help make his case, "there is also a huge problem with things being touched for no reason and making them worse," says HN reader makecheck. "When handed what must be a mountain of bugs and unfinished items, why the hell did they prioritize things like breaking notifications and Safari tabs, for instance? They're in a position where engineering resources desperately need to be closing gaps, not creating huge new ones."

An example of this would be the current UX of notifications. "A notification comes up, I hover and wait for the cross to appear and click it," writes noneeeed. "But then some time later I unlock my machine or something happens and apparently all my notifications are still there for some reason and I have to clear them again, only this time they are in groups and I have to clear multiple groups."

"Don't get me started on the new iOS podcast app," adds another reader.
China

Have Scientists Disproven Google's Quantum Supremacy Claim? (scmp.com) 35

Slashdot reader AltMachine writes: In October 2019, Google said its Sycamore processor was the first to achieve quantum supremacy by completing a task in three minutes and 20 seconds that would have taken the best classical supercomputer, IBM's Summit, 10,000 years. That claim — particularly how Google scientists arrived at the "10,000 years" conclusion — has been questioned by some researchers, but the counterclaim itself was not definitive.

Now though, in a paper to be submitted to a scientific journal for peer review, scientists at the Institute of Theoretical Physics under the Chinese Academy of Sciences said their algorithm on classical computers completed the simulation for the Sycamore quantum circuits [possibly paywalled; alternative source of the same article] "in about 15 hours using 512 graphics processing units (GPUs)" at a higher fidelity than Sycamore's. Further, the team said "if our simulation of the quantum supremacy circuits can be implemented in an upcoming exaflop supercomputer with high efficiency, in principle, the overall simulation time can be reduced to a few dozens of seconds, which is faster than Google's hardware experiments".
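The scaling claim can be illustrated with some loose arithmetic (ours, not the paper's). The sustained throughput figures below are explicit assumptions, so the output only shows how the argument works, not what the hardware would actually deliver.

    hours_on_512_gpus = 15
    gpu_sustained_tflops = 2          # assumed sustained rate per GPU on this tensor-network workload
    cluster_pflops = 512 * gpu_sustained_tflops / 1000        # ~1 petaflop sustained (assumption)
    exaflop_sustained_pflops = 1000 * 0.8                     # "high efficiency" on an exaflop machine (assumption)

    speedup = exaflop_sustained_pflops / cluster_pflops
    seconds = hours_on_512_gpus * 3600 / speedup
    print(f"~{speedup:.0f}x speedup -> ~{seconds:.0f} seconds")
    # with these assumptions the 15-hour run collapses to roughly a minute,
    # in the same ballpark as the authors' "a few dozens of seconds"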

Since China already unveiled, in December 2020, a photonic quantum computer that solved a Gaussian boson sampling problem in 200 seconds which would have taken 600 million years on a classical computer, disproving Sycamore's claim would make China the first country to achieve quantum supremacy.

Supercomputing

China's New Quantum Computer Has 1 Million Times the Power of Google's (interestingengineering.com) 143

Physicists in China claim they've constructed two quantum computers with performance speeds that outrival competitors in the U.S., debuting a superconducting machine, in addition to an even speedier one that uses light photons to obtain unprecedented results, according to a recent study published in the peer-reviewed journals Physical Review Letters and Science Bulletin. Interesting Engineering reports: The supercomputer, called Jiuzhang 2, can calculate in a single millisecond a task that the fastest conventional computer in the world would take a mind-numbing 30 trillion years to do. The breakthrough was revealed during an interview with the research team, which was broadcast on China's state-owned CCTV on Tuesday, which could make the news suspect. But with two peer-reviewed papers, it's important to take this seriously. Pan Jianwei, lead researcher of the studies, said that Zuchongzhi 2, which is a 66-qubit programmable superconducting quantum computer is an incredible 10 million times faster than Google's 55-qubit Sycamore, making China's new machine the fastest in the world, and the first to beat Google's in two years.

The Zuchongzhi 2 is an improved version of a previous machine, completed three months ago. The Jiuzhang 2, a different quantum computer that runs on light, has fewer applications but can run at blinding speeds of 100 sextillion times faster than the biggest conventional computers of today. In case you missed it, that's a one with 23 zeroes behind it. But while the features of these new machines hint at a computing revolution, they won't hit the marketplace anytime soon. As things stand, the two machines can only operate in pristine environments, and only for hyper-specific tasks. And even with special care, they still make lots of errors. "In the next step we hope to achieve quantum error correction with four to five years of hard work," said Professor Pan of the University of Science and Technology of China, in Hefei, which is in the southeastern province of Anhui.
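As a rough cross-check of the quoted figures (the two claims describe related but not identical comparisons, so order-of-magnitude agreement is the most one should expect):

    ms_on_jiuzhang = 1e-3                              # "a single millisecond"
    classical_seconds = 30e12 * 365.25 * 24 * 3600     # "30 trillion years"
    print(f"implied speedup: ~{classical_seconds / ms_on_jiuzhang:.1e}")   # ~9.5e23

    hundred_sextillion = 100 * 10**21                  # the "100 sextillion times faster" claim
    print(f"100 sextillion = {hundred_sextillion:.1e}")                    # 1.0e23, a 1 with 23 zeros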

Supercomputing

Scientists Develop the Next Generation of Reservoir Computing (phys.org) 48

An anonymous reader quotes a report from Phys.Org: A relatively new type of computing that mimics the way the human brain works was already transforming how scientists could tackle some of the most difficult information processing problems. Now, researchers have found a way to make what is called reservoir computing work between 33 and a million times faster, with significantly fewer computing resources and less data input needed. In fact, in one test of this next-generation reservoir computing, researchers solved a complex computing problem in less than a second on a desktop computer. Using the now current state-of-the-art technology, the same problem requires a supercomputer to solve and still takes much longer, said Daniel Gauthier, lead author of the study and professor of physics at The Ohio State University. The study was published today in the journal Nature Communications.

Reservoir computing is a machine learning algorithm developed in the early 2000s and used to solve the "hardest of the hard" computing problems, such as forecasting the evolution of dynamical systems that change over time, Gauthier said. Previous research has shown that reservoir computing is well-suited for learning dynamical systems and can provide accurate forecasts about how they will behave in the future, Gauthier said. It does that through the use of an artificial neural network, somewhat like a human brain. Scientists feed data on a dynamical network into a "reservoir" of randomly connected artificial neurons in a network. The network produces useful output that the scientists can interpret and feed back into the network, building a more and more accurate forecast of how the system will evolve in the future. The larger and more complex the system and the more accurate that the scientists want the forecast to be, the bigger the network of artificial neurons has to be and the more computing resources and time that are needed to complete the task.

In this study, Gauthier and his colleagues [...] found that the whole reservoir computing system could be greatly simplified, dramatically reducing the need for computing resources and saving significant time. They tested their concept on a forecasting task involving a weather system developed by Edward Lorenz, whose work led to our understanding of the butterfly effect. Their next-generation reservoir computing was a clear winner over today's state-of-the-art on this Lorenz forecasting task. In one relatively simple simulation done on a desktop computer, the new system was 33 to 163 times faster than the current model. But when the aim was for great accuracy in the forecast, the next-generation reservoir computing was about 1 million times faster. And the new-generation computing achieved the same accuracy with the equivalent of just 28 neurons, compared to the 4,000 needed by the current-generation model, Gauthier said. An important reason for the speed-up is that the "brain" behind this next generation of reservoir computing needs a lot less warmup and training compared to the current generation to produce the same results. Warmup is training data that needs to be added as input into the reservoir computer to prepare it for its actual task.
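For readers who want to see what "next-generation" reservoir computing looks like in practice, here is a minimal Python sketch in the spirit of Gauthier et al.'s approach, applied to the Lorenz system mentioned above. The delay count, step size, and ridge parameter are illustrative assumptions rather than the paper's exact configuration; note that with two delay taps on the three Lorenz variables the feature vector has 1 + 6 + 21 = 28 entries, which is plausibly where the figure of 28 "neurons" above comes from.

    import numpy as np

    def lorenz_step(state, dt=0.025, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One RK4 step of the Lorenz-63 equations."""
        def f(s):
            x, y, z = s
            return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
        k1 = f(state); k2 = f(state + 0.5 * dt * k1)
        k3 = f(state + 0.5 * dt * k2); k4 = f(state + dt * k3)
        return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

    # Generate a training plus test trajectory.
    dt, n_train, n_test = 0.025, 4000, 400
    traj = [np.array([1.0, 1.0, 1.0])]
    for _ in range(n_train + n_test + 10):
        traj.append(lorenz_step(traj[-1], dt))
    traj = np.array(traj)

    k = 2  # number of time-delay taps fed into the feature vector (an assumption)

    def features(window):
        """Constant + linear (delayed states) + unique quadratic monomials."""
        lin = window.reshape(-1)                               # k*3 = 6 linear terms
        quad = np.outer(lin, lin)[np.triu_indices(lin.size)]   # 21 unique quadratic terms
        return np.concatenate(([1.0], lin, quad))              # 28 features in total

    # Training data: features at time t predict the increment to the next state.
    X, Y = [], []
    for t in range(k, n_train):
        X.append(features(traj[t - k + 1:t + 1]))
        Y.append(traj[t + 1] - traj[t])
    X, Y = np.array(X), np.array(Y)

    # Ridge regression for the output weights is the only "training" NG-RC needs.
    ridge = 1e-6
    W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y).T

    # Autonomous forecast: feed the model's own predictions back in.
    window = traj[n_train - k + 1:n_train + 1].copy()
    preds = []
    for _ in range(n_test):
        nxt = window[-1] + W @ features(window)
        preds.append(nxt)
        window = np.vstack([window[1:], nxt])
    preds = np.array(preds)

    err = np.linalg.norm(preds[:40] - traj[n_train + 1:n_train + 41], axis=1).mean()
    print(f"mean error over the first 40 forecast steps: {err:.3f}")

Everything here runs on a laptop in well under a second, which is the point the researchers make about the reduced computing resources this formulation needs.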

Operating Systems

Happy Birthday, Linux: From a Bedroom Project To Billions of Devices in 30 Years (theregister.com) 122

On August 25, 1991, Linus Torvalds, then a student at the University of Helsinki in Finland, sent a message to the comp.os.minix newsgroup soliciting feature suggestions for a free Unix-like operating system he was developing as a hobby. Thirty years later, that software, now known as Linux, is everywhere. From a report: It dominates the supercomputer world, with 100 per cent market share. According to Google, the Linux kernel is at the heart of more than three billion active devices running Android, the most-used operating system in the world. Linux also powers the vast majority of web-facing servers Netcraft surveyed. It is even used more than Microsoft Windows on Microsoft's own Azure cloud. And then there are the embedded electronics and Internet-of-Things spaces, and other areas.

Linux has failed to gain traction among mainstream desktop users, where it has a market share of about 2.38 per cent, or 3.59 per cent if you include ChromeOS, compared to Windows (73.04 per cent) and macOS (15.43 per cent). But the importance of Linux has more to do with the triumph of an idea: of free, open-source software. "It cannot be overstated how critical Linux is to today's internet ecosystem," Kees Cook, security and Linux kernel engineer at Google, told The Register via email. "Linux currently runs on everything from the smartphone we rely on everyday to the International Space Station. To rely on the internet is to rely on Linux." The next 30 years of Linux, Cook contends, will require the tech industry to work together on security and to provide more resources for maintenance and testing.

Intel

45 Teraflops: Intel Unveils Details of Its 100-Billion Transistor AI Chip (siliconangle.com) 16

At its annual Architecture Day semiconductor event Thursday, Intel revealed new details about its powerful Ponte Vecchio chip for data centers, reports SiliconANGLE: Intel is looking to take on Nvidia Corp. in the AI silicon market with Ponte Vecchio, which the company describes as its most complex system-on-chip or SOC to date. Ponte Vecchio features some 100 billion transistors, nearly twice as many as Nvidia's flagship A100 data center graphics processing unit. The chip's 100 billion transistors are divided among no fewer than 47 individual processing modules made using five different manufacturing processes. Normally, an SOC's processing modules are arranged side by side in a flat two-dimensional design. Ponte Vecchio, however, stacks the modules on one another in a vertical, three-dimensional structure created using Intel's Foveros technology.

The bulk of Ponte Vecchio's processing power comes from a set of modules aptly called the Compute Tiles. Each Compute Tile has eight Xe cores, GPU cores specifically optimized to run AI workloads. Every Xe core, in turn, consists of eight vector engines and eight matrix engines, processing modules specifically built to run the narrow set of mathematical operations that AI models use to turn data into insights... Intel shared early performance data about the chip in conjunction with the release of the technical details. According to the company, early Ponte Vecchio silicon has demonstrated performance of more than 45 teraflops, or about 45 trillion operations per second.

The article adds that it achieved those speeds while processing 32-bit single-precision floating-point values — and that at least one customer has already signed up to use Ponte Vecchio. The Argonne National Laboratory will include Ponte Vecchio chips in its upcoming $500 million Aurora supercomputer. Aurora will provide one exaflop of performance when it becomes fully operational, the equivalent of a quintillion calculations per second.
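Putting the quoted numbers side by side (a rough illustration only; the 45-teraflop figure is early FP32 silicon, while Aurora's exaflop target is a system-level number, so this is not a like-for-like comparison):

    transistors_pv, transistors_a100 = 100e9, 54e9    # A100 count is Nvidia's published ~54 billion
    print(f"transistor ratio vs. A100: ~{transistors_pv / transistors_a100:.1f}x")        # ~1.9x

    engines_per_compute_tile = 8 * (8 + 8)            # 8 Xe cores x (8 vector + 8 matrix engines)
    print(f"engines per Compute Tile: {engines_per_compute_tile}")                        # 128

    parts_per_exaflop = 1e18 / 45e12                  # ignoring precision differences and CPUs
    print(f"45-TFLOPS parts per exaflop (naive): ~{parts_per_exaflop:,.0f}")              # ~22,000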
AI

What Does It Take to Build the World's Largest Computer Chip? (newyorker.com) 23

The New Yorker looks at Cerebras, a startup which has raised nearly half a billion dollars to build massive plate-sized chips targeted at AI applications — the largest computer chip in the world. In the end, said Cerebras's co-founder Andrew Feldman, the mega-chip design offers several advantages. Cores communicate faster when they're on the same chip: instead of being spread around a room, the computer's brain is now in a single skull. Big chips handle memory better, too. Typically, a small chip that's ready to process a file must first fetch it from a shared memory chip located elsewhere on its circuit board; only the most frequently used data might be cached closer to home...

A typical, large computer chip might draw three hundred and fifty watts of power, but Cerebras's giant chip draws fifteen kilowatts — enough to run a small house. "Nobody ever delivered that much power to a chip," Feldman said. "Nobody ever had to cool a chip like that." In the end, three-quarters of the CS-1, the computer that Cerebras built around its WSE-1 chip, is dedicated to preventing the motherboard from melting. Most computers use fans to blow cool air over their processors, but the CS-1 uses water, which conducts heat better; connected to piping and sitting atop the silicon is a water-cooled plate, made of a custom copper alloy that won't expand too much when warmed, and polished to perfection so as not to scratch the chip. On most chips, data and power flow in through wires at the edges, in roughly the same way that they arrive at a suburban house; for the more metropolitan Wafer-Scale Engines, they needed to come in perpendicularly, from below. The engineers had to invent a new connecting material that could withstand the heat and stress of the mega-chip environment. "That took us more than a year," Feldman said...

[I]n a rack in a data center, it takes up the same space as fifteen of the pizza-box-size machines powered by G.P.U.s. Custom-built machine-learning software works to assign tasks to the chip in the most efficient way possible, and even distributes work in order to prevent cold spots, so that the wafer doesn't crack.... According to Cerebras, the CS-1 is being used in several world-class labs — including the Lawrence Livermore National Laboratory, the Pittsburgh Supercomputing Center, and E.P.C.C., the supercomputing centre at the University of Edinburgh — as well as by pharmaceutical companies, industrial firms, and "military and intelligence customers." Earlier this year, in a blog post, an engineer at the pharmaceutical company AstraZeneca wrote that it had used a CS-1 to train a neural network that could extract information from research papers; the computer performed in two days what would take "a large cluster of G.P.U.s" two weeks.

The U.S. National Energy Technology Laboratory reported that its CS-1 solved a system of equations more than two hundred times faster than its supercomputer, while using "a fraction" of the power consumption. "To our knowledge, this is the first ever system capable of faster-than real-time simulation of millions of cells in realistic fluid-dynamics models," the researchers wrote. They concluded that, because of scaling inefficiencies, there could be no version of their supercomputer big enough to beat the CS-1.... Bronis de Supinski, the C.T.O. for Livermore Computing, told me that, in initial tests, the CS-1 had run neural networks about five times as fast per transistor as a cluster of G.P.U.s, and had accelerated network training even more.

It all suggests one possible work-around for Moore's Law: optimizing chips for specific applications. "For now," Feldman tells the New Yorker, "progress will come through specialization."
AI

Tesla Unveils Dojo Supercomputer: World's New Most Powerful AI Training Machine (electrek.co) 32

New submitter Darth Technoid shares a report from Electrek: At its AI Day, Tesla unveiled its Dojo supercomputer technology while flexing its growing in-house chip design talent. The automaker claims to have developed the fastest AI training machine in the world. For years now, Tesla has been teasing the development of a new supercomputer in-house optimized for neural net video training. Tesla is handling an insane amount of video data from its fleet of over 1 million vehicles, which it uses to train its neural nets.

The automaker found itself unsatisfied with current hardware options to train its computer vision neural nets and believed it could do better internally. Over the last two years, CEO Elon Musk has been teasing the development of Tesla's own supercomputer called "Dojo." Last year, he even teased that Tesla's Dojo would have a capacity of over an exaflop, which is one quintillion (10^18) floating-point operations per second, or 1,000 petaFLOPS. It could potentially make Dojo the new most powerful supercomputer in the world.

Ganesh Venkataramanan, Tesla's senior director of Autopilot hardware and the leader of the Dojo project, led the presentation. The engineer started by unveiling Dojo's D1 chip, which is using 7 nanometer technology and delivers breakthrough bandwidth and compute performance. Tesla designed the chip to "seamlessly connect without any glue to each other," and the automaker took advantage of that by connecting 500,000 nodes together. It adds the interface, power, and thermal management, and it results in what it calls a training tile. The result is a 9 PFlops training tile with 36TB per second of bandwidth in a less than 1 cubic foot format. But now it still has to form a compute cluster using those training tiles in order to truly build the first Dojo supercomputer. Tesla hasn't put that system together yet, but CEO Elon Musk claimed that it will be operational next year.
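A back-of-envelope check of the exaflop ambition using the quoted tile figure (the numerical format behind the 9-petaflop number isn't stated in the excerpt, so this only illustrates the arithmetic):

    import math

    tile_pflops = 9                 # quoted per-tile figure (precision/format not stated)
    target_exaflops = 1.0           # the "over an exaflop" ambition
    tiles_needed = math.ceil(target_exaflops * 1000 / tile_pflops)
    print(f"~{tiles_needed} training tiles for {target_exaflops} exaflop")                 # ~112 tiles
    print(f"aggregate tile bandwidth at that scale: ~{tiles_needed * 36 / 1000:.1f} PB/s")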
