Supercomputing

Europe's First Exascale Supercomputer Will Run On ARM Instead of X86 (extremetech.com) 40

An anonymous reader quotes a report from ExtremeTech: One of the world's most powerful supercomputers will soon be online in Europe, but it's not just the raw speed that will make the Jupiter supercomputer special. Unlike most of the Top500 list, the exascale Jupiter system will rely on ARM cores instead of x86 parts. Intel and AMD might be disappointed, but Nvidia will get a piece of the Jupiter action. [...] Jupiter is a project of the European High-Performance Computing Joint Undertaking (EuroHPC JU), which is working with computing firms Eviden and ParTec to assemble the machine. Europe's first exascale computer will be installed at the Jülich Supercomputing Centre in Jülich, Germany, and assembly could start as soon as early 2024.

EuroHPC has opted to go with SiPearl's Rhea processor, which is based on ARM architecture. Most of the top 10 supercomputers in the world are running x86 chips, and only one is running on ARM. While ARM designs were initially popular in mobile devices, the compact, efficient cores have found use in more powerful systems. Apple has recently finished moving all its desktop and laptop computers to the ARM platform, and Qualcomm has new desktop-class chips on its roadmap. Rhea is based on ARM's Neoverse V1 CPU design, which was developed specifically for high-performance computing (HPC) applications; the chip packs 72 cores. It supports HBM2e high-bandwidth memory, as well as DDR5, and the cache tops out at an impressive 160MB.
The report says the Jupiter system "will have Nvidia's Booster Module, which includes GPUs and Mellanox ultra-high bandwidth interconnects," and will likely include the current-gen H100 chips. "When complete, Jupiter will be near the very top of the supercomputer list."
United States

Los Alamos's New Project: Updating America's Aging Nuclear Weapons (apnews.com) 192

During World War II, "Los Alamos was the perfect spot for the U.S. government's top-secret Manhattan Project," remembers the Associated Press.

"The community is facing growing pains again, 80 years later, as Los Alamos National Laboratory takes part in the nation's most ambitious nuclear weapons effort since World War II." The mission calls for modernizing the arsenal with droves of new workers producing plutonium cores — key components for nuclear weapons. Some 3,300 workers have been hired in the last two years, with the workforce now topping more than 17,270. Close to half of them commute to work from elsewhere in northern New Mexico and from as far away as Albuquerque, helping to nearly double Los Alamos' population during the work week... While the priority at Los Alamos is maintaining the nuclear stockpile, the lab also conducts a range of national security work and research in diverse fields of space exploration, supercomputing, renewable energy and efforts to limit global threats from disease and cyberattacks...

The headline grabber, though, is the production of plutonium cores. Lab managers and employees defend the massive undertaking as necessary in the face of global political instability. With most people in Los Alamos connected to the lab, opposition is rare. But watchdog groups and non-proliferation advocates question the need for new weapons and the growing price tag... Aside from pressing questions about the morality of nuclear weapons, watchdogs argue the federal government's modernization effort already has outpaced spending predictions and is years behind schedule. Independent government analysts issued a report earlier this month that outlined the growing budget and schedule delays.

"A hairline scratch on a warhead's polished black cone could send the bomb off course..." notes an earlier article.

"The U.S. will spend more than $750 billion over the next 10 years replacing almost every component of its nuclear defenses, including new stealth bombers, submarines and ground-based intercontinental ballistic missiles in the country's most ambitious nuclear weapons effort since the Manhattan Project."
Encryption

Google Releases First Quantum-Resilient FIDO2 Key Implementation (bleepingcomputer.com) 16

An anonymous reader quotes a report from BleepingComputer: Google has announced the first open-source quantum-resilient FIDO2 security key implementation, which uses a unique ECC/Dilithium hybrid signature scheme co-created with ETH Zurich. FIDO2 is the second major version of the Fast IDentity Online authentication standard, and FIDO2 keys are used for passwordless authentication and as a multi-factor authentication (MFA) element. Google explains that a quantum-resistant FIDO2 security key implementation is a crucial step towards ensuring safety and security as the advent of quantum computing approaches and developments in the field follow an accelerating trajectory.

To protect against quantum computers, a new hybrid algorithm was created by combining the established ECDSA algorithm with the Dilithium algorithm. Dilithium is a quantum-resistant cryptographic signature scheme that NIST included in its post-quantum cryptography standardization proposals, praising its strong security and excellent performance, making it suitable for use in a wide array of applications. This hybrid signature approach, which blends classical and quantum-resistant features, wasn't simple to implement, Google says. Designing a Dilithium implementation that's compact enough for security keys was incredibly challenging. Its engineers, however, managed to develop a Rust-based implementation that needs only 20KB of memory, making the endeavor practically possible, while they also noted its high-performance potential.
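To make the hybrid construction concrete, here is a minimal sketch in Python. The ECDSA half uses the real `cryptography` library; the `dilithium_*` functions are hypothetical placeholders, not Google's OpenSK code. The key idea is that a hybrid signature carries both a classical and a post-quantum signature over the same message, and verification succeeds only if both check out, so the scheme stays secure as long as either algorithm remains unbroken:

    # Conceptual sketch of an ECDSA/Dilithium hybrid signature (not OpenSK's API).
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    def dilithium_sign(sk: bytes, message: bytes) -> bytes:
        """Hypothetical placeholder -- plug in a real Dilithium implementation."""
        raise NotImplementedError

    def dilithium_verify(pk: bytes, message: bytes, sig: bytes) -> bool:
        """Hypothetical placeholder -- plug in a real Dilithium implementation."""
        raise NotImplementedError

    def hybrid_sign(ec_priv, dilithium_sk: bytes, message: bytes):
        # Sign the same message under both schemes and ship both signatures.
        classical = ec_priv.sign(message, ec.ECDSA(hashes.SHA256()))
        post_quantum = dilithium_sign(dilithium_sk, message)
        return classical, post_quantum

    def hybrid_verify(ec_pub, dilithium_pk: bytes, message: bytes,
                      classical: bytes, post_quantum: bytes) -> bool:
        # Valid only if BOTH halves verify: an attacker would have to break
        # ECDSA *and* Dilithium to forge a hybrid signature.
        try:
            ec_pub.verify(classical, message, ec.ECDSA(hashes.SHA256()))
        except InvalidSignature:
            return False
        return dilithium_verify(dilithium_pk, message, post_quantum)

The hard part Google describes is not this composition logic but squeezing Dilithium's keys, signatures and working state into the few kilobytes of memory a security key can spare.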

The hybrid signature scheme was first presented in a 2022 paper (PDF) and recently gained recognition at ACNS (Applied Cryptography and Network Security) 2023, where it won the "best workshop paper" award. This new hybrid implementation is now part of OpenSK, Google's open-source security key implementation that supports the FIDO U2F and FIDO2 standards. The tech giant hopes that its proposal will be adopted by FIDO2 as a new standard and supported by major web browsers with large user bases. The firm calls the application of next-gen cryptography at the internet scale "a massive undertaking" and urges all stakeholders to move quickly to maintain good progress on that front.

Supercomputing

Can Computing Clean Up Its Act? (economist.com) 107

Long-time Slashdot reader SpzToid shares a report from The Economist: "What you notice first is how silent it is," says Kimmo Koski, the boss of the Finnish IT Centre for Science. Dr Koski is describing LUMI -- Finnish for "snow" -- the most powerful supercomputer in Europe, which sits 250km south of the Arctic Circle in the town of Kajaani in Finland. LUMI, which was inaugurated last year, is used for everything from climate modeling to searching for new drugs. It has tens of thousands of individual processors and is capable of performing up to 429 quadrillion calculations every second. That makes it the third-most-powerful supercomputer in the world. Powered by hydroelectricity, and with its waste heat used to help warm homes in Kajaani, it even boasts negative emissions of carbon dioxide. LUMI offers a glimpse of the future of high-performance computing (HPC), both on dedicated supercomputers and in the cloud infrastructure that runs much of the internet. Over the past decade the demand for HPC has boomed, driven by technologies like machine learning, genome sequencing and simulations of everything from stockmarkets and nuclear weapons to the weather. It is likely to carry on rising, for such applications will happily consume as much computing power as you can throw at them. Over the same period the amount of computing power required to train a cutting-edge AI model has been doubling every five months. All this has implications for the environment.

HPC -- and computing more generally -- is becoming a big user of energy. The International Energy Agency reckons data centers account for between 1.5% and 2% of global electricity consumption, roughly the same as the entire British economy. That is expected to rise to 4% by 2030. With its eye on government pledges to reduce greenhouse-gas emissions, the computing industry is trying to find ways to do more with less and boost the efficiency of its products. The work is happening at three levels: that of individual microchips; of the computers that are built from those chips; and the data centers that, in turn, house the computers. [...] The standard measure of a data centre's efficiency is the power usage effectiveness (PUE), the ratio between the data centre's overall power consumption and how much of that is used to do useful work. According to the Uptime Institute, a firm of IT advisers, a typical data centre has a PUE of 1.58. That means that about two-thirds of its electricity goes to running its computers while a third goes to running the data centre itself, most of which will be consumed by its cooling systems. Clever design can push that number much lower.
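As a quick sanity check on those figures, PUE is simply total facility power divided by the power delivered to the IT equipment. A one-function sketch in Python (the kilowatt numbers are made up to reproduce the Uptime Institute's typical value):

    def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
        """Power usage effectiveness: facility power / IT equipment power."""
        return total_facility_kw / it_equipment_kw

    ratio = pue(1580.0, 1000.0)   # 1.58, the Uptime Institute's typical figure
    it_share = 1 / ratio          # ~0.63: about two-thirds reaches the computers
    overhead = 1 - it_share       # ~0.37: the rest runs the facility, mostly cooling
    print(f"PUE={ratio:.2f}, IT share={it_share:.0%}, overhead={overhead:.0%}")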

Most existing data centers rely on air cooling. Liquid cooling offers better heat transfer, at the cost of extra engineering effort. Several startups even offer to submerge circuit boards entirely in specially designed liquid baths. Thanks in part to its use of liquid cooling, Frontier boasts a PUE of 1.03. One reason LUMI was built near the Arctic Circle was to take advantage of the cool sub-Arctic air. A neighboring computer, built in the same facility, makes use of that free cooling to reach a PUE rating of just 1.02. That means 98% of the electricity that comes in gets turned into useful mathematics. Even the best commercial data centers fall short of such numbers. Google's, for instance, have an average PUE value of 1.1. The latest numbers from the Uptime Institute, published in June, show that, after several years of steady improvement, global data-centre efficiency has been stagnant since 2018.
The report notes that the U.S., Britain and the European Union, among others, are considering new rules that "could force data centers to become more efficient." Germany's proposed Energy Efficiency Act would cap PUE at 1.5 by 2027, tightening to 1.3 by 2030.
Supercomputing

Intel To Start Shipping a Quantum Processor (arstechnica.com) 18

An anonymous reader quotes a report from Ars Technica: Intel does a lot of things, but it's mostly noted for making and shipping a lot of processors, many of which have been named after bodies of water. So, saying that the company is set to start sending out a processor called Tunnel Falls would seem unsurprising if it weren't for some key details. Among them: The processor's functional units are qubits, and you shouldn't expect to be able to pick one up on Newegg. Ever. Tunnel Falls appears to be named after a waterfall near Intel's Oregon facility, where the company's quantum research team does much of its work. It's a 12-qubit chip, which places it well behind the qubit count of many of Intel's competitors -- all of which are making processors available via cloud services. But Jim Clarke, who heads Intel's quantum efforts, said these differences were due to the company's distinct approach to developing quantum computers.

Intel, in contrast, is attempting to build silicon-based qubits that can benefit from the developments that most of the rest of the company is working on. The company hopes to "ride the coattails of what the CMOS industry has been doing for years," Clarke said in a call with the press and analysts. The goal, according to Clarke, is to make sure the answer to "what do we have to change from our silicon chip in order to make it?" is "as little as possible." The qubits are based on quantum dots, structures that are smaller than the wavelength of an electron in the material. Quantum dots can be used to trap individual electrons, and the properties of the electron can then be addressed to store quantum information. Intel uses its fabrication expertise to craft the quantum dot and create all the neighboring features needed to set and read its state and perform manipulations.

However, Clarke said there are different ways of encoding a qubit in a quantum dot (Loss-DiVincenzo, singlet-triplet, and exchange-only, for those curious). This gets at another key difference with Intel's efforts: While most of its competitors are focused solely on fostering a software developer community, Intel is simultaneously trying to develop a community that will help it improve its hardware. (For software developers, the company also released a software development kit.) To help get this community going, Intel will send Tunnel Falls processors out to a few research institutions: The universities of Maryland, Rochester, and Wisconsin, along with Sandia National Laboratories, will be the first to receive the new chip, and the company is interested in signing up others. The hope is that researchers at these sites will help Intel characterize sources of error and determine which forms of qubits provide the best performance.
"Overall, Intel has made a daring choice for its quantum strategy," concludes Ars' John Timmer. "Electron-based qubits have been more difficult to work with than many other technologies because they tend to have shorter life spans before they decohere and lose the information they should be holding. Intel is counting on rapid iteration, a large manufacturing capacity, and a large community to help it figure out how to overcome this. But testing quantum computing chips and understanding why their qubits sometimes go wrong is not an easy process; it requires highly specialized refrigeration hardware that takes roughly a day to get the chips down to a temperature where they can be used."

"The company seems to be doing what it needs to overcome that bottleneck, but it's likely to need more than three universities to sign up if the strategy is going to work."
Supercomputing

Iran Unveils 'Quantum' Device That Anyone Can Buy for $589 on Amazon (vice.com) 67

What Iran's military called "the first product of the quantum processing algorithm" of the Naval university appears to be a stock development board, available widely online for around $600. Motherboard reports: According to multiple state-linked news agencies in Iran, the computer will help Iran detect disturbances on the surface of water using algorithms. Iranian Rear Admiral Habibollah Sayyari showed off the board during the ceremony and spoke of Iran's recent breakthroughs in the world of quantum technology. The touted quantum device appears to be a development board manufactured by a company called Digilent. The brand "ZedBoard" appears clearly in pictures. According to the company's website, the ZedBoard has everything the beginning developer needs to get started working in Android, Linux, and Windows. It does not appear to come with any of the advanced qubits that make up a quantum computer, and suggested uses include "video processing, reconfigurable computing, motor control, software acceleration," among others.

"I'm sure this board can work perfectly for people with more advanced [Field Programmable Gate Arrays] experience, however, I am a beginner and I can say that this is also a good beginner-friendly board," said one review on Diligent's website. Those interested in the board can buy one on Amazon for $589. It's impossible to know if Iran has figured out how to use off-the-shelf dev boards to make quantum algorithms, but it's not likely.

Open Source

Peplum: F/OSS Distributed Parallel Computing and Supercomputing At Home With Ruby Infrastructure (ecsypno.com) 20

Slashdot reader Zapotek brings an update from the Ecsypno skunkworks, where they've been busy with R&D for distributed computing systems: Armed with Cuboid, they built Qmap, which tackled running nmap in a distributed environment, with great results. Afterwards, an iterative clean-up process led to a template of sorts for scheduling most applications in such environments.

With that, Peplum was born, which allows for OS applications, Ruby code and C/C++/Rust code (via Ruby extensions) to be distributed across machines and tackle the processing of neatly grouped objects.

In essence, Peplum:

- Is a distributed computing solution backed by Cuboid.
- Its basic function is to distribute workloads and deliver payloads across multiple machines and thus parallelize otherwise time-consuming tasks.
- Allows you to combine several machines and build a cluster/supercomputer of sorts with great ease.

After that was dealt with, it was time to port Qmap over to Peplum for easier long-term maintenance; it has thus been renamed Peplum::Nmap.

We have high hopes for Peplum as it basically means easy, simple and joyful cloud/clustering/super-computing at home, on-premise, anywhere really. Along with the capability to turn a lot of security-oriented apps into super versions of themselves, it is quite the infrastructure.

Yes, this means there's a new solution if you're using multiple machines for "running simulations, to network mapping/security scans, to password cracking/recovery or just encoding your collection of music and video" -- or anything else: Peplum is a F/OSS (MIT licensed) project aimed at making clustering/super-computing affordable and accessible, by making it simple to setup a distributed parallel computing environment for abstract applications... TLDR: You no longer have to only imagine a Beowulf cluster of those, you can now easily build one yourself with Peplum.
Some technical specs: It is written in the Ruby programming language, thus coming with an entire ecosystem of libraries and the capability to run abstract Ruby code, execute external utilities, run OS commands, call C/C++/Rust routines and more...
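For readers who want the gist without digging into the Ruby code: Peplum's job, conceptually, is to split neatly grouped objects across workers, run a payload against each group in parallel, and merge the results. Purely as an illustration (in Python rather than Ruby, with local processes standing in for networked machines, and none of these names taken from Peplum's actual API), the pattern looks like this:

    # Illustrative split/distribute/merge pattern; NOT Peplum's Ruby API.
    from concurrent.futures import ProcessPoolExecutor

    def distribute(objects, payload, workers=4):
        """Split objects into groups, run payload on each group in parallel."""
        groups = [objects[i::workers] for i in range(workers)]  # round-robin split
        with ProcessPoolExecutor(max_workers=workers) as pool:
            results = pool.map(payload, groups)
        return [item for group in results for item in group]    # merge in order

    def scan(hosts):
        # Stand-in payload, e.g. an nmap run against one group of hosts.
        return [f"scanned {h}" for h in hosts]

    if __name__ == "__main__":
        print(distribute([f"10.0.0.{i}" for i in range(1, 9)], scan))

In Peplum the groups are shipped to real machines over the network via Cuboid rather than to local processes, which is what makes the home-cluster use case described above practical.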

Peplum is powered by Cuboid, a F/OSS (MIT licensed) abstract framework for distributed computing — both of them are funded by Ecsypno Single Member P.C., a new R&D and Consulting company.

Supercomputing

IBM Wants To Build a 100,000-Qubit Quantum Computer (technologyreview.com) 27

IBM has announced its goal to build a 100,000-qubit quantum computing machine within the next 10 years in collaboration with the University of Tokyo and the University of Chicago. MIT Technology Review reports: Late last year, IBM took the record for the largest quantum computing system with a processor that contained 433 quantum bits, or qubits, the fundamental building blocks of quantum information processing. Now, the company has set its sights on a much bigger target: a 100,000-qubit machine that it aims to build within 10 years. IBM made the announcement on May 22 at the G7 summit in Hiroshima, Japan. The company will partner with the University of Tokyo and the University of Chicago in a $100 million initiative to push quantum computing into the realm of full-scale operation, where the technology could potentially tackle pressing problems that no standard supercomputer can solve.

Or at least it can't solve them alone. The idea is that the 100,000 qubits will work alongside the best "classical" supercomputers to achieve new breakthroughs in drug discovery, fertilizer production, battery performance, and a host of other applications. "I call this quantum-centric supercomputing," IBM's VP of quantum, Jay Gambetta, told MIT Technology Review in an in-person interview in London last week. [...] IBM has already done proof-of-principle experiments (PDF) showing that integrated circuits based on "complementary metal oxide semiconductor" (CMOS) technology can be installed next to the cold qubits to control them with just tens of milliwatts. Beyond that, he admits, the technology required for quantum-centric supercomputing does not yet exist: that is why academic research is a vital part of the project.

The qubits will exist on a type of modular chip that is only just beginning to take shape in IBM labs. Modularity, essential when it will be impossible to put enough qubits on a single chip, requires interconnects that transfer quantum information between modules. IBM's "Kookaburra," a 1,386-qubit multichip processor with a quantum communication link, is under development and slated for release in 2025. Other necessary innovations are where the universities come in. Researchers at Tokyo and Chicago have already made significant strides in areas such as components and communication innovations that could be vital parts of the final product, Gambetta says. He thinks there will likely be many more industry-academic collaborations to come over the next decade. "We have to help the universities do what they do best," he says.

Intel

Intel Gives Details on Future AI Chips as It Shifts Strategy (reuters.com) 36

Intel on Monday provided a handful of new details on a chip for artificial intelligence (AI) computing it plans to introduce in 2025 as it shifts strategy to compete against Nvidia and Advanced Micro Devices. From a report: At a supercomputing conference in Germany on Monday, Intel said its forthcoming "Falcon Shores" chip will have 288 gigabytes of memory and support 8-bit floating point computation. Those technical specifications are important as the artificial intelligence models behind services like ChatGPT have exploded in size, and businesses are looking for more powerful chips to run them.

The details are also among the first to trickle out as Intel carries out a strategy shift to catch up to Nvidia, which leads the market in chips for AI, and AMD, which is expected to challenge Nvidia's position with a chip called the MI300. Intel, by contrast, has essentially no market share after its would-be Nvidia competitor, a chip called Ponte Vecchio, suffered years of delays. Intel on Monday said it has nearly completed shipments for Argonne National Lab's Aurora supercomputer based on Ponte Vecchio, which Intel claims has better performance than Nvidia's latest AI chip, the H100. But Intel's Falcon Shores follow-on chip won't come to market until 2025, when Nvidia will likely have another chip of its own out.

Supercomputing

UK To Invest 900 Million Pounds In Supercomputer In Bid To Build Own 'BritGPT' (theguardian.com) 35

An anonymous reader quotes a report from The Guardian: The UK government is to invest 900 million pounds in a cutting-edge supercomputer as part of an artificial intelligence strategy that includes ensuring the country can build its own "BritGPT". The Treasury outlined plans to spend around 900 million pounds on building an exascale computer, which would be several times more powerful than the UK's biggest computers, and establishing a new AI research body. An exascale computer can be used for training complex AI models, but also has other uses across science, industry and defense, including modeling weather forecasts and climate projections. The Treasury said the investment will "allow researchers to better understand climate change, power the discovery of new drugs and maximize our potential in AI."

An exascale computer is one that can carry out more than one billion billion simple calculations a second, a metric known as an "exaflops". Only one such machine is known to exist, Frontier, which is housed at America's Oak Ridge National Laboratory and used for scientific research -- although supercomputers have such important military applications that it may be the case that others already exist but are not acknowledged by their owners. Frontier, which cost about 500 million pounds to produce and came online in 2022, is more than twice as powerful as the next fastest machine.
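To put those numbers side by side: "one billion billion" is 10^18 operations per second, so a machine like LUMI from the earlier item in this digest (429 quadrillion calculations per second) sits at less than half the exascale threshold. A quick check in Python:

    EXAFLOPS = 10**18        # one exaflops: a billion billion calculations per second
    lumi = 429 * 10**15      # LUMI, Europe's fastest: 429 quadrillion calcs/second

    print(lumi / EXAFLOPS)   # 0.429 -> even Europe's fastest machine is sub-exascale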

The Treasury said it would award a 1 million-pound prize every year for the next 10 years to the most groundbreaking AI research. The award will be called the Manchester Prize, named after the so-called Manchester Baby, a forerunner of the modern computer built at the University of Manchester in 1948. The government will also invest 2.5 billion pounds over the next decade in quantum technologies. Quantum computing is based on quantum physics -- which looks at how the subatomic particles that make up the universe work -- and quantum computers are capable of computing their way through vast numbers of different outcomes.

Supercomputing

Satoshi Matsuoka Mocks 12 Myths of High-Performance Computing (insidehpc.com) 25

insideHPC reports that Satoshi Matsuoka, the head of Japan's largest supercomputing center, has co-authored a high-performance computing paper challenging conventional wisdom. In a paper entitled "Myths and Legends of High-Performance Computing" appearing this week on the arXiv preprint site, Matsuoka and four colleagues offer opinions and analysis on such issues as quantum replacing classical HPC, the zettascale timeline, disaggregated computing, domain-specific languages (DSLs) vs. Fortran and cloud subsuming HPC, among other topics.

"We believe (these myths and legends) represent the zeitgeist of the current era of massive change, driven by the end of many scaling laws, such as Dennard scaling and Moore's law," the authors said.

In this way they join the growing "end of" discussions in HPC. For example, as the industry moves through 3nm, 2nm, and 1.4nm chips – then what? Will accelerators displace CPUs altogether? What's next after overburdened electrical I/O interconnects? How do we get more memory per core?

The paper's abstract promises a "humorous and thought provoking" discussion — for example, on the possibility of quantum computing taking over high-performance computing. ("Once a quantum state is constructed, it can often be "used" only once because measurements destroy superposition. A second limitation stems from the lack of algorithms with high speedups....")

The paper also tackles myths like "all high-performance computing will be subsumed by the clouds" and "everything will be deep learning."

Thanks to guest reader for submitting the article.
AI

Microsoft and OpenAI Working On ChatGPT-Powered Bing In Challenge To Google 61

Microsoft is working to launch a version of its search engine Bing that uses the artificial intelligence behind OpenAI's chatbot ChatGPT, The Information reported on Tuesday, citing two people with direct knowledge of the plans. Reuters reports: Microsoft could launch the new feature before the end of March, and hopes to challenge Alphabet-owned search engine Google, the San Francisco-based technology news website said in a report. Microsoft said in a blog post last year that it planned to integrate image-generation software from OpenAI, DALL-E 2, into Bing.

Microsoft had in 2019 backed San Francisco-based artificial intelligence company OpenAI, offering $1 billion in funding. The two had formed a multi-year partnership to develop artificial intelligence supercomputing technologies on Microsoft's Azure cloud computing service.
Further reading: ChatGPT Is a 'Code Red' For Google's Search Business
AI

Driverless Electric Robot Tractors are Here, Powered by NVIDIA AI Chips (theverge.com) 82

NVIDIA is proud of its role in the first commercially available smart tractor (which began rolling off the production line Thursday). Monarch Tractor's MK-V "combines electrification, automation, and data analysis to help farmers reduce their carbon footprint, improve field safety, streamline farming operations, and increase their bottom lines," according to NVIDIA's blog.

NVIDIA's been touting the ability to accelerate machine learning applications with its low-power Jetson boards (each with a system on a chip integrating an ARM-architecture CPU), and they write that the new tractor "cuts energy costs and diesel emissions, while also helping reduce harmful herbicides, which are expensive and deplete the soil." Mark Schwager, former Tesla Gigafactory chief, is president; Zachary Omohundro, a robotics Ph.D. from Carnegie Mellon, is CTO; Praveen Penmetsa, CEO of Monarch Tractor, is an autonomy and mobility engineer. Penmetsa likens the revolutionary new tractor to paradigm shifts in PCs and smartphones, enablers of world-changing applications. Monarch's role, he said, is as the hub to enable smart implements — precision sprayers, harvesters and more — for computer vision applications to help automate farming....

Tapping into six NVIDIA Jetson Xavier NX SOMs (system on modules), Monarch's Founder Series MK-V tractors are essentially roving robots packing supercomputing power. Monarch has harnessed Jetson to deliver tractors that can safely traverse rows within agriculture fields using only cameras. "This is important in certain agriculture environments because there may be no GPS signal," said Penmetsa. "It's also crucial for safety as the Monarch is intended for totally driverless operation." The Founder Series MK-V runs two 3D cameras and six standard cameras.

In one pilot test a tractor lowered energy costs (compared to a diesel tractor) by $2,600 a year, according to NVIDIA's blog post. And the tractor collects and analyzes crop data daily, so hopes are high for the system. Monarch has already raised more than $110 million in funding, reports the Verge: Many tractors out in farming fields have semiautonomous modes but largely require a driver to be seated. They also mostly run on diesel, so the MK-V, with its fully electric design and driver-optional smarts, is claiming it's the first production model of its kind.
Supercomputing

IBM Unveils Its 433 Qubit Osprey Quantum Computer (techcrunch.com) 29

An anonymous reader quotes a report from TechCrunch: IBM wants to scale up its quantum computers to over 4,000 qubits by 2025 -- but we're not quite there yet. For now, we have to make do with significantly smaller systems and today, IBM announced the launch of its Osprey quantum processor, which features 433 qubits, up from the 127 qubits of its 2021 Eagle processor. And with that, the slow but steady march toward a quantum processor with real-world applications continues.

IBM's quantum roadmap includes two additional stages -- the 1,121-qubit Condor and 1,386-qubit Flamingo processors in 2023 and 2024 -- before it plans to hit the 4,000-qubit stage with its Kookaburra processor in 2025. So far, the company has generally been able to make this roadmap work, but the number of qubits in a quantum processor is obviously only one part of a very large and complex puzzle, with longer coherence times and reduced noise being just as important.

The company also today detailed its Quantum System Two (video) -- basically IBM's quantum mainframe -- which will be able to house multiple quantum processors and integrate them into a single system with high-speed communication links. The idea here is to launch this system by the end of 2023.
"The new 433 qubit 'Osprey' processor brings us a step closer to the point where quantum computers will be used to tackle previously unsolvable problems," said Dario Gil, senior vice president, IBM and director of Research. "We are continuously scaling up and advancing our quantum technology across hardware, software and classical integration to meet the biggest challenges of our time, in conjunction with our partners and clients worldwide. This work will prove foundational for the coming era of quantum-centric supercomputing."

Further reading: IBM Held Talks With Biden Administration on Quantum Controls
Intel

Intel CEO Calls New US Restrictions on Chip Exports To China Inevitable (wsj.com) 9

Intel Chief Executive Pat Gelsinger said that recently imposed U.S. restrictions on semiconductor-industry exports to China were inevitable as America seeks to maintain technological leadership in competition with China. From a report: Speaking at The Wall Street Journal's annual Tech Live conference, Mr. Gelsinger said the restrictions, which require chip companies to obtain a license to export certain advanced artificial-intelligence and supercomputing chips as well as equipment used in advanced manufacturing, are part of a necessary shift of chip supply chains. "I viewed this geopolitically as inevitable," Mr. Gelsinger said. "And that's why the rebalancing of supply chains is so critical." His comments Monday followed his high-profile public lobbying of Congress to pass the bipartisan Chips and Science Act, which was enacted in July and extends nearly $53 billion in subsidies for research and development and for building or expanding fabs in the U.S. Mr. Gelsinger was a leading advocate for the legislation.

Mr. Gelsinger has embarked on a massive expansion of chip plants, referred to as fabs. The company has announced plans to erect new facilities in Ohio, Germany and elsewhere since Mr. Gelsinger took over last year at a combined cost potentially topping $100 billion. "Where the oil reserves are defined geopolitics for the last five decades. Where the fabs are for the next five decades is more important," Mr. Gelsinger said Monday. Mr. Gelsinger said the ambition for efforts to boost domestic chip manufacturing in Western countries was to shift from about 80% in Asia to about 50% by the end of the decade, with the U.S. taking 30% and Europe the remaining 20%. "We would all feel so good" if that were to happen, he said.

China

US Eyes Expanding China Tech Ban To Quantum Computing and AI (bloomberg.com) 47

An anonymous reader quotes a report from Bloomberg: The Biden administration is exploring the possibility of new export controls that would limit China's access to some of the most powerful emerging computing technologies, according to people familiar with the situation. The potential plans, which are in an early stage, are focused on the still-experimental field of quantum computing, as well as artificial intelligence software, according to the people, who asked not to be named discussing private deliberations. Industry experts are weighing in on how to set the parameters of the restrictions on this nascent technology, they said. The efforts, if implemented, would follow separate restrictions announced earlier this month aimed at stunting Beijing's ability to deploy cutting-edge semiconductors in weapons and surveillance systems.

National Security Advisor Jake Sullivan, in a speech last month on technology, competitiveness and national security, referred to "computing-related technologies, including microelectronics, quantum information systems and artificial intelligence" as among developments "set to play an outsized importance over the coming decade." He also noted the importance of export controls to "maintain as large of a lead as possible" over adversaries. Expanding the wall around advanced technologies risks further antagonizing China and forcing other countries to pick sides between the world's two top economies. The new ideas have been shared with US allies, according to the people. Officials are still determining how to frame the controls on quantum computing, which will probably focus on the level of output and the so-called error correction rate, the people said. [...] The Biden administration is also working on an executive order for an outbound investment review mechanism that would scrutinize money heading to certain Chinese technologies, and the quantum computing and artificial intelligence controls could be included, one of the people said. That could incorporate some aspects similar to a measure pushed by senators Bob Casey, a Pennsylvania Democrat, and John Cornyn, a Texas Republican.

AI

US Said To Plan New Limits on China's AI and Supercomputing Firms (nytimes.com) 53

The Biden administration is expected to announce new measures to restrict Chinese companies from accessing technologies that enable high-performance computing, The New York Times reported Monday, citing several people familiar with the matter, the latest in a series of moves aimed at hobbling Beijing's ambitions to craft next-generation weapons and automate large-scale surveillance systems. From a report: The measures, which could be announced as soon as this week, would be some of the most significant steps taken by the Biden administration to cut off China's access to advanced semiconductor technology. They would build on a Trump-era rule that struck a blow to the Chinese telecom giant Huawei by prohibiting companies around the world from sending it products made with the use of American technology, machinery or software. A number of Chinese firms, government research labs and other entities are expected to face restrictions similar to Huawei, according to two people with knowledge of the plans. In effect, any firm that uses American-made technologies would be blocked from selling to the Chinese entities that are targeted by the administration. It's not yet clear which Chinese firms and labs would be impacted. The broad expansion of what is known as the foreign direct product rule is just one part of Washington's planned restrictions. The administration is also expected to try to control the sale of cutting-edge U.S.-made tools to China's domestic chip makers.
Supercomputing

Tesla Unveils New Dojo Supercomputer So Powerful It Tripped the Power Grid (electrek.co) 106

An anonymous reader quotes a report from Electrek: Tesla has unveiled the latest version of its Dojo supercomputer and it's apparently so powerful that it tripped the power grid in Palo Alto. Dojo is Tesla's own custom supercomputer platform built from the ground up for AI machine learning and more specifically for video training using the video data coming from its fleet of vehicles. [...] Last year, at Tesla's AI Day, the company unveiled its Dojo supercomputer, but it was still ramping up the effort at the time. It only had its first chip and training tiles, and it was still working on building a full Dojo cabinet and cluster or "Exapod." Now Tesla has unveiled the progress made with the Dojo program over the last year during its AI Day 2022 last night.

The company confirmed that it managed to go from a chip and tile to now a system tray and a full cabinet. Tesla claims a single Dojo tile can replace 6 GPU boxes while costing less than one of them. There are 6 of those tiles per tray. Tesla says that a single tray is the equivalent of "3 to 4 fully-loaded supercomputer racks." The company is integrating its host interface directly on the system tray to create a big full host assembly. Tesla can fit two of these system trays with host assembly into a single Dojo cabinet. That's pretty much where Tesla is right now as the automaker is still developing and testing the infrastructure needed to put a few cabinets together to create the first "Dojo Exapod."
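Taking Tesla's own equivalence claims at face value, the cabinet-level arithmetic implied above works out as follows (a rough sketch; the GPU-box figure is Tesla's claim, not an independent benchmark):

    GPU_BOXES_PER_TILE = 6   # Tesla's claim: one Dojo tile replaces six GPU boxes
    TILES_PER_TRAY = 6
    TRAYS_PER_CABINET = 2

    tiles_per_cabinet = TILES_PER_TRAY * TRAYS_PER_CABINET          # 12 tiles
    gpu_box_equivalent = tiles_per_cabinet * GPU_BOXES_PER_TILE     # ~72 GPU boxes
    print(f"{tiles_per_cabinet} tiles/cabinet ~= {gpu_box_equivalent} GPU boxes")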

Bill Chang, Tesla's Principal System Engineer for Dojo, said: "We knew that we had to reexamine every aspect of the data center infrastructure in order to support our unprecedented cooling and power density." They had to develop their own high-powered cooling and power system to power the Dojo cabinets. Chang said that Tesla tripped their local electric grid's substation when testing the infrastructure earlier this year: "Earlier this year, we started load testing our power and cooling infrastructure and we were able to push it over 2 MW before we tripped our substation and got a call from the city." Tesla released the main specs of a Dojo Exapod: 1.1 EFLOP, 1.3 TB SRAM, and 13 TB high-bandwidth DRAM.

Supercomputing

China's Baidu Reveals Its First Quantum Computer, 'Qianshi' (reuters.com) 15

Chinese search engine giant Baidu revealed its first quantum computer on Thursday and is ready to make it available to external users, joining the global race to apply the technology to practical uses. Reuters reports: The Baidu-developed quantum computer, dubbed "Qianshi," has a 10-quantum-bit (qubit) processor, Baidu said in a statement. The Beijing-based company has also developed a 36-qubit quantum chip, it said. Governments and companies around the world have for years touted the potential of quantum computing, a form of computation that exploits quantum mechanics (and, in many designs, requires extraordinarily cold temperatures) and promises processing speeds no conventional computer can match. However, current real-world applications in the field are still very basic and limited to a small group of early clients.
Supercomputing

Are the World's Most Powerful Supercomputers Operating In Secret? (msn.com) 42

"A new supercomputer called Frontier has been widely touted as the world's first exascale machine — but was it really?"

That's the question that long-time Slashdot reader MattSparkes explores in a new article at New Scientist... Although Frontier, which was built by the Oak Ridge National Laboratory in Tennessee, topped what is generally seen as the definitive list of supercomputers, others may already have achieved the milestone in secret....

The definitive list of supercomputers is the Top500, which is based on a single measurement: how fast a machine can solve vast numbers of equations by running software called the LINPACK benchmark. This gives a value in floating-point operations per second, or FLOPS. But even Jack Dongarra at Top500 admits that not all supercomputers are listed: a machine features only if its owner runs the benchmark and submits a result. "If they don't send it in it doesn't get entered," he says. "I can't force them."
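The benchmark's core task is solving a dense system of linear equations. A toy version of the measurement, with NumPy's solver standing in for the real, heavily tuned LINPACK code and using the classic (2/3)n^3 operation count for an LU-based solve, looks like this:

    import time
    import numpy as np

    n = 4000                                  # real LINPACK runs use far larger systems
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    start = time.perf_counter()
    x = np.linalg.solve(A, b)                 # LU factorization + triangular solves
    elapsed = time.perf_counter() - start

    flops = (2 / 3) * n**3                    # textbook operation count for dense LU
    print(f"~{flops / elapsed / 1e9:.1f} GFLOPS on this machine")
    print("residual:", np.linalg.norm(A @ x - b))   # sanity-check the solution

A supercomputer's submitted score is this same kind of number, scaled up by many orders of magnitude on far larger systems of equations.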

Some owners prefer not to release a benchmark figure, or even publicly reveal a machine's existence. Simon McIntosh-Smith at the University of Bristol, UK, points out that not only do intelligence agencies and certain companies have an incentive to keep their machines secret, but some purely academic machines like Blue Waters, operated by the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, are also just never entered.... Dongarra says that the consensus among supercomputer experts is that China has had at least two exascale machines running since 2021, known as OceanLight and Tianhe-3, and is working on an even larger third called Sugon. Scientific papers on unconnected research have revealed evidence of these machines when describing calculations carried out on them.

McIntosh-Smith also believes that intelligence agencies would rank well, if allowed. "Certainly in the [US], some of the security forces have things that would put them at the top," he says. "There are definitely groups who obviously wouldn't want this on the list."
