Supercomputing

After AI, Quantum Computing Eyes Its 'Sputnik' Moment (phys.org) 52

The founder of Cambridge-based Riverlane, Steve Brierley, predicts quantum computing will have its "Sputnik" breakthrough within years. "Quantum computing is not going to be just slightly better than the previous computer, it's going to be a huge step forward," he said. Phys.org reports: His company produces the world's first dedicated quantum decoder chip, which detects and corrects the errors currently holding the technology back. In a sign of confidence in Riverlane's work and the sector in general, the company announced on Tuesday that it had raised $75 million in Series C funding, typically the last round of venture capital financing prior to an initial public offering. "Over the next two to three years, we'll be able to get to systems that can support a million error-free operations," said Earl Campbell, vice president of quantum science at Riverlane. This is the threshold where a quantum computer should be able to perform certain tasks better than conventional computers, he added.

Quantum computers are "really good at simulating other quantum systems", explained Brierley, meaning they can simulate interactions between particles, atoms and molecules. This could open the door to revolutionary medicines and also promises huge efficiency improvements in how fertilizers are made, transforming an industry that today produces around two percent of global CO2 emissions. It also paves the way for much more efficient batteries, another crucial weapon in the fight against climate change. "I think most people are more familiar with exponential after COVID, so we know how quickly something that's exponential can spread," said Campbell, inside Riverlane's testing lab, a den of oscilloscopes and chipboards. [...]

While today's quantum computers can only perform around 1,000 operations before being overwhelmed by errors, the quality of the actual components has "got to the point where the physical qubits are good enough," said Brierley. "So this is a super exciting time. The challenge now is to scale up... and to add error correction into the systems," he added. Such progress, along with quantum computing's potential to crack all existing cryptography and create potent new materials, is spurring regulators into action. "There's definitely a scrambling to understand what's coming next in technology. It's really important that we learn the lessons from AI, to not be surprised by the technology and think early about what those implications are going to be," said Brierley. "I think there will ultimately be regulation around quantum computing, because it's such an important technology. And I think this is a technology where no government wants to come second."
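
Riverlane's decoder design isn't described here, but the intuition behind error-corrected operation can be shown with a toy classical analogy: spread one logical bit over several physical bits and decode by majority vote. The sketch below uses made-up physical error rates and is not a quantum code; it only illustrates why redundancy plus a decoder pushes the logical error rate far below the raw hardware error rate.
```python
from math import comb

def logical_error_rate(p, n=3):
    """Probability that a majority of the n redundant copies are corrupted,
    i.e. that majority-vote decoding returns the wrong bit."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

# Illustrative physical error rates (assumptions, not Riverlane's figures):
for p in (1e-2, 1e-3, 1e-4):
    for n in (3, 5, 7):
        print(f"physical p={p:.0e}, copies={n}: logical error ~ {logical_error_rate(p, n):.1e}")
```
Real quantum error correction cannot copy qubits outright; it spreads information through entanglement and infers errors from syndrome measurements, and the decoder has to keep up with that syndrome stream in real time, which is the job of a dedicated decoder chip.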

China

China Is Getting Secretive About Its Supercomputers 28

For decades, American and Chinese scientists collaborated on supercomputers. But Chinese scientists have become more secretive as the U.S. has tried to hinder China's technological progress, and they have stopped participating altogether in a prominent international supercomputing forum. From a report: The withdrawal marked the end of an era and created a divide that Western scientists say will slow the development of AI and other technologies as countries pursue separate projects. The new secrecy also makes it harder for the U.S. government to answer a question it deems essential to national security: Does the U.S. or China have faster supercomputers? Some academics have taken it upon themselves to hunt for clues about China's supercomputing progress, scrutinizing research papers and cornering Chinese peers at conferences.

Supercomputers have become central to the U.S.-China technological Cold War because the country with the faster supercomputers can also hold an advantage in developing nuclear weapons and other military technology. "If the other guy can use a supercomputer to simulate and develop a fighter jet or weapon 20% or even 1% better than yours in terms of range, speed and accuracy, it's going to target you first, and then it's checkmate," said Jimmy Goodrich, a senior adviser for technology analysis at Rand, a think tank. The forum that China recently stopped participating in is called the Top500, which ranks the world's 500 fastest supercomputers. While the latest ranking, released in June, says the world's three fastest computers are in the U.S., the reality is probably different.
Supercomputing

$2.4 Million Texas Home Listing Boasts Built-In 5,786 sq ft Data Center (tomshardware.com) 34

A Zillow listing for a $2.4 million house in a Dallas suburb is grabbing attention for its 5,786-square-foot data center with immersion cooling tanks, massive server racks, and two separate power grids. Tom's Hardware reports: With a brick exterior, cute paving, and mini-McMansion arch stylings, the building certainly looks to be a residential home for the archetypal Texas family. Prospective home-buyers will thus be disappointed by the 0 bedroom, 1 bathroom setup, which becomes a warehouse-feeling office from the first step inside where you are met with a glass-shielded reception desk in a white-brick corridor. The "Crypto Collective" branding betrays the former life of the unit, which served admirably as a crypto mining base.

The purchase of the "upgraded turnkey Tier 2 Data Center" will include all of its cooling and power infrastructure. Three Engineered Fluids "SLICTanks," single-phase liquid immersion cooling tanks for use with dielectric coolant, will come with pumps and a 500kW dry cooler. The tanks are currently filled with at least 80 mining computers visible from the photos, though the SLICTanks can be configured to fit more machines. Also visible in proximity to the cooling array is a deep row of classic server racks and a staggering amount of networking.

The listing advertises a host of potential uses for future customers, from "AI services, cloud hosting, traditional data center, servers or even Bitcoin Mining". Also packed into the 5,786 square feet of real estate are two separate power grids, five HVAC units, a hefty four levels of warehouse-style storage aisles, a lounge/office space, and a fully paved backyard. In other good news, its future corporate residents will not have an HOA to deal with, and will only be 20 minutes outside of the heart of Dallas, sitting just out of earshot of two major highways.

Hardware

Will Tesla Do a Phone? Yes, Says Morgan Stanley 170

Morgan Stanley, in a note -- seen by Slashdot -- sent to its clients on Wednesday: From our continuing discussions with automotive management teams and industry experts, the car is an extension of the phone. The phone is an extension of the car. The lines between car and phone are truly blurring.

For years, we have been writing about the potential for Tesla to expand into edge compute domains beyond the car, including last October, when we described a mobile AI assistant as a 'heavy key.' Following Apple's WWDC, Tesla CEO Elon Musk re-ignited the topic by saying that making such a device is 'not out of the question.' As Mr. Musk continues to invest further into his own LLM/genAI efforts, such as 'Grok,' the potential strategic and user-experience overlap becomes more obvious.

From an automotive perspective, the topics of supercomputing at both the datacenter level and at the edge are highly relevant given that the incremental global unit sold is a car that can perform OTA updates of firmware, has a battery with a stored energy equivalent of approx. 2,000 iPhones, and carries a liquid-cooled inference supercomputer as standard kit. What if your phone could tap into your vehicle's compute power and battery supply to run AI applications?

Edge compute and AI have brought to light some of the challenges (battery life, thermal, latency, etc.) of marrying today's smartphones with ever more powerful AI-driven applications. Numerous media reports have discussed OpenAI potentially developing a consumer device specifically designed for AI.

The phone as a (heavy) car key? Any Tesla owner will tell you how they use their smartphone as their primary key to unlock their car as well as running other remote applications while they interact with their vehicles. The 'action button' on the iPhone 15 potentially takes this to a different level of convenience.
Supercomputing

UK Imposes Mysterious Ban On Quantum Computer Exports (newscientist.com) 19

Longtime Slashdot reader MattSparkes shares a report from NewScientist: Quantum computing experts are baffled by the UK government's new export restrictions on the exotic devices (source paywalled), saying they make little sense. [The UK government has set limits on the capabilities of quantum computers that can be exported -- starting with those above 34 qubits, with the qubit threshold rising for machines that have higher error rates -- and has declined to explain these limits on the grounds of national security.] The legislation applies to both existing, small quantum computers that are of no practical use and larger computers that don't actually exist, so cannot be exported. Instead, there are fears the limits will restrict sales and add bureaucracy to a new and growing sector. For more context, here's an excerpt from an article published by The Telegraph in March: The technology has been added to a list of "dual use" items that could have military uses, maintained by the Export Control Joint Unit, which scrutinizes sales of sensitive goods. A national quantum computer strategy published last year described the technology as being "critically important" for defense and national security and said the UK was in a "global race" to develop it. [...] The changes have been introduced as part of a broader update to export rules agreed by Western allies including the US and major European countries. Several nations with particular expertise in quantum computer technologies have added specific curbs, including France, which introduced rules at the start of this month.

Last year, industry body Quantum UK said British companies were concerned about the prospect of further export controls, and that they could even put off US companies seeking to relocate to the UK. Quantum computer exports only previously required licenses in specific cases, such as when they were likely to lead to military use. Oxford Instruments, which makes cooling systems for quantum computers, said last year that sales in China had been hit by increasing curbs. James Lindop of law firm Eversheds Sutherland said: "Semiconductor and quantum technologies -- two areas in which the UK already holds a world-leading position -- are increasingly perceived to be highly strategic and critical to UK national security. This will undoubtedly create an additional compliance burden for businesses active in the development and production of the targeted technologies."

Supercomputing

Linux Foundation Announces Launch of 'High Performance Software Foundation' (linuxfoundation.org) 4

This week the nonprofit Linux Foundation announced the launch of the High Performance Software Foundation, which "aims to build, promote, and advance a portable core software stack for high performance computing" (or HPC) by "increasing adoption, lowering barriers to contribution, and supporting development efforts."

It promises initiatives focused on "continuously built, turnkey software stacks," as well as other initiatives including architecture support and performance regression testing. Its first open source technical projects are:

- Spack: the HPC package manager.

- Kokkos: a performance-portable programming model for writing modern C++ applications in a hardware-agnostic way.

- Viskores (formerly VTK-m): a toolkit of scientific visualization algorithms for accelerator architectures.

- HPCToolkit: performance measurement and analysis tools for computers ranging from desktop systems to GPU-accelerated supercomputers.

- Apptainer: formerly known as Singularity, a Linux Foundation project providing a high-performance, full-featured, HPC-optimized container subsystem.

- E4S: a curated, hardened distribution of scientific software packages.

As use of HPC becomes ubiquitous in scientific computing and digital engineering, and AI use cases multiply, more and more data centers deploy GPUs and other compute accelerators. The High Performance Software Foundation will provide a neutral space for pivotal projects in the high performance computing ecosystem, enabling industry, academia, and government entities to collaborate on the scientific software.

The High Performance Software Foundation benefits from strong support across the HPC landscape, including Premier Members Amazon Web Services (AWS), Hewlett Packard Enterprise, Lawrence Livermore National Laboratory, and Sandia National Laboratories; General Members AMD, Argonne National Laboratory, Intel, Kitware, Los Alamos National Laboratory, NVIDIA, and Oak Ridge National Laboratory; and Associate Members University of Maryland, University of Oregon, and Centre for Development of Advanced Computing.

In a statement, an AMD vice president said that by joining "we are using our collective hardware and software expertise to help develop a portable, open-source software stack for high-performance computing across industry, academia, and government." And an AWS executive said the high-performance computing community "has a long history of innovation being driven by open source projects. AWS is thrilled to join the High Performance Software Foundation to build on this work. In particular, AWS has been deeply involved in contributing upstream to Spack, and we're looking forward to working with the HPSF to sustain and accelerate the growth of key HPC projects so everyone can benefit."
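
Spack, mentioned in that AWS quote and listed above as the HPC package manager, describes each piece of software as a small Python recipe. A minimal, hypothetical recipe might look like the sketch below; the package name, URL, checksum, and variant are illustrative, not a real Spack package.
```python
# A minimal, hypothetical Spack recipe (package.py). The package name, URL,
# checksum, and variant are illustrative only -- not a real Spack package.
from spack.package import *

class ExampleSolver(AutotoolsPackage):
    """Hypothetical HPC library, used only to illustrate Spack's recipe format."""

    homepage = "https://example.org/example-solver"
    url = "https://example.org/example-solver-1.0.0.tar.gz"

    version("1.0.0", sha256="0" * 64)  # placeholder checksum

    variant("mpi", default=True, description="Build with MPI support")

    depends_on("mpi", when="+mpi")
    depends_on("zlib")

    def configure_args(self):
        # Translate the Spack variant into the package's configure flag.
        return ["--enable-mpi"] if "+mpi" in self.spec else ["--disable-mpi"]
```
A user would then build it with something like "spack install example-solver +mpi", letting Spack resolve and build the whole dependency chain with a consistent toolchain.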

The new foundation will "set up a technical advisory committee to manage working groups tackling a variety of HPC topics," according to the announcement, following a governance model based on the Cloud Native Computing Foundation.
Supercomputing

Intel Aurora Supercomputer Breaks Exascale Barrier 28

Josh Norem reports via ExtremeTech: At the recent International Supercomputing Conference (ISC 2024), Intel's newest Aurora supercomputer, installed at Argonne National Laboratory, raised a few eyebrows by finally surpassing the exascale barrier. Before this, only AMD's Frontier system had been able to achieve this level of performance. Intel also achieved what it says is the world's best performance for AI at 10.61 "AI exaflops." Intel reported the news on its blog, stating Aurora was now officially the fastest supercomputer for AI in the world. It shares the distinction with Argonne National Laboratory, which houses the system, and Hewlett Packard Enterprise (HPE), which built it; Intel says the machine was at 87% functionality for the recent tests. In the all-important Linpack (HPL) test, the Aurora computer hit 1.012 exaflops, meaning it has almost doubled the performance on tap since its initial "partial run" in late 2023, where it hit just 585.34 petaflops. The company then said it expected to cross the exascale barrier with Aurora eventually, and now it has.

Intel says for the ISC 2024 tests, Aurora was operating with 9,234 nodes. The company notes it ranked second overall in LINPACK, meaning it's still unable to dethrone AMD's Frontier system, which is also an HPE supercomputer. AMD's Frontier was the first supercomputer to break the exascale barrier in June 2022. Frontier sits at around 1.2 exaflops in Linpack, so Intel is knocking on its door but still has a way to go before it can topple it. However, Intel says Aurora came in first in the Linpack-mixed benchmark, reportedly highlighting its unparalleled AI performance. Intel's Aurora supercomputer uses the company's latest CPU and GPU hardware, with 21,248 Sapphire Rapids Xeon CPUs and 63,744 Ponte Vecchio GPUs. When it's fully operational later this year, Intel believes the system will eventually be capable of crossing the 2-exaflop barrier.
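
Some back-of-the-envelope arithmetic on those figures (a sketch; the per-node breakdown assumes two Xeons per node, which is an inference rather than something stated above):
```python
# Back-of-the-envelope arithmetic from the figures quoted above.
# The per-node breakdown assumes 2 Xeons per node (an inference, not a spec).
hpl_exaflops = 1.012        # Aurora's ISC 2024 Linpack (HPL) result
nodes_in_run = 9_234        # nodes used for that run
total_gpus   = 63_744       # Ponte Vecchio GPUs in the full system
total_cpus   = 21_248       # Sapphire Rapids Xeons in the full system

nodes_total     = total_cpus / 2                    # assumed 2 CPUs per node
gpus_per_node   = total_gpus / nodes_total
tflops_per_node = hpl_exaflops * 1e6 / nodes_in_run

print(f"assumed total nodes            : {nodes_total:.0f}")
print(f"GPUs per node                  : {gpus_per_node:.0f}")
print(f"HPL throughput per active node : {tflops_per_node:.0f} TFLOPS")
print(f"gap to Frontier (~1.2 EF)      : {1.2 / hpl_exaflops:.2f}x")
print(f"gain over late-2023 partial run: {hpl_exaflops * 1e3 / 585.34:.2f}x")
```
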
Supercomputing

Defense Think Tank MITRE To Build AI Supercomputer With Nvidia (washingtonpost.com) 44

An anonymous reader quotes a report from the Washington Post: A key supplier to the Pentagon and U.S. intelligence agencies is building a $20 million supercomputer with buzzy chipmaker Nvidia to speed deployment of artificial intelligence capabilities across the U.S. federal government, the MITRE think tank said Tuesday. MITRE, a federally funded, not-for-profit research organization that has supplied U.S. soldiers and spies with exotic technical products since the 1950s, says the project could improve everything from Medicare to taxes. "There's huge opportunities for AI to make government more efficient," said Charles Clancy, senior vice president of MITRE. "Government is inefficient, it's bureaucratic, it takes forever to get stuff done. ... That's the grand vision, is how do we do everything from making Medicare sustainable to filing your taxes easier?" [...] The MITRE supercomputer will be based in Ashburn, Va., and should be up and running late this year. [...]

Clancy said the planned supercomputer will run 256 Nvidia graphics processing units, or GPUs, at a cost of $20 million. This counts as a small supercomputer: The world's fastest supercomputer, Frontier in Tennessee, boasts 37,888 GPUs, and Meta is seeking to build one with 350,000 GPUs. But MITRE's computer will still eclipse Stanford's Natural Language Processing Group's 68 GPUs, and will be large enough to train large language models to perform AI tasks tailored for government agencies. Clancy said all federal agencies funding MITRE will be able to use this AI "sandbox." "AI is the tool that is solving a wide range of problems," Clancy said. "The U.S. military needs to figure out how to do command and control. We need to understand how cryptocurrency markets impact the traditional banking sector. ... Those are the sorts of problems we want to solve."

Supercomputing

Europe Plans To Build 100-Qubit Quantum Computer By 2026 (physicsworld.com) 27

An anonymous reader quotes a report published last week by Physics World: Researchers at the Dutch quantum institute QuTech in Delft have announced plans to build Europe's first 100-quantum bit (qubit) quantum computer. When complete in 2026, the device will be made publicly available, providing scientists with a tool for quantum calculations and simulations. The project is funded by the Dutch umbrella organization Quantum Delta NL via the European OpenSuperQPlus initiative, which has 28 partners from 10 countries. Part of the 10-year, 1 billion-euro European Quantum Flagship program, OpenSuperQPlus aims to build a 100-qubit superconducting quantum processor as a stepping stone to an eventual 1000-qubit European quantum computer.

Quantum Delta NL says the 100-qubit quantum computer will be made publicly available via a cloud platform as an extension of the existing platform Quantum Inspire that first came online in 2020. It currently includes a two-qubit processor of spin qubits in silicon, as well as a five-qubit processor based on superconducting qubits. Quantum Inspire is currently focused on training and education but the upgrade to 100 qubits is expected to allow research into quantum computing. Lead researcher from QuTech Leonardo DiCarlo believes the R&D cycle has "come full circle," where academic research first enabled spin-off companies to grow and now their products are being used to accelerate academic research.

Supercomputing

New Advances Promise Secure Quantum Computing At Home (phys.org) 27

Scientists from Oxford University Physics have developed a breakthrough in cloud-based quantum computing that could allow it to be harnessed by millions of individuals and companies. The findings have been published in the journal Physical Review Letters. Phys.Org reports: In the new study, the researchers use an approach dubbed "blind quantum computing," which connects two totally separate quantum computing entities -- potentially an individual at home or in an office accessing a cloud server -- in a completely secure way. Importantly, their new methods could be scaled up to large quantum computations. "Using blind quantum computing, clients can access remote quantum computers to process confidential data with secret algorithms and even verify the results are correct, without revealing any useful information. Realizing this concept is a big step forward in both quantum computing and keeping our information safe online," said study lead Dr. Peter Drmota, of Oxford University Physics.

The researchers created a system comprising a fiber network link between a quantum computing server and a simple device detecting photons, or particles of light, at an independent computer remotely accessing its cloud services. This allows so-called blind quantum computing over a network. Every computation incurs a correction that must be applied to all that follow and needs real-time information to comply with the algorithm. The researchers used a unique combination of quantum memory and photons to achieve this. The results could ultimately lead to commercial development of devices to plug into laptops, to safeguard data when people are using quantum cloud computing services.
"We have shown for the first time that quantum computing in the cloud can be accessed in a scalable, practical way which will also give people complete security and privacy of data, plus the ability to verify its authenticity," said Professor David Lucas, who co-heads the Oxford University Physics research team and is lead scientist at the UK Quantum Computing and Simulation Hub, led from Oxford University Physics.
Crime

Former Google Engineer Indicted For Stealing AI Secrets To Aid Chinese Companies 28

Linwei Ding, a former Google software engineer, has been indicted for stealing trade secrets related to AI to benefit two Chinese companies. He faces up to 10 years in prison and a $250,000 fine on each criminal count. Reuters reports: Ding's indictment was unveiled a little over a year after the Biden administration created an interagency Disruptive Technology Strike Force to help stop advanced technology from being acquired by countries such as China and Russia or otherwise used to threaten national security. "The Justice Department just will not tolerate the theft of our trade secrets and intelligence," U.S. Attorney General Merrick Garland said at a conference in San Francisco.

According to the indictment, Ding stole detailed information about the hardware infrastructure and software platform that lets Google's supercomputing data centers train large AI models through machine learning. The stolen information included details about chips and systems, and software that helps power a supercomputer "capable of executing at the cutting edge of machine learning and AI technology," the indictment said. Google designed some of the allegedly stolen chip blueprints to gain an edge over cloud computing rivals Amazon.com and Microsoft, which design their own, and reduce its reliance on chips from Nvidia.

Hired by Google in 2019, Ding allegedly began his thefts three years later, while he was being courted to become chief technology officer for an early-stage Chinese tech company, and by May 2023 had uploaded more than 500 confidential files. The indictment said Ding founded his own technology company that month, and circulated a document to a chat group that said "We have experience with Google's ten-thousand-card computational power platform; we just need to replicate and upgrade it." Google became suspicious of Ding in December 2023 and took away his laptop on Jan. 4, 2024, the day before Ding planned to resign.
A Google spokesperson said: "We have strict safeguards to prevent the theft of our confidential commercial information and trade secrets. After an investigation, we found that this employee stole numerous documents, and we quickly referred the case to law enforcement."
Supercomputing

Investors Threw 50% Less Money At Quantum Last Year (theregister.com) 32

Dan Robinson reports via The Register: Quantum companies received 50 percent less venture capital funding last year as investors switched to generative AI or shied away from risky bets on Silicon Valley startups. Progress in quantum computing is being made, but practical applications of the technology are still likely years away. Investment in quantum technology reached a high of $2.2 billion in 2022, as confidence (or hype) grew in this emerging market, but that funding fell to about $1.2 billion last year, according to the latest State of Quantum report, produced by The Quantum Insider, with quantum computing company IQM, plus VCs OpenOcean and Lakestar. The picture is even starker in the US, where there was an 80 percent decline in venture capital for quantum, while the APAC region dropped by 17 percent, and EMEA grew slightly by three percent.

But the report denies that we have reached a "quantum winter," comparable with the "AI winter" periods of scarce funding and little progress. Instead, the quantum industry continues to progress towards useful quantum systems, just at a slower pace, and the decline in funding must be seen as part of broader venture capital trends, it insists. "Calendar year 2023 was an interesting year with regards to quantum," Heather West, research manager for Quantum Computing, Infrastructure Systems, Platforms, and Technology at IDC told The Register. "With the increased interest in generative AI, we started to observe that some of the funding that was being invested into quantum was transferred to AI initiatives and companies. Generative AI was seen as the new disruptive technology which end users could use immediately to gain an advantage or value, whereas quantum, while expected to be a disruptive technology, is still very early in development," West told The Register.

Gartner Research vice president Matthew Brisse agreed. "It's due to the slight shift of CIO priorities toward GenAI. If organizations were spending 10 innovation dollars on quantum, now they are spending five. Not abandoning it, but looking at GenAI to provide value sooner to the organization than quantum," he told us. Meanwhile, venture capitalists in America are fighting shy of risky bets on Silicon Valley startups and instead keeping their powder dry as they look to more established technology companies or else shore up their existing portfolio of investments, according to the Financial Times.

Networking

Ceph: a Journey To 1 TiB/s (ceph.io) 16

It's "a free and open-source, software-defined storage platform," according to Wikipedia, providing object storage, block storage, and file storage "built on a common distributed cluster foundation". The charter advisory board for Ceph included people from Canonical, CERN, Cisco, Fujitsu, Intel, Red Hat, SanDisk, and SUSE.

And Nite_Hawk (Slashdot reader #1,304) is one of its core engineers — a former Red Hat principal software engineer named Mark Nelson. (He's now leading R&D for a small cloud systems company called Clyso that provides Ceph consulting.) And he's returned to Slashdot to share a blog post describing "a journey to 1 TiB/s". This gnarly tale from production starts with Clyso assisting "a fairly hip and cutting edge company that wanted to transition their HDD-backed Ceph cluster to a 10 petabyte NVMe deployment" using object-based storage devices [or OSDs]... I can't believe they figured it out first. That was the thought going through my head back in mid-December after several weeks of 12-hour days debugging why this cluster was slow... Half-forgotten superstitions from the 90s about appeasing SCSI gods flitted through my consciousness...

Ultimately they decided to go with a Dell architecture we designed, which was quoted at roughly 13% cheaper than the original configuration despite having several key advantages. The new configuration has less memory per OSD (still comfortably 12GiB each), but faster memory throughput. It also provides more aggregate CPU resources, significantly more aggregate network throughput, a simpler single-socket configuration, and utilizes the newest generation of AMD processors and DDR5 RAM. By employing smaller nodes, we halved the impact of a node failure on cluster recovery....

The initial single-OSD test looked fantastic for large reads and writes and showed nearly the same throughput we saw when running FIO tests directly against the drives. As soon as we ran the 8-OSD test, however, we observed a performance drop. Subsequent single-OSD tests continued to perform poorly until several hours later when they recovered. So long as a multi-OSD test was not introduced, performance remained high. Confusingly, we were unable to invoke the same behavior when running FIO tests directly against the drives. Just as confusing, we saw that during the 8 OSD test, a single OSD would use significantly more CPU than the others. A wallclock profile of the OSD under load showed significant time spent in io_submit, which is what we typically see when the kernel starts blocking because a drive's queue becomes full...

For over a week, we looked at everything from bios settings, NVMe multipath, low-level NVMe debugging, changing kernel/Ubuntu versions, and checking every single kernel, OS, and Ceph setting we could think of. None of these things fully resolved the issue. We even performed blktrace and iowatcher analysis during "good" and "bad" single OSD tests, and could directly observe the slow IO completion behavior. At this point, we started getting the hardware vendors involved. Ultimately it turned out to be unnecessary. There were one minor and two major fixes that got things back on track.

It's a long blog post, but here's where it ends up:
  • Fix One: "Ceph is incredibly sensitive to latency introduced by CPU c-state transitions. A quick check of the bios on these nodes showed that they weren't running in maximum performance mode which disables c-states." (A quick way to inspect c-states on a running Linux node is sketched after this list.)
  • Fix Two: [A very clever engineer working for the customer] "ran a perf profile during a bad run and made a very astute discovery: A huge amount of time is spent in the kernel contending on a spin lock while updating the IOMMU mappings. He disabled IOMMU in the kernel and immediately saw a huge increase in performance during the 8-node tests." In a comment below, Nelson adds that "We've never seen the IOMMU issue before with Ceph... I'm hoping we can work with the vendors to understand better what's going on and get it fixed without having to completely disable IOMMU."
  • Fix Three: "We were not, in fact, building RocksDB with the correct compile flags... It turns out that Canonical fixed this for their own builds as did Gentoo after seeing the note I wrote in do_cmake.sh over 6 years ago... With the issue understood, we built custom 17.2.7 packages with a fix in place. Compaction time dropped by around 3X and 4K random write performance doubled."
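
Fix One is easy to check for on any Linux node: the kernel exposes each core's idle states under sysfs. A small sketch, assuming the standard cpuidle layout:
```python
# Sketch: list the CPU idle states (c-states) the kernel currently allows on
# cpu0, via the standard Linux cpuidle sysfs interface. Read-only as shown;
# writing "1" to a state's 'disable' file (as root) turns that state off.
from pathlib import Path

cpuidle = Path("/sys/devices/system/cpu/cpu0/cpuidle")
if not cpuidle.exists():
    print("cpuidle not exposed here (driver disabled, or not a Linux system)")
else:
    for state in sorted(cpuidle.glob("state*")):
        name     = (state / "name").read_text().strip()
        latency  = (state / "latency").read_text().strip()    # exit latency, us
        disabled = (state / "disable").read_text().strip() == "1"
        print(f"{state.name}: {name:<10} exit latency {latency:>5} us "
              f"{'(disabled)' if disabled else '(enabled)'}")
```
Deep states with long exit latencies are the ones that hurt tail latency; the BIOS "maximum performance" profile mentioned above keeps cores out of them entirely.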

The story has a happy ending, with performance testing eventually showing data being read at 635 GiB/s — and a colleague daring them to attempt 1 TiB/s. They built a new testing configuration targeting 63 nodes — achieving 950GiB/s — then tried some more performance optimizations...


Supercomputing

Quantum Computing Startup Says It Will Beat IBM To Error Correction (arstechnica.com) 39

An anonymous reader quotes a report from Ars Technica: On Tuesday, the quantum computing startup Quera laid out a road map that will bring error correction to quantum computing in only two years and enable useful computations using it by 2026, years ahead of when IBM plans to offer the equivalent. Normally, this sort of thing should be dismissed as hype. Except the company is Quera, which is a spinoff of the Harvard University lab that demonstrated the ability to identify and manage errors using hardware that's similar in design to what Quera is building. Also notable: Quera uses the same type of qubit that a rival startup, Atom Computing, has already scaled up to over 1,000 qubits. So, while the announcement should be viewed cautiously -- several companies have promised rapid scaling and then failed to deliver -- there are some reasons it should be viewed seriously as well. [...]

As our earlier coverage described, the Harvard lab where the technology behind Quera's hardware was developed has already demonstrated a key step toward error correction. It created logical qubits from small collections of atoms, performed operations on them, and determined when errors occurred (those errors were not corrected in these experiments). But that work relied on operations that are relatively easy to perform with trapped atoms: two qubits were superimposed, and both were exposed to the same combination of laser lights, essentially performing the same manipulation on both simultaneously. Unfortunately, only a subset of the operations that are likely to be desired for a calculation can be done that way. So, the road map includes a demonstration of additional types of operations in 2024 and 2025. At the same time, the company plans to rapidly scale the number of qubits. Its goal for 2024 hasn't been settled on yet, but [Quera's Yuval Boger] indicated that the goal is unlikely to be much more than double the current 256. By 2025, however, the road map calls for over 3,000 qubits and over 10,000 a year later. This year's small step will add pressure to the need for progress in the ensuing years.

If things go according to plan, the 3,000-plus qubits of 2025 can be combined to produce 30 logical qubits, meaning about 100 physical qubits per logical one. This allows fairly robust error correction schemes and has undoubtedly been influenced by Quera's understanding of the error rate of its current atomic qubits. That's not enough to perform any algorithms that can't be simulated on today's hardware, but it would be more than sufficient to allow people to get experience with developing software using the technology. (The company will also release a logical qubit simulator to help here.) Quera will undoubtedly use this system to develop its error correction process -- Boger indicated that the company expected it would be transparent to the user. In other words, people running operations on Quera's hardware can submit jobs knowing that, while they're running, the system will be handling the error correction for them. Finally, the 2026 machine will enable up to 100 logical qubits, which is expected to be sufficient to perform useful calculations, such as the simulation of small molecules. More general-purpose quantum computing will need to wait for higher qubit counts still.
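
The overhead implied by those numbers is easy to sanity-check. Roughly 100 physical qubits per logical qubit corresponds, for instance, to a distance-7 surface code; the surface code is only one common choice, and the article does not say which code Quera will use, so the sketch below is an illustration rather than the company's design:
```python
# Sketch: physical-qubit overhead per logical qubit for a distance-d rotated
# surface code (2*d**2 - 1 qubits). The surface code is one common code
# family; the article does not say which code Quera will actually use.
def physical_per_logical(d):
    return 2 * d**2 - 1

for d in (3, 5, 7):
    print(f"code distance {d}: {physical_per_logical(d):>3} physical qubits per logical qubit")

# Using the article's round figure of ~100 physical qubits per logical qubit:
for year, physical in [(2025, 3_000), (2026, 10_000)]:
    print(f"{year} road map: {physical:>6} physical qubits -> ~{physical // 100} logical qubits")
```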

Supercomputing

How a Cray-1 Supercomputer Compares to a Raspberry Pi (roylongbottom.org.uk) 145

Roy Longbottom worked for the U.K. government's Central Computer Agency from 1960 to 1993, and "from 1972 to 2022 I produced and ran computer benchmarking and stress testing programs..." Known as the official design authority for the Whetstone benchmark, Longbottom writes that "In 2019 (aged 84), I was recruited as a voluntary member of Raspberry Pi pre-release Alpha testing team."

And this week — now at age 87 — Longbottom has created a web page titled "Cray 1 supercomputer performance comparisons with home computers, phones and tablets." And one statistic really captures the impact of our decades of technological progress.

"In 1978, the Cray 1 supercomputer cost $7 Million, weighed 10,500 pounds and had a 115 kilowatt power supply. It was, by far, the fastest computer in the world. The Raspberry Pi costs around $70 (CPU board, case, power supply, SD card), weighs a few ounces, uses a 5 watt power supply and is more than 4.5 times faster than the Cray 1."


Thanks to long-time Slashdot reader bobdevine for sharing the link.
Supercomputing

Quantum Computer Sets Record For Largest Ever Number of 'Logical Quantum Bits' (newscientist.com) 16

An anonymous reader quotes a report from New Scientist: Another quantum computing record has been broken. A team has built a quantum computer with the largest ever number of so-called logical qubits (quantum bits). Unlike standard qubits, logical qubits are better able to carry out computations unmarred by errors, making the new device a potentially important step towards practical quantum computing. How complicated a calculation a quantum computer can complete depends on the number of qubits it contains. Recently, IBM and California-based Atom Computing unveiled devices with more than 1000 qubits, nearly tripling the size of the previously largest quantum computers. But the existence of these devices has not led to an immediate and dramatic increase in computing capability, because larger quantum computers often also make more errors.

To make a quantum computer that can correct its errors, researchers from the quantum computing start-up QuEra in Boston and several academics focused instead on increasing its number of logical qubits, which are groups of qubits that are connected to each other through quantum entanglement. In conventional computers, error-correction relies on keeping multiple redundant copies of information, but quantum information is fundamentally different and cannot be copied -- so researchers use entanglement to spread it across several qubits, which achieves a similar redundancy, says Dolev Bluvstein at Harvard University in Massachusetts who was part of the team. To make their quantum computer, the researchers started with several thousand rubidium atoms in an airless container. They then used forces from lasers and magnets to cool the atoms to temperatures close to absolute zero where their quantum properties are most prominent. Under these conditions, they could control the atoms' quantum states very precisely by again hitting them with lasers. Accordingly, they first created 280 qubits from the atoms and then went a step further by using another laser pulse to entangle groups of those – for instance 7 qubits at a time -- to make a logical qubit. By doing this, the researchers were able to make as many as 48 logical qubits at one time. This is more than 10 times the number of logical qubits that have ever been created before.

"It's a big deal to have that many logical qubits. A very remarkable result for any quantum computing platform" says Mark Saffman at the University of Wisconsin-Madison. He says that the new quantum computer greatly benefits from being made of atoms that are controlled by light because this kind of control is very efficient. QuEra's computer makes its qubits interact and exchange information by moving them closer to each other inside the computer with optical "tweezers" made of laser beams. In contrast, chip-based quantum computers, like those made by IBM and Google, must use multiple wires to control each qubit. Bluvstein and his colleagues implemented several computer operations, codes and algorithms on the new computer to test the logical qubits' performance. He says that though these tests were more preliminary than the calculations that quantum computers will eventually perform, the team already found that using logical qubits led to fewer errors than seen in quantum computers using physical qubits.
The research has been published in the journal Nature.
IBM

IBM Claims Quantum Computing Research Milestone (ft.com) 33

Quantum computing is starting to fulfil its promise as a crucial scientific research tool, IBM researchers claim, as the US tech group attempts to quell fears that the technology will fail to match high hopes for it. From a report: The company is due to unveil 10 projects on Monday that point to the power of quantum calculation when twinned with established techniques such as conventional supercomputing, said Dario Gil, its head of research. "For the first time now we have large enough systems, capable enough systems, that you can do useful technical and scientific work with it," Gil said in an interview. The papers presented on Monday are the work of IBM and partners including the Los Alamos National Laboratory, University of California, Berkeley, and the University of Tokyo. They focus mainly on areas such as simulating quantum physics and solving problems in chemistry and materials science.

Expectations that quantum systems would by now be close to commercial uses prompted a wave of funding for the technology in recent years. But signs that business applications are further off than expected have led to warnings of a possible "quantum winter" of waning investor confidence and financial backing. IBM's announcements suggest the technology's main applications have not yet fully extended to the broad range of commercialisable computing tasks many in the field want to see. "It's going to take a while before we go from scientific value to, let's say, business value," said Jay Gambetta, IBM's vice-president of quantum. "But in my opinion the difference between research and commercialisation is getting tighter."

China

China's Secretive Sunway Pro CPU Quadruples Performance Over Its Predecessor (tomshardware.com) 73

An anonymous reader shares a report: Earlier this year, the National Supercomputing Center in Wuxi (an entity blacklisted in the U.S.) launched its new supercomputer based on the enhanced China-designed Sunway SW26010 Pro processors with 384 cores. Sunway's SW26010 Pro CPU not only packs more cores than its non-Pro SW26010 predecessor, but it more than quadrupled FP64 compute throughput due to microarchitectural and system architecture improvements, according to Chips and Cheese. However, while the manycore CPU is good on paper, it has several performance bottlenecks.

The first details of the manycore Sunway SW26010 Pro CPU and supercomputers that use it emerged back in 2021. Now, the company has showcased actual processors and disclosed more details about their architecture and design, which represent a significant leap in performance, recently at SC23. The new CPU is expected to enable China to build high-performance supercomputers based entirely on domestically developed processors. Each Sunway SW26010 Pro has a maximum FP64 throughput of 13.8 TFLOPS, which is massive. For comparison, AMD's 96-core EPYC 9654 has a peak FP64 performance of around 5.4 TFLOPS.

The SW26010 Pro is an evolution of the original SW26010, so it maintains the foundational architecture of its predecessor but introduces several key enhancements. The new SW26010 Pro processor is based on an all-new proprietary 64-bit RISC architecture and packs six core groups (CG) and a protocol processing unit (PPU). Each CG integrates 64 2-wide compute processing elements (CPEs) featuring a 512-bit vector engine as well as 256 KB of fast local store (scratchpad cache) for data and 16 KB for instructions; one management processing element (MPE), which is a superscalar out-of-order core with a vector engine, 32 KB/32 KB L1 instruction/data cache, 256 KB L2 cache; and a 128-bit DDR4-3200 memory interface.
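
Those numbers hang together arithmetically. Working backwards from the quoted 13.8 TFLOPS peak and 384 CPEs, and assuming each 512-bit vector unit retires one FP64 fused multiply-add per lane per cycle (an assumption, and ignoring the MPEs' small contribution), the implied clock lands around 2.25 GHz:
```python
# Working backwards from the figures quoted above (a sketch, not official specs).
# Assumes each CPE's 512-bit vector unit retires one FP64 fused multiply-add
# per lane per cycle (8 lanes * 2 flops = 16 flops/cycle) and ignores the MPEs.
core_groups     = 6
cpes_per_group  = 64
flops_per_cycle = (512 // 64) * 2          # 8 FP64 lanes, FMA counted as 2 flops

total_cpes        = core_groups * cpes_per_group       # 384, matching "384 cores"
peak_tflops       = 13.8                               # quoted FP64 peak
implied_clock_ghz = peak_tflops * 1e12 / (total_cpes * flops_per_cycle) / 1e9

print(f"CPE count            : {total_cpes}")
print(f"implied CPE clock    : {implied_clock_ghz:.2f} GHz")
print(f"vs EPYC 9654 (5.4 TF): {peak_tflops / 5.4:.1f}x higher FP64 peak")
```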

Supercomputing

Linux Foundation Announces Intent to Form 'High Performance Software Foundation' (linuxfoundation.org) 5

This week the Linux Foundation "announced the intention to form the High Performance Software Foundation."

"Through a series of technical projects, the High Performance Software Foundation aims to build, promote, and advance a portable software stack for high performance computing by increasing adoption, lowering barriers to contribution, and supporting development efforts." As use of high performance computing becomes ubiquitous in scientific computing and digital engineering, and AI use cases multiply, more and more data centers deploy GPUs and other compute accelerators. The High Performance Software Foundation intends to leverage investments made by the United States Department of Energy's Exascale Computing Project, the EuroHPC Joint Undertaking, and other international projects in accelerated high performance computing to exploit the performance of this diversifying set of architectures. As an umbrella project under the Linux Foundation, HPSF intends to provide a neutral space for pivotal projects in the high performance software ecosystem, enabling industry, academia, and government entities to collaborate together on the scientific software stack.

The High Performance Software Foundation already benefits from strong support across the high performance computing landscape, including leading companies and organizations like Amazon Web Services, Argonne National Laboratory, CEA, CIQ, Hewlett Packard Enterprise, Intel, Kitware, Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, NVIDIA, Oak Ridge National Laboratory, Sandia National Laboratory, and the University of Oregon.

Its first open source technical projects include:
  • Spack: the high performance computing package manager
  • Kokkos: a performance-portable programming model for writing modern C++ applications in a hardware-agnostic way.
  • AMReX: a performance-portable software framework designed to accelerate solving partial differential equations on block-structured, adaptively refined meshes.
  • WarpX: a performance-portable Particle-in-Cell code with advanced algorithms that won the 2022 Gordon Bell Prize
  • Trilinos: a collection of reusable scientific software libraries, known in particular for linear, non-linear, and transient solvers, as well as optimization and uncertainty quantification.
  • Apptainer: a container system and image format specifically designed for secure high-performance computing.
  • VTK-m: a toolkit of scientific visualization algorithms for accelerator architectures.
  • HPCToolkit: performance measurement and analysis tools for computers ranging from laptops to the world's largest GPU-accelerated supercomputers.
  • E4S: the Extreme-scale Scientific Software Stack
  • Charliecloud: high performance computing-tailored, lightweight, fully unprivileged container implementation.

Microsoft

Microsoft Partners With VCs To Give Startups Free AI Chip Access (techcrunch.com) 4

In the midst of an AI chip shortage, Microsoft wants to give a privileged few startups free access to "supercomputing" resources from its Azure cloud for developing AI models. From a report: Microsoft today announced it's updating its startup program, Microsoft for Startups Founders Hub, to include a no-cost Azure AI infrastructure option for "high-end," Nvidia-based GPU virtual machine clusters to train and run generative models, including large language models along the lines of ChatGPT. Y Combinator and its community of startup founders will be the first to gain access to the clusters in private preview. Why Y Combinator? Annie Pearl, Microsoft's VP of growth and ecosystems, called YC the "ideal initial partner," given its track record working with startups "at the earliest stages."

"We're working closely with Y Combinator to prioritize the asks from their current cohort, and then alumni, as part of our initial preview," Pearl said. "The focus will be on tasks like training and fine-tuning use cases that unblock innovation." It's not the first time Microsoft's attempted to curry favor with Y Combinator startups. In 2015, the company said it would give $500,000 in Azure credits to YC's Winter 2015 batch, a move that at the time was perceived as an effort to draw these startups away from rival clouds. One might argue the GPU clusters for AI training and inferencing are along the same self-serving vein.
