AI

Nvidia Unveils $3,000 Personal AI Supercomputer (nvidia.com) 80

Nvidia will begin selling a personal AI supercomputer in May that can run sophisticated AI models with up to 200 billion parameters, the chipmaker has announced. The $3,000 Project Digits system is powered by the new GB10 Grace Blackwell Superchip and can operate from a standard power outlet.

The device delivers 1 petaflop of AI performance and includes 128GB of memory and up to 4TB of storage. Two units can be linked to handle models with 405 billion parameters. "AI will be mainstream in every application for every industry," Nvidia CEO Jensen Huang said. The system runs on Linux-based Nvidia DGX OS and supports PyTorch, Python, and Jupyter notebooks.
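One hedged way to see where those parameter counts likely come from (an assumption on my part, not stated in the announcement): at 4-bit precision a model's weights need half a byte per parameter, so roughly 200 billion parameters fit in a single 128GB unit and roughly 405 billion fit across two linked units:

```python
# Sketch, assuming weights are held at 4-bit (FP4) precision -- an
# assumption, not a figure from the announcement.
def weight_footprint_gb(params_billion: float, bits_per_param: int = 4) -> float:
    """Memory needed for model weights alone, in gigabytes."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

print(weight_footprint_gb(200))  # ~100 GB -> fits one 128GB unit
print(weight_footprint_gb(405))  # ~203 GB -> needs two linked units (256GB total)
```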
AI

Michael Dell Says Adoption of AI PCs is 'Definitely Delayed' (fortune.com) 30

Dell CEO Michael Dell has acknowledged delays in corporate adoption of AI-enabled PCs but remains confident in their eventual widespread uptake, citing his four decades of industry experience with technology transitions.

The PC maker's chief executive told Fortune that while the current refresh cycle is "definitely delayed," adoption is inevitable once sufficient features drive customer demand. Meanwhile, Dell's infrastructure division saw 80% revenue growth last quarter from AI-server sales. The company is supplying servers for xAI's Colossus supercomputer project in Memphis and sees opportunities in "sovereign AI" systems for nations seeking technological independence. "Pick a country ranked by GDP, the [top] 49 other than the U.S., they all need one," Dell said.
AI

Elon Musk's xAI Plans Massive Expansion of AI Supercomputer in Memphis (usnews.com) 135

An anonymous reader shared this report from Reuters: Elon Musk's artificial intelligence startup xAI plans to expand its Memphis, Tennessee, supercomputer to house at least one million graphics processing units (GPUs), the Greater Memphis Chamber said on Wednesday, as xAI races to compete against rivals like OpenAI.

The move represents a massive expansion for the supercomputer called Colossus, which currently has 100,000 GPUs to train xAI's chatbot called Grok. As part of the expansion, Nvidia, which supplies the GPUs, and Dell and Super Micro, which have assembled the server racks for the computer, will establish operations in Memphis, the chamber said in a statement.

The Greater Memphis Chamber (an economic development organization) called it "the largest capital investment in the region's history," even saying that xAI "is setting the stage for Memphis to become the global epicenter of artificial intelligence." ("To facilitate this massive undertaking, the Greater Memphis Chamber established an xAI Special Operations Team... This team provides round-the-clock concierge service to the company.")

Reuters calls the supercomputer "a critical component of advancing Musk's AI efforts, as the billionaire has deepened his rivalry against OpenAI..." And the Greater Memphis Chamber describes the expansion by Nvidia/Dell/Super Micro as "further solidifying the city's position as the 'Digital Delta'... Memphis has provided the power and velocity necessary for not just xAI to grow and thrive, but making way for other companies as well."
AI

AI's Future and Nvidia's Fortunes Ride on the Race To Pack More Chips Into One Place (yahoo.com) 21

Leading technology companies are dramatically expanding their AI capabilities by building multibillion-dollar "super clusters" packed with unprecedented numbers of Nvidia's AI processors. Elon Musk's xAI recently constructed Colossus, a supercomputer containing 100,000 Nvidia Hopper chips, while Meta CEO Mark Zuckerberg claims his company operates an even larger system for training advanced AI models. The push toward massive chip clusters has helped drive Nvidia's quarterly revenue from $7 billion to over $35 billion in two years, making it the world's most valuable public company.

WSJ adds: Nvidia Chief Executive Jensen Huang said in a call with analysts following its earnings Wednesday that there was still plenty of room for so-called AI foundation models to improve with larger-scale computing setups. He predicted continued investment as the company transitions to its next-generation AI chips, called Blackwell, which are several times as powerful as its current chips.

Huang said that while the biggest clusters for training for giant AI models now top out at around 100,000 of Nvidia's current chips, "the next generation starts at around 100,000 Blackwells. And so that gives you a sense of where the industry is moving."

Supercomputing

'El Capitan' Ranked Most Powerful Supercomputer In the World (engadget.com) 44

Lawrence Livermore National Laboratory's "El Capitan" supercomputer is now ranked as the world's most powerful, achieving a High-Performance Linpack (HPL) score of 1.742 exaflops on the latest Top500 list. Engadget reports: El Capitan is only the third "exascale" computer, meaning it can perform more than a quintillion calculations in a second. The other two, called Frontier and Aurora, claim the second and third place slots on the TOP500 now. Unsurprisingly, all of these massive machines live within government research facilities: El Capitan is housed at Lawrence Livermore National Laboratory; Frontier is at Oak Ridge National Laboratory; Argonne National Laboratory claims Aurora. [HPE Cray] had a hand in all three systems.

El Capitan has more than 11 million combined CPU and GPU cores, based on AMD Instinct MI300A APUs that pair 24-core 4th-gen EPYC processors rated at 1.8GHz with on-package GPUs. It's also relatively efficient, as such systems go, squeezing out an estimated 58.89 gigaflops per watt. If you're wondering what El Capitan is built for, the answer is addressing nuclear stockpile safety, but it can also be used for nuclear counterterrorism.
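As a rough sanity check on those figures (not from the article), dividing the HPL score by the reported efficiency gives the implied power draw during the benchmark run; a minimal sketch, assuming the 58.89 gigaflops-per-watt figure refers to the HPL run:

```python
# Back-of-envelope check using the figures quoted above (assumption:
# the efficiency number describes the HPL run itself).
hpl_exaflops = 1.742                 # El Capitan's HPL score
efficiency_gflops_per_watt = 58.89   # reported energy efficiency

hpl_gflops = hpl_exaflops * 1e9      # 1 exaflop = 1e9 gigaflops
power_watts = hpl_gflops / efficiency_gflops_per_watt
print(f"Implied power draw: {power_watts / 1e6:.1f} MW")  # ~29.6 MW
```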

Earth

Diamond Dust Could Cool the Planet At a Cost of Mere Trillions (science.org) 98

sciencehabit shares a report from Science Magazine: From dumping iron into the ocean to launching mirrors into space, proposals to cool the planet through 'geoengineering' tend to be controversial -- and sometimes fantastical. A new idea isn't any less far-out, but it may avoid some of the usual pitfalls of strategies to fill the atmosphere with tiny, reflective particles. In a modeling study published this month in Geophysical Research Letters, scientists report that shooting 5 million tons of diamond dust into the stratosphere each year could cool the planet by 1.6°C -- enough to stave off the worst consequences of global warming. The scheme wouldn't be cheap, however: experts estimate it would cost nearly $200 trillion over the remainder of this century -- far more than traditional proposals to use sulfur particles. [...]

The researchers modeled the effects of seven compounds, including sulfur dioxide, as well as particles of diamond, aluminum, and calcite, the primary ingredient in limestone. They evaluated the effects of each particle across 45 years in the model, where each trial took more than a week in real time on a supercomputer. The results showed diamond particles were best at reflecting radiation while also staying aloft and avoiding clumping. Diamond is also thought to be chemically inert, meaning it would not react to form acid rain, like sulfur. To achieve 1.6°C of cooling, 5 million tons of diamond particles would need to be injected into the stratosphere each year. Such a large quantity would require a huge ramp up in synthetic diamond production before high-altitude aircraft could sprinkle the ground-up gems across the stratosphere. At roughly $500,000 per ton, synthetic diamond dust would be 2,400 times more expensive than sulfur and cost $175 trillion if deployed from 2035 to 2100, one study estimates.
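A hedged back-of-envelope check of that estimate: multiplying the quoted per-ton price by the annual tonnage over the 2035-2100 window gives the particle cost alone. The sketch below ignores aircraft, delivery, and production ramp-up (which the study presumably includes), which is likely why it lands a bit below the quoted $175 trillion:

```python
# Particle cost only; delivery and ramp-up costs ignored (assumption).
tons_per_year = 5_000_000        # diamond dust injected annually
cost_per_ton = 500_000           # USD, quoted synthetic diamond price
years = 2100 - 2035 + 1          # 2035-2100 inclusive

total_cost = tons_per_year * cost_per_ton * years
print(f"Particle cost alone: ${total_cost / 1e12:.0f} trillion")  # ~$165 trillion
```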

Supercomputing

Google Identifies Low Noise 'Phase Transition' In Its Quantum Processor (arstechnica.com) 31

An anonymous reader quotes a report from Ars Technica: Back in 2019, Google made waves by claiming it had achieved what has been called "quantum supremacy" -- the ability of a quantum computer to perform operations that would take a wildly impractical amount of time to simulate on standard computing hardware. That claim proved to be controversial, in that the operations were little more than a benchmark that involved getting the quantum computer to behave like a quantum computer; separately, improved ideas about how to perform the simulation on a supercomputer cut the time required down significantly.

But Google is back with a new exploration of the benchmark, described in a paper published in Nature on Wednesday. The paper uses the benchmark to identify what Google calls a phase transition in the performance of its quantum processor, and uses that to pin down conditions where the processor can operate with low noise. Taking advantage of that, the researchers again show that, even giving classical hardware every potential advantage, it would take a supercomputer a dozen years to simulate things.

United Kingdom

UK Government Shelves $1.66 Billion Tech and AI Plans 35

An anonymous reader shares a report: The new Labour government has shelved $1.66 billion of funding promised by the Conservatives for tech and artificial intelligence (AI) projects, the BBC has learned. It includes $1 billion for the creation of an exascale supercomputer at Edinburgh University and a further $640 million for the AI Research Resource, which funds computing power for AI. Both funds were unveiled less than 12 months ago.

The Department for Science, Innovation and Technology (DSIT) said the money was promised by the previous administration but was never allocated in its budget. Some in the industry have criticised the government's decision. Tech business founder Barney Hussey-Yeo posted on X that reducing investment risked "pushing more entrepreneurs to the US." Businessman Chris van der Kuyl described the move as "idiotic." Trade body techUK said the government now needed to make "new proposals quickly" or the UK risked "losing out" to other countries in what are crucial industries of the future.
China

China Is Getting Secretive About Its Supercomputers 28

For decades, American and Chinese scientists collaborated on supercomputers. But Chinese scientists have become more secretive as the U.S. has tried to hinder China's technological progress, and they have stopped participating altogether in a prominent international supercomputing forum. From a report: The withdrawal marked the end of an era and created a divide that Western scientists say will slow the development of AI and other technologies as countries pursue separate projects. The new secrecy also makes it harder for the U.S. government to answer a question it deems essential to national security: Does the U.S. or China have faster supercomputers? Some academics have taken it upon themselves to hunt for clues about China's supercomputing progress, scrutinizing research papers and cornering Chinese peers at conferences.

Supercomputers have become central to the U.S.-China technological Cold War because the country with the faster supercomputers can also hold an advantage in developing nuclear weapons and other military technology. "If the other guy can use a supercomputer to simulate and develop a fighter jet or weapon 20% or even 1% better than yours in terms of range, speed and accuracy, it's going to target you first, and then it's checkmate," said Jimmy Goodrich, a senior adviser for technology analysis at Rand, a think tank. The forum that China recently stopped participating in is called the Top500, which ranks the world's 500 fastest supercomputers. While the latest ranking, released in June, says the world's three fastest computers are in the U.S., the reality is probably different.
Hardware

Will Tesla Do a Phone? Yes, Says Morgan Stanley 170

Morgan Stanley, in a note -- seen by Slashdot -- sent to its clients on Wednesday: From our continuing discussions with automotive management teams and industry experts, the car is an extension of the phone. The phone is an extension of the car. The lines between car and phone are truly blurring.

For years, we have been writing about the potential for Tesla to expand into edge compute domains beyond the car, including last October where we described a mobile AI assistant as a 'heavy key.' Following Apple's WWDC, Tesla CEO Elon Musk re-ignited the topic by saying that making such a device is 'not out of the question.' As Mr. Musk continues to invest further into his own LLM/genAI efforts, such as 'Grok,' the potential strategic and user-experience overlap becomes more obvious.

From an automotive perspective, the topic of supercomputing at both the datacenter level and at the edge is highly relevant given the incremental global unit sold is a car that can perform OTA updates of firmware, has a battery with a stored energy equivalent of approx. 2,000 iPhones, and a liquid-cooled inference supercomputer as standard kit. What if your phone could tap into your vehicle's compute power and battery supply to run AI applications?

Edge compute and AI have brought to light some of the challenges (battery life, thermal, latency, etc.) of marrying today's smartphones with ever more powerful AI-driven applications. Numerous media reports have discussed OpenAI potentially developing a consumer device specifically designed for AI.

The phone as a (heavy) car key? Any Tesla owner will tell you how they use their smartphone as their primary key to unlock their car as well as running other remote applications while they interact with their vehicles. The 'action button' on the iPhone 15 potentially takes this to a different level of convenience.
IBM

Lynn Conway, Leading Computer Scientist and Transgender Pioneer, Dies At 85 (latimes.com) 155

Lynn Conway, a pioneering computer scientist who made significant contributions to VLSI design and microelectronics, and a prominent advocate for transgender rights, died Sunday from a heart condition. She was 85. Pulitzer Prize-winning journalist Michael Hiltzik remembers Conway in a column for the Los Angeles Times: As I recounted in 2020, I first met Conway when I was working on my 1999 book about Xerox PARC, Dealers of Lightning, for which she was a uniquely valuable source. In 2000, when she decided to come out as transgender, she allowed me to chronicle her life in a cover story for the Los Angeles Times Magazine titled "Through the Gender Labyrinth." That article traced her journey from childhood as a male in New York's strait-laced Westchester County to her decision to transition. Years of emotional and psychological turmoil followed, even as she excelled in academic studies. [Conway earned bachelor's and master's degrees in electrical engineering from Columbia University in 1961, quickly joining a team at IBM to design the world's fastest supercomputer. Despite personal success, she faced significant emotional turmoil, leading to her decision to transition in 1968. Initially supportive, IBM ultimately fired Conway due to its inability to reconcile her transition with the company's conservative image.]

The family went on welfare for three months. Conway's wife barred her from contact with her daughters. She would not see them again for 14 years. Beyond the financial implications, the stigma of banishment from one of the world's most respected corporations felt like an excommunication. She sought jobs in the burgeoning electrical engineering community around Stanford, working her way up through start-ups, and in 1973 she was invited to join Xerox's brand new Palo Alto Research Center, or PARC. In partnership with Caltech engineering professor Carver Mead, Conway established the design rules for the new technology of "very large-scale integrated circuits" (or, in computer shorthand, VLSI). The pair laid down the rules in a 1979 textbook that a generation of computer and engineering students knew as "Mead-Conway."

VLSI fostered a revolution in computer microprocessor design that included the Pentium chip, which would power millions of PCs. Conway spread the VLSI gospel by creating a system in which students taking courses at MIT and other technical institutions could get their sample designs rendered in silicon. Conway's life journey gave her a unique perspective on the internal dynamics of Xerox's unique lab, which would invent the personal computer, the laser printer, Ethernet, and other innovations that have become fully integrated into our daily lives. She could see it from the vantage point of an insider, thanks to her experience working on IBM's supercomputer, and an outsider, thanks to her personal history.

After PARC, she was recruited to head a supercomputer program at the Defense Department's Advanced Research Projects Agency, or DARPA -- sailing through her FBI background check so easily that she became convinced that the Pentagon must have already encountered transgender people in its workforce. A figure of undisputed authority in some of the most abstruse corners of computing, Conway was elected to the National Academy of Engineering in 1989. She joined the University of Michigan as a professor and associate dean in the College of Engineering. In 2002 she married a fellow engineer, Charles Rogers, and with him lived an active life -- with a shared passion for white-water canoeing, motocross racing and other adventures -- on a 24-acre homestead not far from Ann Arbor, Mich.
In 2020, Conway received a formal apology from IBM for firing her 52 years earlier. Diane Gherson, an IBM senior vice president, told her, "Thanks to your courage, your example, and all the people who followed in your footsteps, as a society we are now in a better place.... But that doesn't help you, Lynn, probably our very first employee to come out. And for that, we deeply regret what you went through -- and know I speak for all of us."
Biotech

World's First Bioprocessor Uses 16 Human Brain Organoids, Consumes Less Power (tomshardware.com) 48

"A Swiss biocomputing startup has launched an online platform that provides remote access to 16 human brain organoids," reports Tom's Hardware: FinalSpark claims its Neuroplatform is the world's first online platform delivering access to biological neurons in vitro. Moreover, bioprocessors like this "consume a million times less power than traditional digital processors," the company says. FinalSpark says its Neuroplatform is capable of learning and processing information, and due to its low power consumption, it could reduce the environmental impacts of computing. In a recent research paper about its developments, FinalSpakr claims that training a single LLM like GPT-3 required approximately 10GWh — about 6,000 times greater energy consumption than the average European citizen uses in a whole year. Such energy expenditure could be massively cut following the successful deployment of bioprocessors.

The operation of the Neuroplatform currently relies on an architecture that can be classified as wetware: the mixing of hardware, software, and biology. The main innovation delivered by the Neuroplatform is through the use of four Multi-Electrode Arrays (MEAs) housing the living tissue — organoids, which are 3D cell masses of brain tissue...interfaced by eight electrodes used for both stimulation and recording... FinalSpark has given access to its remote computing platform to nine institutions to help spur bioprocessing research and development. With such institutions' collaboration, it hopes to create the world's first living processor.

FinalSpark was founded in 2014, according to Wikipedia's page on wetware computing. "While a wetware computer is still largely conceptual, there has been limited success with construction and prototyping, which has acted as a proof of the concept's realistic application to computing in the future."

Thanks to long-time Slashdot reader Artem S. Tashkinov for sharing the article.
Supercomputing

Intel Aurora Supercomputer Breaks Exascale Barrier 28

Josh Norem reports via ExtremeTech: At the recent International Supercomputing Conference, ISC 2024, Intel's newest Aurora supercomputer installed at Argonne National Laboratory raised a few eyebrows by finally surpassing the exascale barrier. Before this, only AMD's Frontier system had been able to achieve this level of performance. Intel also achieved what it says is the world's best performance for AI at 10.61 "AI exaflops." Intel reported the news on its blog, stating Aurora was now officially the fastest supercomputer for AI in the world. It shares the distinction with Argonne National Laboratory and Hewlett Packard Enterprise (HPE), which together built and house the system in its current state; Intel says the machine was at 87% functionality for the recent tests. In the all-important Linpack (HPL) test, the Aurora computer hit 1.012 exaflops, meaning it has almost doubled the performance on tap since its initial "partial run" in late 2023, where it hit just 585.34 petaflops. The company then said it expected to cross the exascale barrier with Aurora eventually, and now it has.

Intel says for the ISC 2024 tests, Aurora was operating with 9,234 nodes. The company notes it ranked second overall in LINPACK, meaning it's still unable to dethrone AMD's Frontier system, which is also an HPE supercomputer. AMD's Frontier was the first supercomputer to break the exascale barrier in June 2022. Frontier sits at around 1.2 exaflops in Linpack, so Intel is knocking on its door but still has a way to go before it can topple it. However, Intel says Aurora came in first in the Linpack-mixed benchmark, reportedly highlighting its unparalleled AI performance. Intel's Aurora supercomputer uses the company's latest CPU and GPU hardware, with 21,248 Sapphire Rapids Xeon CPUs and 63,744 Ponte Vecchio GPUs. When it's fully operational later this year, Intel believes the system will eventually be capable of crossing the 2-exaflop barrier.
Supercomputing

Defense Think Tank MITRE To Build AI Supercomputer With Nvidia (washingtonpost.com) 44

An anonymous reader quotes a report from the Washington Post: A key supplier to the Pentagon and U.S. intelligence agencies is building a $20 million supercomputer with buzzy chipmaker Nvidia to speed deployment of artificial intelligence capabilities across the U.S. federal government, the MITRE think tank said Tuesday. MITRE, a federally funded, not-for-profit research organization that has supplied U.S. soldiers and spies with exotic technical products since the 1950s, says the project could improve everything from Medicare to taxes. "There's huge opportunities for AI to make government more efficient," said Charles Clancy, senior vice president of MITRE. "Government is inefficient, it's bureaucratic, it takes forever to get stuff done. ... That's the grand vision, is how do we do everything from making Medicare sustainable to filing your taxes easier?" [...] The MITRE supercomputer will be based in Ashburn, Va., and should be up and running late this year. [...]

Clancy said the planned supercomputer will run 256 Nvidia graphics processing units, or GPUs, at a cost of $20 million. This counts as a small supercomputer: The world's fastest supercomputer, Frontier in Tennessee, boasts 37,888 GPUs, and Meta is seeking to build one with 350,000 GPUs. But MITRE's computer will still eclipse Stanford's Natural Language Processing Group's 68 GPUs, and will be large enough to train large language models to perform AI tasks tailored for government agencies. Clancy said all federal agencies funding MITRE will be able to use this AI "sandbox." "AI is the tool that is solving a wide range of problems," Clancy said. "The U.S. military needs to figure out how to do command and control. We need to understand how cryptocurrency markets impact the traditional banking sector. ... Those are the sorts of problems we want to solve."

Businesses

Stability AI Reportedly Ran Out of Cash To Pay Its Bills For Rented Cloud GPUs (theregister.com) 45

An anonymous reader writes: The massive GPU clusters needed to train Stability AI's popular text-to-image generation model Stable Diffusion are apparently also at least partially responsible for former CEO Emad Mostaque's downfall -- because he couldn't find a way to pay for them. An extensive exposé, citing company documents and dozens of people familiar with the matter, indicates that the British model builder's extreme infrastructure costs drained its coffers, leaving the biz with just $4 million in reserve by last October. Stability rented its infrastructure from Amazon Web Services, Google Cloud Platform, and GPU-centric cloud operator CoreWeave, at a reported cost of around $99 million a year. That's on top of the $54 million in wages and operating expenses required to keep the AI upstart afloat.

What's more, it appears that a sizable portion of the cloudy resources Stability AI paid for were being given away to anyone outside the startup interested in experimenting with Stability's models. One external researcher cited in the report estimated that a now-cancelled project was provided with at least $2.5 million worth of compute over the span of four months. Stability AI's infrastructure spending was not matched by revenue or fresh funding. The startup was projected to make just $11 million in sales for the 2023 calendar year. Its financials were apparently so bad that it allegedly underpaid its July 2023 bills to AWS by $1 million and had no intention of paying its August bill for $7 million. Google Cloud and CoreWeave were also not paid in full, with debts to the pair reaching $1.6 million as of October, it's reported.

It's not clear whether those bills were ultimately paid, but it's reported that the company -- once valued at a billion dollars -- weighed delaying tax payments to the UK government rather than skimping on its American payroll and risking legal penalties. The failure was pinned on Mostaque's inability to devise and execute a viable business plan. The company also failed to land deals with clients including Canva, NightCafe, Tome, and the Singaporean government, which contemplated a custom model, the report asserts. Stability's financial predicament spiraled, eroding trust among investors and making it difficult for the generative AI darling to raise additional capital, it is claimed. According to the report, Mostaque hoped to bring in a $95 million lifeline at the end of last year, but only managed to bring in $50 million from Intel. Only $20 million of that sum was disbursed, a significant shortfall given that the processor titan has a vested interest in Stability, with the AI biz slated to be a key customer for a supercomputer powered by 4,000 of its Gaudi2 accelerators.
The report goes on to mention further fundraising challenges, issues retaining employees, and copyright infringement lawsuits that cloud the company's future prospects. The full exposé can be read via Forbes (paywalled).
Businesses

Microsoft, OpenAI Plan $100 Billion 'Stargate' AI Supercomputer (reuters.com) 41

According to The Information (paywalled), Microsoft and OpenAI are planning a $100 billion datacenter project that will include an artificial intelligence supercomputer called "Stargate." Reuters reports: The Information reported that Microsoft would likely be responsible for financing the project, which would be 100 times more costly than some of the biggest current data centers, citing people involved in private conversations about the proposal. OpenAI's next major AI upgrade is expected to land by early next year, the report said, adding that Microsoft executives are looking to launch Stargate as soon as 2028. The proposed U.S.-based supercomputer would be the biggest in a series of installations the companies are looking to build over the next six years, the report added.

The Information attributed the tentative cost of $100 billion to a person who spoke to OpenAI CEO Sam Altman about it and a person who has viewed some of Microsoft's initial cost estimates. It did not identify those sources. Altman and Microsoft employees have spread supercomputers across five phases, with Stargate as the fifth phase. Microsoft is working on a smaller, fourth-phase supercomputer for OpenAI that it aims to launch around 2026, according to the report. Microsoft and OpenAI are in the middle of the third phase of the five-phase plan, with much of the cost of the next two phases involving procuring the AI chips that are needed, the report said. The proposed efforts could cost in excess of $115 billion, more than three times what Microsoft spent last year on capital expenditures for servers, buildings and other equipment, the report stated.

Crime

Former Google Engineer Indicted For Stealing AI Secrets To Aid Chinese Companies 28

Linwei Ding, a former Google software engineer, has been indicted for stealing trade secrets related to AI to benefit two Chinese companies. He faces up to 10 years in prison and a $250,000 fine on each criminal count. Reuters reports: Ding's indictment was unveiled a little over a year after the Biden administration created an interagency Disruptive Technology Strike Force to help stop advanced technology from being acquired by countries such as China and Russia, or from otherwise threatening national security. "The Justice Department just will not tolerate the theft of our trade secrets and intelligence," U.S. Attorney General Merrick Garland said at a conference in San Francisco.

According to the indictment, Ding stole detailed information about the hardware infrastructure and software platform that lets Google's supercomputing data centers train large AI models through machine learning. The stolen information included details about chips and systems, and software that helps power a supercomputer "capable of executing at the cutting edge of machine learning and AI technology," the indictment said. Google designed some of the allegedly stolen chip blueprints to gain an edge over cloud computing rivals Amazon.com and Microsoft, which design their own, and reduce its reliance on chips from Nvidia.

Hired by Google in 2019, Ding allegedly began his thefts three years later, while he was being courted to become chief technology officer for an early-stage Chinese tech company, and by May 2023 had uploaded more than 500 confidential files. The indictment said Ding founded his own technology company that month, and circulated a document to a chat group that said "We have experience with Google's ten-thousand-card computational power platform; we just need to replicate and upgrade it." Google became suspicious of Ding in December 2023 and took away his laptop on Jan. 4, 2024, the day before Ding planned to resign.
A Google spokesperson said: "We have strict safeguards to prevent the theft of our confidential commercial information and trade secrets. After an investigation, we found that this employee stole numerous documents, and we quickly referred the case to law enforcement."
Supercomputing

How a Cray-1 Supercomputer Compares to a Raspberry Pi (roylongbottom.org.uk) 145

Roy Longbottom worked for the U.K. government's Central Computer Agency from 1960 to 1993, and "from 1972 to 2022 I produced and ran computer benchmarking and stress testing programs..." Known as the official design authority for the Whetstone benchmark, Longbottom writes that "In 2019 (aged 84), I was recruited as a voluntary member of Raspberry Pi pre-release Alpha testing team."

And this week — now at age 87 — Longbottom has created a web page titled "Cray 1 supercomputer performance comparisons with home computers, phones and tablets." And one statistic really captures the impact of our decades of technological progress.

"In 1978, the Cray 1 supercomputer cost $7 Million, weighed 10,500 pounds and had a 115 kilowatt power supply. It was, by far, the fastest computer in the world. The Raspberry Pi costs around $70 (CPU board, case, power supply, SD card), weighs a few ounces, uses a 5 watt power supply and is more than 4.5 times faster than the Cray 1."


Thanks to long-time Slashdot reader bobdevine for sharing the link.
China

China's Secretive Sunway Pro CPU Quadruples Performance Over Its Predecessor (tomshardware.com) 73

An anonymous reader shares a report: Earlier this year, the National Supercomputing Center in Wuxi (an entity blacklisted in the U.S.) launched its new supercomputer based on the enhanced China-designed Sunway SW26010 Pro processors with 384 cores. Sunway's SW26010 Pro CPU not only packs more cores than its non-Pro SW26010 predecessor, but it more than quadrupled FP64 compute throughput due to microarchitectural and system architecture improvements, according to Chips and Cheese. However, while the manycore CPU is good on paper, it has several performance bottlenecks.

The first details of the manycore Sunway SW26010 Pro CPU and supercomputers that use it emerged back in 2021. Now, at SC23, the company has showcased actual processors and disclosed more details about their architecture and design, which represent a significant leap in performance. The new CPU is expected to enable China to build high-performance supercomputers based entirely on domestically developed processors. Each Sunway SW26010 Pro has a maximum FP64 throughput of 13.8 TFLOPS, which is massive. For comparison, AMD's 96-core EPYC 9654 has a peak FP64 performance of around 5.4 TFLOPS.

The SW26010 Pro is an evolution of the original SW26010, so it maintains the foundational architecture of its predecessor but introduces several key enhancements. The new SW26010 Pro processor is based on an all-new proprietary 64-bit RISC architecture and packs six core groups (CG) and a protocol processing unit (PPU). Each CG integrates 64 2-wide compute processing elements (CPEs) featuring a 512-bit vector engine as well as 256 KB of fast local store (scratchpad cache) for data and 16 KB for instructions; one management processing element (MPE), which is a superscalar out-of-order core with a vector engine, 32 KB/32 KB L1 instruction/data cache, 256 KB L2 cache; and a 128-bit DDR4-3200 memory interface.
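Reading those specs back of the envelope (a sketch, not from the report: the clock speed is inferred here, and the MPEs' contribution is ignored), the 13.8 TFLOPS figure is consistent with the 384 CPEs each retiring one 512-bit FP64 fused multiply-add per cycle at roughly 2.25 GHz:

```python
# Inferred, not stated: back out the implied clock from the quoted
# 13.8 TFLOPS FP64 peak, counting one FMA (2 FLOPs) per FP64 lane per cycle.
core_groups = 6
cpes_per_cg = 64
fp64_lanes = 512 // 64           # 512-bit vector engine -> 8 FP64 lanes
flops_per_lane_cycle = 2         # fused multiply-add counted as 2 FLOPs

cpes = core_groups * cpes_per_cg                       # 384 CPEs
flops_per_cycle = cpes * fp64_lanes * flops_per_lane_cycle
clock_ghz = 13.8e12 / flops_per_cycle / 1e9
print(f"Implied clock: ~{clock_ghz:.2f} GHz")          # ~2.25 GHz
```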

AMD

AMD-Powered Frontier Remains Fastest Supercomputer in the World (tomshardware.com) 25

The Top500 organization released its semi-annual list of the fastest supercomputers in the world, with the AMD-powered Frontier supercomputer retaining its spot at the top of the list with 1.194 Exaflop/s (EFlop/s) of performance, fending off a half-scale 585.34 Petaflop/s (PFlop/s) submission from Argonne National Laboratory's Intel-powered Aurora supercomputer. From a report: Argonne's submission, which only employs half of the Aurora system, lands at the second spot on the Top500, unseating Japan's Fugaku as the second-fastest supercomputer in the world. Intel also made inroads with 20 new supercomputers based on its Sapphire Rapids CPUs entering the list, but AMD's EPYC continues to take over the Top500 as it now powers 140 systems on the list -- a 39% year-over-year increase.

Intel and Argonne are currently still working to bring Aurora fully online for users in 2024. As such, the Aurora submission represented 10,624 Intel CPUs and 31,874 Intel GPUs working in concert to deliver 585.34 PFlop/s at a total of 24.69 megawatts (MW) of energy. In contrast, AMD's Frontier holds the performance title at 1.194 EFlop/s, which is more than twice the performance of Aurora, while consuming a comparably miserly 22.70 MW of energy (yes, that's less power for the full Frontier supercomputer than half of the Aurora system). Aurora did not land on the Green500, a list of the most power-efficient supercomputers, with this submission, but Frontier continues to hold eighth place on that list. However, Aurora is expected to eventually reach up to 2 EFlop/s of performance when it comes fully online. When complete, Aurora will have 21,248 Xeon Max CPUs and 63,744 Max Series 'Ponte Vecchio' GPUs spread across 166 racks and 10,624 compute blades, making it the largest known single deployment of GPUs in the world. The system leverages HPE Cray EX - Intel Exascale Compute Blades and uses HPE's Slingshot-11 networking interconnect.
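Dividing the quoted HPL figures by the quoted power draws makes that efficiency gap concrete; a rough calculation from the numbers above (it may not match Green500 methodology exactly):

```python
# Efficiency implied by the quoted HPL scores and power draws.
systems = [
    ("Frontier (full system)", 1194.0, 22.70),    # PFlop/s, MW
    ("Aurora (half-scale run)", 585.34, 24.69),
]
for name, pflops, megawatts in systems:
    gflops_per_watt = (pflops * 1e6) / (megawatts * 1e6)
    print(f"{name}: {gflops_per_watt:.1f} GFLOPS per watt")
# Frontier: ~52.6 GFLOPS/W, Aurora submission: ~23.7 GFLOPS/W
```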
