AI

Jensen Huang: AI Has To Do '100 Times More' Computation Now Than When ChatGPT Was Released 32

In an interview with CNBC's Jon Fortt on Wednesday, Nvidia CEO Jensen Huang said next-gen AI will need 100 times more compute than older models as a result of new reasoning approaches that think "about how best to answer" questions step by step. From a report: "The amount of computation necessary to do that reasoning process is 100 times more than what we used to do," Huang told CNBC's Jon Fortt in an interview on Wednesday following the chipmaker's fourth-quarter earnings report. He cited DeepSeek's R1, OpenAI's GPT-4 and xAI's Grok 3 as examples of models that use a reasoning process.
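As a rough illustration of where a figure like 100x can come from: inference cost scales with the number of tokens a model generates, and a step-by-step reasoning trace can run orders of magnitude longer than a terse answer. Here is a minimal back-of-envelope sketch; the model size and token counts are illustrative assumptions, not figures from the interview.

```python
# Back-of-envelope: why reasoning models need far more inference compute.
# Rule of thumb: a forward pass costs ~2 * params FLOPs per generated token,
# so total compute scales linearly with output length.
# All numbers are illustrative assumptions, not figures from the article.

PARAMS = 70e9  # hypothetical model size: 70B parameters

def inference_flops(tokens: int, params: float = PARAMS) -> float:
    """Approximate FLOPs to generate `tokens` output tokens."""
    return 2 * params * tokens

direct = inference_flops(tokens=100)        # terse, one-shot answer
reasoning = inference_flops(tokens=10_000)  # long step-by-step reasoning trace

print(f"direct answer:   {direct:.2e} FLOPs")
print(f"reasoning trace: {reasoning:.2e} FLOPs")
print(f"ratio: {reasoning / direct:.0f}x")  # -> 100x, the scale Huang describes
```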

Huang also pushed back on the idea that DeepSeek's efficient models mean less demand for computing, saying DeepSeek popularized reasoning models that will need more chips. "DeepSeek was fantastic," Huang said. "It was fantastic because it open sourced a reasoning model that's absolutely world class." Huang said Nvidia's percentage of revenue from China has fallen by about half due to the export restrictions, adding that there are other competitive pressures in the country, including from Huawei.

Developers will likely search for ways around export controls through software, whether it be for a supercomputer, a personal computer, a phone or a game console, Huang said. "Ultimately, software finds a way," he said. "You ultimately make that software work on whatever system that you're targeting, and you create great software." Huang said that Nvidia's GB200, which is sold in the United States, can generate AI content 60 times faster than the versions of the company's chips that it sells to China under export controls.
Supercomputing

The IRS Is Buying an AI Supercomputer From Nvidia (theintercept.com) 150

According to The Intercept, the IRS is set to purchase an Nvidia SuperPod AI supercomputer to enhance its machine learning capabilities for tasks like fraud detection and taxpayer behavior analysis. From the report: With Elon Musk's so-called Department of Government Efficiency installing itself at the IRS amid a broader push to replace federal bureaucracy with machine-learning software, the tax agency's computing center in Martinsburg, West Virginia, will soon be home to a state-of-the-art Nvidia SuperPod AI computing cluster. According to the previously unreported February 5 acquisition document, the setup will combine 31 separate Nvidia servers, each containing eight of the company's flagship Blackwell processors designed to train and operate artificial intelligence models that power tools like ChatGPT. The hardware has not yet been purchased or installed, nor is a price listed, but SuperPod systems reportedly start at $7 million. The contract materials note that the setup will include a substantial memory upgrade from Nvidia.

Though small compared to the massive AI-training data centers deployed by companies like OpenAI and Meta, the SuperPod is still a powerful and expensive setup using the most advanced technology offered by Nvidia, whose chips have facilitated the global machine-learning spree. While the hardware can be used in many ways, it's marketed as a turnkey means of creating and querying an AI model. Last year, the MITRE Corporation, a federally funded military R&D lab, acquired a $20 million SuperPod setup to train bespoke AI models for use by government agencies, touting the purchase as a "massive increase in computing power" for the United States.

How exactly the IRS will use its SuperPod is unclear. An agency spokesperson said the IRS had no information to share on the supercomputer purchase, including which presidential administration ordered it. A 2024 report by the Treasury Inspector General for Tax Administration identified 68 different AI-related projects underway at the IRS; the Nvidia cluster is not named among them, though many were redacted. But some clues can be gleaned from the purchase materials. "The IRS requires a robust and scalable infrastructure that can handle complex machine learning (ML) workloads," the document explains. "The Nvidia Super Pod is a critical component of this infrastructure, providing the necessary compute power, storage, and networking capabilities to support the development and deployment of large-scale ML models."

The document notes that the SuperPod will be run by the IRS Research, Applied Analytics, and Statistics division, or RAAS, which leads a variety of data-centric initiatives at the agency. While no specific uses are cited, it states that this division's Compliance Data Warehouse project, which is behind this SuperPod purchase, has previously used machine learning for automated fraud detection, identity theft prevention, and generally gaining a "deeper understanding of the mechanisms that drive taxpayer behavior."

AI

Were DeepSeek's Development Costs Much Higher Than Reported? (msn.com) 49

Nearly three years ago a team of Chinese AI engineers working for DeepSeek's parent company unveiled an earlier AI supercomputer that the Washington Post says was constructed from 10,000 A100 GPUs purchased from Nvidia. Roughly six months later "Washington had banned Nvidia from selling any more A100s to China," the article notes.

Remember that number as you read this. 10,000 A100 GPUs... DeepSeek's new chatbot caused a panic in Silicon Valley and on Wall Street this week, erasing $1 trillion from the stock market. That impact stemmed in large part from the company's claim that it had trained one of its recent models on a minuscule $5.6 million in computing costs and with only 2,000 or so of Nvidia's less-advanced H800 chips.

Nvidia saw its soaring value crater by $589 billion Monday as DeepSeek rocketed to the top of download charts, prompting President Donald Trump to call for U.S. industry to be "laser focused" on competing... But a closer look at DeepSeek reveals that its parent company deployed a large and sophisticated set of chips in its supercomputer, leading experts to assess the total cost of the project as much higher than the relatively paltry sum that U.S. markets reacted to this week... Lennart Heim, an AI expert at Rand, said DeepSeek's evident access to [the earlier] supercomputer would have made it easier for the company to develop a more efficient model, requiring fewer chips.

That earlier project "suggests that DeepSeek had a major boost..." according to the article, "with technology comparable to that of the leading U.S. AI companies." And while DeepSeek claims it only spent $5.6 million to train one of its advanced models, "its parent company has said that building the earlier supercomputer had cost 1 billion yuan, or $139 million." Yet the article also cites insights published Friday by the chip research firm SemiAnalysis, summarizing its finding that DeepSeek "has spent more than half a billion dollars on GPUs, with total capital expenditures of almost $1.3 billion."
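To see how a headline figure like $5.6 million can coexist with billion-dollar capital-expenditure estimates, here is a minimal sketch of the arithmetic, assuming a cloud-style rental rate of $2 per GPU-hour; the rate is an assumption for illustration, not a number from the article.

```python
# Sketch: deriving a training-cost figure from GPU count and time.
# The $2/GPU-hour rental rate is an assumed value, not from the article.

gpus = 2_000    # H800 chips, per DeepSeek's claim
rate = 2.00     # assumed rental price in $/GPU-hour
budget = 5.6e6  # claimed training cost in dollars

gpu_hours = budget / rate
days = gpu_hours / gpus / 24
print(f"{gpu_hours:,.0f} GPU-hours -> about {days:.0f} days on {gpus:,} GPUs")

# Note: a figure computed this way covers only the final training run. It
# excludes prior research, failed runs, and the capital cost of owning the
# cluster -- which is why SemiAnalysis's total-capex estimate is so much larger.
```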

The article notes Thursday remarks by OpenAI CEO Sam Altman that DeepSeek's energy-efficiency claims were "wildly overstated... This is a model at a capability level that we had quite some time ago." And Palmer Luckey called DeepSeek "legitimately impressive" on X but dismissed the $5.6 million training cost figure as "bogus" and the Silicon Valley meltdown as "hysteria." Even with these higher total costs in mind, experts say, U.S. companies are right to be concerned about DeepSeek upending the market. "We know two things for sure: DeepSeek is pricing their services very competitively, and second, the performance of their models is comparable to leading competitors," said Kai-Shen Huang, an AI expert at the Research Institute for Democracy, Society and Emerging Technology, a Taipei-based think tank. "I think DeepSeek's pricing strategy has the potential to disrupt the market globally...."

China's broader AI policy push has helped create an environment conducive to the rise of a company like DeepSeek. Beijing announced an ambitious AI blueprint in 2017, with a goal to become a global AI leader by 2030 and promises of funding for universities and private enterprise. Local governments across the nation followed with their own programs to support AI.

Supercomputing

Quantum Computer Built On Server Racks Paves the Way To Bigger Machines (technologyreview.com) 27

An anonymous reader quotes a report from MIT Technology Review: A Canadian startup called Xanadu has built a new quantum computer it says can be easily scaled up to achieve the computational power needed to tackle scientific challenges ranging from drug discovery to more energy-efficient machine learning. Aurora is a "photonic" quantum computer, which means it crunches numbers using photonic qubits -- information encoded in light. In practice, this means combining and recombining laser beams on multiple chips using lenses, fibers, and other optics according to an algorithm. Xanadu's computer is designed in such a way that the answer to an algorithm it executes corresponds to the final number of photons in each laser beam. This approach differs from one used by Google and IBM, which involves encoding information in properties of superconducting circuits.

Aurora has a modular design that consists of four similar units, each installed in a standard server rack that is slightly taller and wider than the average human. To make a useful quantum computer, "you copy and paste a thousand of these things and network them together," says Christian Weedbrook, the CEO and founder of the company. Ultimately, Xanadu envisions a quantum computer as a specialized data center, consisting of rows upon rows of these servers. This contrasts with the industry's earlier conception of a specialized chip within a supercomputer, much like a GPU. [...]

Xanadu's 12 qubits may seem like a paltry number next to IBM's 1,121, but Tiwari, a quantum computing researcher quoted in the report, says this doesn't mean that quantum computers based on photonics are running behind. In his opinion, the number of qubits reflects the amount of investment more than it does the technology's promise. [...] Xanadu's next goal is to improve the quality of the photons in the computer, which will ease the error correction requirements. "When you send lasers through a medium, whether it's free space, chips, or fiber optics, not all the information makes it from the start to the finish," he says. "So you're actually losing light and therefore losing information." The company is working to reduce this loss, which means fewer errors in the first place. Xanadu aims to build a quantum data center, with thousands of servers containing a million qubits, in 2029.
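To make the loss problem concrete: optical attenuation compounds exponentially with path length, so even modest per-kilometer losses wipe out most photons over long runs. A minimal sketch, assuming a typical telecom-fiber attenuation of 0.2 dB/km (an illustrative figure, not one from the report):

```python
# Photon loss sketch: attenuation in decibels grows linearly with distance,
# so the fraction of surviving photons falls off exponentially.
# The 0.2 dB/km figure is a typical telecom-fiber value (an assumption).

ATTEN_DB_PER_KM = 0.2

def survival(km: float) -> float:
    """Fraction of photons surviving a fiber run of `km` kilometers."""
    return 10 ** (-ATTEN_DB_PER_KM * km / 10)

for km in (1, 10, 50):
    print(f"{km:>3} km: {survival(km):.1%} of photons survive")
# -> ~95.5% at 1 km, ~63.1% at 10 km, ~10.0% at 50 km
```

Every lost photon is lost information, which is why reducing attenuation directly reduces the error correction burden.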
The company published its work on chip design optimization and fabrication in the journal Nature.
Government

OpenAI Teases 'New Era' of AI In US, Deepens Ties With Government (arstechnica.com) 38

An anonymous reader quotes a report from Ars Technica: On Thursday, OpenAI announced that it is deepening its ties with the US government through a partnership with the National Laboratories and expects to use AI to "supercharge" research across a wide range of fields to better serve the public. "This is the beginning of a new era, where AI will advance science, strengthen national security, and support US government initiatives," OpenAI said. The deal ensures that "approximately 15,000 scientists working across a wide range of disciplines to advance our understanding of nature and the universe" will have access to OpenAI's latest reasoning models, the announcement said.

For researchers from Los Alamos, Lawrence Livermore, and Sandia National Labs, access to "o1 or another o-series model" will be available on Venado -- an Nvidia supercomputer at Los Alamos that will become a "shared resource." Microsoft will help deploy the model, OpenAI noted. OpenAI suggested this access could propel major "breakthroughs in materials science, renewable energy, astrophysics," and other areas that Venado was "specifically designed" to advance. Key areas of focus for Venado's deployment of OpenAI's model include accelerating US global tech leadership, finding ways to treat and prevent disease, strengthening cybersecurity, protecting the US power grid, detecting natural and man-made threats "before they emerge," and "deepening our understanding of the forces that govern the universe," OpenAI said.

Perhaps among OpenAI's flashiest promises for the partnership, though, is helping the US achieve "a new era of US energy leadership by unlocking the full potential of natural resources and revolutionizing the nation's energy infrastructure." That is urgently needed, as officials have warned that America's aging energy infrastructure is becoming increasingly unstable, threatening the country's health and welfare, and without efforts to stabilize it, the US economy could tank. But possibly the most "highly consequential" government use case for OpenAI's models will be supercharging research safeguarding national security, OpenAI indicated. "The Labs also lead a comprehensive program in nuclear security, focused on reducing the risk of nuclear war and securing nuclear materials and weapons worldwide," OpenAI noted. "Our partnership will support this work, with careful and selective review of use cases and consultations on AI safety from OpenAI researchers with security clearances."
The announcement follows the launch earlier this week of ChatGPT Gov, "a new tailored version of ChatGPT designed to provide US government agencies with an additional way to access OpenAI's frontier models." OpenAI also worked with the Biden administration to voluntarily commit to giving officials early access to its latest models for safety inspections.
IT

Nvidia Reveals AI Supercomputer Used Non-Stop For Six Years To Perfect Gaming Graphics (pcgamer.com) 51

Nvidia has dedicated a supercomputer running thousands of its latest GPUs exclusively to improving its DLSS upscaling technology for the past six years, a company executive revealed at CES 2025. Speaking at the RTX Blackwell Editor's Day in Las Vegas, Brian Catanzaro, Nvidia's VP of applied deep learning research, said the system operates continuously to analyze failures and retrain models across hundreds of games.
Linux

Will Nvidia Spark a New Generation of Linux PCs? (zdnet.com) 95

"I know, I know: 'Year of the Linux desktop ... yadda, yadda'," writes Steven Vaughan-Nichols, a ZDNet senior contributing editor. "You've heard it all before. But now there's a Linux-powered PC that many people will want..."

He's talking about Nvidia's newly announced Project Digits, describing it as "a desktop with AI supercomputer power that runs DGX OS, a customized Ubuntu Linux 22.04 distro." Powered by MediaTek and Nvidia's Grace Blackwell Superchip, Project DIGITS is a $3,000 personal AI supercomputer that combines Nvidia's Blackwell GPU with a 20-core Grace CPU built on the Arm architecture... At CES, Nvidia CEO Jensen Huang confirmed plans to make this technology available to everyone, not just AI developers. "We're going to make this a mainstream product," Huang said. His statement suggests that Nvidia and MediaTek are positioning themselves to challenge established players -- including Intel and AMD -- in the desktop CPU market. This move to the desktop and perhaps even laptops has been coming for a while. As early as 2023, Nvidia was hinting that a consumer desktop chip would be in its future... [W]hy not use native Linux as the primary operating system on this new chip family?

Linux, after all, already runs on the Grace Blackwell Superchip. Windows doesn't. It's that simple. Nowadays, Linux runs well with Nvidia chips. Recent benchmarks show that open-source Linux graphics drivers work with Nvidia GPUs as well as the company's proprietary drivers do. Even Linus Torvalds thinks Nvidia has gotten its open-source and Linux act together. In August 2023, Torvalds said, "Nvidia got much more involved in the kernel. Nvidia went from being on my list of companies who are not good to my list of companies who are doing really good work." Canonical, Ubuntu Linux's parent company, has long worked closely with Nvidia. Ubuntu already provides Blackwell drivers.

The article strays into speculation when it adds "maybe you wouldn't pay three grand for a Project DIGITS PC. But what about a $1,000 Blackwell PC from Acer, Asus, or Lenovo? All three of these companies are already selling MediaTek-powered Chromebooks...."

"The first consumer products featuring this technology are expected to hit the market later this year. I'm looking forward to running Linux on it. Come on in! The operating system's fine."
Technology

US Unveils El Capitan, World's Fastest Supercomputer, For Classified Tasks (axios.com) 44

The world's most powerful supercomputer, capable of 2.79 quintillion calculations per second, has been unveiled at Lawrence Livermore National Laboratory in California, designed primarily to maintain the U.S. nuclear weapons stockpile and run other classified simulations. The $600 million system, named El Capitan, consists of 87 computer racks weighing 1.3 million pounds and draws 30 megawatts of power.

Built by Hewlett Packard Enterprise using AMD chips, it operates alongside a smaller system called Tuolumne, which ranks tenth globally in computing power. "While we're still exploring the full role AI will play, there's no doubt that it is going to improve our ability to do research and development that we need," said Bradley Wallin, a deputy director at the laboratory.
AI

Nvidia Launches RTX 50 Blackwell GPUs: From the $2,000 RTX 5090 To the $549 RTX 5070 (techspot.com) 45

"Nvidia has officially introduced its highly anticipated GeForce 50 Series graphics cards, accompanied by the debut of DLSS 4 technology," writes Slashdot reader jjslash. "The lineup includes four premium GPUs: the RTX 5080 and RTX 5090 are slated for release on January 30, with the RTX 5070 and RTX 5070 Ti following in February. TechSpot recount of the Jensen Huang keynote tries to differentiate between dubious performance claims and actual expected raw output": The new RTX 5090 flagship comes packing significantly more hardware over its predecessor. Not only does this GPU use Nvidia's new Blackwell architecture, but it also packs significantly more CUDA cores, greater memory bandwidth, and a higher VRAM capacity. The SM count has increased from 128 with the RTX 4090 to a whopping 170 with the RTX 5090 -- a 33% increase in the core size. The memory subsystem is overhauled, now featuring GDDR7 technology on a massive 512-bit bus. With this GDDR7 memory clocked at 28 Gbps, memory bandwidth reaches 1,792 GB/s -- a near 80% increase over the RTX 4090's bandwidth. It also includes 32GB of VRAM, the most Nvidia has ever provided on a consumer GPU. [...]

As for the performance claims... Nvidia has -- as usual -- used its marketing to obscure actual gaming performance. RTX 50 GPUs support DLSS 4 multi-frame generation, which previous-generation GPUs lack. This means RTX 50 series GPUs can generate double the frames of previous-gen models in DLSS-supported games, making them appear up to twice as "fast" as RTX 40 series GPUs. But in reality, while FPS numbers will increase with DLSS 4, latency and gameplay feel may not improve as dramatically. [...] The claim that the RTX 5070 matches the RTX 4090 in performance seems dubious. Perhaps it could match in frame rate with DLSS 4, but certainly not in raw, non-DLSS performance. Based on Nvidia's charts, the RTX 5070 seems 20-30% faster than the RTX 4070 at 1440p. This would place the RTX 5070 slightly ahead of the RTX 4070 Super for about $50 less, or alternatively, 20-30% faster than the RTX 4070 for the same price.
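Why the FPS counter can double without gameplay feeling twice as responsive: generated frames are synthesized between rendered frames, so input is still only sampled at the native render rate. A minimal sketch with illustrative numbers (the frame-generation ratio is an assumption, not a benchmark):

```python
# Frame generation raises displayed FPS, but input/game state still advances
# only with natively rendered frames. All numbers are illustrative.

native_fps = 60             # frames the GPU actually renders per second
generated_per_rendered = 3  # assumed multi-frame-generation mode (3 extra frames)

displayed_fps = native_fps * (1 + generated_per_rendered)
print(f"displayed: {displayed_fps} FPS")      # what a benchmark overlay shows
print(f"input sampled at: {native_fps} Hz")   # what gameplay responsiveness tracks
# -> 240 FPS on screen, but input latency tied to the 60 Hz render rate
```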
The GeForce 50 series wasn't the only announcement Nvidia made at CES 2025. The chipmaker unveiled a $3,000 personal AI supercomputer, capable of running sophisticated AI models with up to 200 billion parameters. It also announced plans to introduce AI-powered autonomous characters in video games this year, starting with a virtual teammate in the battle royale game PUBG.
AI

Nvidia Unveils $3,000 Personal AI Supercomputer (nvidia.com) 80

Nvidia will begin selling a personal AI supercomputer in May that can run sophisticated AI models with up to 200 billion parameters, the chipmaker has announced. The $3,000 Project Digits system is powered by the new GB10 Grace Blackwell Superchip and can operate from a standard power outlet.

The device delivers 1 petaflop of AI performance and includes 128GB of memory and up to 4TB of storage. Two units can be linked to handle models with 405 billion parameters. "AI will be mainstream in every application for every industry," Nvidia CEO Jensen Huang said. The system runs on Linux-based Nvidia DGX OS and supports PyTorch, Python, and Jupyter notebooks.
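The 128GB figure lines up with the 200-billion-parameter claim if the model weights are heavily quantized. A rough sketch, assuming 4-bit weights (the quantization level is an assumption; Nvidia's announcement does not specify one):

```python
# Why 128GB of memory can hold a ~200B-parameter model: at 4-bit weights,
# each parameter needs half a byte. Quantization level is an assumption.

params = 200e9
bytes_per_param = 0.5  # assumed 4-bit (FP4/INT4) weights

weights_gb = params * bytes_per_param / 1e9
print(f"weights alone: {weights_gb:.0f} GB of 128 GB")  # -> 100 GB
# Two linked units (256 GB) similarly leave headroom for a ~405B model
# (~203 GB of weights), matching the claim about linking two systems.
```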
AI

Michael Dell Says Adoption of AI PCs is 'Definitely Delayed' (fortune.com) 30

Dell CEO Michael Dell has acknowledged delays in corporate adoption of AI-enabled PCs but remains confident in their eventual widespread uptake, citing his four decades of industry experience with technology transitions.

The PC maker's chief executive told Fortune that while the current refresh cycle is "definitely delayed," adoption is inevitable once sufficient features drive customer demand. Meanwhile, Dell's infrastructure division saw 80% revenue growth last quarter from AI-server sales. The company is supplying servers for xAI's Colossus supercomputer project in Memphis and sees opportunities in "sovereign AI" systems for nations seeking technological independence. "Pick a country ranked by GDP, the [top] 49 other than the U.S., they all need one," Dell said.
AI

Elon Musk's xAI Plans Massive Expansion of AI Supercomputer in Memphis (usnews.com) 135

An anonymous reader shared this report from Reuters: Elon Musk's artificial intelligence startup xAI plans to expand its Memphis, Tennessee, supercomputer to house at least one million graphics processing units (GPUs), the Greater Memphis Chamber said on Wednesday, as xAI races to compete against rivals like OpenAI.

The move represents a massive expansion for the supercomputer called Colossus, which currently has 100,000 GPUs to train xAI's chatbot called Grok. As part of the expansion, Nvidia, which supplies the GPUs, and Dell and Super Micro, which have assembled the server racks for the computer, will establish operations in Memphis, the chamber said in a statement.

The Greater Memphis Chamber (an economic development organization) called it "the largest capital investment in the region's history," even saying that xAI "is setting the stage for Memphis to become the global epicenter of artificial intelligence." ("To facilitate this massive undertaking, the Greater Memphis Chamber established an xAI Special Operations Team... This team provides round-the-clock concierge service to the company.")

Reuters calls the supercomputer "a critical component of advancing Musk's AI efforts, as the billionaire has deepened his rivalry against OpenAI..." And the Greater Memphis Chamber describes the expansion by Nvidia/Dell/Super Micro as "further solidifying the city's position as the 'Digital Delta'... Memphis has provided the power and velocity necessary for not just xAI to grow and thrive, but making way for other companies as well."
AI

AI's Future and Nvidia's Fortunes Ride on the Race To Pack More Chips Into One Place (yahoo.com) 21

Leading technology companies are dramatically expanding their AI capabilities by building multibillion-dollar "super clusters" packed with unprecedented numbers of Nvidia's AI processors. Elon Musk's xAI recently constructed Colossus, a supercomputer containing 100,000 Nvidia Hopper chips, while Meta CEO Mark Zuckerberg claims his company operates an even larger system for training advanced AI models. The push toward massive chip clusters has helped drive Nvidia's quarterly revenue from $7 billion to over $35 billion in two years, making it the world's most valuable public company.

WSJ adds: Nvidia Chief Executive Jensen Huang said in a call with analysts following its earnings Wednesday that there was still plenty of room for so-called AI foundation models to improve with larger-scale computing setups. He predicted continued investment as the company transitions to its next-generation AI chips, called Blackwell, which are several times as powerful as its current chips.

Huang said that while the biggest clusters for training for giant AI models now top out at around 100,000 of Nvidia's current chips, "the next generation starts at around 100,000 Blackwells. And so that gives you a sense of where the industry is moving."

Supercomputing

'El Capitan' Ranked Most Powerful Supercomputer In the World (engadget.com) 44

Lawrence Livermore National Laboratory's "El Capitan" supercomputer is now ranked as the world's most powerful, posting a High-Performance Linpack (HPL) score of 1.742 exaflops on the latest Top500 list. Engadget reports: El Capitan is only the third "exascale" computer, meaning it can perform more than a quintillion calculations in a second. The other two, called Frontier and Aurora, claim the second and third place slots on the TOP500 now. Unsurprisingly, all of these massive machines live within government research facilities: El Capitan is housed at Lawrence Livermore National Laboratory; Frontier is at Oak Ridge National Laboratory; Argonne National Laboratory claims Aurora. [Cray Computing] had a hand in all three systems.

El Capitan has more than 11 million combined CPU and GPU cores based on AMD 4th-gen EPYC processors. These 24-core processors are rated at 1.8GHz each and are paired with AMD Instinct MI300A APUs. It's also relatively efficient, as such systems go, squeezing out an estimated 58.89 gigaflops per watt. If you're wondering what El Capitan is built for, the answer is addressing nuclear stockpile safety, but it can also be used for nuclear counterterrorism.
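The efficiency and performance figures are mutually consistent; dividing the HPL score by the efficiency recovers the power drawn during the benchmark run:

```python
# Cross-check: power during the HPL run = score / efficiency.
# 1 exaflop = 1e9 gigaflops.

hpl_exaflops = 1.742
gflops_per_watt = 58.89

watts = hpl_exaflops * 1e9 / gflops_per_watt
print(f"~{watts / 1e6:.1f} MW during the HPL run")
# -> ~29.6 MW, consistent with the ~30 MW draw cited in the Axios story above
```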

Earth

Diamond Dust Could Cool the Planet At a Cost of Mere Trillions (science.org) 98

sciencehabit shares a report from Science Magazine: From dumping iron into the ocean to launching mirrors into space, proposals to cool the planet through 'geoengineering' tend to be controversial -- and sometimes fantastical. A new idea isn't any less far-out, but it may avoid some of the usual pitfalls of strategies to fill the atmosphere with tiny, reflective particles. In a modeling study published this month in Geophysical Research Letters, scientists report that shooting 5 million tons of diamond dust into the stratosphere each year could cool the planet by 1.6°C -- enough to stave off the worst consequences of global warming. The scheme wouldn't be cheap, however: experts estimate it would cost nearly $200 trillion over the remainder of this century -- far more than traditional proposals to use sulfur particles. [...]

The researchers modeled the effects of seven compounds, including sulfur dioxide, as well as particles of diamond, aluminum, and calcite, the primary ingredient in limestone. They evaluated the effects of each particle across 45 years in the model, where each trial took more than a week in real time on a supercomputer. The results showed diamond particles were best at reflecting radiation while also staying aloft and avoiding clumping. Diamond is also thought to be chemically inert, meaning it would not react to form acid rain, like sulfur. To achieve 1.6°C of cooling, 5 million tons of diamond particles would need to be injected into the stratosphere each year. Such a large quantity would require a huge ramp up in synthetic diamond production before high-altitude aircraft could sprinkle the ground-up gems across the stratosphere. At roughly $500,000 per ton, synthetic diamond dust would be 2,400 times more expensive than sulfur and cost $175 trillion if deployed from 2035 to 2100, one study estimates.
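The quoted price and tonnage reproduce the headline cost to within rounding; a quick check:

```python
# Reproducing the cost estimate from per-ton price and annual tonnage.

tons_per_year = 5e6      # diamond dust injected annually
price_per_ton = 500_000  # dollars per ton of synthetic diamond
years = 2100 - 2035 + 1  # 2035-2100 deployment window, inclusive

total = tons_per_year * price_per_ton * years
print(f"${total / 1e12:.0f} trillion")
# -> ~$165 trillion for the dust alone; the article's ~$175 trillion
# presumably also folds in delivery and other program costs.
```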

Supercomputing

Google Identifies Low Noise 'Phase Transition' In Its Quantum Processor (arstechnica.com) 31

An anonymous reader quotes a report from Ars Technica: Back in 2019, Google made waves by claiming it had achieved what has been called "quantum supremacy" -- the ability of a quantum computer to perform operations that would take a wildly impractical amount of time to simulate on standard computing hardware. That claim proved to be controversial, in that the operations were little more than a benchmark that involved getting the quantum computer to behave like a quantum computer; separately, improved ideas about how to perform the simulation on a supercomputer cut the time required down significantly.

But Google is back with a new exploration of the benchmark, described in a paper published in Nature on Wednesday. It uses the benchmark to identify what it calls a phase transition in the performance of its quantum processor, pinpointing conditions where the processor can operate with low noise. Taking advantage of that, the researchers again show that, even giving classical hardware every potential advantage, it would take a supercomputer a dozen years to simulate the quantum processor's operations.

United Kingdom

UK Government Shelves $1.66 Billion Tech and AI Plans 35

An anonymous reader shares a report: The new Labour government has shelved $1.66 billion of funding promised by the Conservatives for tech and artificial intelligence (AI) projects, the BBC has learned. It includes $1 billion for the creation of an exascale supercomputer at Edinburgh University and a further $640 million for the AI Research Resource, which funds computing power for AI. Both funds were unveiled less than 12 months ago.

The Department for Science, Innovation and Technology (DSIT) said the money was promised by the previous administration but was never allocated in its budget. Some in the industry have criticised the government's decision. Tech business founder Barney Hussey-Yeo posted on X that reducing investment risked "pushing more entrepreneurs to the US." Businessman Chris van der Kuyl described the move as "idiotic." Trade body techUK said the government now needed to make "new proposals quickly" or the UK risked "losing out" to other countries in what are crucial industries of the future.
China

China Is Getting Secretive About Its Supercomputers 28

For decades, American and Chinese scientists collaborated on supercomputers. But Chinese scientists have become more secretive as the U.S. has tried to hinder China's technological progress, and they have stopped participating altogether in a prominent international supercomputing forum. From a report: The withdrawal marked the end of an era and created a divide that Western scientists say will slow the development of AI and other technologies as countries pursue separate projects. The new secrecy also makes it harder for the U.S. government to answer a question it deems essential to national security: Does the U.S. or China have faster supercomputers? Some academics have taken it upon themselves to hunt for clues about China's supercomputing progress, scrutinizing research papers and cornering Chinese peers at conferences.

Supercomputers have become central to the U.S.-China technological Cold War because the country with the faster supercomputers can also hold an advantage in developing nuclear weapons and other military technology. "If the other guy can use a supercomputer to simulate and develop a fighter jet or weapon 20% or even 1% better than yours in terms of range, speed and accuracy, it's going to target you first, and then it's checkmate," said Jimmy Goodrich, a senior adviser for technology analysis at Rand, a think tank. The forum that China recently stopped participating in is called the Top500, which ranks the world's 500 fastest supercomputers. While the latest ranking, released in June, says the world's three fastest computers are in the U.S., the reality is probably different.
Hardware

Will Tesla Do a Phone? Yes, Says Morgan Stanley 170

Morgan Stanley, in a note -- seen by Slashdot -- sent to its clients on Wednesday: From our continuing discussions with automotive management teams and industry experts, the car is an extension of the phone. The phone is an extension of the car. The lines between car and phone are truly blurring.

For years, we have been writing about the potential for Tesla to expand into edge compute domains beyond the car, including last October where we described a mobile AI assistant as a 'heavy key.' Following Apple's WWDC, Tesla CEO Elon Musk re-ignited the topic by saying that making such a device is 'not out of the question.' As Mr. Musk continues to invest further into his own LLM/genAI efforts, such as 'Grok,' the potential strategic and user-experience overlap becomes more obvious.

From an automotive perspective, the topic of supercomputing at both the datacenter level and at the edge is highly relevant given that the incremental global unit sold is a car that can perform OTA updates of firmware, has a battery with a stored energy equivalent of approx. 2,000 iPhones, and carries a liquid-cooled inference supercomputer as standard kit. What if your phone could tap into your vehicle's compute power and battery supply to run AI applications?

Edge compute and AI have brought to light some of the challenges (battery life, thermal, latency, etc.) of marrying today's smartphones with ever more powerful AI-driven applications. Numerous media reports have discussed OpenAI potentially developing a consumer device specifically designed for AI.

The phone as a (heavy) car key? Any Tesla owner will tell you how they use their smartphone as their primary key to unlock their car as well as running other remote applications while they interact with their vehicles. The 'action button' on the iPhone 15 potentially takes this to a different level of convenience.
IBM

Lynn Conway, Leading Computer Scientist and Transgender Pioneer, Dies At 85 (latimes.com) 155

Lynn Conway, a pioneering computer scientist who made significant contributions to VLSI design and microelectronics, and a prominent advocate for transgender rights, died Sunday from a heart condition. She was 85. Pulitzer Prize-winning journalist Michael Hiltzik remembers Conway in a column for the Los Angeles Times: As I recounted in 2020, I first met Conway when I was working on my 1999 book about Xerox PARC, Dealers of Lightning, for which she was a uniquely valuable source. In 2000, when she decided to come out as transgender, she allowed me to chronicle her life in a cover story for the Los Angeles Times Magazine titled "Through the Gender Labyrinth." That article traced her journey from childhood as a male in New York's strait-laced Westchester County to her decision to transition. Years of emotional and psychological turmoil followed, even as she excelled in academic studies. [Conway earned bachelor's and master's degrees in electrical engineering from Columbia University in 1961, quickly joining a team at IBM to design the world's fastest supercomputer. Despite personal success, she faced significant emotional turmoil, leading to her decision to transition in 1968. Initially supportive, IBM ultimately fired Conway due to their inability to reconcile her transition with the company's conservative image.]

The family went on welfare for three months. Conway's wife barred her from contact with her daughters. She would not see them again for 14 years. Beyond the financial implications, the stigma of banishment from one of the world's most respected corporations felt like an excommunication. She sought jobs in the burgeoning electrical engineering community around Stanford, working her way up through start-ups, and in 1973 she was invited to join Xerox's brand new Palo Alto Research Center, or PARC. In partnership with Caltech engineering professor Carver Mead, Conway established the design rules for the new technology of "very large-scale integrated circuits" (or, in computer shorthand, VLSI). The pair laid down the rules in a 1979 textbook that a generation of computer and engineering students knew as "Mead-Conway."

VLSI fostered a revolution in computer microprocessor design that included the Pentium chip, which would power millions of PCs. Conway spread the VLSI gospel by creating a system in which students taking courses at MIT and other technical institutions could get their sample designs rendered in silicon. Conway's life journey gave her a unique perspective on the internal dynamics of Xerox's unique lab, which would invent the personal computer, the laser printer, Ethernet, and other innovations that have become fully integrated into our daily lives. She could see it from the vantage point of an insider, thanks to her experience working on IBM's supercomputer, and an outsider, thanks to her personal history.

After PARC, she was recruited to head a supercomputer program at the Defense Department's Advanced Research Projects Agency, or DARPA -- sailing through her FBI background check so easily that she became convinced that the Pentagon must have already encountered transgender people in its workforce. A figure of undisputed authority in some of the most abstruse corners of computing, Conway was elected to the National Academy of Engineering in 1989. She joined the University of Michigan as a professor and associate dean in the College of Engineering. In 2002 she married a fellow engineer, Charles Rogers, and with him lived an active life -- with a shared passion for white-water canoeing, motocross racing and other adventures -- on a 24-acre homestead not far from Ann Arbor, Mich.
In 2020, Conway received a formal apology from IBM for firing her 52 years earlier. Diane Gherson, an IBM senior vice president, told her, "Thanks to your courage, your example, and all the people who followed in your footsteps, as a society we are now in a better place.... But that doesn't help you, Lynn, probably our very first employee to come out. And for that, we deeply regret what you went through -- and know I speak for all of us."
