Linux

Linux Kernel 6.10 Released (omgubuntu.co.uk) 15

"The latest version of the Linux kernel adds an array of improvements," writes the blog OMG Ubuntu, " including a new memory sealing system call, a speed boost for AES-XTS encryption on Intel and AMD CPUs, and expanding Rust language support within the kernel to RISC-V." Plus, like in all kernel releases, there's a glut of groundwork to offer "initial support" for upcoming CPUs, GPUs, NPUs, Wi-Fi, and other hardware (that most of us don't use yet, but require Linux support to be in place for when devices that use them filter out)...

Linux 6.10 adds (after much gnashing of teeth) the mseal() system call to prevent changes being made to portions of the virtual address space. For now, this will mainly benefit Google Chrome, which plans to use it to harden its sandboxing. Work is underway by kernel contributors to allow other apps to benefit, though. Another change that proved initially controversial before being merged is a new memory-allocation profiling subsystem, which helps developers fine-tune memory usage and more readily identify memory leaks. An explainer from LWN summarizes it well.
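
For the curious, here's a minimal sketch of what using mseal() from userspace might look like. It assumes an x86-64 machine running a 6.10+ kernel and invokes the raw syscall (462 on x86-64), since glibc does not yet provide a wrapper:

    /* Minimal mseal() sketch: map a page, seal it, then show that a later
     * mprotect() is refused. Assumes Linux 6.10+ on x86-64 (syscall 462). */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #ifndef __NR_mseal
    #define __NR_mseal 462   /* x86-64 value; check your arch's syscall table */
    #endif

    int main(void)
    {
        size_t len = 4096;
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* Seal the mapping; the flags argument must currently be 0. */
        if (syscall(__NR_mseal, p, len, 0) != 0) {
            perror("mseal");   /* e.g. ENOSYS on kernels older than 6.10 */
            return 1;
        }

        /* Changing protections on a sealed region should now fail (EPERM). */
        if (mprotect(p, len, PROT_READ) != 0)
            printf("mprotect on sealed region refused: %s\n", strerror(errno));
        return 0;
    }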

Elsewhere, Linux 6.10 offers encrypted interactions with trusted platform modules (TPM) in order to "make the kernel's use of the TPM reasonably robust in the face of external snooping and packet alteration attacks". The documentation for this feature explains: "for every in-kernel operation we use null primary salted HMAC to protect the integrity [and] we use parameter encryption to protect key sealing and parameter decryption to protect key unsealing and random number generation." Sticking with security, the Linux kernel's Landlock security module can now apply policies to ioctl() calls (Input/Output Control), restricting potential misuse and improving overall system security.
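
As a rough illustration of the Landlock change (a sketch of the general usage, not code from the kernel or the article), a process on Linux 6.10+ can opt in to having ioctl() on device files denied for itself and its children. The names below come from <linux/landlock.h> and assume suitably recent kernel and libc headers:

    /* Hedged sketch: use Landlock (ABI v5, Linux 6.10+) to deny ioctl(2) on
     * device files for this process. Landlock has no glibc wrappers, so the
     * raw syscalls are used directly. */
    #include <linux/landlock.h>
    #include <stdio.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        struct landlock_ruleset_attr attr = {
            /* Handle the new ioctl-on-device-files right; handled accesses
             * that no rule explicitly allows are denied. */
            .handled_access_fs = LANDLOCK_ACCESS_FS_IOCTL_DEV,
        };
        int fd = syscall(SYS_landlock_create_ruleset, &attr, sizeof(attr), 0);
        if (fd < 0) { perror("landlock_create_ruleset"); return 1; }

        /* Unprivileged self-restriction requires no_new_privs. */
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) { perror("prctl"); return 1; }
        if (syscall(SYS_landlock_restrict_self, fd, 0)) {
            perror("landlock_restrict_self"); return 1;
        }

        /* From here on, ioctl() on device nodes should fail for this process
         * (apart from a handful of always-allowed ioctls such as FIONREAD). */
        printf("ioctl on device files is now restricted\n");
        return 0;
    }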

On the networking side there's significant performance improvements to zero-copy send operations using io_uring, and the newly-added ability to "bundle" multiple buffers for send and receive operations also offers an uptick in performance...
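
For a sense of what the zero-copy send path looks like from userspace, here is a hedged sketch using liburing (version 2.3 or newer assumed); the new buffer-bundling feature is not shown, and sockfd is assumed to be an already-connected socket:

    /* Sketch of an io_uring zero-copy send via liburing.
     * A zero-copy send produces two completions: the send result (flagged
     * IORING_CQE_F_MORE) and a later notification (IORING_CQE_F_NOTIF)
     * telling us the kernel no longer references the buffer. */
    #include <liburing.h>
    #include <stdio.h>

    static int send_zerocopy(int sockfd, const void *buf, size_t len)
    {
        struct io_uring ring;
        if (io_uring_queue_init(8, &ring, 0) < 0)
            return -1;

        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_send_zc(sqe, sockfd, buf, len, 0, 0);
        io_uring_submit(&ring);

        struct io_uring_cqe *cqe;
        for (int seen = 0; seen < 2; seen++) {   /* send result + notification */
            if (io_uring_wait_cqe(&ring, &cqe) < 0)
                break;
            if (cqe->flags & IORING_CQE_F_NOTIF)
                printf("buffer released, safe to reuse\n");
            else
                printf("sent %d bytes\n", cqe->res);
            io_uring_cqe_seen(&ring, cqe);
        }
        io_uring_queue_exit(&ring);
        return 0;
    }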

A couple of months ago Canonical announced Ubuntu support for the RISC-V Milk-V Mars single-board computer. Linux 6.10 mainlines support for the Milk-V Mars, which will make that effort a lot more viable (especially with the Ubuntu 24.10 kernel likely to be v6.10 or newer). Other RISC-V improvements abound in Linux 6.10, including support for the Rust language; boot image compression in BZ2, LZ4, LZMA, LZO, and Zstandard (instead of only Gzip); and support for newer AMD GPUs thanks to kernel-mode FPU support in RISC-V.

Phoronix has their own rundown of Linux 6.10, plus a list of some of the highlights, which includes:
  • The initial DRM Panic infrastructure
  • The new Panthor DRM driver for newer Arm Mali graphics
  • Better AMD ROCm/AMDKFD support for "small" Ryzen APUs and new additions for AMD Zen 5.
  • AMD GPU display support on RISC-V hardware thanks to RISC-V kernel mode FPU
  • More Intel Xe2 graphics preparations
  • Better IO_uring zero-copy performance
  • Faster AES-XTS disk/file encryption with modern Intel and AMD CPUs
  • Continued online repair work for XFS
  • Steam Deck IMU support
  • TPM bus encryption and integrity protection

Graphics

Arm Announces an Open-Source Graphics Upscaler For Mobile Phones (theverge.com) 6

Arm is launching its Arm Accuracy Super Resolution (ASR) upscaler that "can make games look better, while lowering power consumption on your phone," according to The Verge. "It's also making the upscaling technology available to developers under an MIT open-source license." From the report: Arm based its technology on AMD's FidelityFX Super Resolution 2 (FSR 2), which uses temporal upscaling to make PC games look better and boost frame rates. Unlike spatial upscaling, which upscales an image based on a single frame, temporal upscaling involves using multiple frames to generate a higher-quality image.
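
To make that distinction concrete: at its core, a temporal upscaler blends each output pixel with the previous frame's result, reprojected along a motion vector, so detail accumulates across frames. A conceptual sketch (not Arm's actual ASR code, just the general technique) might look like this:

    /* Conceptual sketch of temporal resolve, the heart of temporal upscaling.
     * 'current' is this frame's low-res, jittered sample at the output pixel;
     * 'history' is last frame's output fetched at (x - mv.x, y - mv.y).
     * Accumulating samples across frames recovers detail a single low-res
     * frame cannot provide, which is what separates temporal from spatial
     * upscaling. Not Arm ASR or AMD FSR 2 code; just the general idea. */
    typedef struct { float r, g, b; } color;

    static color lerp(color a, color b, float t)
    {
        color c = { a.r + (b.r - a.r) * t,
                    a.g + (b.g - a.g) * t,
                    a.b + (b.b - a.b) * t };
        return c;
    }

    static color temporal_resolve(color current, color history, float alpha)
    {
        /* Real upscalers also clamp 'history' against the current frame's
         * neighbourhood to reject stale data after disocclusions; alpha is a
         * small blend weight (e.g. ~0.1) so valid history dominates. */
        return lerp(history, current, alpha);
    }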

You can see just how Arm ASR stacks up to AMD's FSR 2 and Qualcomm's GSR tech in [this chart] created by Arm. Arm claims ASR produced 53 percent higher frame rates than rendering at native resolution on a device with an Arm Immortalis-G720 GPU and 2800 x 1260 display, beating AMD FSR 2. It also tested ASR on a device using MediaTek's Dimensity 9300 chip and found that rendering at 540p and upscaling with ASR used much less power than running a game at native 1080p resolution.

Graphics

Affinity Tempts Adobe Users with 6-Month Free Trial of Creative Suite (theverge.com) 39

Serif, the design software developer behind Affinity, has introduced a six-month free trial for its creative suite, offering Affinity Photo, Designer, and Publisher on Mac, Windows PC, and iPad. This move, along with a 50% discount on perpetual licenses, aims to attract Adobe users and reassure them of Affinity's commitment to its one-time purchase pricing model despite its recent acquisition by Canva. The Verge reports: Affinity uses a one-time purchase pricing model that has earned it a loyal fanbase among creatives who are sick of paying for recurring subscriptions. Prices start at $69.99 for Affinity's individual desktop apps or $164.99 for the entire suite, with a separate deal currently offering customers 50 percent off all perpetual licenses.

This discount, alongside the six-month free trial, is potentially geared at soothing concerns that Affinity would change its pricing model after being acquired by Canva earlier this year. "We're saying 'try everything and pay nothing' because we understand making a change can be a big step, particularly for busy professionals," said Affinity CEO Ashley Hewson. "Anyone who takes the trial is under absolutely no obligation to buy."

China

Nvidia Forecasted To Make $12 Billion Selling GPUs In China (theregister.com) 4

Nvidia is expected to earn $12 billion from GPU sales to China in 2024, despite U.S. trade restrictions. Research firm SemiAnalysis says the GPU maker will ship over 1 million units of its new H20 model to the Chinese market, "with each one said to cost between $12,000 and $13,000 apiece," reports The Register. From the report: This figure is said by SemiAnalysis to be nearly double what Huawei is likely to sell of its rival accelerator, the Ascend 910B, as reported by The Financial Times. If accurate, this would seem to contradict earlier reports that Nvidia had moved to cut the price of its products for the China market. This was because buyers were said to be opting instead for domestically made kit for accelerating AI workloads. The H20 GPU is understood to be the top performing model out of three Nvidia GPUs specially designed for the Chinese market to comply with rules introduced by the Biden administration last year that curb performance.

In contrast, Huawei's Ascend 910B is claimed to have performance on a par with that of Nvidia's A100 GPU. It is believed to be an in-house design manufactured by Chinese chipmaker SMIC using a 7nm process technology, unlike the older Ascend 910 product. If this forecast proves accurate, it will be a relief for Nvidia, which earlier disclosed that its sales in China delivered a "mid-single digit percentage" of revenue for its Q4 of FY2024, and was forecast to do the same in Q1 of FY 2025. In contrast, the Chinese market had made up between 20 and 25 percent of the company's revenue in recent years, until the export restrictions landed.

Games

Kien, the Most-Delayed Video Game in History, Released After 22 Years (theguardian.com) 24

An Italian video game, 22 years in the making, has finally hit the market, setting a record for the longest development time in gaming history. "Kien," an action platformer for Nintendo's Game Boy Advance, began development in 2002 by a group of five inexperienced enthusiasts, The Guardian reports. Only one, Fabio Belsanti, saw the project through to completion. The game, inspired by 15th-century Tuscan manuscripts and early Japanese graphics, offers a challenging, nonlinear fantasy experience. It's now available on a translucent gray cartridge, complete with a printed manual -- a rarity in modern gaming. Belsanti's company, AgeOfGames, survived the delay by creating educational games. The recent boom in retro gaming finally made Kien's release feasible, he said.

Businesses

French Antitrust Regulators Preparing Nvidia Charges (reuters.com) 28

French antitrust regulators are preparing to charge Nvidia for allegedly anti-competitive practices, Reuters reported Monday, citing sources. From the report: The French so-called statement of objections or charge sheet would follow dawn raids in the graphics cards sector in September last year which sources said targeted Nvidia. The world's largest maker of chips used both for artificial intelligence and for computer graphics has seen demand for its chips jump following the release of the generative AI application ChatGPT, triggering regulatory scrutiny on both sides of the Atlantic.

AI

Is AI's Demand for Energy Really 'Insatiable'? (arstechnica.com) 56

Bloomberg and The Washington Post "claim AI power usage is dire," writes Slashdot reader NoWayNoShapeNoForm. But Ars Technica "begs to disagree with those speculations."

From Ars Technica's article: The high-profile pieces lean heavily on recent projections from Goldman Sachs and the International Energy Agency (IEA) to cast AI's "insatiable" demand for energy as an almost apocalyptic threat to our power infrastructure. The Post piece even cites anonymous "some [people]" in reporting that "some worry whether there will be enough electricity to meet [the power demands] from any source." Digging into the best numbers and projections available, though, it's hard to see AI's current and near-future environmental impact in such a dire light... While the headline focus of both Bloomberg and The Washington Post's recent pieces is on artificial intelligence, the actual numbers and projections cited in both pieces overwhelmingly focus on the energy used by Internet "data centers" as a whole...

Bloomberg asks one source directly "why data centers were suddenly sucking up so much power" and gets back a blunt answer: "It's AI... It's 10 to 15 times the amount of electricity." Unfortunately for Bloomberg, that quote is followed almost immediately by a chart that heavily undercuts the AI alarmism. That chart shows worldwide data center energy usage growing at a remarkably steady pace from about 100 TWh in 2012 to around 350 TWh in 2024. The vast majority of that energy usage growth came before 2022, when the launch of tools like Dall-E and ChatGPT largely set off the industry's current mania for generative AI. If you squint at Bloomberg's graph, you can almost see the growth in energy usage slowing down a bit since that momentous year for generative AI.

Ars Technica first cites Dutch researcher Alex de Vries's estimate that in a few years the AI sector could use between 85 and 134 TWh of power. But another study estimated in 2018 that PC gaming already accounted for 75 TWh of electricity use per year, while "the IEA estimates crypto mining ate up 110 TWh of electricity in 2022." More to the point, de Vries' AI energy estimates are only a small fraction of the 620 to 1,050 TWh that data centers as a whole are projected to use by 2026, according to the IEA's recent report. The vast majority of all that data center power will still be going to more mundane Internet infrastructure that we all take for granted (and which is not nearly as sexy of a headline bogeyman as "AI").
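
Putting "small fraction" in concrete terms with the figures quoted above, the AI estimate works out to somewhere between roughly 8 and 22 percent of the projected data-center total:

    /* Back-of-the-envelope math using the numbers quoted above. */
    #include <stdio.h>

    int main(void)
    {
        double ai_low = 85, ai_high = 134;    /* TWh, de Vries's AI estimate     */
        double dc_low = 620, dc_high = 1050;  /* TWh, IEA 2026 data-center range */

        printf("AI share, low end:  %.0f%%\n", 100 * ai_low / dc_high);   /* ~8%  */
        printf("AI share, high end: %.0f%%\n", 100 * ai_high / dc_low);   /* ~22% */
        return 0;
    }
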
The future is also hard to predict, the article concludes. "If customers don't respond to the hype by actually spending significant money on generative AI at some point, the tech-marketing machine will largely move on, as it did very recently with the metaverse and NFTs..."

Unix

X Window System Turns 40 52

Ancient Slashdot reader ewhac writes: On June 19, 1984, Robert Scheifler announced on MIT's Project Athena mailing list a new graphical windowing system he'd put together. Having cribbed a fair bit of code from the existing windowing toolkit called W, Scheifler named his new system X, thus giving birth to the X Window System. Scheifler prophetically wrote at the time, "The code seems fairly solid at this point, although there are still some deficiencies to be fixed up."

The 1980's and 1990's saw tremendous activity in the development of graphical displays and user interfaces, and X was right in the middle of it all, alongside Apple, Sun, Xerox, Apollo, Silicon Graphics, NeXT, and many others. Despite the fierce, well-funded competition, and heated arguments about how many buttons a mouse should have, X managed to survive, due in large part to its Open Source licensing and its flexible design, allowing it to continue to work well even as graphical hardware rapidly advanced. As such, it was ported to dozens of platforms over the years (including a port to the Amiga computer by Dale Luck in the late 1980's). 40 years later, despite its warts, inconsistencies, age, and Wayland promising for the last ten years to be coming Real Soon Now, X remains the windowing system for UNIX-like platforms.

KDE

KDE Plasma 6.1 Released (kde.org) 42

"The KDE community announced the latest release of their popular desktop environment: Plasma 6.1," writes longtime Slashdot reader jrepin. From the announcement: While Plasma 6.0 was all about getting the migration to the underlying Qt 6 frameworks correct, Plasma 6.1 is where developers start implementing the features that will take you desktop to a new level. In this release, you will find features that go far beyond subtle changes to themes and tweaks to animations (although there is plenty of those too). Among some of the new features in this release you will find improved remote desktop support with a new built-in server, overhauled and streamlined desktop edit mode, restoration of open applications from the previous session on Wayland, synchronization of keyboard LED colors with the desktop accent color, making mouse cursor bigger and easier to find by shaking it, edge barriers (a sticky area for mouse cursor near the edges between screens), explicit sync support eliminates flickering and glitches for NVidia graphics card users on Wayland, and triple buffering support for smoother animations and screen rendering. The changelog for Plasma 6.1 is available here.

XBox (Games)

Upcoming Games Include More Xbox Sequels - and a Medieval 'Doom' (polygon.com) 32

Announced during Microsoft's Xbox Games Showcase, Doom: The Dark Ages is id Software's next foray back into hell. [Also available for PS5 and PC.] Doom: The Dark Ages is a medieval spin on the Doom franchise, taking the Doom Slayer back to the beginning. It's coming to Xbox Game Pass on day one, sometime in 2025.

Microsoft's first trailer for Doom: The Dark Ages shows the frenetic, precision gameplay we've come to expect from the franchise — there's a lot of blasting and shooting and a chainsaw. Oh, and the Doom Slayer can ride a dragon?

"Before he became a hero he was the super weapon of gods and kings," says the trailer (which showcases the game's crazy-good graphics...) The 2020 game Doom Eternal sold 3 million copies in its first month, according to Polygon, with its game director telling the site in 2021 that "our hero is somewhat timeless — I mean, literally, he's immortal. So we could tell all kinds of stories..."

Other upcoming Xbox games were revealed too. Engadget is excited about the reboot of the first-person shooter Perfect Dark (first released in 2000, but now set in the near future). There's also Gears of War: E-Day, Indiana Jones and the Great Circle, State of Decay 3, and Assassin's Creed Shadows, according to Xbox.com — plus "the announcement of three new Xbox Series X|S console options." [Engadget notes it's the first time Microsoft has offered a cheaper all-digital Xbox Series X with no disc drive.] "And on top of all that, we also brought the gameplay reveal of a brand-new Call of Duty game with Call of Duty: Black Ops 6."

Meanwhile, Friday's Summer Game Fest 2024 featured Star Wars Outlaws footage (which according to GamesRadar takes place between Empire Strikes Back and Return of the Jedi, featuring not just card games with Lando Calrissian but also Jabba the Hutt and a frozen Han Solo.) Engadget covered all the announcements from Game Fest, including the upcoming game Mixtape, which Engadget calls a "reality-bending adventure" with "a killer '80s soundtrack" about three cusp-of-adulthood teenagers who "Skate. Party. Avoid the law. Make out. Sneak out. Hang out..." for Xbox/PS5/PC.

Graphics

Nvidia Takes 88% of the GPU Market Share (xda-developers.com) 83

As reported by Jon Peddie Research, Nvidia now holds 88% of the GPU market after its market share jumped 8% in its most recent quarter. "This jump shaves 7% off of AMD's share, putting it down to 12% total," reports XDA Developers. "And if you're wondering where that extra 1% went, it came from all of Intel's market share, squashing it down to 0%." From the report: Dr. Jon Peddie, president of Jon Peddie Research, mentions how the GPU market hasn't really looked "normal" since the 2007 recession. Ever since then, everything from the crypto boom to COVID has messed with the usual patterns. Usually, the first quarter of a year shows a bit of a dip in GPU sales, but because of AI's influence, it may seem like that previous norm may be forever gone: "Therefore, one would expect Q2'24, a traditional quarter, to also be down. But, all the vendors are predicting a growth quarter, mostly driven by AI training systems in hyperscalers. Whereas AI trainers use a GPU, the demand for them can steal parts from the gaming segment. So, for Q2, we expect to see a flat to low gaming AIB result and another increase in AI trainer GPU shipments. The new normality is no normality."

Ubuntu

Canonical Launches Ubuntu Core 24 (ubuntu.com) 5

Canonical, the company behind Ubuntu, has released Ubuntu Core 24, a version of its operating system designed for edge devices and the Internet of Things (IoT). The new release comes with a 12-year Long Term Support commitment and features that enable secure, reliable, and efficient deployment of intelligent devices.

Ubuntu Core 24 introduces validation sets for custom image creation, offline remodelling for air-gapped environments, and new integrations for GPU operations and graphics support. It also offers device management integrations with Landscape and Microsoft Azure IoT Edge. The release is expected to benefit various industries, including automation, healthcare, and robotics, Canonical said.

AMD

AMD Unveils Ryzen AI and 9000 Series Processors, Plus Radeon PRO W7900 Dual Slot (betanews.com) 41

The highlight of AMD's presentation Sunday at Computex 2024 was "the introduction of AMD's Ryzen AI 300 Series processors for laptops and the Ryzen 9000 Series for desktops," writes Slashdot reader BrianFagioli (sharing his report at Beta News): AMD's Ryzen AI 300 Series processors, designed for next-generation AI laptops, come with AMD's latest XDNA 2 architecture. This includes a Neural Processing Unit (NPU) that delivers 50 TOPS of AI processing power, significantly enhancing the AI capabilities of laptops. Among the processors announced were the Ryzen AI 9 HX 370, which features 12 cores and 24 threads with a boost frequency of 5.1 GHz, and the Ryzen AI 9 365 with 10 cores and 20 threads, boosting up to 5.0 GHz...

In the desktop segment, the Ryzen 9000 Series processors, based on the "Zen 5" architecture, demonstrated an average 16% improvement in IPC performance over their predecessors built on the "Zen 4" architecture. The Ryzen 9 9950X stands out with 16 cores and 32 threads, reaching up to 5.7 GHz boost frequency and equipped with 80MB of cache... AMD also reaffirmed its commitment to the AM4 platform by introducing the Ryzen 9 5900XT and Ryzen 7 5800XT processors. These models are compatible with existing AM4 motherboards, providing an economical upgrade path for users.

The article adds that AMD also unveiled its Radeon PRO W7900 Dual Slot workstation graphics card — priced at $3,499 — "further broadening its impact on high-performance computing..."

"AMD also emphasized its strategic partnerships with leading OEMs such as Acer, ASUS, HP, Lenovo, and MSI, who are set to launch systems powered by these new AMD processors." And there's also a software collaboration with Microsoft, reportedly "to enhance the capabilities of AI PCs, thus underscoring AMD's holistic approach to integrating AI into everyday computing."

Hardware

Arm Says Its Next-Gen Mobile GPU Will Be Its Most 'Performant and Efficient' (theverge.com) 29

IP core designer Arm announced its next-generation CPU and GPU designs for flagship smartphones: the Cortex-X925 CPU and Immortalis G925 GPU. Both are direct successors to the Cortex-X4 and Immortalis G720 that currently power MediaTek's Dimensity 9300 chip inside flagship smartphones like the Vivo X100 and X100 Pro and Oppo Find X7. From a report: Arm changed the naming convention for its Cortex-X CPU design to highlight what it says is a much faster CPU design. It claims the X925's single-core performance is 36 percent faster than the X4 (when measured in Geekbench). Arm says it increased AI workload performance by 41 percent (time to token), with up to 3MB of private L2 cache. The Cortex-X925 brings a new generation of Cortex-A microarchitectures ("little" cores) with it, too: the Cortex-A725, which Arm says has 35 percent better performance efficiency than last-gen's A720, and a 15 percent more power-efficient Cortex-A520.

Arm's new Immortalis G925 GPU is its "most performant and efficient GPU" to date, it says. It's 37 percent faster on graphics applications compared to the last-gen G720, with ray-tracing performance on intricate objects improved by 52 percent and AI and ML workloads improved by 34 percent -- all while using 30 percent less power. For the first time, Arm will offer "optimized layouts" of its new CPU and GPU designs that it says will be easier for device makers to "drop" or implement into their own system on chip (SoC) layouts. Arm says this new physical implementation solution will help other companies get their devices to market faster, which, if true, means we could see more devices with Arm Cortex-X925 and / or Immortalis G925 than the few that shipped with its last-gen ones.

Nintendo

Ubuntu 24.04 Now Runs on the Nintendo Switch (Unofficially) (omgubuntu.co.uk) 6

"The fact it's possible at all is a credit to the ingenuity of the open-source community," writes the blog OMG Ubuntu: Switchroot is an open-source project that allows Android and Linux-based distros like Ubuntu to run on the Nintendo Switch — absolutely not something Nintendo approves of much less supports, endorses, or encourages, etc! I covered the loophole that made this possible back in 2018. Back then the NVIDIA Tegra X1-powered Nintendo Switch was still new and Linux support for much of the console's internal hardware in a formative state (a polite way to say 'not everything worked'). But as the popularity of Nintendo's handheld console ballooned (to understate it) so the 'alternative OS' Switch scene grew, and before long Linux support for Switch hardware was in full bloom...

A number of Linux for Switchroot (L4S) distributions have since been released, designated as Linux for Tegra (L4T) builds. As these can boot from a microSD card it's even possible to dualboot the Switch OS with Linux, which is neat! Recently, a fresh set of L4T Ubuntu images were released based on the newest Ubuntu 24.04 LTS release. These builds work on all Switch versions, from the OG (exploit-friendly) unit through to newer, patched models (where a modchip is required)...

I'm told all of the Nintendo Switch internal hardware now works under Linux, including Wi-Fi, Bluetooth, sleep mode, accelerated graphics, the official dock... Everything, basically. And despite it being a 7-year-old ARM device, the performance is said to remain decent.

"Upstream snafus have delayed the release of builds with GNOME Shell..."

Businesses

Nvidia Reports a 262% Jump In Sales, 10-1 Stock Split (cnbc.com) 11

Nvidia reported fiscal first-quarter earnings surpassing expectations with strong forecasts, indicating sustained demand for its AI chips. Following the news, the company's stock rose over 6% in extended trading. Nvidia also said it was splitting its stock 10 to 1. CNBC reports: Nvidia said it expected sales of $28 billion in the current quarter. Wall Street was expecting earnings per share of $5.95 on sales of $26.61 billion, according to LSEG. Nvidia reported net income for the quarter of $14.88 billion, or $5.98 per share, compared with $2.04 billion, or 82 cents, in the year-ago period. [...] Nvidia said its data center category rose 427% from the year-ago quarter to $22.6 billion in revenue. Nvidia CFO Colette Kress said in a statement that it was due to shipments of the company's "Hopper" graphics processors, which include the company's H100 GPU.

Nvidia also highlighted strong sales of its networking parts, which are increasingly important as companies build clusters of tens of thousands of chips that need to be connected. Nvidia said that it had $3.2 billion in networking revenue, primarily its Infiniband products, which was over three times higher than last year's sales. Nvidia, before it became the top supplier to big companies building AI, was known primarily as a company making hardware for 3D gaming. The company's gaming revenue was up 18% during the quarter to $2.65 billion, which Nvidia attributed to strong demand.

The company also sells chips for cars and chips for advanced graphics workstations, which remain much smaller than its data center business. The company reported $427 million in professional visualization sales, and $329 million in automotive sales. Nvidia said it bought back $7.7 billion worth of its shares and paid $98 million in dividends during the quarter. Nvidia also said that it's increasing its quarterly cash dividend from 4 cents per share to 10 cents on a pre-split basis. After the split, the dividend will be a penny a share.

Graphics

Microsoft Paint Is Getting an AI-Powered Image Generator (engadget.com) 41

Microsoft Paint is getting a new image generator tool called Cocreator that can generate images based on text prompts and doodles. Engadget reports: During a demo at its Surface event, the company showed off how Cocreator combines your own drawings with text prompts to create an image. There's also a "creativity slider" that allows you to control how much you want AI to take over compared with your original art. As Microsoft pointed out, the combination of text prompts and your own brush strokes enables faster edits. It could also help provide a more precise rendering than what you'd be able to achieve with DALL-E or another text-to-image generator alone.

Ubuntu

Ubuntu 24.10 to Default to Wayland for NVIDIA Users (omgubuntu.co.uk) 76

An anonymous reader shared this report from the blog OMG Ubuntu: Ubuntu first switched to using Wayland as its default display server in 2017 before reverting the following year. It tried again in 2021 and has stuck with it since. But while Wayland is what most of us now log into after installing Ubuntu, anyone doing so on a PC or laptop with an NVIDIA graphics card present instead logs into an Xorg/X11 session.

This is because NVIDIA's proprietary graphics drivers (which many, especially gamers, opt for to get the best performance, access to full hardware capabilities, etc) have not supported Wayland as well as they could have. Past tense as, thankfully, things have changed in the past few years. NVIDIA has warmed up to Wayland (partly because it has no choice, given that Wayland is now the standard rather than a 'maybe one day' solution, and partly because it wants to: opportunities/benefits/security).

With the NVIDIA + Wayland sitch' now in a better state than before — but not perfect — Canonical's engineers say they feel confident enough in the experience to make the Ubuntu Wayland session default for NVIDIA graphics card users in Ubuntu 24.10.

Supercomputing

Defense Think Tank MITRE To Build AI Supercomputer With Nvidia (washingtonpost.com) 44

An anonymous reader quotes a report from the Washington Post: A key supplier to the Pentagon and U.S. intelligence agencies is building a $20 million supercomputer with buzzy chipmaker Nvidia to speed deployment of artificial intelligence capabilities across the U.S. federal government, the MITRE think tank said Tuesday. MITRE, a federally funded, not-for-profit research organization that has supplied U.S. soldiers and spies with exotic technical products since the 1950s, says the project could improve everything from Medicare to taxes. "There's huge opportunities for AI to make government more efficient," said Charles Clancy, senior vice president of MITRE. "Government is inefficient, it's bureaucratic, it takes forever to get stuff done. ... That's the grand vision, is how do we do everything from making Medicare sustainable to filing your taxes easier?" [...] The MITRE supercomputer will be based in Ashburn, Va., and should be up and running late this year. [...]

Clancy said the planned supercomputer will run 256 Nvidia graphics processing units, or GPUs, at a cost of $20 million. This counts as a small supercomputer: The world's fastest supercomputer, Frontier in Tennessee, boasts 37,888 GPUs, and Meta is seeking to build one with 350,000 GPUs. But MITRE's computer will still eclipse Stanford's Natural Language Processing Group's 68 GPUs, and will be large enough to train large language models to perform AI tasks tailored for government agencies. Clancy said all federal agencies funding MITRE will be able to use this AI "sandbox." "AI is the tool that is solving a wide range of problems," Clancy said. "The U.S. military needs to figure out how to do command and control. We need to understand how cryptocurrency markets impact the traditional banking sector. ... Those are the sorts of problems we want to solve."


Hardware

Apple Announces M4 With More CPU Cores and AI Focus (arstechnica.com) 66

An anonymous reader quotes a report from Ars Technica: In a major shake-up of its chip roadmap, Apple has announced a new M4 processor for today's iPad Pro refresh, barely six months after releasing the first MacBook Pros with the M3 and not even two months after updating the MacBook Air with the M3. Apple says the M4 includes "up to" four high-performance CPU cores, six high-efficiency cores, and a 10-core GPU. Apple's high-level performance estimates say that the M4 has 50 percent faster CPU performance and four times as much graphics performance. Like the GPU in the M3, the M4 also supports hardware-accelerated ray-tracing to enable more advanced lighting effects in games and other apps. Due partly to its "second-generation" 3 nm manufacturing process, Apple says the M4 can match the performance of the M2 while using just half the power.

As with so much else in the tech industry right now, the M4 also has an AI focus; Apple says it's beefing up the 16-core Neural Engine (Apple's equivalent of the Neural Processing Unit that companies like Qualcomm, Intel, AMD, and Microsoft have been pushing lately). Apple says the M4 runs up to 38 trillion operations per second (TOPS), considerably ahead of Intel's Meteor Lake platform, though a bit short of the 45 TOPS that Qualcomm is promising with the Snapdragon X Elite and Plus series. The M3's Neural Engine is only capable of 18 TOPS, so that's a major step up for Apple's hardware. Apple's chips since 2017 have included some version of the Neural Engine, though to date, those have mostly been used to enhance and categorize photos, perform optical character recognition, enable offline dictation, and do other oddities. But it may be that Apple needs something faster for the kinds of on-device large language model-backed generative AI that it's expected to introduce in iOS and iPadOS 18 at WWDC next month.

A separate report from the Wall Street Journal says Apple is developing a custom chip to run AI software in datacenters. "Apple's server chip will likely be focused on running AI models, also known as inference, rather than in training AI models, where Nvidia is dominant," reports Reuters.

Further reading: Apple Quietly Kills the Old-school iPad and Its Headphone Jack
