Operating Systems

Linux Kernel 6.14 Is a Big Leap Forward In Performance, Windows Compatibility (zdnet.com) 34

An anonymous reader quotes a report from ZDNet, written by Steven Vaughan-Nichols: Despite the minor delay, Linux 6.14 arrives packed with cutting-edge features and improvements to power upcoming Linux distributions, such as the forthcoming Ubuntu 25.04 and Fedora 42. The big news for desktop users, especially those who like to play Windows games or run Windows programs on Linux, is the improved NTSYNC driver. This driver is designed to emulate Windows NT synchronization primitives. What that means for you and me is that it will significantly improve the performance of Windows programs running on Wine and Steam Play. [...] Gamers always want the best possible graphics performance, so they'll also be happy to see that Linux now supports the recently launched AMD RDNA 4 graphics cards. This includes support for the AMD Radeon RX 9070 XT and RX 9070 graphics cards. Combine this support with the recently improved open-source RADV driver, and AMD gamers should see the best speed yet on their gaming rigs.

Of course, the release is not just for gamers. Linux 6.14 also includes several AMD and Intel processor enhancements. These boosts focus on power management, thermal control, and compute performance optimizations, and are expected to improve overall system efficiency and performance. This release also comes with the AMDXDNA driver, which provides official support for AMD's neural processing units based on the XDNA architecture. This integration enables efficient execution of AI workloads, such as convolutional neural networks and large language models, directly on supported AMD hardware. While Rust has faced some difficulties in Linux in recent months, more Rust programming language abstractions have been integrated into the kernel, laying the groundwork for future drivers written in Rust. [...] Besides drivers, Miguel Ojeda, Rust for Linux's lead developer, said recently that the derive(CoercePointee) macro for smart pointers, introduced with Rust 1.84, is an "important milestone on the way to building a kernel that only uses stable Rust functions." This will also make integrating C and Rust code easier. We're getting much closer to Rust being grafted into Linux's tree.

In addition, Linux 6.14 supports Qualcomm's latest Snapdragon 8 Elite mobile processor, enhancing performance and stability for devices powered by this chipset. That support means you can expect to see much faster Android-based smartphones later this year. This release includes a patch for the so-called GhostWrite vulnerability, which can be used to root some RISC-V processors; this fix will block such attacks. Additionally, Linux 6.14 includes improvements to the copy-on-write Btrfs file system/logical volume manager. These are primarily new read-balancing methods, which offer flexibility for different RAID hardware configurations and workloads. Support for uncached buffered I/O also optimizes memory usage on systems with fast storage devices.
Linux 6.14 is available for download here.
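Regarding the derive(CoercePointee) milestone Ojeda mentions, here is a minimal sketch of the kind of unsized coercion the macro enables, purely as an illustration: it is ordinary userspace Rust, not kernel code, the Shape/Circle/MyArc names are invented for the example, and it assumes the stabilized derive behaves as documented for a #[repr(transparent)] wrapper around a standard smart pointer.

    use std::marker::CoercePointee;
    use std::sync::Arc;

    trait Shape {
        fn area(&self) -> f64;
    }

    struct Circle { radius: f64 }

    impl Shape for Circle {
        fn area(&self) -> f64 { std::f64::consts::PI * self.radius * self.radius }
    }

    // A thin smart-pointer wrapper. The derive generates the previously unstable
    // impls that let MyArc<Circle> coerce to MyArc<dyn Shape>.
    #[derive(CoercePointee)]
    #[repr(transparent)]
    struct MyArc<T: ?Sized>(Arc<T>);

    fn main() {
        let concrete: MyArc<Circle> = MyArc(Arc::new(Circle { radius: 2.0 }));
        let as_trait: MyArc<dyn Shape> = concrete; // unsized coercion via the derive
        println!("area = {:.2}", as_trait.0.area());
    }

For the kernel, the same mechanism is what lets Rust wrappers around reference-counted objects be used as trait objects without relying on unstable compiler features.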
AMD

Lisa Su Says Radeon RX 9000 Series Is AMD's Most Successful GPU Launch Ever (techspot.com) 32

"In a conversation with Tony Yu from Asus China, AMD CEO Lisa Su shared that the Radeon RX 9000 series graphics cards have quickly become a huge hit, breaking records as AMD's top-selling GPUs within just a week of release," writes Slashdot reader jjslash. TechSpot reports: AMD CEO Lisa Su has confirmed that the company's new Radeon RX 9000 graphics cards have been a massive success, selling 10 times more units than their predecessors in just one week on the market. Su also stated that more RDNA 4 cards are on the way, but did not confirm whether the lineup will include the rumored Radeon RX 9060. When asked about the limited availability of the new cards, Su said that AMD is ramping up production to ensure greater supply at retailers worldwide. She also expressed hope that increased availability would help stabilize pricing by discouraging scalping and price gouging.
Open Source

Developer Loads Steam On a $100 ARM Single Board Computer (interfacinglinux.com) 24

"There's no shortage of videos showing Steam running on expensive ARM single-board computers with discrete GPUs," writes Slashdot reader VennStone. "So I thought it would be worthwhile to make a guide for doing it on (relatively) inexpensive RK3588-powered single-board computers, using Box86/64 and Armbian." The guides I came across were out of date, had a bunch of extra steps thrown in, or were outright incorrect... Up first, we need to add the Box86 and Box64 ARM repositories [along with dependencies, ARMHF architecture, and the Mesa graphics driver]...
The guide closes with a multi-line script and advice to "Just close your eyes and run this. It's not pretty, but it will download the Steam Debian package, extract the needed bits, and set up a launch script." (And then the final step is sudo reboot now.)

"At this point, all you have to do is open a terminal, type 'steam', and tap Enter. You'll have about five minutes to wait... Check out the video to see how some of the tested games perform." At 720p, performance is all over the place, but the games I tested typically managed to stay above 30 FPS. This is better than I was expecting from a four-year-old SOC emulating x86 titles under ARM.

Is this a practical way to play your Steam games? Nope, not even a little bit. For now, this is merely an exercise in ludicrous neatness. Things might get a wee bit better, considering Collabora is working on upstream support for RK3588 and Valve is up to something ARM-related, but ya know, "Valve Time"...

"You might be tempted to enable Steam Play for your Windows games, but don't waste your time. I mean, you can try, but it ain't gonna work."
IT

Nvidia Sells RTX GPUs From a 'Food Truck' (pcworld.com) 33

Nvidia is selling its scarce RTX 5080 and 5090 graphics cards from a pop-up "food truck" at its GPU Technology Conference, where attendees paying over $1,000 for tickets can purchase the coveted hardware alongside merchandise. The company has only 2,000 cards available (1,000 each of RTX 5080 and 5090), released in small batches at random times during the three-day conference which concludes tomorrow.
GNOME

GNOME 48 Released (9to5linux.com) 60

prisoninmate writes: The GNOME 48 desktop environment has been released after six months of development, with major new features that have been anticipated for more than four years, such as dynamic triple buffering, HDR support, and much more. 9to5Linux reports:

"Highlights of GNOME 48 include dynamic triple buffering to boost the performance on low-end GPUs, such as Intel integrated graphics or Raspberry Pi computers, Wayland color management protocol support, new Adwaita fonts, HDR (High Dynamic Range) support, and a new Wellbeing feature with screen time tracking.

"GNOME 48 also introduces a new GNOME Display Control (gdctl) utility to view the active monitor configuration and set new monitor configuration using command line arguments, implements a11y keyboard monitoring support, adds output luminance settings, and it now centers new windows by default."

Transportation

GM Taps Nvidia To Boost Its Self-Driving Projects 11

General Motors is partnering with Nvidia to enhance its self-driving and manufacturing capabilities by leveraging Nvidia's AI chips, software, and simulation tools. "GM says it will apply several of Nvidia's products to its business, such as the Omniverse 3D graphics platform which will run simulations on virtual assembly lines with an eye on reducing downtime and improving efficiency," reports The Verge. "The automaker also plans to equip its next-generation vehicles with Nvidia's 'AI brain' for advanced driver assistance and autonomous driving. And it will employ the chipmaker's AI training software to make its vehicle assembly line robots better at certain tasks, like precision welding and material handling." From the report: GM already uses Nvidia's GPUs to train its AI software for simulation and validation. Today's announcement was about expanding those use cases into improving its manufacturing operations and autonomous vehicles, GM CEO Mary Barra said in a statement. (Dave Richardson, GM's senior VP of Software and Services Engineering, will be joining Nvidia's Norm Marks for a fireside chat at the conference.) "AI not only optimizes manufacturing processes and accelerates virtual testing but also helps us build smarter vehicles while empowering our workforce to focus on craftsmanship," Barra said. "By merging technology with human ingenuity, we unlock new levels of innovation in vehicle manufacturing and beyond."

GM will adopt Nvidia's in-car software products to build next-gen vehicles with autonomous driving capabilities. That includes the company's Drive AGX system-on-a-chip (SoC), similar to Tesla's Full Self-Driving chip or Intel's Mobileye EyeQ. The SoC runs the "safety-certified" DriveOS operating system, built on the Blackwell GPU architecture, which is capable of delivering 1,000 trillion operations per second (TOPS) of high-performance compute, the company says. [...] In a briefing with reporters, Ali Kani, Nvidia's vice president and general manager of automotive, described the chipmaking company's automotive business as still in its "infancy," with the expectation that it will only bring in $5 billion this year. (Nvidia reported over $130 billion in revenue in 2024 for all its divisions.)

Nvidia's chips are in less than 1 percent of the billions of cars on the road today, he added. But the future looks promising. The company is also announcing deals with Tier 1 auto supplier Magna, which helped build Sony's Afeela concept, to use Drive AGX in the company's next-generation advanced driver assist software. "We believe automotive is a trillion dollar opportunity for Nvidia," Kani said.
Graphics

GIMP 3.0 Released (9to5linux.com) 52

GIMP 3.0 has been released after over a decade of development. Highlights include a refined GTK3 interface with scroll wheel tab navigation, a new splash screen, improved HiDPI icon support, enhanced color management, a stable public API, and support for more file formats. 9to5Linux reports: GIMP 3.0 also brings improvements to non-destructive editing by introducing an optional "Merge Filters" checkbox at the bottom of NDE filters that merges down the filter immediately after it's committed, along with non-destructive filters on layer groups and the implementation of storing versions of filters in GIMP's XCF project files. Among other noteworthy changes, the GEGL and babl components have been updated with new features and many improvements, such as Inner Glow, Bevel, and GEGL Styles filters, some plugins saw small enhancements, and it's now possible to export images with different settings while leaving the original image unchanged.

There's also a new PDB call that allows Script-Fu writers to use labels to specify filter properties, a brand new named-argument syntax, support for loading 16-bits-per-channel LAB PSD files, support for loading DDS images with BC7 support, early-binding CMYK support, and support for PSB and JPEG-XL image formats. On top of that, GIMP 3.0 introduces new auto-expanding layer boundary and snapping options, an updated search pop-up to show the menu path for all entries while making individual filters searchable, a revamped alignment tool, and support for "layer sets," replacing the older concept of linked layers.
You can download GIMP 3.0 from the official website.
Open Source

Startup Claims Its Upcoming (RISC-V ISA) Zeus GPU is 10X Faster Than Nvidia's RTX 5090 (tomshardware.com) 69

"The number of discrete GPU developers from the U.S. and Western Europe shrank to three companies in 2025," notes Tom's Hardware, "from around 10 in 2000." (Nvidia, AMD, and Intel...) No company in the recent years — at least outside of China — was bold enough to engage into competition against these three contenders, so the very emergence of Bolt Graphics seems like a breakthrough. However, the major focuses of Bolt's Zeus are high-quality rendering for movie and scientific industries as well as high-performance supercomputer simulations. If Zeus delivers on its promises, it could establish itself as a serious alternative for scientific computing, path tracing, and offline rendering. But without strong software support, it risks struggling against dominant market leaders.
This week the Sunnyvale, California-based startup introduced its Zeus GPU platform designed for gaming, rendering, and supercomputer simulations, according to the article. "The company says that its Zeus GPU not only supports features like upgradeable memory and built-in Ethernet interfaces, but it can also beat Nvidia's GeForce RTX 5090 by around 10 times in path tracing workloads, according to a slide published by technology news site ServeTheHome." There is one catch: Zeus can only beat the RTX 5090 GPU in path tracing and FP64 compute workloads. It's not clear how well it will handle traditional rendering techniques, as that was less of a focus. In speaking with Bolt Graphics, the card does support rasterization, but there was less emphasis on that aspect of the GPU, and it may struggle to compete with the best graphics cards when it comes to gaming. And when it comes to data center options like Nvidia's Blackwell B200, it's an entirely different matter.

Unlike GPUs from AMD, Intel, and Nvidia that rely on proprietary instruction set architectures, Bolt's Zeus relies on the open-source RISC-V ISA, according to the published slides. The Zeus core relies on an open-source out-of-order general-purpose RVA23 scalar core mated with FP64 ALUs and the RVV 1.0 (RISC-V Vector Extension Version 1.0) that can handle 8-bit, 16-bit, 32-bit, and 64-bit data types as well as Bolt's additional proprietary extensions designed for acceleration of scientific workloads... Like many processors these days, Zeus relies on a multi-chiplet design... Unlike high-end GPUs that prioritize bandwidth, Bolt is evidently focusing on greater memory size to handle larger datasets for rendering and simulations. Also, built-in 400GbE and 800GbE ports to enable faster data transfer across networked GPUs indicates the data center focus of Zeus.

High-quality rendering, real-time path tracing, and compute are key focus areas for Zeus. As a result, even the entry-level Zeus 1c26-32 offers significantly higher FP64 compute performance than Nvidia's GeForce RTX 5090 — up to 5 TFLOPS vs. 1.6 TFLOPS — and considerably higher path tracing performance: 77 Gigarays vs. 32 Gigarays. Zeus also features a larger on-chip cache than Nvidia's flagship — up to 128MB vs. 96MB — and lower power consumption of 120W vs. 575W, making it more efficient for simulations, path tracing, and offline rendering. However, the RTX 5090 dominates in AI workloads with its 105 FP16 TFLOPS and 1,637 INT8 TFLOPS compared to the 10 FP16 TFLOPS and 614 INT8 TFLOPS offered by a single-chiplet Zeus...

The article emphasizes that Zeus "is only running in simulation right now... Bolt Graphics says that the first developer kits will be available in late 2025, with full production set for late 2026."

Thanks to long-time Slashdot reader arvn for sharing the news.
Microsoft

Microsoft Admits GitHub Hosted Malware That Infected Almost a Million Devices (theregister.com) 17

Microsoft has spotted a malvertising campaign that downloaded nastyware hosted on GitHub and exposed nearly a million devices to information thieves. From a report: Discovered by Microsoft Threat Intelligence late last year, the campaign saw pirate vid-streaming websites embed malvertising redirectors to generate pay-per-view or pay-per-click revenue from malvertising platforms. "These redirectors subsequently routed traffic through one or two additional malicious redirectors, ultimately leading to another website, such as a malware or tech support scam website, which then redirected to GitHub," according to Microsoft's threat research team.

GitHub hosted a first-stage payload that installed code that dropped two other payloads. One gathered system configuration info such as data on memory size, graphics capabilities, screen resolution, the operating system present, and user paths. Third-stage payloads varied but most "conducted additional malicious activities such as command and control (C2) to download additional files and to exfiltrate data, as well as defense evasion techniques."

Apple

Apple Unveils iPad Air With M3 Chip (apple.com) 42

Apple today announced a significant update to its iPad Air lineup, integrating the M3 chip previously reserved for higher-end devices. The new tablets, available in both 11-inch ($599) and 13-inch ($799) configurations, deliver substantial performance gains: nearly 2x faster than M1-equipped models and 3.5x faster than A14 Bionic versions.

The M3 brings Apple's advanced graphics architecture to the Air for the first time, featuring dynamic caching, hardware-accelerated mesh shading, and ray tracing. The chip includes an 8-core CPU delivering 35% faster multithreaded performance over M1, paired with a 9-core GPU offering 40% faster graphics. The Neural Engine processes AI workloads 60% faster than M1, the company said. Apple also introduced a redesigned Magic Keyboard ($269/$319) with function row and larger trackpad.
DRM

'Why Can't We Screenshot Frames From DRM-Protected Video on Apple Devices?' (daringfireball.net) 82

Apple users noticed a change in 2023, "when streaming platforms like Netflix, HBO Max, Amazon Prime, and the Criterion Channel imposed a quiet embargo on the screenshot," noted the film blog Screen Slate: At first, there were workarounds: users could continue to screenshot by using the browser Brave or by downloading extensions or third-party tools like Fireshot. But gradually, the digital-rights-management tech adapted and became more sophisticated. Today, it is nearly impossible to take a screenshot from the most popular streaming services, at least not on a Macintosh computer. The shift occurred without remark or notice to subscribers, and there's no clear explanation as to why or what spurred the change...

For PC users, this story takes a different, and happier, turn. With the use of Snipping Tool, a utility exclusive to Microsoft Windows, users are free to screen-grab content from all streaming platforms. This seems like a pointed oversight, a choice on the part of streamers to exclude Mac users (though they make up a tiny fraction of the market) because of their assumed cultural class.

"I'm not entirely sure what the technical answer to this is," tech blogger John Gruber wrote this weekend, "but on MacOS, it seemingly involves the GPU and video decoding hardware..." These DRM blackouts on Apple devices (you can't capture screenshots from DRM video on iPhones or iPads either) are enabled through the deep integration between the OS and the hardware, thus enabling the blackouts to be imposed at the hardware level. And I don't think the streaming services opt into this screenshot prohibition other than by "protecting" their video with DRM in the first place. If a video is DRM-protected, you can't screenshot it; if it's not, you can.

On the Mac, it used to be the case that DRM video was blacked-out from screen capture in Safari, but not in Chrome (or the dozens of various Chromium-derived browsers). But at some point a few years back, you stopped being able to capture screenshots from DRM videos in Chrome, too -- by default. But in Chrome's Settings page, under System, if you disable "Use graphics acceleration when available" and relaunch Chrome, boom, you can screenshot everything in a Chrome window, including DRM video...

What I don't understand is why Apple bothered supporting this in the first place for hardware-accelerated video (which is all video on iOS platforms -- there is no workaround like using Chrome with hardware acceleration disabled on iPhone or iPad). No one is going to create bootleg copies of DRM-protected video one screenshotted still frame at a time -- and even if they tried, they'd be capturing only the images, not the sound. And it's not like this "feature" in MacOS and iOS has put an end to bootlegging DRM-protected video content.

Gruber's conclusion? "This 'feature' accomplishes nothing of value for anyone, including the streaming services, but imposes a massive (and for most people, confusing and frustrating) hindrance on honest people simply trying to easily capture high-quality (as opposed to, say, using their damn phone to take a photograph of their reflective laptop display) screenshots of the shows and movies they're watching."
Movies

Blender-Rendered Movie 'Flow' Wins Oscar for Best Animated Feature, Beating Pixar (blender.org) 72

It's a feature-length film "rendered on a free and open-source software platform called Blender," reports Reuters. And it just won the Oscar for best animated feature film, beating movies from major studios like Disney/Pixar and Dreamworks.

In January Blender.org called Flow "the manifestation of Blender's mission, where a small, independent team with a limited budget is able to create a story that moves audiences worldwide, and achieve recognition with over 60 awards, including a Golden Globe for Best Animation and two Oscar nominations." The entire project cost just $3.7 million, reports NPR — though writer/director Gints Zilbalodis tells Blender.org that it took about five and a half years.

"I think a certain level of naivety is necessary when starting a project," Zilbalodis tells Blender. "If I had known how difficult it would be, I might never have started. But because I didn't fully grasp the challenges ahead, I just dove in and figured things out along the way..." Zilbalodis: [A]fter making a few shorts, I realized that I'm not good at drawing, and I switched to 3D because I could model things, and move the camera... After finishing my first feature Away, I decided to switch to Blender [from Maya] in 2019, mainly because of EEVEE... It took a while to learn some of the stuff, but it was actually pretty straightforward. Many of the animators in Flow took less than a week to switch to Blender...

I've never worked in a big studio, so I don't really know exactly how they operate. But I think that if you're working on a smaller indie-scale project, you shouldn't try to copy what big studios do. Instead, you should develop a workflow that best suits you and your smaller team.

You can get a glimpse of their animation style in Flow's official trailer.

NPR says that ultimately Flow's images "possess a kinetic elegance. They have the alluring immersiveness of a video game..."
AMD

AMD Reveals RDNA 4 GPU Architecture Powering Next Gen Radeon RX 9070 Cards (hothardware.com) 24

Long-time Slashdot reader MojoKid writes: AMD took the wraps off its next-gen RDNA 4 consumer graphics architecture Friday, which was designed to enhance efficiency over the previous generation, while also optimizing performance for today's more taxing ray-traced gaming and AI workloads. RDNA 4 features next-generation Ray Tracing engines, dedicated hardware for AI and ML workloads, better bandwidth utilization, and multimedia improvements for both gaming and content creation. AMD's 3rd generation Ray Accelerators in RDNA 4 offer 2x the peak throughput of RDNA 3 and add support for a new feature called Oriented Bounding Boxes, which results in more efficient GPU utilization. 3rd Generation Matrix Accelerators are also present, which offer improved performance, along with support for 8-bit float data types with structured sparsity.

The first cards featuring RDNA 4, the Radeon RX 9070 and 9070 XT, go on sale next week with very competitive MSRPs below $600, and are expected to do battle with NVIDIA's GeForce RTX 5070-class GPUs.

The article calls it "a significant step forward" for AMD, adding that next week is "going to be very busy around here. NVIDIA is launching the final, previously announced member of the RTX 50 series and AMD will unleash the 9070 and 9070 XT."
AI

OpenAI CEO Sam Altman Says the Company Is 'Out of GPUs' (techcrunch.com) 53

An anonymous reader quotes a report from TechCrunch: OpenAI CEO Sam Altman said that the company was forced to stagger the rollout of its newest model, GPT-4.5, because OpenAI is "out of GPUs." In a post on X, Altman said that GPT-4.5, which he described as "giant" and "expensive," will require "tens of thousands" more GPUs before additional ChatGPT users can gain access. GPT-4.5 will come first to subscribers to ChatGPT Pro starting Thursday, followed by ChatGPT Plus customers next week.

Perhaps in part due to its enormous size, GPT-4.5 is wildly expensive. OpenAI is charging $75 per million tokens (~750,000 words) fed into the model and $150 per million tokens generated by the model. That's 30x the input cost and 15x the output cost of OpenAI's workhorse GPT-4o model. "We've been growing a lot and are out of GPUs," Altman wrote. "We will add tens of thousands of GPUs next week and roll it out to the Plus tier then. [...] This isn't how we want to operate, but it's hard to perfectly predict growth surges that lead to GPU shortages."
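For a rough sense of what those rates mean per request, here is a tiny back-of-the-envelope calculator using the quoted prices (the token counts in main are made up for illustration):

    // Cost at the quoted GPT-4.5 API prices:
    // $75 per million input tokens, $150 per million output tokens.
    fn gpt45_cost_usd(input_tokens: u64, output_tokens: u64) -> f64 {
        const INPUT_USD_PER_MILLION: f64 = 75.0;
        const OUTPUT_USD_PER_MILLION: f64 = 150.0;
        (input_tokens as f64 / 1_000_000.0) * INPUT_USD_PER_MILLION
            + (output_tokens as f64 / 1_000_000.0) * OUTPUT_USD_PER_MILLION
    }

    fn main() {
        // A 10,000-token prompt with a 1,000-token reply: $0.75 + $0.15 = $0.90.
        println!("${:.2}", gpt45_cost_usd(10_000, 1_000));
    }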

Businesses

Technicolor Begins To Shut Down Operations (variety.com) 22

Technicolor Group has filed for a court recovery procedure in France after failing to secure new investors, putting its VFX brands, including MPC, The Mill, Mikros Animation, and Technicolor Games, at risk of closure. Variety reports: A total shutdown of MPC and Technicolor's operations would affect thousands of visual effects workers in countries including the U.S., UK, Canada, and India. The turn in business has raised the alarm and sparked sadness within the VFX community. The memo from Technicolor CEO Caroline Parot explains, "In each country, the appropriate framework for orderly protection and way forward is currently being put in place to allow, when possible, to remain in business continuity."

Technicolor has already started to shut down U.S. operations. On Friday, it began alerting customers and employees, sending U.S. employees a WARN notice as required by law for large companies that anticipate closings and mass layoffs. At least one recovery effort has already started for roughly 100 U.S. employees of The Mill. The creative leadership and most of the creative staff of what was Technicolor's The Mill U.S. are joining forces with Dream Machine FX to launch a new venture, Arc Creative, Variety reported exclusively on Monday. A statement from the artists explains they are working to launch the new entity amid "the complexities of Technicolor's Chapter 7 proceedings."

Questions remain about how studios will finish upcoming projects that are currently housed at MPC, which include Disney's live-action remake of "Lilo and Stitch" and Paramount's "Mission: Impossible -- The Final Reckoning," as well as Mikros' work, such as Paramount and Nickelodeon's upcoming "Teenage Mutant Ninja Turtles" sequel.

Hardware

Framework Moves Into Desktops, 2-In-1 Laptops (tomshardware.com) 57

At its "Second Gen" event today, Framework detailed three new computers: an updated Framework Laptop 13 with AMD Ryzen AI 300, a 4.5-liter Mini-ITX desktop powered by Ryzen AI Max, and a colorful, convertible Framework Laptop 12 designed with students in mind. The latter is defined by Framework as a "defining product." Tom's Hardware reports: Framework Desktop: The Framework Desktop is a 4.5L Mini-ITX machine using AMD's Ryzen AI Max "Strix Halo" chips with Radeon 8060S graphics. While this is a mobile chip, Framework says putting it in a desktop chassis gets it to 120W sustained power and 140W boost "while staying quiet and cool." Framework says this should allow 1440p gaming on intense titles, as well as workstation-class projects and local AI. [...] The base model, with a Ryzen AI Max 385 and 32GB of RAM, starts at $1,099, while the top-end machine with a Ryzen AI Max+ 395 with 128GB of RAM begins at $1,999. Framework is only doing "DIY" editions here, so you'll have to get your own storage drive and bring your own operating system (the company is calling it "the easiest PC you'll ever build"). The mainboard on its own will be available from $799. Pre-orders are open now, and Framework expects to ship sometime in Q3.

Framework Laptop 12: The Laptop 12 is designed to bring the flexibility of the Laptop 13 but make it smaller, cheaper, and in more colors (with an optional stylus to match). These machines are made of ABS plastic molded in thermoplastic polyurethane, all around a metal frame. Framework says that it's "our easiest product ever to repair," but that more information on that will come closer to its launch in mid-2025. I'm really looking forward to this repair guide. It comes in five colorways: lavender, sage, gray, black, and bubblegum. The laptop will come with 13th Gen Intel Core i3 and i5 processors, which aren't the latest, but better than entry-level junk. You'll get up to 48GB of RAM, 2TB of storage, and Wi-Fi 6E. It has a 1920 x 1200 touch screen that the company claims will surpass 400 nits of brightness. There's no pricing information yet, and Framework says there's more to share on pricing and specs later in the year. Pre-orders will open in April ahead of the mid-year launch.

Framework Laptop 13: The Framework Laptop 13 is getting a significant refresh with AMD Ryzen AI 300 Series. It doesn't look all that different on the outside, with a 13.5-inch design that largely resembles the one from way back in 2021. But there are new features. Beyond the processors, the Framework Laptop 13 is getting bumped up to Wi-Fi 7 and is getting a new thermal system, a "next-generation" keyboard, and new colorways for the Expansion Cards and bezels (though I still don't know why you would want a bezel in anything other than black). [...] The new Framework Laptop 13 with AMD Ryzen AI 300 starts at $899 for a DIY Edition without storage or an OS, and $1,099 for a pre-built model. If you're buying the mainboard to put in an old system, that's $449. (Framework is keeping the Ryzen 7040 systems around starting at $749). No word for now on any new Intel models.

Microsoft

Microsoft Trims More CPUs From Windows 11 Compatibility List (theregister.com) 95

Microsoft has updated its CPU compatibility list for Windows 11 24H2, excluding pre-11th-generation Intel processors for OEMs building new PCs. The Register reports: Windows 11 24H2 has been available to customers for months, yet Microsoft felt compelled in its February update to confirm that builders, specifically, must use Intel's 11th-generation or later silicon when building brand new PCs to run its most recent OS iteration. "These processors meet the design principles around security, reliability, and the minimum system requirements for Windows 11," Microsoft says.

Intel's 11th-generation chips arrived in 2020 and were discontinued last year. It would be surprising, if not unheard of, for OEMs to build machines with unsupported chips. Intel has already transitioned many pre-11th-generation chips to "a legacy software support model," so Microsoft's decision to omit the chips from the OEM list is understandable. However, this could be seen as a creeping problem. Chips made earlier than that were, until very recently, present in the list of supported Intel processors for Windows 11 22H2 and 23H2.

This new OEM list may add to worries of some users looking at the general hardware compatibility specs for Windows 11 and wondering if the latest information means that even the slightly newer hardware in their org's fleet will soon no longer meet the requirements of Microsoft's flagship operating system. It's a good question, and the answer -- currently -- appears to be that those "old" CPUs are still suitable. Microsoft has a list of hardware compatibility requirements that customers can check, and they have not changed much since the outcry when they were first published.

Graphics

Nvidia Ends 32-Bit CUDA App Support For GeForce RTX 50 Series (tomshardware.com) 45

Nvidia has confirmed on its forums that the RTX 50 series GPUs no longer support 32-bit PhysX. Tom's Hardware reports: As far as we know, there are no 64-bit games with integrated PhysX technology, thus terminating the tech entirely on RTX 50 series GPUs and newer. RTX 40 series and older will still be able to run 32-bit CUDA applications and thus PhysX, but regardless, the technology is now officially retired, starting with Blackwell. [...]

The only way now to run PhysX on RTX 50 series GPUs (or newer) is to install a secondary RTX 40 series or older graphics card and slave it to PhysX duty in the Nvidia control panel. As far as we are aware, Nvidia has not disabled this sort of functionality. But the writing is on the wall for PhysX, and we doubt there will be any future games that attempt to use the API.

Graphics

Why A Maintainer of the Linux Graphics Driver Nouveau Stepped Down (phoronix.com) 239

For over a decade Karol Herbst has been a developer on the open-source Nouveau driver, a reverse-engineered NVIDIA graphics driver for Linux. "He went on to become employed by Red Hat," notes Phoronix. "While he's known more these days for his work on the Mesa 3D Graphics Library and the Rusticl OpenCL driver for it, he's still remained a maintainer of the Nouveau kernel driver."

But on Saturday, Herbst stepped down as a nouveau kernel maintainer, in a mailing list message that begins "I was pondering with myself for a while if I should just make it official that I'm not really involved in the kernel community anymore, neither as a reviewer, nor as a maintainer." (Another message begins "I often thought about at least contributing some patches again once I find the time, but...")

The resignation message hints at some long-running unhappiness. "I got burned out enough by myself caring about the bits I maintained, but eventually I had to realize my limits. The obligation I felt was eating me from inside. It stopped being fun at some point and I reached a point where I simply couldn't continue the work I was so motivated doing as I've did in the early days." And it points to one specific discussion on the kernel mailing list on February 8th as "The moment I made up my mind."

It happened in a thread about whether Rust would create difficulty for maintainers. (Someone had posted that "The all powerful sub-system maintainer model works well if the big technology companies can employ omniscient individuals in these roles, but those types are a bit hard to come by.") In response, someone else had posted "I'll let you in a secret. The maintainers are not 'all-powerful'. We are the 'thin blue line' that is trying to keep the code to be maintainable and high quality. Like most leaders of volunteer organization, whether it is the Internet Engineering Task Force (the standards body for the Internet), we actually have very little power. We can not *command* people to work on retiring technical debt, or to improve testing infrastructure, or work on some particular feature that we'd very like for our users. All we can do is stop things from being accepted..."

Saturday Herbst wrote: The moment I made up my mind about this was reading the following words written by a maintainer within the kernel community:

"we are the thin blue line"

This isn't okay. This isn't creating an inclusive environment. This isn't okay with the current political situation especially in the US. A maintainer speaking those words can't be kept. No matter how important or critical or relevant they are. They need to be removed until they learn. Learn what those words mean for a lot of marginalized people. Learn about what horrors it evokes in their minds.

I can't in good faith remain to be part of a project and its community where those words are tolerated. Those words are not technical, they are a political statement. Even if unintentionally, such words carry power, they carry meanings one needs to be aware of. They do cause an immense amount of harm.

The phrase thin blue line "typically refers to the concept of the police as the line between law-and-order and chaos," according to Wikipedia, but more recently became associated with a "countermovement" to the Black Lives Matter movement and "a number of far-right movements in the U.S."

Phoronix writes: Lyude Paul and Danilo Krummrich, both of Red Hat, remain Nouveau kernel maintainers. Red Hat developers are also working on developing NOVA as the new Rust-based open-source NVIDIA kernel driver leveraging the GSP interface for Turing GPUs and newer.
AMD

Nvidia Delays the RTX 5070 Till After AMD's Reveal (theverge.com) 38

An anonymous reader shares a report: As always, the most important Nvidia graphics card is the one you can actually buy, and Nvidia's talked a big game for its RTX 5070, making the dubious but nuanced claim it can deliver RTX 4090 performance for just $549. On February 28th, AMD will get its chance to intercept with the Radeon RX 9070 and 9070 XT, in a streaming event it just announced today. But Nvidia has now made its own wiggle room, delaying the launch of the RTX 5070 from February to March 5th, its product page reveals today. Nvidia will ship its $749 RTX 5070 Ti ahead of AMD's event, though, on February 20th, a week from today.
