AMD

AMD Unveils Ryzen AI and 9000 Series Processors, Plus Radeon PRO W7900 Dual Slot (betanews.com)

The highlight of AMD's presentation Sunday at Computex 2024 was "the introduction of AMD's Ryzen AI 300 Series processors for laptops and the Ryzen 9000 Series for desktops," writes Slashdot reader BrianFagioli (sharing his report at Beta News): AMD's Ryzen AI 300 Series processors, designed for next-generation AI laptops, come with AMD's latest XDNA 2 architecture. This includes a Neural Processing Unit (NPU) that delivers 50 TOPS of AI processing power, significantly enhancing the AI capabilities of laptops. Among the processors announced were the Ryzen AI 9 HX 370, which features 12 cores and 24 threads with a boost frequency of 5.1 GHz, and the Ryzen AI 9 365 with 10 cores and 20 threads, boosting up to 5.0 GHz...

In the desktop segment, the Ryzen 9000 Series processors, based on the "Zen 5" architecture, demonstrated an average 16% improvement in IPC performance over their predecessors built on the "Zen 4" architecture. The Ryzen 9 9950X stands out with 16 cores and 32 threads, reaching up to 5.7 GHz boost frequency and equipped with 80MB of cache... AMD also reaffirmed its commitment to the AM4 platform by introducing the Ryzen 9 5900XT and Ryzen 7 5800XT processors. These models are compatible with existing AM4 motherboards, providing an economical upgrade path for users.

The article adds that AMD also unveiled its Radeon PRO W7900 Dual Slot workstation graphics card — priced at $3,499 — "further broadening its impact on high-performance computing...

"AMD also emphasized its strategic partnerships with leading OEMs such as Acer, ASUS, HP, Lenovo, and MSI, who are set to launch systems powered by these new AMD processors." And there's also a software collaboration with Microsoft, reportedly "to enhance the capabilities of AI PCs, thus underscoring AMD's holistic approach to integrating AI into everyday computing."
Hardware

Arm Says Its Next-Gen Mobile GPU Will Be Its Most 'Performant and Efficient' (theverge.com)

IP core designer Arm announced its next-generation CPU and GPU designs for flagship smartphones: the Cortex-X925 CPU and Immortalis G925 GPU. Both are direct successors to the Cortex-X4 and Immortalis G720 that currently power MediaTek's Dimensity 9300 chip inside flagship smartphones like the Vivo X100 and X100 Pro and the Oppo Find X7. From a report: Arm changed the naming convention for its Cortex-X CPU design to highlight what it says is a much faster CPU design. It claims the X925's single-core performance is 36 percent faster than the X4's (as measured in Geekbench), and that AI workload performance, measured as time to token, improves by 41 percent, helped by up to 3MB of private L2 cache. The Cortex-X925 brings a new generation of Cortex-A microarchitectures ("little" cores) with it, too: the Cortex-A725, which Arm says has 35 percent better performance efficiency than last gen's A720, and a Cortex-A520 that is 15 percent more power-efficient.

Arm's new Immortalis G925 GPU is its "most performant and efficient GPU" to date, it says. It's 37 percent faster in graphics applications than the last-gen G720, with ray-tracing performance on intricate objects improved by 52 percent and AI and ML workloads improved by 34 percent -- all while using 30 percent less power. For the first time, Arm will offer "optimized layouts" of its new CPU and GPU designs that it says will be easier for device makers to "drop," or implement, into their own system-on-chip (SoC) layouts. Arm says this new physical implementation solution will help other companies get their devices to market faster, which, if true, means we could see more devices with the Arm Cortex-X925 and/or Immortalis G925 than the few that shipped with its last-gen designs.

Nintendo

Ubuntu 24.04 Now Runs on the Nintendo Switch (Unofficially) (omgubuntu.co.uk)

"The fact it's possible at all is a credit to the ingenuity of the open-source community," writes the blog OMG Ubuntu: Switchroot is an open-source project that allows Android and Linux-based distros like Ubuntu to run on the Nintendo Switch — absolutely not something Nintendo approves of much less supports, endorses, or encourages, etc! I covered the loophole that made this possible back in 2018. Back then the NVIDIA Tegra X1-powered Nintendo Switch was still new and Linux support for much of the console's internal hardware in a formative state (a polite way to say 'not everything worked'). But as the popularity of Nintendo's handheld console ballooned (to understate it) so the 'alternative OS' Switch scene grew, and before long Linux support for Switch hardware was in full bloom...

A number of Linux for Switchroot (L4S) distributions have since been released, designated as Linux for Tegra (L4T) builds. As these can boot from a microSD card, it's even possible to dual-boot the Switch OS with Linux, which is neat! Recently, a fresh set of L4T Ubuntu images were released based on the newest Ubuntu 24.04 LTS release. These builds work on all Switch versions, from the OG (exploit-friendly) unit through to newer, patched models (where a modchip is required)...

I'm told all of the Nintendo Switch internal hardware now works under Linux, including Wi-Fi, Bluetooth, sleep mode, accelerated graphics, the official dock... Everything, basically. And despite the Switch being a 7-year-old ARM device, performance is said to remain decent.

"Upstream snafus have delayed the release of builds with GNOME Shell..."
Businesses

Nvidia Reports a 262% Jump In Sales, 10-1 Stock Split (cnbc.com)

Nvidia reported fiscal first-quarter earnings surpassing expectations with strong forecasts, indicating sustained demand for its AI chips. Following the news, the company's stock rose over 6% in extended trading. Nvidia also said it was splitting its stock 10 to 1. CNBC reports: Nvidia said it expected sales of $28 billion in the current quarter. Wall Street was expecting earnings per share of $5.95 on sales of $26.61 billion, according to LSEG. Nvidia reported net income for the quarter of $14.88 billion, or $5.98 per share, compared with $2.04 billion, or 82 cents per share, in the year-ago period. [...] Nvidia said its data center category rose 427% from the year-ago quarter to $22.6 billion in revenue. Nvidia CFO Colette Kress said in a statement that it was due to shipments of the company's "Hopper" graphics processors, which include the company's H100 GPU.

Nvidia also highlighted strong sales of its networking parts, which are increasingly important as companies build clusters of tens of thousands of chips that need to be connected. Nvidia said that it had $3.2 billion in networking revenue, primarily from its InfiniBand products, over three times higher than last year's sales. Before it became the top supplier to big companies building AI, Nvidia was known primarily as a company making hardware for 3D gaming. The company's gaming revenue was up 18% during the quarter to $2.65 billion, which Nvidia attributed to strong demand.

The company also sells chips for cars and chips for advanced graphics workstations, which remain much smaller than its data center business. The company reported $427 million in professional visualization sales, and $329 million in automotive sales. Nvidia said it bought back $7.7 billion worth of its shares and paid $98 million in dividends during the quarter. Nvidia also said that it's increasing its quarterly cash dividend from 4 cents per share to 10 cents on a pre-split basis. After the split, the dividend will be a penny a share.
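
The split and dividend figures above are easy to sanity-check; here's a minimal sketch, with the share count implied (not stated) by the reported net income and EPS:

```python
# Back-of-the-envelope math for Nvidia's 10-for-1 split and dividend change.
# Figures come from the quarter reported above; the implied share count is
# an estimate derived from net income / EPS, not an Nvidia-stated number.

net_income = 14.88e9          # dollars
eps = 5.98                    # dollars per share
split_ratio = 10              # 10-for-1 stock split

implied_shares = net_income / eps
print(f"Implied shares outstanding: {implied_shares / 1e9:.2f}B")  # ~2.49B

dividend_pre_split = 0.10     # dollars per share, raised from $0.04
dividend_post_split = dividend_pre_split / split_ratio
print(f"Post-split dividend: ${dividend_post_split:.2f} per share")  # $0.01
```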

Graphics

Microsoft Paint Is Getting an AI-Powered Image Generator (engadget.com)

Microsoft Paint is getting a new image generator tool called Cocreator that can generate images based on text prompts and doodles. Engadget reports: During a demo at its Surface event, the company showed off how Cocreator combines your own drawings with text prompts to create an image. There's also a "creativity slider" that allows you to control how much you want AI to take over compared with your original art. As Microsoft pointed out, the combination of text prompts and your own brush strokes enables faster edits. It could also help provide a more precise rendering than what you'd be able to achieve with DALL-E or another text-to-image generator alone.
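
Microsoft hasn't detailed how Cocreator works internally, but a "creativity slider" maps naturally onto the denoising-strength parameter found in open-source image-to-image diffusion pipelines. Here's a minimal sketch using Hugging Face's diffusers library; the model choice, file names, and slider-to-strength mapping are illustrative assumptions, not Microsoft's implementation:

```python
# Hypothetical analogue of a "creativity slider": mapping it onto the
# denoising strength of an image-to-image diffusion pipeline. This is NOT
# Cocreator's actual implementation, just the common open-source equivalent.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # assumes a CUDA-capable GPU is available

doodle = Image.open("my_paint_doodle.png").convert("RGB").resize((512, 512))
creativity = 0.7  # slider: 0.0 keeps your drawing, 1.0 lets the AI take over

result = pipe(
    prompt="a watercolor landscape with mountains at sunset",
    image=doodle,
    strength=creativity,   # how far the model may drift from the input image
    guidance_scale=7.5,
).images[0]
result.save("cocreated.png")
```
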
Ubuntu

Ubuntu 24.10 to Default to Wayland for NVIDIA Users (omgubuntu.co.uk)

An anonymous reader shared this report from the blog OMG Ubuntu: Ubuntu first switched to using Wayland as its default display server in 2017 before reverting the following year. It tried again in 2021 and has stuck with it since. But while Wayland is what most of us now log into after installing Ubuntu, anyone doing so on a PC or laptop with an NVIDIA graphics card present instead logs into an Xorg/X11 session.

This is because NVIDIA's proprietary graphics drivers (which many, especially gamers, opt for to get the best performance, access to full hardware capabilities, etc.) have not supported Wayland as well as they could have. Past tense because, thankfully, things have changed in the past few years. NVIDIA has warmed up to Wayland (partly because it has no choice, given that Wayland is now the standard and X11 the 'maybe one day' option, and partly because it wants to: opportunities, benefits, security).

With the NVIDIA + Wayland sitch' now in a better state than before — but not perfect — Canonical's engineers say they feel confident enough in the experience to make the Ubuntu Wayland session default for NVIDIA graphics card users in Ubuntu 24.10.

Supercomputing

Defense Think Tank MITRE To Build AI Supercomputer With Nvidia (washingtonpost.com)

An anonymous reader quotes a report from the Washington Post: A key supplier to the Pentagon and U.S. intelligence agencies is building a $20 million supercomputer with buzzy chipmaker Nvidia to speed deployment of artificial intelligence capabilities across the U.S. federal government, the MITRE think tank said Tuesday. MITRE, a federally funded, not-for-profit research organization that has supplied U.S. soldiers and spies with exotic technical products since the 1950s, says the project could improve everything from Medicare to taxes. "There's huge opportunities for AI to make government more efficient," said Charles Clancy, senior vice president of MITRE. "Government is inefficient, it's bureaucratic, it takes forever to get stuff done. ... That's the grand vision, is how do we do everything from making Medicare sustainable to filing your taxes easier?" [...] The MITRE supercomputer will be based in Ashburn, Va., and should be up and running late this year. [...]

Clancy said the planned supercomputer will run 256 Nvidia graphics processing units, or GPUs, at a cost of $20 million. This counts as a small supercomputer: The world's fastest supercomputer, Frontier in Tennessee, boasts 37,888 GPUs, and Meta is seeking to build one with 350,000 GPUs. But MITRE's computer will still eclipse Stanford's Natural Language Processing Group's 68 GPUs, and will be large enough to train large language models to perform AI tasks tailored for government agencies. Clancy said all federal agencies funding MITRE will be able to use this AI "sandbox." "AI is the tool that is solving a wide range of problems," Clancy said. "The U.S. military needs to figure out how to do command and control. We need to understand how cryptocurrency markets impact the traditional banking sector. ... Those are the sorts of problems we want to solve."
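
For a sense of scale, the GPU counts cited above line up as follows (a quick sketch using only the numbers in the report):

```python
# Relative scale of the GPU clusters mentioned in the report.
clusters = {
    "Stanford NLP Group": 68,
    "MITRE (planned)": 256,
    "Frontier (ORNL)": 37_888,
    "Meta (planned)": 350_000,
}
baseline = clusters["MITRE (planned)"]
for name, gpus in clusters.items():
    print(f"{name:>20}: {gpus:>7,} GPUs ({gpus / baseline:.2f}x MITRE)")
```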

Hardware

Apple Announces M4 With More CPU Cores and AI Focus (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: In a major shake-up of its chip roadmap, Apple has announced a new M4 processor for today's iPad Pro refresh, barely six months after releasing the first MacBook Pros with the M3 and not even two months after updating the MacBook Air with the M3. Apple says the M4 includes "up to" four high-performance CPU cores, six high-efficiency cores, and a 10-core GPU. Apple's high-level performance estimates say that the M4 has 50 percent faster CPU performance and four times as much graphics performance as the M2. Like the GPU in the M3, the M4 also supports hardware-accelerated ray-tracing to enable more advanced lighting effects in games and other apps. Due partly to its "second-generation" 3 nm manufacturing process, Apple says the M4 can match the performance of the M2 while using just half the power.

As with so much else in the tech industry right now, the M4 also has an AI focus; Apple says it's beefing up the 16-core Neural Engine (Apple's equivalent of the Neural Processing Unit that companies like Qualcomm, Intel, AMD, and Microsoft have been pushing lately). Apple says the M4 runs up to 38 trillion operations per second (TOPS), considerably ahead of Intel's Meteor Lake platform, though a bit short of the 45 TOPS that Qualcomm is promising with the Snapdragon X Elite and Plus series. The M3's Neural Engine is only capable of 18 TOPS, so that's a major step up for Apple's hardware. Apple's chips since 2017 have included some version of the Neural Engine, though to date those have mostly been used to enhance and categorize photos, perform optical character recognition, enable offline dictation, and handle other odds and ends. But it may be that Apple needs something faster for the kinds of on-device large language model-backed generative AI that it's expected to introduce in iOS and iPadOS 18 at WWDC next month.
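
Since a TOPS is just a trillion operations per second (and vendor-claimed figures may assume different numeric precisions), the NPU numbers cited in this story and the AMD story above can be lined up directly; a quick sketch:

```python
# TOPS = trillions of operations per second. All figures are vendor claims
# as cited in the stories above, and may not be directly comparable if
# measured at different precisions (e.g., INT8 vs. INT4).
claimed_tops = {
    "Apple M3 Neural Engine": 18,
    "Apple M4 Neural Engine": 38,
    "Qualcomm Snapdragon X Elite": 45,
    "AMD Ryzen AI 300 (XDNA 2)": 50,
}
for chip, tops in sorted(claimed_tops.items(), key=lambda kv: kv[1]):
    ops_per_second = tops * 1e12
    print(f"{chip:32} {tops:3d} TOPS = {ops_per_second:.1e} ops/s")
```
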
A separate report from the Wall Street Journal says Apple is developing a custom chip to run AI software in data centers. "Apple's server chip will likely be focused on running AI models, also known as inference, rather than on training AI models, where Nvidia is dominant," reports Reuters.

Further reading: Apple Quietly Kills the Old-school iPad and Its Headphone Jack
Games

Veteran PC Game 'Sopwith' Celebrates 40th Anniversary (github.io)

Longtime Slashdot reader sfraggle writes: Biplane shoot-'em-up Sopwith is celebrating 40 years today since its first release back in 1984. The game is one of the oldest PC games still in active development, having originated as an MS-DOS game for the original IBM PC. The 40th anniversary site has a detailed history of how the game was written as a tech demo for the now-defunct Imaginet networking system. There is also a video interview with its original authors. "The game involves piloting a Sopwith biplane, attempting to bomb enemy buildings while avoiding fire from enemy planes and various other obstacles," reads the Wiki page. "Sopwith uses four-color CGA graphics and music and sound effects use the PC speaker. A sequel with the same name, but often referred to as Sopwith 2, was released in 1985."

You can play Sopwith in your browser here.
Ubuntu

Ubuntu 24.04 Yields a 20% Performance Advantage Over Windows 11 On Ryzen 7 Framework Laptop (phoronix.com)

Michael Larabel reports via Phoronix: With the Framework 16 laptop, one of the performance pieces I've been meaning to carry out has been seeing how well Linux performs against Microsoft Windows 11 on this AMD Ryzen 7 7840HS-powered modular/upgradeable laptop. Recently getting around to it in my benchmarking queue, I also compared the performance of Ubuntu 23.10 and the near-final Ubuntu 24.04 LTS on this laptop against a fully updated Microsoft Windows 11 installation. The Framework 16 review unit, as a reminder, was configured with the 8-core / 16-thread AMD Ryzen 7 7840HS Zen 4 SoC with Radeon RX 7700S graphics, a 512GB SN810 NVMe SSD, MediaTek MT7922 WiFi, and a 2560 x 1600 display.

In the few months of testing the Framework 16, predominantly under Linux, it's been working out very well. Having a Windows 11 partition as shipped by Framework also made for an interesting comparison against Ubuntu 23.10 and Ubuntu 24.04 after updating that install. The same Framework 16 AMD laptop was used throughout all of the testing for looking at the out-of-the-box performance across Microsoft Windows 11, Ubuntu 23.10, and the near-final state of Ubuntu 24.04. [...]

Out of 101 benchmarks carried out on all three operating systems with the Framework 16 laptop, Ubuntu 24.04 was the fastest in 67% of those tests, the prior Ubuntu 23.10 led in 22% (typically with slim margins to 24.04), and then Microsoft Windows 11 was the front-runner just 10% of the time... If taking the geomean of all 101 benchmark results, Ubuntu 23.10 was 16% faster than Microsoft Windows 11 while Ubuntu 24.04 enhanced the Ubuntu Linux performance by 3% to yield a 20% advantage over Windows 11 on this AMD Ryzen 7 7840HS laptop. Ubuntu 24.04 is looking very good in the performance department and will see its stable release next week.
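
The headline figures come from a geometric mean, the usual way to aggregate benchmark ratios because the result doesn't depend on which system you pick as the baseline. Here is a minimal sketch of that calculation, with illustrative placeholder speedups rather than Phoronix's raw data:

```python
# Aggregating benchmark results with a geometric mean, as Phoronix does.
# The speedup values below are illustrative placeholders, not real data.
import math

def geomean(values):
    """Geometric mean: exp of the average of the logs."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Per-test speedups of Ubuntu 24.04 relative to Windows 11
# (a ratio > 1 means Ubuntu was faster on that test).
speedups = [1.35, 1.05, 0.92, 1.48, 1.22, 1.10]
print(f"Geomean speedup: {geomean(speedups):.2f}x")
```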

PlayStation (Games)

Sony's PS5 Pro is Real and Developers Are Getting Ready For It (theverge.com)

Sony is getting ready to release a more powerful PS5 console, possibly by the end of this year. After reports of leaked PS5 Pro specifications surfaced recently, The Verge has obtained a full list of specs for the upcoming console. From the report: Sources familiar with Sony's plans tell me that developers are already being asked to ensure their games are compatible with this upcoming console, with a focus on improving ray tracing. Codenamed Trinity, the PlayStation 5 Pro model will include a more powerful GPU and a slightly faster CPU mode. All of Sony's changes point to a PS5 Pro that will be far more capable of rendering games with ray tracing enabled or hitting higher resolutions and frame rates in certain titles. Sony appears to be encouraging developers to use graphics features like ray tracing more with the PS5 Pro, with games able to use a "Trinity Enhanced" (PS5 Pro Enhanced) label if they "provide significant enhancements."

Sony expects GPU rendering on the PS5 Pro to be "about 45 percent faster than standard PlayStation 5," according to documents outlining the upcoming console. The PS5 Pro GPU will be larger and use faster system memory to help improve ray tracing in games. Sony is also using a "more powerful ray tracing architecture" in the PS5 Pro, with speeds up to three times better than the regular PS5. "Trinity is a high-end version of PlayStation 5," reads one document, with Sony indicating it will continue to sell the standard PS5 after this new model launches. Sony expects game developers to ship a single package supporting both the PS5 and PS5 Pro, with existing games able to be patched for higher performance.
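
Assuming a purely GPU-bound game (a simplification), a 45 percent rendering speedup translates to frame rates and frame times as follows; a quick sketch:

```python
# What "about 45 percent faster" GPU rendering implies for frame rates,
# assuming a purely GPU-bound workload (real games are rarely this simple).
speedup = 1.45
for base_fps in (30, 60):
    new_fps = base_fps * speedup
    print(f"{base_fps} fps ({1000 / base_fps:.1f} ms/frame) -> "
          f"{new_fps:.0f} fps ({1000 / new_fps:.1f} ms/frame)")
```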

Google

With Vids, Google Thinks It Has the Next Big Productivity Tool For Work (theverge.com)

For decades, work has revolved around documents, spreadsheets, and slide decks. Word, Excel, PowerPoint; Pages, Numbers, Keynote; Docs, Sheets, Slides. Now Google is proposing to add a fourth pillar: an app called Vids that aims to help companies and consumers make collaborative, shareable video more easily than ever. From a report: Google Vids is very much not an app for making beautiful movies... or even not-that-beautiful movies. It's meant more for the sorts of things people do at work: make a pitch, update the team, explain a complicated concept. The main goal is to make everything as easy as possible, says Kristina Behr, Google's VP of product management for the Workspace collaboration apps. "The ethos that we have is, if you can make a slide, you can make a video in Vids," she says. "No video production is required."

Based on what I've seen of Vids so far, it appears to be roughly what you'd get if you transformed Google Slides into a video app. You collect assets from Drive and elsewhere and assemble them in order -- but unlike the column of slides in the Slides sidebar, you're putting together a left-to-right timeline for a video. Then, you can add voiceover or film yourself and edit it all into a finished video. A lot of those finished videos, I suspect, will look like recorded PowerPoint presentations or Meet calls or those now-ubiquitous training videos where a person talks to you from a small circle in the bottom corner while graphics play on the screen. There will be lots of clip art-heavy product promos, I'm sure. But in theory, you can make almost anything in Vids. You can either do all this by yourself or prompt Google's Gemini AI to make a first draft of the video for you. Gemini can build a storyboard; it can write a script; it can read your script aloud with text-to-speech; it can create images for you to use in the video. The app has a library of stock video and audio that users can add to their own Vids, too.

AMD

AMD To Open Source Micro Engine Scheduler Firmware For Radeon GPUs

AMD plans to document and open source its Micro Engine Scheduler (MES) firmware for GPUs, giving users more control over Radeon graphics cards. From a report: It's part of a larger effort AMD confirmed earlier this week to make its GPUs more open source at both the software level, with respect to the ROCm stack for GPU programming, and the hardware level. Details were scarce with this initial announcement, and the only concrete thing it introduced was a GitHub tracker.

However, yesterday AMD divulged more details, specifying that one of the things it would be making open source was the MES firmware for Radeon GPUs. AMD says it will be publishing documentation for MES around the end of May, and will then release the source code some time afterward. For George Hotz and his startup Tiny Corp, this is great news. Throughout March, Hotz had agitated for AMD to make MES open source in order to fix issues he was experiencing with his RX 7900 XTX-powered AI server box. He had talked several times with AMD representatives, and even the company's CEO, Lisa Su.
Software

Rickroll Meme Immortalized In Custom ASIC That Includes 164 Hardcoded Programs (theregister.com)

Matthew Connatser reports via The Register: An ASIC designed to display the infamous Rickroll meme is here, alongside 164 other assorted functions. The project is a product of Matthew Venn's Zero to ASIC Course, which offers prospective chip engineers the chance to "learn to design your own ASIC and get it fabricated." Since 2020, Zero to ASIC has accepted several designs that are incorporated into a single chip called a multi-project wafer (MPW), a cost-saving measure as making one chip for one design would be prohibitively expensive. Zero to ASIC has two series of chips: MPW and Tiny Tapeout. The MPW series usually includes just a handful of designs, such as the four on MPW8 submitted in January 2023. By contrast, the original Tiny Tapeout chip included 152 designs, and Tiny Tapeout 2 (which arrived last October) had 165, though it could have accommodated up to 250. Of the 165 designs, one in particular may strike a chord: Design 145, or the Secret File, made by engineer and YouTuber Bitluni. His Secret File design for the Tiny Tapeout ASIC is designed to play a small part of Rick Astley's music video for Never Gonna Give You Up, also known as the Rickroll meme.

Bitluni was a late inclusion on the Tiny Tapeout 2 project, having been invited just three days before the submission deadline. He initially just made a persistence-of-vision controller, which was revised twice for a total of three designs. "At the end, I still had a few hours left, and I thought maybe I should also upload a meme project," Bitluni says in his video documenting his ASIC journey. His meme of choice was of course the Rickroll. One might even call it an Easter egg. However, given that the chip's area was divided into 250 small plots, one per design, there wasn't a ton of room for both the graphics processor and the file it was supposed to render, a short GIF of the music video. Ultimately, the GIF had to be shrunk from 217 kilobytes to less than half a kilobyte, making its output look similar to games on the Atari 2600 from 1977. Accessing the Rickroll rendering processor and other designs isn't simple. Bitluni created a custom circuit board to mount the Tiny Tapeout 2 chip, creating a device that could then be plugged into a motherboard capable of selecting specific designs on the ASIC. Unfortunately for Bitluni, his first PCB had a design error he had to correct, but the revised version worked and was able to display the Rickroll GIF in hardware via a VGA port.
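
A quick byte-budget calculation shows how brutal that shrink is; the frame dimensions and bit depth below are illustrative guesses, not Bitluni's actual parameters:

```python
# How tight a sub-512-byte animation budget is. Dimensions, bit depth, and
# frame count are hypothetical; the real design's parameters aren't given here.
budget_bytes = 512
width, height, bits_per_pixel = 32, 24, 1        # tiny 1-bit monochrome frame
frame_bytes = width * height * bits_per_pixel // 8
print(f"One {width}x{height} 1-bit frame: {frame_bytes} bytes")        # 96
print(f"Frames fitting in {budget_bytes} bytes: {budget_bytes // frame_bytes}")
print(f"Shrink vs. the 217 KB source: {217 * 1024 / budget_bytes:.0f}x")
```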

News

Taiwan Quake Puts World's Most Advanced Chips at Risk (msn.com)

Taiwan's biggest earthquake in 25 years has disrupted production at the island's semiconductor companies, raising the possibility of fallout for the technology industry and perhaps the global economy. From a report: The potential repercussions are significant because of the critical role Taiwan plays in the manufacture of advanced chips, the foundation of technologies from artificial intelligence and smartphones to electric vehicles.

The 7.4-magnitude earthquake led to the collapse of at least 26 buildings, four deaths, and injuries to 57 people across Taiwan, with much of the fallout still unknown. Taiwan Semiconductor Manufacturing Co., the world's largest maker of advanced chips for customers like Apple and Nvidia, halted some chipmaking machinery and evacuated staff. Local rival United Microelectronics also stopped machinery at some plants and evacuated certain facilities at its hubs of Hsinchu and Tainan.

Taiwan is the leading producer of the most advanced semiconductors in the world, including the processors at the heart of the latest iPhones and the Nvidia graphics chips that train AI models like OpenAI's ChatGPT. TSMC has become the tech linchpin because it's the most advanced in producing complex chips. Taiwan is the source of an estimated 80% to 90% of the highest-end chips -- there is effectively no substitute. Jan-Peter Kleinhans, director of the technology and geopolitics project at Berlin-based think tank Stiftung Neue Verantwortung, has called Taiwan "potentially the most critical single point of failure" in the semiconductor industry.

The Matrix

'Yes, We're All Trapped in the Matrix Now' (cnn.com)

"As you're reading this, you're more likely than not already inside 'The Matrix'," according to a headline on the front page of CNN.com this weekend.

It linked to an opinion piece by Rizwan Virk, founder of MIT's startup incubator/accelerator program. He's now a doctoral researcher at Arizona State University, where his profile identifies him as an "entrepreneur, video game pioneer, film producer, venture capitalist, computer scientist and bestselling author." Virk's 2019 book was titled "The Simulation Hypothesis: An MIT Computer Scientist Shows Why AI, Quantum Physics and Eastern Mystics Agree We Are in a Video Game." In the decades since [The Matrix was released], this idea, now called the simulation hypothesis, has come to be taken more seriously by technologists, scientists and philosophers. The main reason for this shift is the stunning improvements in computer graphics, virtual and augmented reality (VR and AR) and AI. Taking into account three developments just this year from Apple, Neuralink and OpenAI, I can now confidently state that as you are reading this article, you are more likely than not already inside a computer simulation. This is because the closer our technology gets to being able to build a fully interactive simulation like the Matrix, the more likely it is that someone has already built such a world, and we are simply inside their video game world...

In 2003, Oxford philosopher Nick Bostrom imagined that a "technologically mature" civilization could easily create a simulated world. The logic, then, is that if any civilization ever reaches this point, it would create not just one but a very large number of simulations (perhaps billions), each with billions of AI characters, simply by firing up more servers. With simulated worlds far outnumbering the "real" world, the likelihood that we are in a simulation would be significantly higher than not. It was this logic that prompted Elon Musk to state, a few years ago, that the chances that we are not in a simulation (i.e. that we are in base reality) were "one in billions." It's a theory that is difficult to prove — but difficult to disprove as well. Remember, the simulations would be so good that you wouldn't be able to tell the difference between a physical and a simulated world. Either the signals are being beamed directly into your brain, or we are simply AI characters inside the simulation...
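
Bostrom's argument is, at bottom, a counting exercise: if one base reality ever spawns N indistinguishable simulations, a randomly chosen observer has only a 1-in-(N+1) chance of being in base reality. A toy sketch of that arithmetic (the N values are illustrative):

```python
# Bostrom-style counting argument: one base reality plus N simulations.
# Purely illustrative; the argument's premises are, of course, contested.
for n_simulations in (1, 100, 1_000_000, 1_000_000_000):
    p_base_reality = 1 / (n_simulations + 1)
    print(f"N = {n_simulations:>13,}: P(base reality) = {p_base_reality:.2e}")
```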

Recent developments in Silicon Valley show that we could get to the simulation point very soon. Just this year, Apple released its Vision Pro headset — a mixed-reality (including augmented and virtual reality) device that, if you believe initial reviews (ranging from mildly positive to ecstatic), heralds the beginning of a new era of spatial computing — or the merging of digital and physical worlds... we can see a direct line to being able to render a realistic fictional world around us... Just last month, OpenAI released Sora, which can now generate highly realistic videos that are pretty damn difficult to distinguish from real human videos. The fact that AI can so easily fool humans visually as well as through text (and according to some, has already passed the well-known Turing Test) shows that we are not far from fully immersive worlds populated with simulated AI characters that seem (and perhaps even think they are) conscious. Already, millions of humans are chatting with AI characters, and millions of dollars are pouring into making AI characters more realistic. Some of us may be players of the game, who have forgotten that we allowed the signal to be beamed into our brain, while others, like Neo or Morpheus or Trinity in "The Matrix," may have been plugged in at birth...

The fact that we are approaching the simulation point so soon in our future means that the likelihood that we are already inside someone else's advanced simulation goes up exponentially. Like Neo, we would be unable to tell the difference between a simulated and a physical world. Perhaps the most appropriate response to that is another of Reeves' most famous lines from that now-classic sci-fi film: Woah.

The author notes that the idea of being trapped inside a video game already "had been articulated by one of the Wachowskis' heroes, science fiction author Philip K. Dick, who stated, all the way back in 1977, 'We are living in a computer programmed reality.'" A few years ago, I interviewed Dick's wife Tessa and asked her what he would have thought of "The Matrix." She said his first reaction would have been that he loved it; however, his second reaction would most likely have been to call his agent to see if he could sue the filmmakers for stealing his ideas.
Graphics

Canva Acquires Affinity To Fill the Adobe-Sized Holes In Its Design Suite (theverge.com)

Web-based design platform Canva has acquired the Affinity creative software suite for an undisclosed sum, though Bloomberg reports that it's valued at "several hundred million [British] pounds." The Verge reports that the acquisition helps the company "[position] itself as a challenger to Adobe's grip over the digital design industry." From the report: Canva announced the deal on Tuesday, which gives the company ownership over Affinity Designer, Photo, and Publisher -- three popular creative applications for Windows, Mac, and iPad that provide similar features to Adobe's Illustrator, Photoshop, and InDesign software, respectively. [T]he acquisition makes sense as the Australia-based company tries to attract more creative professionals. As of January this year, Canva's design platform attracted around 170 million monthly global users. That's a lot of people who probably aren't using equivalent Adobe software like Express, but unlike Adobe, Canva doesn't have its own design applications that target creative professionals like illustrators, photographers, and video editors.

Affinity apps are used by over three million global users according to Canva -- that's a fraction of Adobe's user base, but Affinity shouldn't be underestimated here. The decision to make its Affinity applications a one-time-purchase with no ongoing subscription fees has earned it a loyal fanbase, especially with creatives who are actively looking for alternatives to Adobe's subscription-based design ecosystem. In an interview with the Sydney Morning Herald, Canva co-founder Cameron Adams said that Affinity applications will remain separate from Canva's platform, but that some small integrations should be expected over time. "Our product teams have already started chatting and we have some immediate plans for lightweight integration, but we think the products themselves will always be separate," said Adams.

Open Source

OpenTTD (Unofficial Remake of 'Transport Tycoon Deluxe' Game) Turns 20 (openttd.org)

In 1995 Scottish video game designer Chris Sawyer created the business simulator game Transport Tycoon Deluxe — and within four years, Wikipedia notes, work had begun on an open-source remake that's still being actively developed. "According to a study of the 61,154 open-source projects on SourceForge in the period between 1999 and 2005, OpenTTD ranked as the 8th most active open-source project to receive patches and contributions. In 2004, development moved to their own server."

Long-time Slashdot reader orudge says he's been involved for almost 25 years. "Exactly 21 years ago, I received an ICQ message (look it up, kids) out of the blue from a guy named Ludvig Strigeus (nicknamed Ludde)." "Hello, you probably don't know me, but I've been working on a project to clone Transport Tycoon Deluxe for a while," he said, more or less... Ludde made more progress with the project [written in C] over the coming year, and it looks like we even attempted some multiplayer games (not too reliable, especially over my dial-up connection at the time). Eventually, when he was happy with what he had created, he agreed to allow me to release the game as open source. Coincidentally, this happened exactly a year after I'd first spoken to him, on the 6th March 2004...

Things really got going after this, and a community started to form with enthusiastic developers fixing bugs, adding in new features, and smoothing off the rough edges. Ludde was, I think, a bit taken aback by how popular it proved, and even rejoined the development effort for a while. A read through the old changelogs reveals just how many features were added over a very short period of time. Quick wins like higher vehicle limits came in very quickly, and support for TTDPatch's NewGRF format started to be functional just four months later. Large maps, improved multiplayer, better pathfinders, improved TTDPatch compatibility, and of course, ports to a great many different operating systems, such as Mac OS X, BeOS, MorphOS and OS/2. It was a very exciting time to be a TTD fan!

Within six years, ambitious projects to create free replacements for the original TTD graphics, sounds and music sets were complete, and OpenTTD finally had its 1.0 release. And while we may not have the same frantic addition of new features we had in 2004, there have still been massive improvements to the code, with plenty of exciting new features over the years, and major releases every year since 2008. The move to GitHub in 2018 and the release of OpenTTD on Steam in 2021 have also re-energised development efforts, with thousands of people now enjoying playing the game regularly. And development shows no signs of slowing down, with the upcoming OpenTTD 14.0 release including over 40 new features!

"Personally, I would like to say thank you to everyone who has supported OpenTTD development over the past two decades..." they write, adding "Finally, of course, I'd like to thank you, the players! None of us would be here if people weren't still playing the game.

"Seeing how the first twenty years have gone, I can't wait to see what the next twenty years have in store. :)"
IT

HDMI Forum Rejects Open-Source HDMI 2.1 Driver Support Sought By AMD (phoronix.com)

Michael Larabel, reporting at Phoronix: One of the limitations of AMD's open-source Linux graphics driver has been the inability to implement HDMI 2.1+ functionality, on the basis of legal requirements by the HDMI Forum. AMD engineers had been working with the HDMI Forum on a solution for providing HDMI 2.1+ capabilities in their open-source Linux kernel driver, but those efforts have, for now, concluded without success. For three years there has been a bug report around 4K@120Hz being unavailable via HDMI 2.1 on the AMD Linux driver. Similarly, there have been bug reports like 5K @ 240Hz not being possible either with the AMD graphics driver on Linux.

As covered back in 2021, the HDMI Forum closing public access to its specifications is hurting open-source support. AMD as well as the X.Org Foundation have engaged with the HDMI Forum to try to find a way to provide open-source implementations of the now-private HDMI specs. AMD Linux engineers spent months working with their legal team and evaluating all HDMI features to determine if and how they could be exposed in the open-source driver. AMD had code working internally and spent the past few months waiting on approval from the HDMI Forum. Sadly, the HDMI Forum has turned down AMD's request for open-source driver support.

KDE

KDE Plasma 6 Released (kde.org)

"Today, the KDE Community is announcing a new major release of Plasma 6.0 and Gear 24.02," writes longtime Slashdot reader jrepin. "The new version brings new windows and desktop overview effects, improved color management, a cleaner theme, better overall performance, and much more." From the announcement: KDE Plasma is a modern, feature-rich desktop environment for Linux-based operating systems. Known for its sleek design, customizable interface, and extensive set of applications, it is also open source, devoid of ads, and makes protecting your privacy and personal data a priority.

With Plasma 6, the technology stack has undergone two major upgrades: a transition to the latest version of the application framework, Qt 6, and a migration to the modern Linux graphics platform, Wayland. We will continue providing support for the legacy X11 session for users who prefer to stick with it for now. [...] KDE Gear 24.02 brings many applications to Qt 6. In addition to the changes in Breeze, many applications adopted a more frameless look for their interface.
