Open Source

Intel CTO Wants Developers To Build Once, Run On Any GPU (venturebeat.com) 58

Greg Lavender, CTO of Intel, spoke to VentureBeat about the company's efforts to help developers build applications that can run on any GPU. From the report: "Today in the accelerated computing and GPU world, you can use CUDA and then you can only run on an Nvidia GPU, or you can go use AMD's CUDA equivalent running on an AMD GPU," Lavender told VentureBeat. "You can't use CUDA to program an Intel GPU, so what do you use?" That's where Intel is contributing heavily to the open-source SYCL specification (SYCL is pronounced like "sickle") that aims to do for GPU and accelerated computing what Java did decades ago for application development. Intel's investment in SYCL is not entirely selfless and isn't just about supporting an open-source effort; it's also about helping to steer more development toward its recently released consumer and data center GPUs. SYCL is an approach for data parallel programming in the C++ language and, according to Lavender, it looks a lot like CUDA.

To date, SYCL development has been managed by the Khronos Group, which is a multi-stakeholder organization that is helping to build out standards for parallel computing, virtual reality and 3D graphics. On June 1, Intel acquired Scottish development firm Codeplay Software, which is one of the leading contributors to the SYCL specification. "We should have an open programming language with extensions to C++ that are being standardized, that can run on Intel, AMD and Nvidia GPUs without changing your code," Lavender said. Lavender is also a realist and he knows that there is a lot of code already written specifically for CUDA. That's why Intel developers built an open-source tool called SYCLomatic, which aims to migrate CUDA code into SYCL. Lavender claimed that SYCLomatic today has coverage for approximately 95% of all the functionality that is present in CUDA. He noted that the 5% SYCLomatic doesn't cover are capabilities that are specific to Nvidia hardware.

With SYCL, Lavender said that there are code libraries that developers can use that are device independent. The way that works is code is written by a developer once, and then SYCL can compile the code to work with whatever architecture is needed, be it for an Nvidia, AMD or Intel GPU. Looking forward, Lavender said that he's hopeful that SYCL can become a Linux Foundation project, to further enable participation and growth of the open-source effort. [...] "We should have write once, run everywhere for accelerated computing, and then let the market decide which GPU they want to use, and level the playing field," Lavender said.
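To make the "build once, run anywhere" idea concrete, here is a minimal SYCL 2020 vector-add sketch (an illustration, not code from the article; it assumes a conforming compiler such as Intel's oneAPI DPC++ with the appropriate GPU backend installed). The same source can be compiled for Intel, AMD, or Nvidia devices:

```cpp
// Minimal SYCL 2020 vector add: the same source targets Intel, AMD,
// or Nvidia GPUs depending on which backend the compiler is built for.
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    // default_selector_v picks whatever accelerator is available at
    // run time; no vendor-specific code appears anywhere in the program.
    sycl::queue q{sycl::default_selector_v};

    {
        sycl::buffer<float> ba{a}, bb{b}, bc{c};
        q.submit([&](sycl::handler& h) {
            sycl::accessor A{ba, h, sycl::read_only};
            sycl::accessor B{bb, h, sycl::read_only};
            sycl::accessor C{bc, h, sycl::write_only};
            // One work-item per element, much like a CUDA thread grid.
            h.parallel_for(sycl::range<1>{N},
                           [=](sycl::id<1> i) { C[i] = A[i] + B[i]; });
        });
    } // buffers go out of scope here, copying results back to 'c'

    std::cout << "c[0] = " << c[0] << "\n"; // expect 3
    return 0;
}
```

The structure mirrors a CUDA kernel launch -- a queue in place of a stream, a parallel_for in place of the <<<grid, block>>> launch syntax -- which is presumably what Lavender means when he says SYCL looks a lot like CUDA.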

Open Source

Linux 6.0 Arrives With Support For Newer Chips, Core Fixes, and Oddities (arstechnica.com) 26

An anonymous reader quotes a report from Ars Technica: A stable version of Linux 6.0 is out, with 15,000 non-merge commits and a notable version number for the kernel. And while major Linux releases only happen when the prior number's dot numbers start looking too big -- "there is literally no other reason" -- there are a lot of notable things rolled into this release besides a marking in time. Most notable among them could be a patch that prevents a nearly two-decade slowdown for AMD chips, based on workaround code for power management from the early 2000s that hung around for far too long. [...]

Intel's new Arc GPUs are supported in their discrete laptop form in 6.0 (though still experimental). Linux blog Phoronix notes that Intel's Arc GPUs all seem to run on open-source upstream drivers, so support should show up for future Intel cards and chipsets as they arrive on the market. Linux 6.0 includes several hardware drivers of note: fourth-generation Intel Xeon server chips, the not-quite-out 13th-generation Raptor Lake and Meteor Lake chips, AMD's RDNA 3 GPUs, Threadripper CPUs, EPYC systems, and audio drivers for a number of newer AMD systems. One small, quirky addition points to larger things happening inside Linux. Lenovo's ThinkPad X13s, based on an ARM-powered Qualcomm Snapdragon chip, get some early support in 6.0. ARM support is something Linux founder Linus Torvalds is eager to see [...].

Among other changes you can find in Linux 6.0, as compiled by LWN.net (in part one and part two):
- ACPI and power management improvements for Sapphire Rapids CPUs
- Support for SMB3 file transfer inside Samba, while SMB1 is further deprecated
- More work on RISC-V, OpenRISC, and LoongArch technologies
- Intel Habana Labs Gaudi2 support, allowing hardware acceleration for machine-learning libraries
- A "guest vCPU stall detector" that can tell a host when a virtual client is frozen
Ars' Kevin Purdy notes that in 2022, "there are patches in Linux 6.0 to help Atari's Falcon computers from the early 1990s (or their emulated descendants) better handle VGA modes, color, and other issues."

Not included in this release are Rust improvements, but they "are likely coming in the next point release, 6.1," writes Purdy.
Debian

Debian Chooses Reasonable, Common Sense Solution To Dealing With Non-Free Firmware (phoronix.com) 65

Michael Larabel writes via Phoronix: Debian developers have been working out an updated stance on non-free firmware, given the increasing number of devices that now have open-source Linux drivers but require closed-source firmware for any level of functionality. The voting on the non-free firmware matter has now concluded and the votes have been tallied. Option 5 won: "Change SC for non-free firmware in installer, one installer."

Basically the Debian Installer media will now be allowed to include non-free firmware and to automatically load/use it where necessary while informing the user of it, etc. Considering the state of the hardware ecosystem these days, it's a reasonable and common-sense outcome: at the very least, users will be able to easily make use of their graphics cards, network adapters, and more. Plus, a number of modern CPU security mitigations also require updated closed-source microcode. All in all, I am personally happy with this decision as it will allow for a more pleasant experience for Debian on modern systems and one akin to what is found with other Linux distributions.
The solution is described in full via the Debian Wiki.
AMD

Rewritten OpenGL Drivers Make AMD's GPUs 'Up To 72%' Faster in Some Pro Apps (arstechnica.com) 23

Most development effort in graphics drivers these days, whether you're talking about Nvidia, Intel, or AMD, is focused on new APIs like DirectX 12 or Vulkan, increasingly advanced upscaling technologies, and specific improvements for new game releases. But this year, AMD has also been focusing on an old problem area for its graphics drivers: OpenGL performance. From a report: Over the summer, AMD released a rewritten OpenGL driver that it said would boost the performance of Minecraft by up to 79 percent (independent testing also found gains in other OpenGL games and benchmarks, though not always to the same degree). Now those same optimizations are coming to AMD's officially validated GPU drivers for its Radeon Pro-series workstation cards, providing big boosts to professional apps like Solidworks and Autodesk Maya. "The AMD Software: PRO Edition 22.Q3 driver has been tested and approved by Dell, HP, and Lenovo for stability and is available through their driver downloads," the company wrote in its blog post. "AMD continues to work with software developers to certify the latest drivers." Using a Radeon Pro W6800 workstation GPU, AMD says that its new drivers can improve Solidworks rendering speeds by up to 52 percent at 4K and 28 percent at 1080p. Autodesk Maya performance goes up by 34 percent at 4K or 72 percent at the default resolution. The size of the improvements varies based on the app and the GPU, but AMD's testing shows significant, consistent improvements across the board on the Radeon Pro W6800, W6600, and W6400 GPUs, improvements that AMD says will help those GPUs outpace analogous Nvidia workstation GPUs like the RTX A5000 and A2000 and the Nvidia T600.
GNOME

Apple M1 Linux GPU DRM Driver Now Running GNOME, Various Apps (phoronix.com) 44

Developer Asahi Lina of the Asahi Linux project successfully got GNOME running on the Apple M1, including "Firefox with YouTube video playback, the game Neverball, various KDE applications, and more," reports Phoronix. From the report: This is some great progress, especially with the driver being written in Rust -- the first within the Direct Rendering Manager subsystem -- and lots of work there with the Rust infrastructure in early form. It won't be until at least Linux 6.2 that this driver could be mainlined, and until it goes mainline it can't commit to a stable user-space interface. At the moment there is also a significant driver "hack" involved, but that will hopefully be sorted out soon. Over in user-space, the AGX Gallium3D driver continues being worked on for OpenGL support with hopes of having OpenGL 2.1 completed by year's end. Obviously it will be longer before Apple Silicon graphics are suitable for modern gaming with Vulkan, etc., but progress is being made across the board in reverse-engineered, open-source Apple Silicon support under Linux. You can watch a video of the driver working here.
Intel

Intel's 13th-Gen 'Raptor Lake' CPUs Are Official, Launch October 20 (arstechnica.com) 45

Intel says it has made some improvements to the CPU architecture and the Intel 7 manufacturing process for its 13th-generation chips, codenamed Raptor Lake, but the strategy for improving performance is both time-tested and easy to understand: add more cores, and make them run at higher clock speeds. From a report: Intel is announcing three new CPUs today, each available both with and without integrated graphics (per usual, the models with no GPUs have an "F" at the end): the Core i9-13900K, Core i7-13700K, and Core i5-13600K will launch on October 20 alongside new Z790 chipsets and motherboards. They will also work in all current-generation 600-series motherboards as long as your motherboard maker has provided a BIOS update, and will continue to support both DDR4 and DDR5 memory.

Raptor Lake uses the hybrid architecture that Intel introduced in its 12th-generation Alder Lake chips last year -- a combination of large performance cores (P-cores) that keep games and other performance-sensitive applications running quickly, plus clusters of smaller efficiency cores (E-cores) that use less power -- though in our testing across laptops and desktops, it's clear that "efficiency" is more about the number of cores that can be fit into a given area on a CPU die, and less about lower overall system power consumption. There have been a handful of other additions as well. The amount of L2 cache per core has been nearly doubled, going from 1.25MB to 2MB per P-core and from 2MB to 4MB per E-core cluster (E-cores always come in clusters of four). The CPUs will officially support DDR5-5600 RAM, up from a current maximum of DDR5-4800, though that DDR5-4800 maximum can easily be surpassed with XMP memory kits in 12th-generation motherboards. The maximum officially supported DDR4 RAM speed remains DDR4-3200, though the caveat about XMP applies there as well. As far as core counts and frequencies go, the Core i5 and Core i7 CPUs each pick up one extra E-core cluster, going from four E-cores to eight. The Core i9 gets two new E-core clusters, boosting the core count from eight all the way up to 16. All E-cores have maximum boost clocks that are 400MHz higher than they were before.
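As a quick sanity check of that cache math, here is the arithmetic for the top Core i9 part (a sketch; the 8 P-core figure is an assumption carried over from the 12900K, since the excerpt only gives E-core counts):

```cpp
// Per-core L2 figures from the article applied to the Core i9-13900K.
// The 8 P-core count is an assumption, not stated in the excerpt.
#include <iostream>

int main() {
    const int p_cores = 8;          // assumed, same as the 12900K
    const int e_clusters = 16 / 4;  // 16 E-cores, always clusters of four
    const double l2_mb = p_cores * 2.0 + e_clusters * 4.0;
    std::cout << "Total L2: " << l2_mb << " MB\n"; // 8*2 + 4*4 = 32 MB
    return 0;
}
```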

United States

New York City's Empty Offices Reveal a Global Property Dilemma (bloomberg.com) 134

An anonymous reader quotes a report from Bloomberg: In the heart of midtown Manhattan lies a multibillion-dollar problem for building owners, the city and thousands of workers. Blocks of decades-old office towers sit partially empty, in an awkward position: too outdated to attract tenants seeking the latest amenities, too new to be demolished or converted for another purpose. It's a situation playing out around the globe as employers adapt to flexible work after the Covid-19 pandemic and rethink how much space they need. Even as people are increasingly called back to offices for at least some of the week, vacancy rates have soared in cities from Hong Kong to London and Toronto.

"There's no part of the world that is untouched by the growth of hybrid working," said Richard Barkham, global chief economist for commercial real estate firm CBRE Group Inc. In some cases, companies are simply cutting back on space to reduce their real estate costs. Others are relocating to shiny new towers with top-of-the-line amenities to attract talent and employees who may be reluctant to leave the comforts of working from home. Left behind are older buildings outside of prime locations. The US is likely to have a slower office-market recovery than Asia and Europe because it began the pandemic with a higher vacancy rate, and long-term demand is expected to drop around 10% or more, Barkham said. New York, America's biggest office real estate market, is at the center of the issue.

A study this year by professors at Columbia University and New York University estimated that lower tenant demand because of remote work may cut 28%, or $456 billion, off the value of offices across the US. About 10% of that would be in New York City alone. The implications of obsolete buildings stretch across the local economy. Empty offices have led to a cascade of shuttered restaurants and other street-level businesses that depended on daytime worker traffic. And falling building values mean less property-tax revenue for city coffers. A strip on Manhattan's Third Avenue, from 42nd to 59th streets, shows the problem of older properties in stark terms. While New York leasing demand has bounced back toward pre-pandemic levels, the corridor has 29% of office space available for tenants, nearly double the amount four years ago and above the city's overall rate of 19%, according to research from brokerage firm Savills.
"There's no easy fix for landlords, who rely on rental income to pay down debt," notes the report. "Some cities are exploring options to turn downtown offices to residential buildings: Calgary, for instance, has an incentive program for such redevelopments. While New York has had some conversions, the hefty costs and zoning and architectural restrictions make it a difficult proposition."
Medicine

Cybersickness Could Spell an Early Death For the Metaverse 135

An anonymous reader quotes a report from the Daily Beast: Luis Eduardo Garrido couldn't wait to test out his colleague's newest creation. Garrido, a psychology and methodology researcher at Pontificia Universidad Catolica Madre y Maestra in the Dominican Republic, drove two hours between his university's campuses to try a virtual reality experience that was designed to treat obsessive-compulsive disorder and different types of phobias. But a couple of minutes after he put on the headset, he could tell something was wrong. "I started feeling bad," Garrido told The Daily Beast. He was experiencing an unsettling bout of dizziness and nausea. He tried to push through but ultimately had to abort the simulation almost as soon as he started. "Honestly, I don't think I lasted five minutes trying out the application," he said.

Garrido had contracted cybersickness, a form of motion sickness that can affect users of VR technology. It was so severe that he worried about his ability to drive home, and it took hours for him to recover from the five-minute simulation. Though motion sickness has afflicted humans for thousands of years, cybersickness is a much newer condition. Many of its causes and symptoms are still poorly understood, and basic questions -- like how common cybersickness is, and whether there are ways to fully prevent it -- are only just starting to be studied. After Garrido's experience, a colleague told him that only around 2 percent of people feel cybersickness. But at a presentation for prospective students, Garrido watched as volunteers from the audience walked to the front of an auditorium to demo a VR headset -- only to return shakily to their seats. "I could see from afar that they were getting sweaty and kind of uncomfortable," he recalled. "I said to myself, 'Maybe I'm not the only one.'"

As companies like Meta (nee Facebook) make big bets that augmented reality and virtual reality technology will go mainstream, the tech industry is still trying to figure out how to better recruit users to the metaverse, and get them to stay once there. But experts worry that cybersickness could derail these plans for good unless developers find some remedies soon.
"The issue is actually something of a catch-22: In order to make VR more accessible and affordable, companies are making devices smaller and running them on less powerful processors," adds the report. "But these changes introduce dizzying graphics -- which inevitably causes more people to experience cybersickness."

"At the same time, a growing body of research suggests cybersickness is vastly more pervasive than previously thought -- perhaps afflicting more than half of all potential users." When Garrido conducted his own study of 92 people, the results indicated that more than 65 percent of people experienced symptoms of cybersickness -- a sharp contrast to the 2 percent estimate Garrido had been told.

He says that these results should be concerning for developers. "If people have this type of bad experience with something, they're not going to try it again," Garrido said.
Hardware

Nvidia Announces Next-Gen RTX 4090 and RTX 4080 GPUs (theverge.com) 178

Nvidia is officially announcing its RTX 40-series GPUs today. After months of rumors and some recent teasing from Nvidia, the RTX 4090 and RTX 4080 are now both official. The RTX 4090 arrives on October 12th priced at $1,599, with the RTX 4080 priced starting at $899 and available in November. Both are powered by Nvidia's next-gen Ada Lovelace architecture. From a report: The RTX 4090 is the top-end card for the Lovelace generation. It will ship with a massive 24GB of GDDR6X memory. Nvidia claims it's 2-4x faster than the RTX 3090 Ti, and it will consume the same amount of power as that previous generation card. Nvidia recommends a power supply of at least 850 watts based on a PC with a Ryzen 5900X processor. Inside the giant RTX 4090 there are 16,384 CUDA Cores, a base clock of 2.23GHz that boosts up to 2.52GHz, 1,321 Tensor-TFLOPs, 191 RT-TFLOPs, and 83 Shader-TFLOPs.

Nvidia is actually offering the RTX 4080 in two models, one with 12GB of GDDR6X memory and another with 16GB of GDDR6X memory, and Nvidia claims it's 2-4x faster than the existing RTX 3080 Ti. The 12GB model will start at $899 and include 7,680 CUDA Cores, a 2.31GHz base clock that boosts up to 2.61GHz, 639 Tensor-TFLOPs, 92 RT-TFLOPs, and 40 Shader-TFLOPs. The 16GB model of the RTX 4080 isn't just a bump to memory, though. Priced starting at $1,199 it's more powerful with 9,728 CUDA Cores, a base clock of 2.21GHz that boosts up to 2.51GHz, 780 Tensor-TFLOPs, 113 RT-TFLOPs, and 49 Shader-TFLOPs of power. The 12GB RTX 4080 model will require a 700 watt power supply, with the 16GB model needing at least 750 watts. Both RTX 4080 models will launch in November.
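Those shader-TFLOPs figures follow from the usual FP32 estimate of 2 FLOPs (one fused multiply-add) per CUDA core per clock at the boost frequency. A minimal check, using only the numbers quoted above:

```cpp
// FP32 throughput estimate: cores x boost clock (GHz) x 2 FLOPs/clock.
// This reproduces the quoted shader-TFLOPs figures for all three cards.
#include <cstdio>

int main() {
    struct Card { const char* name; int cores; double boost_ghz; };
    const Card cards[] = {
        {"RTX 4090",      16384, 2.52},
        {"RTX 4080 16GB",  9728, 2.51},
        {"RTX 4080 12GB",  7680, 2.61},
    };
    for (const Card& c : cards) {
        const double tflops = c.cores * c.boost_ghz * 2.0 / 1000.0;
        std::printf("%s: %.0f FP32 TFLOPs\n", c.name, tflops);
        // -> ~83, ~49, and ~40, matching the figures in the article
    }
    return 0;
}
```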
Further reading: Nvidia Puts AI at Center of Latest GeForce Graphics Card Upgrade.
Books

'Linux IP Stacks Commentary' Book Tries Free Online Updates (satchell.net) 13

Recently the authors of Elements of Programming shared an update. "After ten years in print, our publisher decided against further printings and has reverted the rights to us. We are publishing Elements of Programming in two forms: a free PDF and a no-markup paperback."

And that's not the only old book that's getting a new life on the web...

22 years ago, long-time Slashdot reader Stephen T. Satchell (satch89450) co-authored Linux IP Stacks Commentary, a book commenting on the TCP/IP code in Linux kernel 2.0.34. ("Old-timers will remember Lions' Commentary on UNIX, the book circulated on the sly in xerographic copies at universities. Same sort of thing.") But the print edition couldn't be updated as frequently as the Linux kernel itself, and Satchell wrote a Slashdot post exploring ways to fund a possible update.

At the time Slashdot's editors noted that "One of the largest complaints about Linux is that there is a lack of high-profile documentation. It would be sad if this publication were not made simply because of the lack of funds (which some people would see as a lack of interest) necessary to complete it." But that's how things seemed to end up — until Satchell suddenly reappeared to share this update from 2022: When I was released from my last job, I tried retirement. Wasn't for me. I started going crazy with nothing significant to do. So, going through old hard drives (that's another story), I found the original manuscript files, plus the page proof files, for that two-decade-old book. Aha! Maybe it's time for an update. But how to keep it fresh, as Torvalds continues to release new updates of the Linux kernel?

Publish it on the Web. Carefully.

After four months (and three job interviews) I have the beginnings of the second edition up and available for reading. At the moment it's an updated, corrected, and expanded version of the "gray matter", the exposition portions of the first edition....

The URL for the alpha-beta version of this Web book is satchell.net/ipstacks for your reading pleasure. The companion e-mail address is up and running for you to provide feedback. There is no paywall.

But there's also an ingenious solution to the problem of updating the text as the code of the kernel keeps changing: Thanks to the work of Professor Donald Knuth (thank you!) on his WEB and CWEB programming languages, I have made modifications to devise a method for integrating code from the GIT repository of the Linux kernel without making any modifications (let alone submissions) to said kernel code. The proposed method is described in the About section of the Web book. I have scaffolded the process and it works. But that's not the hard part.

The hard part is to write the commentary itself, and crib some kind of Markup language to make the commentary publishing quality. The programs I write will integrate the kernel code with the commentary verbiage into a set of Web pages. Or two slightly different sets of web pages, if I want to support a mobile-friendly version of the commentary.

Another reason for making it a web book is that I can write it and publish it as it comes out of my virtual typewriter. No hard deadlines. No waiting for the printers. And while this can save trees, that's not my intent. The back-of-the-napkin schedule calls for me to finish the expository text in September, start the Python coding for generating commentary pages at the same time, and start writing the commentary on the Internet Control Message Protocol in October. By then, Linus should have version 6.0.0 of the Linux kernel released.

I really, really, really don't want to charge readers to view the web book. Especially as it's still in the virtual typewriter. There isn't any commentary (yet). One thing I have done is to make it as mobile-friendly as I can, because I suspect the target audience will want to read this on a smartphone or tablet, and not be forced to resort to a large-screen laptop or desktop. Also, the graphics are lightweight to minimize the cost for people who pay by the kilopacket. (Does anywhere in the world still do this? Inquiring minds want to know.)

I host this web site on a Protectli appliance in my apartment, so I don't have that continuing expense. The power draw is around 20 watts. My network connection is AT&T fiber — and if it becomes popular I can always upgrade the upstream speed.

The thing is, the cat needs his kibble. I still want to know if there is a source of funding available.

Also, is it worthwhile to make the pages available in a zip file? Then a reader could download a snapshot of the book, and read it off-line.

Bitcoin

GPU Mining No Longer Profitable After Ethereum Merge (tomshardware.com) 163

Just one day after the Ethereum Merge, where the cryptocoin successfully switched from Proof of Work (PoW) to Proof of Stake (PoS), profitability of GPU mining has completely collapsed. Tom's Hardware reports: That means the best graphics cards should finally be back where they belonged, in your gaming PC, just as god intended. That's a quick drop, considering yesterday there were still a few cryptocurrencies that were technically profitable. Looking at WhatToMine, and using the standard $0.10 per kWh, the best-case results are with the GeForce RTX 3090 and Radeon RX 6800 and 6800 XT. Those are technically showing slightly positive results, to the tune of around $0.06 per day after power costs. However, that doesn't factor in the cost of the PC power, or the wear and tear on your graphics card.

Even at a slightly positive net result, it would still take over 20 years to break even on the cost of an RX 6800. We say that tongue-in-cheek, because if there's one thing we know for certain, it's that no one can predict what the cryptocurrency market will look like even one year out, never mind 20 years in the future. It's a volatile market, and there are definitely lots of groups and individuals hoping to figure out a way to Make GPU Mining Profitable Again (MGMPA hats inbound...)
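For a rough sense of that break-even math, here's a back-of-the-envelope sketch (the card price is an assumption for illustration, not a figure from the article):

```cpp
// Break-even time at $0.06/day net mining profit. The RX 6800 price
// is an assumed street price, not a number from the article.
#include <iostream>

int main() {
    const double card_price_usd = 580.0;  // assumption
    const double profit_per_day = 0.06;   // after $0.10/kWh power costs
    const double years = card_price_usd / profit_per_day / 365.0;
    std::cout << "Break-even: ~" << years << " years\n"; // ~26 years
    return 0;
}
```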

Of the 21 current generation graphics cards from the AMD RX 6000-series and the Nvidia RTX 30-series, only five are theoretically profitable right now, and those are all just barely in the black. This is using data from NiceHash and WhatToMine, so perhaps there are ways to tune other GPUs to get into the net positive, but the bottom line is that no one should be using GPUs for mining right now, and certainly not buying more GPUs for mining purposes. [You can see a full list of the current profitability of the current generation graphics cards here.]

Graphics

EVGA Abandons the GPU Market, Reportedly Citing Conflicts With Nvidia (tomshardware.com) 72

UnknowingFool writes: After a decades-long partnership with Nvidia, EVGA has announced they are ending their relationship. Citing conflicts with Nvidia, EVGA CEO Andrew Han said the company will not partner with Intel or AMD, and will be exiting the GPU market completely. The company will continue to make existing RTX 30-series cards until their stock runs out but will not release a 4000-series card. YouTube channels JayZTwoCents and GamersNexus broke the news after sitting down with EVGA CEO Andrew Han to discuss his frustrations with Nvidia as a partner. Jon Peddie Research also published a brief article on the matter.
Graphics

Canva, the $26 Billion Design Startup, Launches a Productivity Suite To Take On Google Docs, Microsoft Office (fortune.com) 20

Canva, the Australian graphic design business valued at $26 billion, is introducing a new suite of digital workplace products that "represent a direct challenge to Google Docs, Microsoft Office, and Adobe, whose digital tools are mainstays of the modern workplace," reports Fortune. However, Cliff Obrecht, Canva co-founder and COO, claims that Canva isn't trying to compete with these corporate behemoths. "Instead, he sees Canva as a visual-first companion to these tools," reports TechCrunch.

"We're not trying to compete head-to-head with Google Docs," Obrecht told TechCrunch. "Our products are inherently visual, so we take a very visual lens on, what does a visual document look like? How do you turn that boring document that's all text based into something engaging?" Fortune reports: With the launch, Canva hopes to transform itself from a mainly consumer-focused brand often used by individual teams to design social media graphics and presentations to a critical business tool -- and, in the process, crack open the productivity management software market valued at $47.3 billion and growing at 13% a year, according to Grand View Research. "Visual communication is becoming an increasingly critical skill for teams of every size across almost every industry," cofounder and CEO Melanie Perkins said in a statement. "We're bringing simple design products to the workplace to empower every employee, at every organization, and on every device." The product offerings include Canva Docs, Canva Websites, Canva Whiteboards and Data Visualization -- all of which are interoperable, "so if you make a presentation, you can turn it into a document or a website too," notes TechCrunch.

"Canva also plans to launch its API in beta, enabling developers to more easily integrate with the worksuite. Plus, Canva is launching a creator program where highly-vetted designers can sell templates, photos and designs to Canva users."
Intel

Intel Reveals Specs of Arc GPU (windowscentral.com) 23

Intel has dripped out details about its upcoming Arc graphics cards over the last few months, but until recently, we didn't have full specifications for the GPUs. That changed when Intel dropped a video and a post breaking down the full Arc A-series. From a report: The company shared the spec sheets of the Arc A380, Arc A580, Arc A750, and Arc A770. It also explained the naming structure of the new GPUs along with other details. Just about the only major piece of information we're still missing is the release date for the cards. At the top end of the range, Intel's Arc A770 will have 32 Xe cores, 32 ray-tracing units, and a graphics clock of 2100MHz. That GPU will be available with either 8GB or 16GB of memory. Sitting just below the Arc A770, the Arc A750 will have 28 Xe cores, 28 ray-tracing units, and 8GB of memory. The Intel Arc A580 will sit in the middle between the company's high-end GPUs and the Intel Arc A380.
Intel

Asus Packs 12-Core Intel i7 Into a Raspberry Pi-Sized Board (theregister.com) 30

An anonymous reader quotes a report from The Register: Aaeon's GENE-ADP6, announced this week, can pack as much as a 12-core/16-thread Intel processor with Iris Xe graphics into a 3.5-inch form factor. The diminutive system is aimed at machine-vision applications and can be configured with your choice of Intel silicon: Celeron, Core i3, Core i5, or 10- or 12-core i7 processors. As with other SBCs we've seen from Aaeon and others, the processors aren't socketed, so you won't be upgrading later. This device is pretty much aimed at embedded and industrial use, mind. All five SKUs are powered by Intel's current-gen Alder Lake mobile processor family, including a somewhat unusual 5-core Celeron processor that pairs a single performance core with four efficiency cores. However, only the i5 and i7 SKUs come equipped with Intel's Iris Xe integrated graphics. The i3 and Celeron are stuck on UHD graphics. The board can be equipped with up to 64GB of DDR5 memory operating at up to 4800 megatransfers/sec by way of a pair of SODIMM modules.

For I/O the board features a nice set of connectivity including a pair of NICs operating at 2.5 Gbit/sec and 1 Gbit/sec, HDMI 2.1 and DisplayPort 1.4, three 10Gbit/sec-capable USB 3.2 Gen 2 ports, and a single USB-C port that supports up to 15W of power delivery and display out. For those looking for additional connectivity for their embedded applications, the system also features a plethora of pin headers for USB 2.0, display out, serial interfaces, and 8-bit GPIO. Storage is provided by your choice of a SATA 3.0 interface or an M.2 mSATA/NVMe SSD. Unlike Aaeon's Epic-TGH7 announced last month, the GENE-ADP6 is too small to accommodate a standard PCIe slot, but does feature an FPC connector, which the company says supports additional NVMe storage or external graphics by way of a PCIe 4.0 x4 interface.

Intel

Intel Details 12th Gen Core SoCs Optimized For Edge Applications (theregister.com) 6

Intel has made available versions of its 12th-generation Core processors optimized for edge and IoT applications, claiming the purpose-built chips enable smaller form factor designs, but with the AI inferencing performance to analyze data right at the edge. The Register reports: The latest members of the Alder Lake family, the 12th Gen Intel Core SoC processors for IoT edge (formerly Alder Lake PS) combine the performance profile and power envelope of the mobile chips but the LGA socket flexibility of the desktop chips, according to Intel, meaning they can be mounted directly on a system board or in a socket for easy replacement. Delivered as a multi-chip package, the new processors combine the Alder Lake cores with an integrated Platform Controller Hub (PCH) providing I/O functions and integrated Iris Xe graphics with up to 96 graphics execution units. [...]

Intel VP and general manager of the Network and Edge Compute Division Jeni Panhorst said in a statement that the new processors were designed for a wide range of vertical industries. "As the digitization of business processes continues to accelerate, the amount of data created at the edge and the need for it to be processed and analyzed locally continues to explode," she said. Another key capability for managing systems deployed in edge scenarios is that these processors include Intel vPro features, which include remote management capabilities built into the hardware at the silicon level, so an IT admin can reach into a system and perform actions such as changing settings, applying patches or rebooting the platform.

The chips support up to eight PCIe 4.0 lanes, and four Thunderbolt 4/USB4 lanes, with up to 64GB of DDR5 or DDR4 memory, and the graphics are slated to deliver four 4K displays or one 8K display. Operating system support includes Windows 10 IoT Enterprise 2021 Long Term Servicing Channel (LTSC) and Linux options. Intel said the new SoCs are aimed at a broad range of industries, including point-of-sale kit in the retail, banking, and hospitality sectors, industrial PCs and controllers for the manufacturing industry, plus healthcare.

AMD

AMD Launches Zen 4 Ryzen 7000 CPUs (tomshardware.com) 156

AMD unveiled its 5nm Ryzen 7000 lineup today, outlining the details of four new models that span from the 16-core $699 Ryzen 9 7950X flagship, which AMD claims is the fastest CPU in the world, to the six-core $299 Ryzen 5 7600X, the lowest bar of entry to the first family of Zen 4 processors. Tom's Hardware reports: Ryzen 7000 marks the first 5nm x86 chips for desktop PCs, but AMD's newest chips don't come with higher core counts than the previous-gen models. However, frequencies stretch up to 5.7 GHz -- an impressive 800 MHz improvement over the prior generation -- paired with an up to 13% improvement in IPC from the new Zen 4 microarchitecture. That results in a 29% improvement in single-threaded performance over the prior-gen chips. That higher performance also extends out to threaded workloads, with AMD claiming up to 45% more performance in some threaded workloads. AMD says these new chips power huge generational gains over the prior-gen Ryzen 5000 models, with 29% faster gaming and 44% more performance in productivity apps. Going head-to-head with Intel's chips, AMD claims the high-end 7950X is 11% faster overall in gaming than Intel's fastest chip, the 12900K, and that even the low-end Ryzen 5 7600X beats the 12900K by 5% in gaming. It's noteworthy that those claims come with a few caveats [...].

The Ryzen 7000 processors come to market on September 27, and they'll be joined by new DDR5 memory products that support new EXPO overclocking profiles. AMD's partners will also offer a robust lineup of motherboards -- the chips will snap into new Socket AM5 motherboards that AMD says it will support until 2025+. These motherboards support DDR5 memory and the PCIe 5.0 interface, bringing the Ryzen family up to the latest connectivity standards. The X670 Extreme and standard X670 chipsets arrive first in September, while the more value-oriented B650 options will come to market in October. That includes the newly announced B650E chipset that brings full PCIe 5.0 connectivity to budget motherboards, while the B650 chipset slots in as a lower-tier option. The Ryzen 7000 lineup also brings integrated RDNA 2 graphics to all of the processors in the stack, a first for the Ryzen family.

Social Networks

'Facebook Misinformation Is Bad Enough. The Metaverse Will Be Worse' (rand.org) 53

The Rand Corporation is an American (nonprofit) think tank. And veliath (Slashdot reader #5,435) spotted their recent warning about "a plausible scenario that could soon take place in the metaverse." A political candidate is giving a speech to millions of people. While each viewer thinks they are seeing the same version of the candidate, in virtual reality they are actually each seeing a slightly different version. For each and every viewer, the candidate's face has been subtly modified to resemble the viewer.... The viewers are unaware of any manipulation of the image. Yet they are strongly influenced by it: Each member of the audience is more favorably disposed to the candidate than they would have been without any digital manipulation.

This is not speculation. It has long been known that mimicry can be exploited as a powerful tool for influence. A series of experiments by Stanford researchers has shown that slightly changing the features of an unfamiliar political figure to resemble each voter made people rate politicians more favorably. The experiments took pictures of study participants and real candidates in a mock-up of an election campaign. The pictures of each candidate were modified to resemble each participant. The studies found that even if 40 percent of the participant's features were blended into the candidate's face, the participants were entirely unaware the image had been manipulated.

In the metaverse, it's easy to imagine this type of mimicry at a massive scale.

At the heart of all deception is emotional manipulation. Virtual reality environments, such as Facebook's (now Meta's) metaverse, will enable psychological and emotional manipulation of its users at a level unimaginable in today's media.... We are not even close to being able to defend users against the threats posed by this coming new medium.... In VR, body language and nonverbal signals such as eye gaze, gestures, or facial expressions can be used to communicate intentions and emotions. Unlike verbal language, we often produce and perceive body language subconsciously....

We must not wait until these technologies are fully realized to consider appropriate guardrails for them. We can reap the benefits of the metaverse while minimizing its potential for great harm.

They recommend developing technology that can detect the application of this kind of VR manipulation.

"Society did not start paying serious attention to classical social media — meaning Facebook, Twitter, and the like — until things got completely out of hand. Let us not make the same mistake as social media blossoms into the metaverse."
Businesses

The GPU Shortage is Over. The GPU Surplus Has Arrived (arstechnica.com) 76

A year ago, it was nearly impossible to buy a GeForce GPU for its intended retail price. Now, the company has the opposite problem. From a report: Nvidia CEO Jensen Huang said during the company's Q2 2023 earnings call yesterday that the company is dealing with "excess inventory" of RTX 3000-series GPUs ahead of its next-gen RTX 4000 series release later this year. To deal with this, according to Huang, Nvidia will reduce the number of GPUs it sells to manufacturers of graphics cards and laptops so that those manufacturers can clear out their existing inventory. Huang also says Nvidia has "instituted programs to price position our current products to prepare for next-generation products."

When translated from C-suite to English, this means the company will be cutting the prices of current-generation GPUs to make more room for next-generation ones. Those price cuts should theoretically be passed along to consumers somehow, though that will be up to Nvidia's partners. Nvidia announced earlier this month that it would be missing its quarterly projections by $1.4 billion, mainly due to decreased demand for its gaming GPUs. Huang said that "sell-through" of GPUs, or the number of cards being sold to users, had still "increased 70 percent since pre-COVID," though the company still expects year-over-year revenue from GPUs to decline next quarter.

Desktops (Apple)

Devs Make Progress Getting MacOS Ventura Running On Unsupported, Decade-Old Macs (arstechnica.com) 20

An anonymous reader quotes a report from Ars Technica: Skirting the official macOS system requirements to run new versions of the software on old, unsupported Macs has a rich history. Tools like XPostFacto and LeopardAssist could help old PowerPC Macs run newer versions of Mac OS X, a tradition kept alive in the modern era by dosdude1's patchers for Sierra, High Sierra, Mojave, and Catalina. For Big Sur and Monterey, the OpenCore Legacy Patcher (OCLP for short) is the best way to get new macOS versions running on old Macs. It's an offshoot of the OpenCore Hackintosh bootloader, and it's updated fairly frequently with new features and fixes and compatibility for newer macOS versions. The OCLP developers have admitted that macOS Ventura support will be tough, but they've made progress in some crucial areas that should keep some older Macs kicking for a little bit longer.

[...] First, while macOS doesn't technically include system files for pre-AVX2 Intel CPUs, Apple's Rosetta 2 software does still include those files, since Rosetta 2 emulates the capabilities of a pre-AVX2 x86 CPU. By extracting and installing those files in Ventura, you can re-enable support on Ivy Bridge and older CPUs without AVX2 instructions. And this week, Grymalyuk showed off another breakthrough: working graphics support on old Metal-capable Macs, including machines as old as the 2014 5K iMac, the 2012 Mac mini, and even the 2008 cheese grater-style Mac Pro tower. The OCLP team still has other challenges to surmount, not least of which will involve automating all of these hacks so that users without a deep technical understanding of macOS's underpinnings can continue to set up and use the bootloader. Grymalyuk still won't speculate about a timeframe for official Ventura support in OCLP. But given the progress that has been made so far, it seems likely that people with 2012-and-newer Macs should still be able to run Ventura on their Macs without giving up graphics acceleration or other important features.
