China

Nvidia's Great Wall of GPUs: China's Hoarding Spree (tomshardware.com) 50

Press2ToContinue writes: 01.AI, a Chinese AI startup, has stockpiled enough Nvidia AI and HPC GPUs to last 18 months, in anticipation of a U.S. export ban. Looks like 01.AI is taking "go big or go home" to a new level with their GPU shopping spree. They're basically the dragon from "The Hobbit," but instead of gold, they're hoarding Nvidia chips. Maybe they're planning the ultimate LAN party or just really into extreme Minecraft graphics. Either way, it's like they say: "In the land of tech embargoes, the one with the secret GPU stash is king." Or in this case, playing 4D chess while the rest of us are stuck figuring out which port the HDMI cable goes into. "We have stockpiled a lot of Nvidia chips," said 01.AI founder Kai-Fu Lee in an interview with Bloomberg. "The jury is out on whether China in 1.5 years can make equivalent or nearly as good chips."

"We will have two parallel universes. Americans will supply their products and technologies to the U.S. and other countries and Chinese companies will build for China and whoever else uses Chinese products. The reality is that they will not compete very much in the same marketplace."
AMD

AMD Begins Polaris and Vega GPU Retirement Process, Reduces Ongoing Driver Support (anandtech.com) 19

As AMD is now well into their third generation of RDNA architecture GPUs, the sun has been slowly setting on AMD's remaining Graphics Core Next (GCN) designs, better known by the architecture names of Polaris and Vega. From a report: In recent weeks the company dropped support for those GPU architectures in their open source Vulkan Linux driver, AMDVLK, and now we have confirmation that the company is slowly winding down support for these architectures in their Windows drivers as well. Under AMD's extended driver support schedule for Polaris and Vega, the drivers for these architectures will no longer be kept at feature parity with the RDNA architectures. And while AMD will continue to support Polaris and Vega for some time to come, that support is being reduced to security updates and "functionality updates as available."

AMD users keeping a close eye on their driver releases will likely recognize that AMD already began this process back in September -- though AMD hadn't officially documented the change until now. As of AMD's September Adrenaline 23.9 driver series, AMD split up the RDNA and GCN driver packages, and with that they have also split the driver branches between the two architectures. As a result, only RDNA cards are receiving new features and updates as part of AMD's mainline driver branch (currently 23.20), while the GCN cards have been parked on a maintenance driver branch -- 23.19.

Government

'Stupid' Daylight Saving Time Ritual Continues. But Why? (nbcnews.com) 241

Many Americans want to abolish Daylight Saving Time, reports NBC News: Since 2018, nearly all states have passed or entertained legislation that would drop the twice-a-year time shift. And 19 states have passed laws or resolutions in support of year-round daylight saving time, according to data from the National Conference of State Legislatures. But there's a caveat: Nothing can change until Congress addresses a 1960s-era law blocking such action.
"This ritual of changing time twice a year is stupid," U.S. Senator Marco Rubio said in March, reintroducing legislation to end the twice-yearly clock change. In an official statement, the Senator announced that "Locking the clock has overwhelming bipartisan and popular support. This Congress, I hope that we can finally get this done."

But according to the Hill, "Both the House and Senate versions of the Sunshine Protection Act of 2023 haven't appeared to go far. The Senate bill has been read twice and referred to a committee, while the House bill has only been referred to a subcommittee."

While America waits, another medical association has come out in favor of ending Daylight Saving Time, reports NBC News: The American Academy of Sleep Medicine is a medical association whose professionals advocate for policies that improve sleep health. On Tuesday, the academy released a statement calling on the U.S. to eliminate daylight saving time completely, stating that standard time best supports health and safety, as it aligns with people's natural circadian rhythm. Undergoing the time switch itself raises the most concerns. Research shows that after the "spring forward" time change, workplace injuries, car crash deaths and heart attack risk have all increased. One 2023 study found that a week after the time change, people reported more dissatisfaction with sleep and higher rates of insomnia.
Privacy

Mozilla Launches Annual Digital Privacy 'Creep-o-Meter'. This Year's Status: 'Very Creepy' (mozilla.org) 60

"In 2023, the state of our digital privacy is: Very Creepy." That's the verdict from Mozilla's first-ever "Annual Consumer Creep-o-Meter," which attempts to set benchmarks for digital privacy and identify trends: Since 2017, Mozilla has published 15 editions of *Privacy Not Included, our consumer tech buyers guide. We've reviewed over 500 gadgets, apps, cars, and more, assessing their security features, what data they collect, and who they share that data with. In 2023, we compared our most recent findings with those of the past five years. It quickly became clear that products and companies are collecting more personal data than ever before — and then using that information in shady ways...

Products are getting more secure, but also a lot less private. More companies are meeting Mozilla's Minimum Security Standards like using encryption and providing automatic software updates. That's good news. But at the same time, companies are collecting and sharing users' personal data like never before. And that's bad news. Many companies now view their hardware or software as a means to an end: collecting that coveted personal data for targeted advertising and training AI. For example: The mental health app BetterHelp shares your data with advertisers, social media platforms, and sister companies. The Japanese car manufacturer Nissan collects a wide range of information, including sexual activity, health diagnosis data, and genetic information — but doesn't specify how.

An increasing number of products can't be used offline. In the past, the privacy conscious could always buy a connected device but turn off connectivity, making it "dumb." That's no longer an option in many cases. The number of connected devices that require apps and can't be used offline is increasing. This trend, coupled with the first, means it's harder and harder to keep your data private.

Privacy policies also need improvement. "Legalese, ambiguity, and policies that sprawl across multiple documents and URLs are the status quo. And it's getting worse, not better. Companies use these policies as a shield, not an actual resource for consumers." They note that Toyota has more than 10 privacy policy documents, and that it would actually take five hours to read all the privacy documents for the Meta Quest Pro VR headset.

In the end they advise opting out of data collection when possible, enabling security features, and "If you're not comfortable with a product's privacy, don't buy it. And, speak up. Over the years, we've seen companies respond to consumer demand for privacy, like when Apple reformed app tracking and Zoom made end-to-end encryption a free feature."

You can also take a quiz that calculates your own privacy footprint (based on whether you're using consumer tech products like the Apple Watch, Nintendo Switch, Nook, or Telegram). Mozilla's privacy advocates award the highest marks to privacy-protecting products like Signal, Sonos' SL Speakers, and the Pocketbook eReader (an alternative to Amazon's Kindle). (Although 100% of the cars reviewed by Mozilla "failed to meet our privacy and security standards.")

The graphics on the site help make its point. As you move your mouse across the page, the cartoon eyes follow its movement...
United States

US Chip Curbs Give Huawei a Chance To Fill the Nvidia Void In China (reuters.com) 23

An anonymous reader quotes a report from Reuters: U.S. measures to limit the export of advanced artificial intelligence (AI) chips to China may create an opening for Huawei to expand in its $7 billion home market as the curbs force Nvidia to retreat, analysts say. While Nvidia has historically been the leading provider of AI chips in China with a market share exceeding 90%, Chinese firms including Huawei have been developing their own versions of Nvidia's best-selling chips, including the A100 and the H100 graphics processing units (GPUs).

Huawei's Ascend AI chips are comparable to Nvidia's in terms of raw computing power, analysts and some AI firms such as China's iFlyTek say, but they still lag behind in performance. Jiang Yifan, chief market analyst at brokerage Guotai Junan Securities, said another key limiting factor for Chinese firms was the reliance of most projects on Nvidia's chips and software ecosystem, but that could change with the U.S. restrictions. "This U.S. move, in my opinion, is actually giving Huawei's Ascend chips a huge gift," Jiang said in a post on his social media Weibo account. This opportunity, however, comes with several challenges.

Many cutting-edge AI projects are built with CUDA, a popular programming architecture Nvidia has pioneered, which has in turn given rise to a massive global ecosystem that has become capable of training highly sophisticated AI models such as OpenAI's GPT-4. Huawei's own version is called CANN, and analysts say it is much more limited in terms of the AI models it is capable of training, meaning that Huawei's chips are far from a plug-and-play substitute for Nvidia. Woz Ahmed, a former chip design executive turned consultant, said that for Huawei to win Chinese clients from Nvidia, it must replicate the ecosystem Nvidia created, including supporting clients to move their data and models to Huawei's own platform. Intellectual property rights are also a problem, as many U.S. firms already hold key patents for GPUs, Ahmed said. "To get something that's in the ballpark, it is 5 or 10 years," he added.

Open Source

OpenBSD 7.4 Released (phoronix.com) 8

Long-time Slashdot reader Noryungi writes: OpenBSD 7.4 has been officially released. The 55th release of this BSD operating system, known for being security oriented, brings a lot of new things, including a dynamic tracer, pfsync improvements, loads of security goodies and virtualization improvements. Grab your copy today! As mentioned by Phoronix's Michael Larabel, some of the key highlights include:

- Dynamic Tracer (DT) and Utrace support on AMD64 and i386 OpenBSD
- Power savings for those running OpenBSD 7.4 on Apple Silicon M1/M2 CPUs by allowing deep idle states when available for the idle loop and suspend
- Support for the PCIe controller found on Apple M2 Pro/Max SoCs
- Support for updating AMD CPU microcode when a newer patch is available
- A workaround for the AMD Zenbleed CPU bug
- Various SMP improvements
- Updating the Direct Rendering Manager (DRM) graphics driver support against the upstream Linux 6.1.55 state
- New drivers for supporting various Qualcomm SoC features
- Support for soft RAID disks was improved for the OpenBSD installer
- Enabling of Indirect Branch Tracking (IBT) on x86_64 and Branch Target Identifier (BTI) on ARM64 for capable processors

You can download and view all the new changes via OpenBSD.org.
Hardware

First Mini-PC With Solid-State Active Cooling System Launches (newatlas.com) 19

Chinese multinational Zotac has announced a mini-PC built around two solid-state active cooling chips called the AirJet Pro and AirJet Mini. They're designed by a company called Frore Systems. New Atlas reports: The AirJet tech is described as a self-contained active heat sink featuring membranes inside that vibrate at ultrasonic frequency, generating "a powerful flow of air" that's pushed through vents at the top of the unit. These "high-velocity pulsating jets" remove heat from the processor and push it out through an integrated spout. Back at Computex 2023 in May, Zotac's new Zbox mini-PC was announced as the first recipient of Frore's cooling technology, in the shape of two near-silent AirJet Minis. Now the Zbox PI430AJ has launched to "select regions." Zotac reckons that the active cooling modules can only be heard if the user places an ear against the Zbox's housing.

The processor of choice for this "world's first" device is an Intel Core i3-N300 octacore chip that can clock up to 3.8 GHz. This features integrated UHD graphics, and is supported by 8 GB of LPDDR5 RAM. The Windows flavor comes with 512 GB of SSD storage, while users who opt for the barebones version will need to install their own. The 114.8 x 76 x 23.8-mm (4.52 x 2.99 x 0.95-in) mini-PC sports two USB 3.2 Type-A ports plus one USB-C, HDMI and DisplayPort, Ethernet LAN and a combo headphone/microphone jack. Bluetooth 5.2 and Wi-Fi 6 are cooked in for wireless needs.

AMD

AMD Pulls Graphics Driver After 'Anti-Lag+' Triggers Counter-Strike 2 Bans (arstechnica.com) 93

AMD has taken down the latest version of its AMD Adrenalin Edition graphics driver after Counter-Strike 2-maker Valve warned that players using AMD's Anti-Lag+ technology would be banned under Valve's anti-cheat rules. From a report: AMD first introduced regular Anti-Lag mitigation in its drivers back in 2019, limiting input lag by reducing the amount of queued CPU work when the processor was getting too far ahead of the GPU frame processing. But the newer Anti-Lag+ system -- which was first rolled out for a handful of games last month -- updates this system by "applying frame alignment within the game code itself," according to AMD. That method leads to additional lag reduction of up to 10 ms, according to AMD's data. That additional lag reduction could offer players a bit of a competitive advantage in these games (with the usual arguments about whether that advantage is "unfair" or not). But it's Anti-Lag+'s particular method of altering the "game code itself" that sets off warning bells for the Valve Anti-Cheat (VAC) system. After AMD added Anti-Lag+ support for Counter-Strike 2 in a version 23.10.1 update last week, VAC started issuing bans to unsuspecting AMD users who activated the feature.

"AMD's latest driver has made their 'Anti-Lag/+' feature available for CS2, which is implemented by detouring engine dll functions," Valve wrote on social media Friday. "If you are an AMD customer and play CS2, DO NOT ENABLE ANTI-LAG/+; any tampering with CS code will result in a VAC ban." Beyond Valve, there are also widespread reports of Anti-Lag+ triggering crashes or account bans in competitive online games like Modern Warfare 2 and Apex Legends. But Nvidia users haven't reported any similar problems with the company's Reflex system, which uses SDK-level code adjustments to further reduce input lag in games including Counter-Strike 2.
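Valve's objection is to the mechanism, not the latency win: "detouring" redirects a function inside the game process through third-party code, which is exactly the in-process tampering integrity checks like VAC are built to flag. A rough Python analogy of both sides (all names here are illustrative; real detouring patches native DLL machine code, not Python objects):

```python
import hashlib

def engine_tick(frame):
    """Stand-in for an engine function a driver might want to detour."""
    return frame + 1

# Anti-cheat style integrity baseline taken at startup: remember the
# function object and a hash of its bytecode.
baseline_ref = engine_tick
baseline_hash = hashlib.sha256(engine_tick.__code__.co_code).hexdigest()

# The "detour": calls are redirected through a wrapper that does extra
# work (here, a stand-in for frame-timing alignment) before the original.
original = engine_tick
def hooked(frame):
    aligned = frame  # pretend frame-alignment logic runs here
    return original(aligned)
engine_tick = hooked

# The behavior is preserved, but the integrity check no longer matches,
# so the player gets flagged even though nothing "cheaty" happened.
tampered = (engine_tick is not baseline_ref or
            hashlib.sha256(engine_tick.__code__.co_code).hexdigest() != baseline_hash)
print("tampered" if tampered else "clean")
```

The sketch shows why a well-intentioned driver feature and an anti-cheat scanner collide: the check cannot distinguish a latency optimization from a cheat, only that the code path changed.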

Desktops (Apple)

LabView App Abandons the Mac After Four Decades (appleinsider.com) 74

An anonymous reader quotes a report from AppleInsider: Having been created on a Mac in the 1980s, LabView has now announced that its latest macOS update will be the final release for the platform. LabView is a visual programming language tool that lets users connect virtual measurement equipment together to input and process data. AppleInsider staffers have seen it used across a variety of industries and applications to help design a complex monitoring system, or automate a test sequence.

It's been 40 years since Dr James Truchard and Jeff Kodosky began work on it and founded their firm, National Instruments. The first release of the software was in October 1986, when it was a Mac exclusive. In a 2019 interview, Jeff Kodosky said this was because "it was the only computer that had a 32-bit operating system, and it had the graphics we needed." Now National Instruments has told all current users that they have released an updated Mac version -- but it will be the last.

National Instruments says it will cease selling licenses for the Mac version in March 2024, and will also stop support. LabView has also been sold as a subscription and National Instruments says it will switch users to a "perpetual license for your continued use," though seemingly only if specifically requested. As yet, there have been few reactions on the NI.com forums. However, one post says "This came as a shocker to us as the roadmap still indicates support."
National Instruments says LabVIEW "will continue to be available on Windows and Linux OSes."
Graphics

Higher Quality AV1 Video Encoding Now Available For Radeon Graphics On Linux (phoronix.com) 3

Michael Larabel reports via Phoronix: For those making use of GPU-accelerated AV1 video encoding with the latest AMD Radeon graphics hardware on Linux, the upcoming Mesa 23.3 release will support the high-quality AV1 preset for offering higher quality encodes. Merged this week to Mesa 23.3 are the RadeonSI Video Core Next (VCN) changes for supporting the high quality AV1 encoding mode preset.

Mesa 23.3 will be out as stable later this quarter for those after slightly higher-quality AV1 encode support for Radeon graphics on this open-source driver stack, alongside many other recent Mesa driver improvements, especially on the Vulkan side with Radeon RADV and Intel ANV.

AI

Adobe's Next-Gen Firefly 2 Offers Vector Graphics, More Control and Photorealistic Renders (engadget.com) 6

Andrew Tarantola reports via Engadget: Just seven months after its beta debut, Adobe's Firefly generative AI is set to receive a trio of new models as well as more than 100 new features and capabilities, company executives announced at the Adobe Max 2023 event on Tuesday. The Firefly Image 2 model promises higher fidelity generated images and more granular controls for users and the Vector model will allow graphic designers to rapidly generate vector images, a first for the industry. The Design model for generating print and online advertising layouts offers another first: text-to-template generation.

Firefly Image 2 is the updated version of the existing text-to-image system. Like its predecessor, this one is trained exclusively on licensed and public domain content to ensure that its output images are safe for commercial use. It also accommodates text prompts in any of 100 languages. Adobe's AI already works across modalities, from still images, video and audio to design elements and font effects. As of Tuesday, it also generates vector art thanks to the new Firefly Vector model. Currently available in beta, this new model will also offer Generative Match, which will recreate a given artistic style in its output images. This will enable users to stay within the bounds of a brand's guidelines and quickly spin up new designs using existing images and their aesthetics, as well as generate seamless, tileable fill patterns and vector gradients.

The final model, Design, is geared heavily towards advertising and marketing professionals for use in generating print and online copy templates using Adobe Express. Users will be able to generate images in Firefly then port them to Express for use in a layout generated from the user's natural language prompt. Those templates can be generated in any of the popular aspect ratios and are fully editable through conventional digital methods. The Firefly web application will also receive three new features: Generative Match, as above, for maintaining consistent design aesthetics across images and assets. Photo Settings will generate more photorealistic images (think: visible, defined pores) as well as enable users to tweak images using photography metrics like depth of field, blur and field of view. The system's depictions of plant foliage will reportedly also improve under this setting. Prompt Guidance will even rewrite whatever hackneyed prose you came up with into something it can actually work from, reducing the need for the wholesale re-generation of prompted images.

Python

7% of Python Developers Are Still Using Python 2, Annual Survey Finds (infoworld.com) 53

"Python 3 was by far the choice over Python 2 in a late-2022 survey of more than 23,000 Python developers," reports InfoWorld, "but the percentage of respondents using Python 2 actually ticked up compared to the previous year." Results of the sixth annual Python Developers Survey, conducted by the Python Software Foundation and software tools maker JetBrains, were released September 27. The Python Developers Survey 2022 report indicates that 93% of respondents had adopted Python 3, while only 7% were still using Python 2. In the 2021 survey, though, 95% used Python 3 while 5% used Python 2. In 2020, Python 3 held a 94% to 6% edge. Dating back to 2017, 75% used Python 3 and 25% used Python 2...

The 2022 report said 29% of respondents still use Python 2 for data analysis, 24% use Python 2 for computer graphics, and 23% used Python 2 for devops. The survey also found that 45% of respondents are still using Python 3.10, which arrived two years ago, while just 2% still use Python 3.5 or lower. (Python 3.11 was released October 24, 2022, right when the survey was being conducted.)

Other findings from the survey:
  • 21% said they used Python for work only, while 51% said they used it for work and personal/educational use or side projects, and 21% said they used Python only for personal projects.
  • 85% of respondents said Python was their main language (rather than a secondary language).
  • The survey also gives the top "secondary languages" for the surveyed Python developers as JavaScript (37%), HTML/CSS (37%), SQL (35%), Bash/Shell (32%), and then C/C++ (27%).
  • When asked what they used Python for most, 22% said "Web Development", 18% said "Data Analysis," 12% said "Machine Learning," and 10% said "DevOps/System Administration/Writing Automation Scripts."
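For context on why that lingering 7% matters, the two best-known incompatibilities that keep legacy Python 2 code from running unmodified on Python 3 fit in a few lines (a minimal sketch for illustration, not something from the survey):

```python
import sys

# This sketch targets Python 3, the 93% case in the survey.
assert sys.version_info.major == 3

# Division: Python 2's `7 / 2` silently returned 3 (floor division);
# Python 3 returns a float and moved flooring to the `//` operator.
assert 7 / 2 == 3.5
assert 7 // 2 == 3

# print: a statement in Python 2 (`print "hi"`), a function in Python 3.
print("running on Python", sys.version_info.major)
```

Subtle behavior changes like the division one are part of why old data-analysis and devops scripts, the categories leading Python 2 usage in the survey, are often left unported.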

AI

OpenAI is Exploring Making Its Own AI Chips (reuters.com) 22

OpenAI, the company behind ChatGPT, is exploring making its own AI chips and has gone as far as evaluating a potential acquisition target, Reuters reported Friday, citing people familiar with the company's plans. From the report: The company has not yet decided to move ahead, according to recent internal discussions described to Reuters. However, since at least last year it discussed various options to solve the shortage of expensive AI chips that OpenAI relies on, according to people familiar with the matter. These options have included building its own AI chip, working more closely with other chipmakers including Nvidia and also diversifying its suppliers beyond Nvidia.

CEO Sam Altman has made the acquisition of more AI chips a top priority for the company. He has publicly complained about the scarcity of graphics processing units, a market dominated by Nvidia, which controls more than 80% of the global market for the chips best suited to run AI applications. The effort to get more chips is tied to two major concerns Altman has identified: a shortage of the advanced processors that power OpenAI's software and the "eye-watering" costs associated with running the hardware necessary to power its efforts and products.

Security

Vulnerable Arm GPU Drivers Under Active Exploitation, Patches May Not Be Available (arstechnica.com) 30

An anonymous reader quotes a report from Ars Technica: Arm warned on Monday of active ongoing attacks targeting a vulnerability in device drivers for its Mali line of GPUs, which run on a host of devices, including Google Pixels and other Android handsets, Chromebooks, and hardware running Linux. "A local non-privileged user can make improper GPU memory processing operations to gain access to already freed memory," Arm officials wrote in an advisory. "This issue is fixed in Bifrost, Valhall and Arm 5th Gen GPU Architecture Kernel Driver r43p0. There is evidence that this vulnerability may be under limited, targeted exploitation. Users are recommended to upgrade if they are impacted by this issue."

The advisory continued: "A local non-privileged user can make improper GPU processing operations to access a limited amount outside of buffer bounds or to exploit a software race condition. If the system's memory is carefully prepared by the user, then this in turn could give them access to already freed memory." [...] Getting access to system memory that's no longer in use is a common mechanism for loading malicious code into a location an attacker can then execute. This code often allows them to exploit other vulnerabilities or to install malicious payloads for spying on the phone user. Attackers often gain local access to a mobile device by tricking users into downloading malicious applications from unofficial repositories. The advisory mentions drivers for the affected GPUs being vulnerable but makes no mention of microcode that runs inside the chips themselves.
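The "access to already freed memory" the advisory describes is a classic use-after-free. A toy allocator sketched in Python (purely illustrative; the real bug lives in the kernel driver's native heap, and the names here are invented) shows why recycled memory is dangerous once a stale reference survives:

```python
class SlotAllocator:
    """Toy allocator modeling how real heaps recycle freed memory."""

    def __init__(self):
        self.slots = []       # backing "memory"
        self.free_list = []   # indices of freed, reusable slots

    def alloc(self, data):
        if self.free_list:            # reuse a freed slot first,
            i = self.free_list.pop()  # just like real allocators do
            self.slots[i] = data
            return i
        self.slots.append(data)
        return len(self.slots) - 1

    def free(self, i):
        self.free_list.append(i)      # contents not wiped, slot recyclable

heap = SlotAllocator()
victim = heap.alloc("trusted GPU kernel object")
heap.free(victim)             # driver bug: the 'victim' handle is kept around
heap.alloc("attacker-controlled payload")   # attacker grooms the heap

# The dangling handle now dereferences attacker data: if the kernel
# trusts what it reads here (say, a function pointer), game over.
print(heap.slots[victim])
```

This is the "carefully prepared memory" step the advisory alludes to: the attacker arranges for their own allocation to land in the freed slot before the driver touches it again.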

The most prevalent platform affected by the vulnerability is Google's line of Pixels, which are among the few Android models to receive security updates on a timely basis. Google patched Pixels in its September update against the vulnerability, which is tracked as CVE-2023-4211. Google has also patched Chromebooks that use the vulnerable GPUs. Any device that shows a patch level of 2023-09-01 or later is immune to attacks that exploit the vulnerability. The device driver on patched devices will show as version r44p1 or r45p0. CVE-2023-4211 is present in a range of Arm GPUs released over the past decade. The Arm chips affected are:

- Midgard GPU Kernel Driver: All versions from r12p0 - r32p0
- Bifrost GPU Kernel Driver: All versions from r0p0 - r42p0
- Valhall GPU Kernel Driver: All versions from r19p0 - r42p0
- Arm 5th Gen GPU Architecture Kernel Driver: All versions from r41p0 - r42p0

Hardware

Cooler Master's Sneaker Gaming PC Sells For $3,799 (tomshardware.com) 30

Cooler Master's new Sneaker X mimics the exact look of an authentic sneaker without sacrificing performance for its visuals, coming with a full-blown i7-13700K and an RTX 4070 Ti graphics card. The only drawback of the system is its high price, coming in at a whopping $3,799.99. From a report: Cooler Master's sneaker case is anything but ordinary; the case has all the visual cues of a giant sneaker doused in red and white. Inside is a mini-ITX chassis, equipped with a mini-ITX motherboard, Core i7-13700K CPU, GeForce RTX 4070 Ti graphics card, 32GB of DDR5 memory, 2TB of NVMe storage, and an 850W SFX power supply. Topping it all off is a massive 360mm AIO liquid cooler cooling the i7-13700K that sits at the bottom of the case, wholly hidden from prying eyes.

For ventilation, the shoe has perforated outline side panels and one sizeable RGB-illuminated intake fan on the side (which appears to be 120 mm in diameter). There are two cut-outs on the shoe's top and front, exposing the case's insides. To the rear (where your foot would slide if it were an actual shoe), a giant cut-out gives way to the large RTX 4070 Ti graphics card inside. The front cut-out is more obstructive but offers a view of the system's power supply, RGB-illuminated DDR5 RAM, and rear motherboard tray.

AI

Elvis Is Back in the Building, Thanks to Generative AI - and U2 (time.com) 27

U2's inaugural performance at the opening of Las Vegas's Sphere included a generative AI video collage projected hundreds of feet into the air — showing hundreds of surreal renderings of Elvis Presley.

An anonymous reader shares this report from Time magazine: The video collage is the creation of the artist Marco Brambilla, the director of Demolition Man and Kanye West's "Power" music video, among many other art projects. Brambilla fed hours of footage from Presley's movies and performances into the AI model Stable Diffusion to create an easily searchable library to pull from, and then created surreal new images by prompting the AI model Midjourney with questions like: "What would Elvis look like if he were sculpted by the artist who made the Statue of Liberty...?"

While Brambilla's Elvises prance across the Sphere's screen — which is four times the size of IMAX — the band U2 will perform their song "Even Better Than The Real Thing," as part of their three-month residency at the Sphere celebrating their 1991 album Achtung Baby... Earlier this year, U2 commissioned several artists, including Brambilla and Jenny Holzer, to create visual works that would accompany their performances of specific songs. Given U2's love for the singer and the lavish setting of the Sphere, Brambilla thought a tribute to Elvis would be extremely fitting. He wanted to create a maximalist work that encapsulated both the ecstatic highs and grimy lows of not only Elvis, but the city of Las Vegas itself. "The piece is about excess, spectacle, the tipping point for the American Dream," Brambilla said in a phone interview.

Brambilla was only given three-and-a-half months to execute his vision, less than half the time that he normally spends on video collages. So he turned to AI tools for both efficiency and extravagance. "AI can exaggerate with no end; there's no limit to the density or production value," Brambilla says. "And this seemed perfect for this project, because Elvis became a myth; a larger-than-life character..." Brambilla transplanted his Midjourney-created images into CG (computer graphics) software, where he could better manipulate them, and left some of the Stable Diffusion Elvis incarnations as they were. The result is a kaleidoscopic and overwhelming video collage filled with video clips both historical and AI-generated that will soon stretch hundreds of feet above the audience at each of U2's concerts.

"I wanted to create the feeling that by the end of it," Brambilla says, "We're in a place that is so hyper-saturated and so dense with information that it's either exhilarating or terrifying, or both."

Brambilla created an exclusive video, excerpted from the larger collage, for TIME. The magazine reports that one of the exact prompts he entered was:

"Elvis Presley in attire inspired by the extravagance of ancient Egypt and fabled lost civilizations in a blissful state. Encircling him, a brigade of Las Vegas sorceresses, twisted and warped mid-chant, reflect the influence of Damien Hirst and Andrei Riabovitchev, creating an atmosphere of otherworldly realism, mirroring the decadence and illusion of consumption."
Businesses

Nvidia's French Offices Raided In Cloud-Computing Competition Inquiry (reuters.com) 9

According to the Wall Street Journal, Nvidia's French offices were raided this week on suspicion the chipmaker engaged in anticompetitive practices. Reuters reports: The French competition authority, which disclosed the dawn raid on Wednesday, did not say what practices it was investigating or which company it had targeted, beyond saying it was in the "graphics cards sector." The French competition authority said that its operation this week followed a broader inquiry into the cloud-computing sector. The broader inquiry revolves around concerns that cloud-computing companies could use their access to computing power to exclude smaller competitors.

This week's operation had targeted Nvidia, which is the world's largest maker of chips used both for artificial intelligence and for computer graphics, the WSJ report added, citing people familiar with the raid. Chips originally made for computer graphics are suited for AI-related computing.

Graphics

Burkey Belser, Designer of Ubiquitous Nutrition Facts Label, Dies At 76 (washingtonpost.com) 26

An anonymous reader quotes a report from the Washington Post: Burkey Belser, a graphic designer who created the ubiquitous nutrition facts label -- a stark rectangle listing calories, fat, sodium and other content information -- that adorns the packaging of nearly every digestible product in grocery stores, died Sept. 25 at his home in Bethesda, Md. He was 76. The cause was bladder cancer, said his wife Donna Greenfield, with whom he founded the Washington, D.C., design firm Greenfield/Belser.

Mr. Belser's nutrition facts label -- rendered in bold and light Helvetica type -- was celebrated as a triumph of public health and graphic design when it debuted in 1994 following passage of the Nutrition Labeling and Education Act. Although some products had previously included nutritional information, there was no set standard, and the information was of little public health value in helping consumers make better food choices. The new law, drafted as obesity and other diet-related illnesses were surging, required mandatory food labels with nutrients presented in the context of a healthy 2,000-calorie-a-day diet.

Writing in a journal published by the Professional Association for Design, Massimo Vignelli, the renowned Italian designer, called Mr. Belser's creation a "clean testimonial of civilization, a statement of social responsibility, and a masterpiece of graphic design." The Food and Drug Administration chose Mr. Belser to design the nutrition label following his success creating the black and yellow energy guide label for appliances. Mr. Belser, once dubbed the "Steve Jobs of information design," had a fondness for exceedingly simple design that perfectly suited him to a job that required stripping nutritional facts down to the bare essentials.
The report proceeds to tell the tale of how Mr. Belser worked pro bono with his team to labor through three dozen iterations of the label, ultimately settling on "simplicity in itself."

"There's a harmony about it, and the presentation has no extraneous components to it," Belser told The Washington Post. "The words are left and right justified, which gave it a kind of balance. There was no grammatical punctuation like commas or periods or parentheses that would slow the reader down."

He compared the finished product -- which he later adapted to over-the-counter drugs -- to the Apple iPod. "The detail is so important that you wouldn't even notice it and if you didn't notice it's a sign that it succeeded," he said. "I don't know if anybody's heart beats faster when they see nutrition facts, but they sense a pleasure that they get the information they need."
Security

GPUs From All Major Suppliers Are Vulnerable To New Pixel-Stealing Attack (arstechnica.com) 26

An anonymous reader quotes a report from Ars Technica: GPUs from all six of the major suppliers are vulnerable to a newly discovered attack that allows malicious websites to read the usernames, passwords, and other sensitive visual data displayed by other websites, researchers have demonstrated in a paper (PDF) published Tuesday. The cross-origin attack allows a malicious website from one domain -- say, example.com -- to effectively read the pixels displayed by a website from example.org, or another different domain. Attackers can then reconstruct them in a way that allows them to view the words or images displayed by the latter site. This leakage violates a critical security principle that forms one of the most fundamental security boundaries safeguarding the Internet. Known as the same origin policy, it mandates that content hosted on one website domain be isolated from all other website domains. [...]

GPU.zip works only when the malicious attacker website is loaded into Chrome or Edge. The reason: for the attack to work, the browser must:

1. allow cross-origin iframes to be loaded with cookies,
2. allow rendering SVG filters on iframes, and
3. delegate rendering tasks to the GPU.

For now, GPU.zip is more of a curiosity than a real threat, but that assumes that Web developers properly restrict sensitive pages from being embedded by cross-origin websites. End users who want to check if a page has such restrictions in place should look for the X-Frame-Options or Content-Security-Policy headers in the source.
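As a rough illustration of the check described above (not part of the article), a small helper can inspect a response's headers for either of the two protections mentioned. The function name and logic here are our own sketch, written against the plain header dict that most HTTP clients return:

```python
def has_frame_protection(headers):
    """Return True if the response headers restrict cross-origin embedding.

    Looks for the X-Frame-Options header, or a Content-Security-Policy
    header that includes a frame-ancestors directive -- the two
    mitigations the article points users toward.
    """
    # Header names are case-insensitive, so normalize before checking.
    h = {k.lower(): v for k, v in headers.items()}
    if "x-frame-options" in h:
        return True
    return "frame-ancestors" in h.get("content-security-policy", "")


# Example header dicts, as an HTTP client might return them:
print(has_frame_protection({"X-Frame-Options": "DENY"}))                      # True
print(has_frame_protection({"Content-Security-Policy": "frame-ancestors 'none'"}))  # True
print(has_frame_protection({"Content-Type": "text/html"}))                    # False
```

A site whose sensitive pages return neither header can still be framed cross-origin, which is the precondition the attack relies on.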
"This is impactful research on how hardware works," a Google representative said in a statement. "Widely adopted headers can prevent sites from being embedded, which prevents this attack, and sites using the default SameSite=Lax cookie behavior receive significant mitigation against personalized data being leaked. These protections, along with the difficulty and time required to exploit this behavior, significantly mitigate the threat to everyday users. We are in communication and are actively engaging with the reporting researchers. We are always looking to further improve protections for Chrome users."

An Intel representative, meanwhile, said that the chipmaker has "assessed the researcher findings that were provided and determined the root cause is not in our GPUs but in third-party software." A Qualcomm representative said "the issue isn't in our threat model as it more directly affects the browser and can be resolved by the browser application if warranted, so no changes are currently planned." Apple, Nvidia, AMD, and ARM didn't comment on the findings.

An informational write-up of the findings can be found here.
Graphics

Nvidia Hints At Replacing Rasterization and Ray Tracing With Full Neural Rendering (tomshardware.com) 131

Mark Tyson writes via Tom's Hardware: A future version of [Deep Learning Super Sampling (DLSS) technology] is likely to include full neural rendering, hinted Bryan Catanzaro, a Nvidia VP of Applied Deep Learning Research. In a round table discussion organized by Digital Foundry (video), various video game industry experts talked about the future of AI in the business. During the discussion, Nvidia's Catanzaro raised a few eyebrows with his openness to predict some key features of a hypothetical "DLSS 10." [...]

We've seen significant developments in Nvidia's DLSS technology over the years. First launched with the RTX 20-series GPUs, many wondered about the true value of technologies like the Tensor cores being included in gaming GPUs. The first ray tracing games, and the first version of DLSS, were of questionable merit. However, DLSS 2.X improved the tech and made it more useful, leading to it being more widely utilized -- and copied, first via FSR2 and later with XeSS. DLSS 3 debuted with the RTX 40-series graphics cards, adding Frame Generation technology. With 4x upscaling and frame generation, neural rendering potentially allows a game to fully render only 1/8 (12.5%) of the pixels. Most recently, DLSS 3.5 offered improved denoising algorithms for ray tracing games with the introduction of Ray Reconstruction technology.
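The 1/8 figure above follows from two independent savings: 4x upscaling means only a quarter of the output pixels are rendered natively, and frame generation interpolates every other frame entirely. A quick sketch of that arithmetic (our own illustration, not Nvidia's formula):

```python
def rendered_fraction(upscale_factor, frame_generation=True):
    """Fraction of displayed pixels the GPU renders natively.

    upscale_factor: e.g. 4 for DLSS "4x" upscaling, meaning one
    native pixel per 4 output pixels. Frame generation halves the
    fraction again, since alternate frames are fully synthesized.
    """
    frac = 1.0 / upscale_factor
    if frame_generation:
        frac /= 2
    return frac


print(rendered_fraction(4))                          # 0.125  (1/8, as in the article)
print(rendered_fraction(4, frame_generation=False))  # 0.25   (upscaling alone)
```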

The above timeline raises questions about where Nvidia might go next with future versions of DLSS. And of course, "Deep Learning Super Sampling" no longer really applies, as the last two additions have targeted other aspects of rendering. Digital Foundry put that question to the group: "Where do you see DLSS in the future? What other problem areas could machine learning tackle in a good way?" Bryan Catanzaro immediately brought up the topic of full neural rendering. This idea isn't quite as far out as it may seem. Catanzaro reminded the panel that, at the NeurIPS conference in 2018, Nvidia researchers showed an open-world demo of a world being rendered in real-time using a neural network. During that demo the UE4 game engine provided data about what objects were in a scene, where they were, and so on, and the neural rendering provided all the on-screen graphics.
"DLSS 10 (in the far far future) is going to be a completely neural rendering system," Catanzaro added. The result will be "more immersive and more beautiful" games than most can imagine today.
