Graphics

Razer's First Linux Laptop Called 'Sexy' - But It's Not for Gamers (theverge.com) 45

A headline at Hot Hardware calls it "a sexy Linux laptop with deep learning chops... being pitched as the world's most powerful laptop for machine learning workloads."

And here's how Ars Technica describes the Razer x Lambda Tensorbook (announced Tuesday): Made in collaboration with Lambda, the Linux-based clamshell focuses on deep-learning development. Lambda, which has been around since 2012, is a deep-learning infrastructure provider used by the US Department of Defense and "97 percent of the top research universities in the US," according to the company's announcement. Lambda's offerings include GPU clusters, servers, workstations, and cloud instances that train neural networks for various use cases, including self-driving cars, cancer detection, and drug discovery.

Dubbed "The Deep Learning Laptop," the Tensorbook has an Nvidia RTX 3080 Max-Q (16GB) and targets machine-learning engineers, especially those who lack a laptop with a discrete GPU and thus have to share a remote machine's resources, which negatively affects development.... "When you're stuck SSHing into a remote server, you don't have any of your local data or code and even have a hard time demoing your model to colleagues," Lambda co-founder and CEO Stephen Balaban said in a statement, noting that the laptop comes with PyTorch and TensorFlow for quickly training and demoing models from a local GUI interface without SSH. Lambda isn't a laptop maker, so it recruited Razer to build the machine....

While there are more powerful laptops available, the Tensorbook stands out because of its software package and Ubuntu Linux 20.04 LTS.

The Verge writes: While Razer currently offers faster CPU, GPU, and screen options in today's Blade lineup, it's not necessarily a bad deal if you love the design, considering how pricey Razer's laptops can be. But we've generally found in our reviews that Razer's thin machines run quite hot, and the Blade in question was no exception even with a quarter of the memory and a less powerful RTX 3060 GPU. Lambda's FAQ page does not address heat as of today.

Lambda is clearly aiming this one at prospective MacBook Pro buyers, and I don't just say that because of the silver tones. The primary hardware comparison the company touts is a 4x speedup over Apple's M1 Max in a 16-inch MacBook Pro when running TensorFlow.

Specifically, Lambda's web site claims the new laptop "delivers model training performance up to 4x faster than Apple's M1 Max, and up to 10x faster than Google Colab instances." And it credits this to the laptop's use of NVIDIA's GeForce RTX 3080 Max-Q 16GB GPU, adding that NVIDIA GPUs "are the industry standard for parallel processing, ensuring leading performance and compatibility with all machine learning frameworks and tools."
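
As a rough illustration of the workflow Lambda is selling (a local GPU plus preinstalled frameworks, no SSH hop), the sketch below uses standard PyTorch calls to confirm a CUDA GPU is visible and then runs a single training step. It is a generic example, not Lambda's bundled tooling.

```python
import torch
import torch.nn as nn

# Generic local sanity check (hypothetical example, not Lambda's software):
# confirm the discrete GPU is visible, then run one training step locally.
device = "cuda" if torch.cuda.is_available() else "cpu"
name = torch.cuda.get_device_name(0) if device == "cuda" else "no discrete GPU"
print("training on:", device, "-", name)

model = nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 128, device=device)        # a fake mini-batch
y = torch.randint(0, 10, (64,), device=device) # fake labels

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print("one local training step done, loss =", loss.item())
```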

"It looks like a fine package and machine, but pricing starts at $3,499," notes Hot Hardware, adding "There's a $500 up-charge to have it configured to dual-boot Windows 10."

The Verge speculates on what this might portend for the future. "Perhaps the recently renewed interest in Linux gaming, driven by the Steam Deck, will push Razer to consider Linux for its own core products as well."
Apple

How Apple's Monster M1 Ultra Chip Keeps Moore's Law Alive 109

By combining two processors into one, the company has squeezed a surprising amount of performance out of silicon. From a report: "UltraFusion gave us the tools we needed to be able to fill up that box with as much compute as we could," Tim Millet, vice president of hardware technologies at Apple, says of the Mac Studio. Benchmarking of the M1 Ultra has shown it to be competitive with the fastest high-end computer chips and graphics processor on the market. Millet says some of the chip's capabilities, such as its potential for running AI applications, will become apparent over time, as developers port over the necessary software libraries. The M1 Ultra is part of a broader industry shift toward more modular chips. Intel is developing a technology that allows different pieces of silicon, dubbed "chiplets," to be stacked on top of one another to create custom designs that do not need to be redesigned from scratch. The company's CEO, Pat Gelsinger, has identified this "advanced packaging" as one pillar of a grand turnaround plan. Intel's competitor AMD is already using a 3D stacking technology from TSMC to build some server and high-end PC chips. This month, Intel, AMD, Samsung, TSMC, and ARM announced a consortium to work on a new standard for chiplet designs. In a more radical approach, the M1 Ultra uses the chiplet concept to connect entire chips together.

Apple's new chip is all about increasing overall processing power. "Depending on how you define Moore's law, this approach allows you to create systems that engage many more transistors than what fits on one chip," says Jesus del Alamo, a professor at MIT who researches new chip components. He adds that it is significant that TSMC, at the cutting edge of chipmaking, is looking for new ways to keep performance rising. "Clearly, the chip industry sees that progress in the future is going to come not only from Moore's law but also from creating systems that could be fabricated by different technologies yet to be brought together," he says. "Others are doing similar things, and we certainly see a trend towards more of these chiplet designs," adds Linley Gwennap, author of the Microprocessor Report, an industry newsletter. The rise of modular chipmaking might help boost the performance of future devices, but it could also change the economics of chipmaking. Without Moore's law, a chip with twice the transistors may cost twice as much. "With chiplets, I can still sell you the base chip for, say, $300, the double chip for $600, and the uber-double chip for $1,200," says Todd Austin, an electrical engineer at the University of Michigan.
AMD

AMD Confirms Its GPU Drivers Are Overclocking CPUs Without Asking (tomshardware.com) 73

AMD has confirmed to Tom's Hardware that a bug in its GPU driver is, in fact, changing Ryzen CPU settings in the BIOS without permission. This condition has been shown to auto-overclock Ryzen CPUs without the user's knowledge. From the report: Reports of this issue began cropping up on various social media outlets recently, with users reporting that their CPUs had mysteriously been overclocked without their consent. The issue was subsequently investigated and tracked back to AMD's GPU drivers. AMD originally added support for automatic CPU overclocking through its GPU drivers last year, with the idea that adding in a Ryzen Master module into the Radeon Adrenalin GPU drivers would simplify the overclocking experience. Users with a Ryzen CPU and Radeon GPU could use one interface to overclock both. Previously, it required both the GPU driver and AMD's Ryzen Master software.

Overclocking a Ryzen CPU requires the software to manipulate the BIOS settings, just as we see with other software overclocking utilities. For AMD, this can mean simply engaging the auto-overclocking Precision Boost Overdrive (PBO) feature. This feature does all the dirty work, like adjusting voltages and frequency on the fly, to give you a one-click automatic overclock. However, applying a GPU profile in the AMD driver can now inexplicably alter the BIOS settings to enable automatic overclocking. This is problematic because of the potential ill effects of overclocking -- in fact, overclocking a Ryzen CPU automatically voids the warranty. AMD's software typically requires you to click a warning to acknowledge that you understand the risks associated with overclocking, and that it voids your warranty, before it allows you to overclock the system. Unfortunately, that isn't happening here.
Until AMD issues a fix, "users have taken to using the Radeon Software Slimmer to delete the Ryzen Master SDK from the GPU driver, thus preventing any untoward changes to the BIOS settings," adds Tom's Hardware.
Games

Epic's Unreal Engine 5 Has Officially Launched (axios.com) 60

Epic Games has officially launched Unreal Engine 5, its newest set of software tools for making video games. From a report: While Epic may be known to much of the public for its hit game Fortnite, its core business has long been Unreal. Epic's goal is to make Unreal Engine the definitive toolset for building games, virtual worlds and other digital entertainment. The engine is free to download, but Epic then takes a 5% cut of games that generate more than $1 million in revenue. During a kickoff video showcase today, the selling point wasn't just what the engine can do, but who is using it.

Epic workers demonstrated how the engine could be used to make and tweak modern games. Then came the key slide showing dozens of partners, including PlayStation, Xbox, and Tencent, followed by testimonials from recent Unreal Engine converts like CD Projekt RED, which had previously used its own tech to make games in the Witcher and Cyberpunk franchises. The showcase ended with the kicker that Crystal Dynamics, another studio that long operated its own in-house engine, would use Unreal on its next Tomb Raider game.
More details at Kotaku.
Intel

Intel Beats AMD and Nvidia with Arc GPU's Full AV1 Support (neowin.net) 81

Neowin notes growing support for the "very efficient, potent, royalty-free video codec" AV1, including Microsoft adding support for hardware acceleration of AV1 on Windows.

But AV1 even turned up in Intel's announcement this week of the Arc A-series, a new line of discrete GPUs, Neowin reports: Intel has been quick to respond, and the company has become the first GPU hardware vendor to offer full AV1 support on its newly launched Arc GPUs. While AMD and Nvidia both offer AV1 decoding with their newest GPUs, neither has support for AV1 encoding.

Intel says that hardware encoding of AV1 on its new Arc GPUs is 50 times faster than software-only solutions. It also says that AV1 encoding with Arc is 20% more efficient than HEVC. With this feature, Intel hopes to capture at least some of the streaming and video-editing market: users looking for a more robust AV1 encoding solution than CPU-based software approaches.

From Intel's announcement: Intel Arc A-Series GPUs are the first in the industry to offer full AV1 hardware acceleration, including both encode and decode, delivering faster video encode and higher quality streaming while consuming the same internet bandwidth. We've worked with industry partners to ensure that AV1 support is available today in many of the most popular media applications, with broader adoption expected this year. The AV1 codec will be a game changer for the future of video encoding and streaming.
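
In practice, most users would tap a hardware AV1 encoder through a tool like FFmpeg. The sketch below is a hypothetical Python wrapper, not something from Intel's announcement; it assumes an FFmpeg build that exposes Intel's Quick Sync AV1 encoder under the name av1_qsv (encoder availability and naming vary by build), and falls back to the software libaom-av1 encoder otherwise.

```python
import subprocess

def encode_av1(src: str, dst: str, use_hardware: bool = True) -> None:
    # "av1_qsv" assumes an FFmpeg build with Intel Quick Sync Video support
    # on an Arc GPU; "libaom-av1" is the CPU-based software encoder.
    encoder = "av1_qsv" if use_hardware else "libaom-av1"
    cmd = [
        "ffmpeg", "-y",
        "-i", src,
        "-c:v", encoder,
        "-b:v", "4M",    # target video bitrate; tune for your content
        "-c:a", "copy",  # leave the audio stream untouched
        dst,
    ]
    subprocess.run(cmd, check=True)

encode_av1("gameplay.mp4", "gameplay_av1.mkv")
```
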
Intel

Intel Enters Discrete GPU Market With Launch of Arc A-Series For Laptops (hothardware.com) 23

MojoKid writes: Today Intel finally launched its first major foray into discrete GPUs for gamers and creators. Dubbed Intel Arc A-Series, the lineup comprises five different chips built on two different Arc Alchemist SoCs. The company announced that its entry-level Arc 3 Graphics is shipping in market now, with laptop OEMs delivering new all-Intel products shortly. The two SoCs set the foundation across three performance tiers: Arc 3, Arc 5, and Arc 7.

For example, Arc A370M arrives today with 8 Xe cores, 8 ray tracing units, 4GB of GDDR6 memory linked to a 64-bit memory bus, and a 1,550MHz graphics clock. Graphics power is rated at 35-50W. However, Arc A770M, Intel's highest-end mobile GPU, will come with 32 Xe cores, 32 ray tracing units, 16GB of GDDR6 memory over a 256-bit interface, and a 1,650MHz graphics clock. Doing the math, Arc A770M could be up to 4X more powerful than Arc A370M. In terms of performance, Intel showcased benchmarks from a laptop outfitted with a Core i7-12700H processor and Arc A370M GPU that can top the 60 FPS threshold at 1080p in many games where integrated graphics could come up far short. Examples included Doom Eternal (63 fps) at high quality settings, and Hitman 3 (62 fps) and Destiny 2 (66 fps) at medium settings. Intel is also showcasing new innovations for content creators, with its Deep Link, Hyper Encode and AV1 video compression support offering big gains in video upscaling, encoding and streaming. Finally, Intel Arc Control software will offer unique features like Smooth Sync that blends tearing artifacts when V-Sync is turned off, as well as Creator Studio with background blur, frame tracking and broadcast features for direct game streaming services support.
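
That "up to 4X" figure follows straight from the spec sheet. A quick back-of-the-envelope check, assuming (unrealistically) that throughput scales linearly with Xe core count and clock speed:

```python
# Rough scaling estimate behind the "up to 4X" claim; real games won't scale
# perfectly with core count and clock, so treat this as an upper bound.
a370m = {"xe_cores": 8, "clock_mhz": 1550}
a770m = {"xe_cores": 32, "clock_mhz": 1650}

core_ratio = a770m["xe_cores"] / a370m["xe_cores"]     # 4.0
clock_ratio = a770m["clock_mhz"] / a370m["clock_mhz"]  # ~1.06
print(f"theoretical throughput ratio: {core_ratio * clock_ratio:.2f}x")  # ~4.26x
```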

Graphics

The Untold Story of the Creation of GIF At CompuServe In 1987 (fastcompany.com) 43

Back in 1987 Alexander Trevor worked with the GIF format's creator, Steve Wilhite, at CompuServe. 35 years later Fast Company tech editor Harry McCracken (also Slashdot reader harrymcc) located Trevor for the inside story: Wilhite did not come up with the GIF format in order to launch a billion memes. It was 1987, and he was a software engineer at CompuServe, the most important online service until an upstart called America Online took off in the 1990s. And he developed the format in response to a request from CompuServe executive Alexander "Sandy" Trevor. (Trevor's most legendary contribution to CompuServe was not instigating GIF: He also invented the service's CB Simulator — the first consumer chat rooms and one of the earliest manifestations of social networking, period. That one he coded himself as a weekend project in 1980.)

GIF came to be because online services such as CompuServe were getting more graphical, but the computer makers of the time — such as Apple, Commodore, and IBM — all had their own proprietary image types. "We didn't want to have to put up images in 79 different formats," explains Trevor. CompuServe needed one universal graphics format.

Even though the World Wide Web and digital cameras were still in the future, work was already underway on the image format that came to be known as JPEG. But it wasn't optimized for CompuServe's needs: For example, stock charts and weather graphics didn't render crisply. So Trevor asked Wilhite to create an image file type that looked good and downloaded quickly at a time when a 2,400 bits-per-second dial-up modem was considered torrid. Reading a technical journal, Wilhite came across a discussion of an efficient compression technique known as LZW for its creators — Abraham Lempel, Jacob Ziv, and Terry Welch. It turned out to be an ideal foundation for what CompuServe was trying to build, and allowed GIF to pack a lot of image information into as few bytes as possible. (Much later, computing giant Unisys, which gained a patent for LZW, threatened companies that used it with lawsuits, leading to a licensing agreement with CompuServe and the creation of the patent-free PNG image format.)
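
To give a flavor of the technique, here is a minimal byte-oriented LZW compressor in Python. It is a simplified sketch of the idea, not GIF's actual encoder, which uses variable-width codes tied to the image's color depth plus special clear and end-of-information codes.

```python
def lzw_compress(data: bytes) -> list[int]:
    """Compress a byte string into a list of integer codes using LZW."""
    # Start with a dictionary containing every single-byte sequence.
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    codes = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc  # keep extending the current match
        else:
            codes.append(dictionary[w])   # emit the longest known prefix
            dictionary[wc] = next_code    # learn the new sequence
            next_code += 1
            w = bytes([byte])
    if w:
        codes.append(dictionary[w])
    return codes

data = b"TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_compress(data)
print(f"{len(data)} bytes -> {len(codes)} codes")  # repeats collapse into dictionary references
```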

GIF officially debuted on June 15, 1987. "It met my requirements, and it was extremely useful for CompuServe," says Trevor....

GIF was also versatile, offering the ability to store multiple pictures, which made it handy for creating mini-movies as well as static images. And it spread beyond CompuServe, showing up in Mosaic, the first graphical web browser, and then in Netscape Navigator. The latter browser gave GIFs the ability to run in an infinite loop, a crucial feature that only added to their hypnotic quality. Seeing cartoon hamsters dance for a split second is no big whoop, but watching them shake their booties endlessly was just one of many cultural moments that GIFs have given us.
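
Those two features, multi-frame storage and endless looping, are still how animated GIFs are written today. A short sketch using the Pillow imaging library (modern tooling, obviously not what CompuServe or Netscape shipped):

```python
from PIL import Image

# Build a handful of grayscale frames and pack them into one looping GIF.
frames = [Image.new("L", (64, 64), color=c) for c in range(0, 256, 32)]
frames[0].save(
    "pulse.gif",
    save_all=True,              # store every frame, not just the first
    append_images=frames[1:],   # the remaining frames of the mini-movie
    duration=100,               # milliseconds per frame
    loop=0,                     # 0 = loop endlessly, as Netscape's extension allowed
)
```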

Media

Stephen Wilhite, Creator of the GIF, Has Died (theverge.com) 128

Stephen Wilhite, one of the lead inventors of the GIF, died last week from COVID at the age of 74, according to his wife, Kathaleen, who spoke to The Verge. From the report: Stephen Wilhite worked on GIF, or Graphics Interchange Format, which is now used for reactions, messages, and jokes, while employed at CompuServe in the 1980s. He retired around the early 2000s and spent his time traveling, camping, and building model trains in his basement.

Although GIFs are synonymous with animated internet memes these days, that wasn't the reason Wilhite created the format. CompuServe introduced them in the late 1980s as a way to distribute "high-quality, high-resolution graphics" in color at a time when internet speeds were glacial compared to what they are today. "He invented GIF all by himself -- he actually did that at home and brought it into work after he perfected it," Kathaleen said. "He would figure out everything privately in his head and then go to town programming it on the computer."

If you want to go more in-depth into the history of the GIF, the Daily Dot has a good explainer of how the format became an internet phenomenon.
In 2013, Wilhite weighed in on the long-standing debate about the correct pronunciation of the image format. He told The New York Times, "The Oxford English Dictionary accepts both pronunciations. They are wrong. It is a soft 'G,' pronounced 'jif.' End of story."
Google

Steam (Officially) Comes To Chrome OS 24

An anonymous reader shares a report: This may feel like deja vu because Google itself mistakenly leaked this announcement a few days ago, but the company today officially announced the launch of Steam on Chrome OS. Before you run off to install it, there are a few caveats: This is still an alpha release and only available on the more experimental and unstable Chrome OS Dev channel. The number of supported devices is also still limited since it'll need at least 8GB of memory, an 11th-generation Intel Core i5 or i7 processor, and Intel Iris Xe Graphics. That's a relatively high-end configuration for what are generally meant to be highly affordable devices and somewhat ironically means that you can now play games on Chrome OS devices that are mostly meant for business users. The list of supported games is also still limited but includes the likes of Portal 2, Skyrim, The Witcher 3: Wild Hunt, Half-Life 2, Stardew Valley, Factorio, Stellaris, Civilization V, Fallout 4, Disco Elysium and Untitled Goose Game.
Technology

Nvidia Takes the Wraps off Hopper, Its Latest GPU Architecture (venturebeat.com) 58

After much speculation, Nvidia today at its March 2022 GTC event announced the Hopper GPU architecture, a line of graphics cards that the company says will accelerate the types of algorithms commonly used in data science. Named for Grace Hopper, the pioneering U.S. computer scientist, the new architecture succeeds Nvidia's Ampere architecture, which launched roughly two years ago. From a report: The first card in the Hopper lineup is the H100, containing 80 billion transistors and a component called the Transformer Engine that's designed to speed up specific categories of AI models. Another architectural highlight includes Nvidia's MIG technology, which allows an H100 to be partitioned into seven smaller, isolated instances to handle different types of jobs. "Datacenters are becoming AI factories -- processing and refining mountains of data to produce intelligence," Nvidia founder and CEO Jensen Huang said in a press release. "Nvidia H100 is the engine of the world's AI infrastructure that enterprises use to accelerate their AI-driven businesses."

The H100 is the first Nvidia GPU to feature dynamic programming instructions (DPX), "instructions" in this context referring to segments of code containing steps that need to be executed. Developed in the 1950s, dynamic programming is an approach to solving problems using two key techniques: recursion and memoization. Recursion in dynamic programming involves breaking a problem down into sub-problems, ideally saving time and computational effort. In memoization, the answers to these sub-problems are stored so that the sub-problems don't need to be recomputed when they're needed later on in the main problem. Dynamic programming is used to find optimal routes for moving machines (e.g., robots), streamline operations on sets of databases, align unique DNA sequences, and more.
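
To make the recursion-plus-memoization idea concrete, here is a small generic example in ordinary Python (nothing Hopper- or DPX-specific): a memoized edit-distance computation of the kind used in sequence alignment.

```python
from functools import lru_cache

def edit_distance(a: str, b: str) -> int:
    """Minimum insertions, deletions, and substitutions to turn a into b."""
    @lru_cache(maxsize=None)  # memoization: each (i, j) sub-problem is solved once
    def solve(i: int, j: int) -> int:
        # Recursion: align the suffixes a[i:] and b[j:] via smaller sub-problems.
        if i == len(a):
            return len(b) - j
        if j == len(b):
            return len(a) - i
        if a[i] == b[j]:
            return solve(i + 1, j + 1)
        return 1 + min(
            solve(i + 1, j),      # delete a[i]
            solve(i, j + 1),      # insert b[j]
            solve(i + 1, j + 1),  # substitute a[i] with b[j]
        )
    return solve(0, 0)

print(edit_distance("GATTACA", "GCATGCU"))  # 4
```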

Iphone

Apple's iPhone Cameras Accused of Being 'Too Smart' (newyorker.com) 162

The New Yorker argues that photos on newer iPhones are "coldly crisp and vaguely inhuman, caught in the uncanny valley where creative expression meets machine learning...."

"[T]he truth is that iPhones are no longer cameras in the traditional sense. Instead, they are devices at the vanguard of 'computational photography,' a term that describes imagery formed from digital data and processing as much as from optical information. Each picture registered by the lens is altered to bring it closer to a pre-programmed ideal." In late 2020, Kimberly McCabe, an executive at a consulting firm in the Washington, D.C. area, upgraded from an iPhone 10 to an iPhone 12 Pro... But the 12 Pro has been a disappointment, she told me recently, adding, "I feel a little duped." Every image seems to come out far too bright, with warm colors desaturated into grays and yellows. Some of the photos that McCabe takes of her daughter at gymnastics practice turn out strangely blurry. In one image that she showed me, the girl's upraised feet smear together like a messy watercolor. McCabe said that, when she uses her older digital single-lens-reflex camera (D.S.L.R.), "what I see in real life is what I see on the camera and in the picture." The new iPhone promises "next level" photography with push-button ease. But the results look odd and uncanny. "Make it less smart — I'm serious," she said. Lately she's taken to carrying a Pixel, from Google's line of smartphones, for the sole purpose of taking pictures....

Gregory Gentert, a friend who is a fine-art photographer in Brooklyn, told me, "I've tried to photograph on the iPhone when light gets bluish around the end of the day, but the iPhone will try to correct that sort of thing." A dusky purple gets edited, and in the process erased, because the hue is evaluated as undesirable, as a flaw instead of a feature. The device "sees the things I'm trying to photograph as a problem to solve," he added. The image processing also eliminates digital noise, smoothing it into a soft blur, which might be the reason behind the smudginess that McCabe sees in photos of her daughter's gymnastics. The "fix" ends up creating a distortion more noticeable than whatever perceived mistake was in the original.

Earlier this month, Apple's iPhone team agreed to provide me information, on background, about the camera's latest upgrades. A staff member explained that, when a user takes a photograph with the newest iPhones, the camera creates as many as nine frames with different levels of exposure. Then a "Deep Fusion" feature, which has existed in some form since 2019, merges the clearest parts of all those frames together, pixel by pixel, forming a single composite image. This process is an extreme version of high-dynamic range, or H.D.R., a technique that previously required some software savvy.... The iPhone camera also analyzes each image semantically, with the help of a graphics-processing unit, which picks out specific elements of a frame — faces, landscapes, skies — and exposes each one differently. On both the 12 Pro and 13 Pro, I've found that the image processing makes clouds and contrails stand out with more clarity than the human eye can perceive, creating skies that resemble the supersaturated horizons of an anime film or a video game. Andy Adams, a longtime photo blogger, told me, "H.D.R. is a technique that, like salt, should be applied very judiciously." Now every photo we take on our iPhones has had the salt applied generously, whether it is needed or not....
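
The merging step described above can be sketched, in a drastically reduced form, as weighting each exposure per pixel and blending. The toy NumPy example below uses an invented well-exposedness weight on grayscale frames; it only illustrates the general idea of multi-frame fusion, not Apple's actual Deep Fusion or Smart HDR pipeline.

```python
import numpy as np

def naive_multi_frame_merge(frames: list[np.ndarray]) -> np.ndarray:
    """Toy merge: weight each pixel of each exposure by how close it is to
    mid-gray, then blend. Loosely inspired by exposure fusion; the real
    pipeline also aligns frames, denoises, and works per semantic region."""
    stack = np.stack([f.astype(np.float64) for f in frames])  # (N, H, W)
    # Well-exposedness weight: peaks at 0.5, falls off toward 0 and 1.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12
    return (weights * stack).sum(axis=0)

# Example: three synthetic exposures of the same grayscale scene in [0, 1].
rng = np.random.default_rng(0)
scene = rng.random((4, 4))
frames = [np.clip(scene * gain, 0, 1) for gain in (0.5, 1.0, 2.0)]
merged = naive_multi_frame_merge(frames)
print(merged.shape)  # (4, 4)
```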

The average iPhone photo strains toward the appearance of professionalism and mimics artistry without ever getting there. We are all pro photographers now, at the tap of a finger, but that doesn't mean our photos are good.

Data Storage

Nvidia Wants To Speed Up Data Transfer By Connecting Data Center GPUs To SSDs (arstechnica.com) 15

Microsoft brought DirectStorage to Windows PCs this week. The API promises faster load times and more detailed graphics by letting game developers make apps that load graphical data from the SSD directly to the GPU. Now, Nvidia and IBM have created a similar SSD/GPU technology, but they are aiming it at the massive data sets in data centers. From a report: Instead of targeting console or PC gaming like DirectStorage, Big accelerator Memory (BaM) is meant to provide data centers quick access to vast amounts of data in GPU-intensive applications, like machine-learning training, analytics, and high-performance computing, according to a research paper spotted by The Register this week. Entitled "BaM: A Case for Enabling Fine-grain High Throughput GPU-Orchestrated Access to Storage" (PDF), the paper by researchers at Nvidia, IBM, and a few US universities proposes a more efficient way to run next-generation applications in data centers with massive computing power and memory bandwidth. BaM also differs from DirectStorage in that the creators of the system architecture plan to make it open source.
Security

Cybercriminals Who Breached Nvidia Issue One of the Most Unusual Demands Ever (arstechnica.com) 60

shanen shares a report: Data extortionists who stole up to 1 terabyte of data from Nvidia have delivered one of the most unusual ultimatums ever in the annals of cybercrime: allow Nvidia's graphics cards to mine cryptocurrencies faster or face the imminent release of the company's crown-jewel source code. A ransomware group calling itself Lapsus$ first claimed last week that it had hacked into Nvidia's corporate network and stolen more than 1TB of data. Included in the theft, the group claims, are schematics and source code for drivers and firmware. A relative newcomer to the ransomware scene, Lapsus$ has already published one tranche of leaked files, which among other things included the usernames and cryptographic hashes for 71,335 of the chipmaker's employees.
Hardware

Raspberry Pi Alternative Banana Pi Reveals Powerful New Board (tomshardware.com) 78

Banana Pi has revealed a new board resembling the Raspberry Pi Compute Module 3. According to Tom's Hardware, it features a powerful eight-core processor, up to 8GB of RAM and 32GB eMMC. Additional features like ports will require you to connect it to a carrier board. From the report: At the core of the Banana Pi board is a Rockchip RK3588 SoC. This brings together four Arm Cortex-A76 cores at up to 2.6 GHz with four Cortex-A55 cores at 1.8 GHz in Arm's new DynamIQ configuration - essentially big.LITTLE in a single fully integrated cluster. It uses an 8nm process. The board is accompanied by an Arm Mali-G610 MP4 Odin GPU with support for OpenGLES 1.1, 2.0, and 3.2, OpenCL up to 2.2, and Vulkan 1.2. There's a 2D graphics engine supporting resolutions up to 8K too, with four separate displays catered for (one of which can be 8K 30FPS), and up to 8GB of RAM, though the SoC supports up to 32GB. Built-in storage is catered for by up to 128GB of eMMC flash. It offers 8K 30fps video encoding in the H.265, VP9, AVS2 and (at 30fps) H.264 codecs.

That carrier board is a monster, with ports along every edge. It looks to be about four times the area of the compute board, though no official measurements have been given. You get three HDMI ports (the GPU supports version 2.1), two gigabit Ethernet, two SATA, three USB Type-A (two 2.0 and one 3.0), one USB Type-C, micro SD, 3.5mm headphones, ribbon connectors, and what looks very like a PCIe 3.0 x4 micro slot. The PCIe slot seems to break out horizontally, an awkward angle if you are intending to house the board in a case. Software options include Android and Linux.

Security

Nvidia Says Employee, Company Information Leaked Online After Cyber Attack (cnn.com) 9

U.S. chipmaker Nvidia said on Tuesday a cyber attacker has leaked employee credentials and some company proprietary information online after its systems were breached. From a report: "We have no evidence of ransomware being deployed on the Nvidia environment or that this is related to the Russia-Ukraine conflict," the company's spokesperson said in a statement. The Santa Clara, California-based company said it became aware of the breach on Feb. 23. Nvidia added it was working to analyze the information that has been leaked and does not anticipate any disruption to the company's business. A ransomware outfit under the name "Lapsus$" has reportedly claimed to be responsible for the leak and seemingly has information about the schematics, drivers and firmware, among other data, about the graphics chips.
Security

Utility Promising To Restore Mining Performance on Nvidia GPUs Actually Malware (web3isgoinggreat.com) 23

Web3 is Going Great reports: The popular Tom's Hardware and PC Gamer websites both ran articles about a utility called "Nvidia RTX LHR v2 Unlocker", which claimed to increase the artificially-limited cryptocurrency mining performance of its RTX graphics cards. These graphics cards are shipped with performance-limiting software to reduce the GPUs' attractiveness to cryptocurrency miners, whose thirst for GPUs has made it difficult and expensive for gamers and various others to acquire the hardware. Unfortunately, both publications had to run a second article just a day later to warn their readers away from the software they had just advertised.
Intel

Intel Arc Update: Alchemist Laptops Q1, Desktops Q2; 4M GPUs Total for 2022 (anandtech.com) 12

As part of Intel's annual investor meeting taking place today, Raja Koduri, Intel's SVP and GM of the Accelerated Computing Systems and Graphics (AXG) Group delivered an update to investors on the state of Intel's GPU and accelerator group, including some fresh news on the state of Intel's first generation of Arc graphics products. AnandTech: Among other things, the GPU frontman confirmed that while Intel will indeed ship the first Arc mobile products in the current quarter, desktop products will not come until Q2. Meanwhile, in the first disclosure of chip volumes, Intel is now projecting that they'll ship 4mil+ Arc GPUs this year. In terms of timing, today's disclosure confirms some earlier suspicions that developed following Intel's CES 2022 presentation: that the company would get its mobile Arc products out before their desktop products. Desktop products will now follow in the second quarter of this year, a couple of months behind the mobile parts. And finally, workstation products, which Intel has previously hinted at, are on their way and will land in Q3.
Intel

Intel To Enter Bitcoin Mining Market With Energy-Efficient GPU (pcmag.com) 52

Intel is entering the blockchain mining market with an upcoming GPU capable of mining Bitcoin. From a report: Intel insists the effort won't put a strain on energy supplies or deprive consumers of chips. The goal is to create the most energy-efficient blockchain mining equipment on the planet, it says. "We expect that our circuit innovations will deliver a blockchain accelerator that has over 1,000x better performance per watt than mainstream GPUs for SHA-256 based mining," Intel's General Manager for Graphics, Raja Koduri, said in the announcement. (SHA-256 is a reference to the mining algorithm used to create Bitcoins.)
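
For reference, SHA-256-based Bitcoin mining boils down to repeatedly double-hashing a block header until the digest falls below a difficulty target. A toy Python illustration of that loop (real headers are 80 bytes and the target comes from the network; both are simplified away here):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin's proof of work hashes the block header twice with SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def toy_mine(header_prefix: bytes, difficulty_zero_bits: int = 16) -> int:
    """Brute-force a nonce until the double-SHA-256 digest has the required
    number of leading zero bits. Dedicated ASICs do nothing conceptually
    different; they just run this loop trillions of times per second."""
    target = 1 << (256 - difficulty_zero_bits)
    nonce = 0
    while True:
        digest = double_sha256(header_prefix + nonce.to_bytes(4, "little"))
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

print(toy_mine(b"example block header", 16))
```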

News of Intel's blockchain-mining effort first emerged last month after the ISSCC technology conference posted details about an upcoming Intel presentation titled: "Bonanza Mine: An Ultra-Low-Voltage Energy-Efficient Bitcoin Mining ASIC." ASICs are chips designed for a specific purpose, and also refer to dedicated hardware to mine Bitcoin. Friday's announcement from Koduri added that Intel is establishing a new "Custom Compute Group" to create chip platforms optimized for customers' workloads, including for blockchains.

Transportation

Tesla Now Runs the Most Productive Auto Factory In America (bloomberg.com) 198

An anonymous reader quotes a report from Bloomberg: Elon Musk has a very specific vision for the ideal factory: densely packed, vertically integrated and unusually massive. During Tesla's early days of mass production, he was chided for what was perceived as hubris. Now, Tesla's original California factory has achieved a brag-worthy title: the most productive auto plant in North America. Last year Tesla's factory in Fremont, California, produced an average of 8,550 cars a week. That's more than Toyota's juggernaut in Georgetown, Kentucky (8,427 cars a week), BMW AG's Spartanburg hub in South Carolina (8,343) or Ford's iconic truck plant in Dearborn, Michigan (5,564), according to a Bloomberg analysis of production data from more than 70 manufacturing facilities.

In a year when auto production around the world was stifled by supply-chain shortages, Tesla expanded its global production by 83% over 2020 levels. Its other auto factory, in Shanghai, tripled output to nearly 486,000. In the coming weeks, Tesla is expected to announce the start of production at two new factories -- Gigafactory Berlin-Brandenburg, its first in Europe, and Gigafactory Texas in Austin. Musk said in October that he plans to further increase production in Fremont and Shanghai by 50%. [...] Once Tesla flips the switch on two new factories, what comes next? Musk has a longstanding target to increase vehicle deliveries by roughly 50% a year. To continue such growth, Tesla will need to either open more factories or make the facilities even more productive. Musk said in October that he's working on both. Site selection for the next Gigafactories begins this year.

AI

Meta Unveils New AI Supercomputer (wsj.com) 48

An anonymous reader quotes a report from The Wall Street Journal: Meta said Monday that its research team built a new artificial intelligence supercomputer that the company maintains will soon be the fastest in the world. The supercomputer, the AI Research SuperCluster, was the result of nearly two years of work, often conducted remotely during the height of the pandemic, and led by the Facebook parent's AI and infrastructure teams. Several hundred people, including researchers from partners Nvidia, Penguin Computing and Pure Storage, were involved in the project, the company said.

Meta, which announced the news in a blog post Monday, said its research team currently is using the supercomputer to train AI models in natural-language processing and computer vision for research. The aim is to boost capabilities to one day train models with more than a trillion parameters on data sets as large as an exabyte, which is roughly equivalent to 36,000 years of high-quality video. "The experiences we're building for the metaverse require enormous compute power, and RSC will enable new AI models that can learn from trillions of examples, understand hundreds of languages, and more," Meta CEO Mark Zuckerberg said in a statement provided to The Wall Street Journal. Meta's AI supercomputer houses 6,080 Nvidia graphics-processing units, putting it fifth among the fastest supercomputers in the world, according to Meta.

By mid-summer, when the AI Research SuperCluster is fully built, it will house some 16,000 GPUs, becoming the fastest AI supercomputer in the world, Meta said. The company declined to comment on the location of the facility or the cost. [...] Eventually the supercomputer will help Meta's researchers build AI models that can work across hundreds of languages, analyze text, images and video together and develop augmented reality tools, the company said. The technology also will help Meta more easily identify harmful content and will aim to help Meta researchers develop artificial-intelligence models that think like the human brain and support rich, multidimensional experiences in the metaverse. "In the metaverse, it's one hundred percent of the time, a 3-D multi-sensorial experience, and you need to create artificial-intelligence agents in that environment that are relevant to you," said Jerome Pesenti, vice president of AI at Meta.
