Intel

Intel Enters the Laptop Discrete GPU Market With Xe Max (arstechnica.com) 32

An anonymous reader quotes a report from Ars Technica: This weekend, Intel released preliminary information on its newest laptop part -- the Xe Max discrete GPU, which functions alongside and in tandem with Tiger Lake's integrated Iris Xe GPU. We first heard about Xe Max at Acer's Next 2020 launch event, where it was listed as a part of the upcoming Swift 3x laptop -- which will only be available in China. The new GPU will also be available in the Asus VivoBook Flip TP470 and the Dell Inspiron 15 7000 2-in-1.

During an extended product briefing, Intel stressed to us that the Xe Max beats Nvidia's entry-level MX 350 chipset in just about every conceivable metric. In another year, this would have been exciting -- but the Xe Max is only slated to appear in systems that feature Tiger Lake processors, whose Iris Xe integrated GPUs already handily outperform the Nvidia MX 350 in both Intel's tests and our own. The confusion here largely springs from mainstream consumer expectations of a GPU versus what Intel's doing with the Xe Max. Our GPU tests largely revolve around gaming, using 3DMark's well-known benchmark suite, which includes gaming- and fps-focused tests such as Time Spy and Night Raid. Intel's expectations for the Xe Max instead revolve, almost entirely, around content creation with a side of machine learning and video encoding.

Xe Max is, roughly speaking, the same 96-Execution-Unit (EU) GPU found in the Tiger Lake i7-1185G7 CPU we've already tested this year -- the major differences, beyond not being on-die with the CPU, are a higher clock rate, dedicated RAM, and a separate TDP budget. Tiger Lake's Iris Xe has a peak clock rate of 1.35GHz, and it shares the CPU's TDP constraints. Iris Xe Max has its own 25W TDP and a higher peak clock rate of 1.65GHz. It also has its own 4GiB of dedicated RAM -- though that RAM is the same LPDDR4X-4266 that Tiger Lake itself uses, which is something of a first for discrete graphics and might lead to better power efficiency.
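Those specs imply a modest raw-throughput gap between the two GPUs. As a back-of-the-envelope sketch -- assuming Xe-LP's 8-wide FP32 execution units and counting a fused multiply-add as two operations, details the article doesn't spell out -- the peak numbers work out as follows:

```python
# Rough peak FP32 throughput for Iris Xe vs. Iris Xe Max.
# Assumptions (not stated in the article): each Xe-LP EU is 8 lanes
# wide for FP32, and a fused multiply-add counts as 2 FLOPs.

def peak_tflops(eus: int, clock_ghz: float, lanes: int = 8, flops: int = 2) -> float:
    """Peak TFLOPS = EUs * lanes * FLOPs per lane per clock * clock (GHz)."""
    return eus * lanes * flops * clock_ghz / 1000.0

print(f"Iris Xe     (96 EU @ 1.35 GHz): {peak_tflops(96, 1.35):.2f} TFLOPS")
print(f"Iris Xe Max (96 EU @ 1.65 GHz): {peak_tflops(96, 1.65):.2f} TFLOPS")
# The higher clock alone buys ~22% more peak throughput, before the
# separate 25W TDP and dedicated RAM are taken into account.
```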

Software

The FSF Is Looking To Update Its High Priority Free Software Projects List (phoronix.com) 33

AmiMoJo writes: As we roll into 2021, the Free Software Foundation is looking to update its list of high priority free software projects -- the projects meant to address "the most important threats, and most critical opportunities, that free software faces in the modern computing landscape." For now, the FSF is asking for help deciding what to include. The list once ranked PowerVR reverse engineering as very important, though that work never happened before PowerVR graphics became less common. In fact, many of the FSF's high priority projects never panned out, since the foundation contributed little in the way of resources to these causes and mostly just called attention to them. Free PDF support was another example, as were an open-source Skype replacement and reverse engineering of other popular technologies.

The FSF overhauled the list in 2017 after forming a committee to maintain it; now, with 2021 just around the corner, the foundation is looking to revise its high priority focus once more. It has issued a call for input, asking you to share with the High Priority Free Software Projects committee what you feel belongs on the list. Feedback is being collected through early January. The current list names broad "areas" the FSF feels are high priority for free software, as opposed to the particular projects it previously focused on.

Linux

SiFive Unveils Plan For Linux PCs With RISC-V Processors (venturebeat.com) 42

SiFive today announced it is creating a platform for Linux-based personal computers based on RISC-V processors. VentureBeat reports: Assuming customers adopt the processors and use them in PCs, the move might be part of a plan to create Linux-based PCs that use royalty-free processors. This could be seen as a challenge to computers based on designs from Intel, Advanced Micro Devices, Apple, or Arm, but giants of the industry don't have to cower just yet. The San Mateo, California-based company unveiled HiFive Unmatched, a development design for a Linux-based PC that uses its RISC-V processors. At the moment, these development PCs are early alternatives, most likely targeted at hobbyists and engineers who may snap them up when they become available in the fourth quarter for $665.

The SiFive HiFive Unmatched board will have a SiFive processor, dubbed the SiFive FU740, a five-core SoC with four SiFive U74 cores and one SiFive S7 core. The U-series cores are Linux-capable 64-bit application processor cores based on RISC-V, and they can be mixed and matched with other SiFive cores, as in the FU740 itself. All of these components leverage SiFive's existing intellectual property portfolio. The HiFive Unmatched board comes in the mini-ITX standard form factor to make it easy to build a RISC-V PC, and SiFive added standard industry connectors as well -- ATX power supplies, PCI-Express expansion, Gigabit Ethernet, and USB ports are all present on this single-board RISC-V development system.

The HiFive Unmatched board includes 8GB of DDR4 memory, 32MB of QSPI flash memory, and a microSD card slot on the motherboard. For debugging and monitoring, developers can access the console output of the board through the built-in microUSB type-B connector. Developers can expand it using PCI-Express slots, including both a PCIe general-purpose slot (PCIe Gen 3 x8) for graphics, FPGAs, or other accelerators and M.2 slots for NVME storage (PCIe Gen 3 x4) and Wi-Fi/Bluetooth modules (PCIe Gen 3 x1). There are four USB 3.2 Gen 1 type-A ports on the rear, next to the Gigabit Ethernet port, making it easy to connect peripherals. The system will ship with a bootable SD card that includes Linux and popular system developer packages, with updates available for download from SiFive.com. It will be available for preorders soon.
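For developers who pick up a board, one quick way to see what the FU740's cores report to the shipped Linux image is to parse /proc/cpuinfo, which on RISC-V includes per-core "isa" and "mmu" fields. A minimal sketch; the exact strings (e.g. rv64imafdc) are illustrative:

```python
# Sketch: summarize the RISC-V cores Linux sees on a board like the
# HiFive Unmatched. On RISC-V, /proc/cpuinfo exposes per-core fields
# such as "isa" and "mmu"; exact values here are illustrative.

def riscv_cpu_summary(path: str = "/proc/cpuinfo") -> list:
    cpus, current = [], {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:                 # blank line ends one core's record
                if current:
                    cpus.append(current)
                current = {}
                continue
            key, _, value = line.partition(":")
            current[key.strip()] = value.strip()
    if current:
        cpus.append(current)
    return cpus

for cpu in riscv_cpu_summary():
    # e.g. "0 rv64imafdc sv39" for a U74 application core
    print(cpu.get("processor"), cpu.get("isa"), cpu.get("mmu"))
```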

For some more context: Could RISC-V processors compete with Intel, ARM, and AMD?
Intel

Could RISC-V Processors Compete With Intel, ARM, and AMD? (venturebeat.com) 112

"As promised, SiFive has unveiled a new computer featuring the company's SiFive FU740 processor based on RISC-V architecture," reports Liliputing: The company, which has been making RISC-V chips for several years, is positioning its new SiFive HiFive Unmatched computer as a professional development board for those interested in working with RISC-V. But unlike the company's other HiFive boards, the new Unmatched model is designed so that it can be easily integrated into a standard PC...

SiFive says the system can support GNU/Linux distributions including Yocto, Debian, and Fedora.

"SiFive is releasing the HiFive Unleashed in an effort to afford developers the ability to build RISC-V based systems, using readily available, off-the-shelf parts," explains Forbes: SiFive says it built the board to address the market need for easily accessible RISC-V hardware to further advance development of new platforms, products, and software using the royalty-free ISA...

A short video demo shows the HiFive Unmatched installed in a common mid-tower PC chassis, running the included Linux distro, with an AMD Radeon graphics card pushing the pixels. In the video, the HiFive Unmatched is compiling an application and is shown browsing the web and opening a PDF. SiFive also notes that video playback is accelerated in hardware with the included version of Linux.

"At the moment, these development PCs are early alternatives, most likely targeted at hobbyists and engineers who may snap them up when they become available in the fourth quarter for $665," notes VentureBeat.

But they add that "While it's still early days, it's not inconceivable that RISC-V processors could someday be alternatives to Intel-based PCs and PC processors." The startup has raised $190 million to date, and former Qualcomm executive Patrick Little recently joined SiFive as CEO. His task will be to establish the company's RISC-V processors as an alternative to ARM. This move comes in the wake of Nvidia's $40 billion acquisition of the world's leading processor architecture.

If Little is also looking to challenge Intel and AMD in PCs, he'll have his work cut out for him. For starters, SiFive is currently focused on Linux-based PCs, not Microsoft Windows PCs. Secondly, SiFive wouldn't build these processors or computers on its own. Its customers — anyone brave enough to take on the PC giants — would have to do that.

"I wouldn't see this as SiFive moving out of the box. It's more like they're expanding their box," said Linley Group senior analyst Aakash Jani. "They're using their core architecture to enable other chip designers to build PCs, or whatever they plan to build."

Intel

Intel Begins Their Open-Source Driver Support For Vulkan Ray-Tracing With Xe HPG (phoronix.com) 10

In preparation for next year's Xe HPG graphics cards, Intel's open-source developers have begun publishing the patches enabling their "ANV" Vulkan Linux driver to support Vulkan ray-tracing. Phoronix reports: Jason Ekstrand, originally the lead developer on the Intel ANV driver, today posted the initial ray-tracing code for ANV in order to support VK_KHR_ray_tracing on the forthcoming hardware. This is the first time Intel has approved publication of this open-source code, and more is on the way. The code posted today isn't yet enough for working Vulkan ray-tracing, and it is written against the latest internal Khronos ray-tracing specification. At the moment they are not focusing on the older NVIDIA-specific ray-tracing extension, but they may handle it in the future if game vendors continue targeting it rather than the forthcoming finalized KHR version.

Among the other big-ticket items still to come in the near term are extending the ANV driver to support compiling and dispatching OpenCL kernels, new SPIR-V capabilities, and generic pointer support. Also needed is the actual support for compiling ray-tracing pipelines, managing acceleration structures, dispatching rays, and the platform support. The actual exposure of the support won't come until after The Khronos Group has firmed up its VK_KHR_ray_tracing extension. Some of this Intel-specific Vulkan ray-tracing code may prove useful to Mesa's Radeon Vulkan "RADV" driver as well. Intel engineers have been testing their latest ray-tracing support with ANV internally on Xe HPG.
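Once driver builds containing this work ship, a quick way to check whether your installed Vulkan driver advertises ray-tracing is to scan the output of the standard vulkaninfo tool for the relevant extension names. A rough sketch; the names come from the Khronos registry, and what a given driver reports depends entirely on the build:

```python
# Sketch: scan `vulkaninfo` output for KHR ray-tracing extensions.
# VK_KHR_ray_tracing was the provisional umbrella extension; the
# finalized spec split it into acceleration_structure and
# ray_tracing_pipeline. Requires the vulkaninfo tool to be installed.
import subprocess

WANTED = ("VK_KHR_ray_tracing", "VK_KHR_acceleration_structure",
          "VK_KHR_ray_tracing_pipeline")

out = subprocess.run(["vulkaninfo"], capture_output=True, text=True).stdout
hits = sorted({tok for line in out.splitlines() for tok in line.split()
               if tok.startswith(WANTED)})
print("\n".join(hits) if hits else "no KHR ray-tracing extensions reported")
```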

Wikipedia

WHO To Grant Wikipedia Free Use of Its Published Material To Combat Covid Misinformation (nytimes.com) 51

As part of efforts to stop the spread of false information about the coronavirus pandemic, Wikipedia and the World Health Organization announced a collaboration on Thursday: The health agency will grant the online encyclopedia free use of its published information, graphics and videos. The collaboration is the first between Wikipedia and a health agency. From a report: "We all consult just a few apps in our daily life, and this puts W.H.O. content right there in your language, in your town, in a way that relates to your geography," said Andrew Pattison, a digital content manager for the health agency who helped negotiate the contract. "Getting good content out quickly disarms the misinformation." Since its start in 2001, Wikipedia has become one of the world's 10 most consulted sites; it is frequently viewed for health information. The agreement puts much of the W.H.O.'s material into the Wikimedia "commons," meaning it can be reproduced or retranslated anywhere, without the need to seek permission -- as long as the material is identified as coming from the W.H.O. and a link to the original is included.

"Equitable access to trusted health information is critical to keeping people safe and informed," said Tedros Adhanom Ghebreyesus, the W.H.O.'s director general. His agency translates its work into six official languages, which do not include, for example, Hindi, Bengali, German or Portuguese, so billions of people cannot read its documents in their native or even second language. Wikipedia articles, by contrast, are translated into about 175 languages. The first W.H.O. items used under the agreement are its "Mythbusters" infographics, which debunk more than two dozen false notions about Covid-19. Future additions could include, for example, treatment guidelines for doctors, said Ryan Merkley, chief of staff at the Wikimedia Foundation, which produces Wikipedia. If the arrangement works out, it could be extended to counter misinformation regarding AIDS, Ebola, influenza, polio and dozens of other diseases, Mr. Merkley said, "But this was something that just had to happen now." Eventually, live links will be established that would, for example, update global case and death numbers on Wikipedia as soon as the W.H.O. posts them, Mr. Pattison said.

China

Huawei Announces Last Major Phone Before US Ban Forces Rethink (bloomberg.com) 20

Huawei introduced the Mate 40 smartphone series on Thursday, potentially its last major release powered by its self-designed Kirin chips. From a report: China's biggest tech company by sales has been stockpiling chips to get its signature device out in time to compete with Apple's iPhone 12 over the holidays. Huawei will have to overhaul its smartphone lineup after Trump administration sanctions that took effect in September curtailed its ability to design and manufacture advanced in-house chips by cutting it off from the likes of Taiwan Semiconductor Manufacturing Co. The company's consumer devices group, led by Richard Yu, was already prevented from shipping handsets with the full Google-augmented Android experience. But that didn't stop it from surpassing Samsung Electronics to become the world's best-selling smartphone maker in the summer, largely on the strength of growing domestic sales. Without a contractor to produce its own chips or the ability to buy processors from a supplier like Qualcomm, prognostications for the division's future are less rosy.

The 6.5-inch Mate 40 and 6.76-inch Mate 40 Pro feature the 5nm Kirin 9000 processor, second only to Apple's A14 in bringing that advanced manufacturing node to consumer devices. The system-on-chip contains 15.3 billion transistors, including eight CPU cores maxing out at a speed of 3.13GHz and 24 GPU cores that Huawei claims give it 52% faster graphics than Qualcomm's best offering. Both devices have sloping glass sides and in-display fingerprint sensors. The new rear "Space Ring" design accommodating Huawei's multi-camera system is reminiscent of the control wheel of iPods of yesteryear. It plays host to a 50-megapixel main camera accompanied by zoom and ultrawide lenses.

Games

AOC's Debut Twitch Stream Is One of the Biggest Ever (theverge.com) 120

Rep. Alexandria Ocasio-Cortez (D-NY) made her Twitch debut last night to play Among Us and quickly became one of the platform's biggest broadcasters. According to Twitch, her stream peaked at 435,000 viewers around the time of her first match. The Verge reports: That peak viewership puts her broadcast among the 20 biggest streams ever, according to the third-party metrics site TwitchTracker, and it ranks much higher if you look only at broadcasts from individual streamers. Ninja holds the record for an individual streamer, with more than 600,000 viewers during a Fortnite match with Drake in 2018. TwitchTracker's metrics suggest that AOC's stream could be in the top 10 for an individual streamer in terms of peak viewers.

Ocasio-Cortez's stream came together quickly. She tweeted Monday asking, "Anyone want to play Among Us with me on Twitch to get out the vote?" Major streamers quickly signed up -- she ended up being joined by Rep. Ilhan Omar (D-MN), Pokimane, HasanAbi, Disguised Toast, DrLupo, and more. Her stream even had graphics prepared, which Ocasio-Cortez said came from supporters who started making art after she tweeted. Despite only having minimal Among Us experience -- Ocasio-Cortez said Monday that she'd never played before, but seemed to have brushed up before the stream -- she did well in her first broadcast. She was chosen as an impostor in the first round and, with a partner, knocked out about half the field before getting caught. Omar later made it to the final three as an impostor before getting voted out by Ocasio-Cortez and Hasan.

Technology

Raspberry Pi Foundation Launches Compute Module 4 for Industrial Users (techcrunch.com) 40

The Raspberry Pi Foundation is launching a new product today -- the Compute Module 4. From a report: If you've been keeping an eye on the Raspberry Pi releases, you know that the flagship Raspberry Pi 4 was released in June 2019. The Compute Module 4 features the same processor, but packed in a compute module for industrial use cases. A traditional Raspberry Pi is a single-board computer with a ton of ports sticking out. Compute Modules are somewhat different. Those system-on-module variants are more compact single-board computers without any traditional ports. This approach lets you create a prototype using a traditional Raspberry Pi, and then order a bunch of Compute Modules to embed in your commercial products. "Over half of the seven million Raspberry Pi units we sell each year go into industrial and commercial applications, from digital signage to thin clients to process automation," Eben Upton wrote on the Raspberry Pi blog. Some things are identical between the Raspberry Pi 4 and the Compute Module 4, such as the 64-bit ARM-based processor with VideoCore VI graphics. This is going to represent a huge upgrade for previous Compute Module customers. In particular, you get much better video performance, with 4Kp60 hardware decode for H.265 video, 1080p60 hardware decode for H.264 video, and 1080p30 hardware encode of H.264 video. You can also take advantage of the dual HDMI interfaces to connect up to two 4K displays at 60 frames per second.
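Because the intended workflow is "prototype on a Pi 4, ship on a CM4," scripts occasionally need to know which board they're running on. On Raspberry Pi OS, the device-tree model string makes that easy; a small sketch, with illustrative model strings:

```python
# Sketch: distinguish a Compute Module 4 from a regular Raspberry Pi 4
# at runtime via the device-tree model string the kernel exposes.
# The exact strings are illustrative; check your own board's output.

def board_model(path: str = "/proc/device-tree/model") -> str:
    with open(path, "rb") as f:
        return f.read().rstrip(b"\x00").decode()  # NUL-terminated string

model = board_model()
print("Running on:", model)
if "Compute Module 4" in model:
    print("Production board: take the embedded-product code path.")
elif "Raspberry Pi 4" in model:
    print("Prototype board: full-size ports available.")
```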
Iphone

New Benchmark Shows iPhones Throttle So Hard They Lose Their Edge Over Android (hothardware.com) 133

MojoKid writes: Apple has repeatedly asserted its dominance in terms of performance versus competitive mobile platforms. And it has been historically true that, in cross-platform benchmarks, iPhones generally can beat out Android phones in both CPU and GPU (graphics) performance. However, a new benchmark recently released from trusted benchmark suite developer UL Benchmarks sheds light on what could be the iPhone's Achilles' heel in terms of performance -- more specifically, performance over extended durations.

The new benchmark, 3DMark Wild Life, employs Apple's Metal API for rendering on iOS and Vulkan on Android devices. In testing at HotHardware, for basic single-run tests, iPhones again trounce anything Android, including flagship devices like the Samsung Galaxy S20 Ultra, ASUS ROG Phone 3 and OnePlus 8. However, in the extended-duration Wild Life Stress Test, which loops the single test over and over for 20 minutes, the performance of the current flagship iPhone 11 Pro and its A13 Bionic craters essentially to Snapdragon 865/865+ levels, while Android phones like the OnePlus 8 maintain 99+% of their performance. Though this is just one gaming benchmark that employs the latest graphics technologies and APIs, it's interesting to see that perhaps Apple's focus on tuning for quick, bursty workloads (and maybe benchmark optimization too?) falls flat if the current class of top-end iPhone is pushed continuously.
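The stress-test methodology is straightforward to reproduce for any workload: loop a single benchmark pass for roughly 20 minutes and compare the worst loop score to the best. A generic sketch of that loop is below; run_benchmark_pass is a hypothetical stand-in for a real GPU workload, and the figures quoted above come from UL's tool, not from code like this:

```python
# Sketch: the gist of a looped "stress test" -- run one benchmark pass
# repeatedly for ~20 minutes, then report stability = worst / best.
# `run_benchmark_pass` is a placeholder for a real GPU workload that
# returns a score; it is not part of any real benchmarking API.
import time

def stress_test(run_benchmark_pass, duration_s: float = 20 * 60) -> float:
    scores = []
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        scores.append(run_benchmark_pass())  # one Wild Life-style loop
    stability = min(scores) / max(scores)    # 1.0 means no throttling
    print(f"{len(scores)} loops, stability {stability:.1%}")
    return stability
```

A phone that sustains its clocks scores near 100% stability; the throttling described above shows up as a much lower worst-to-best ratio.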

Technology

Razer Launches 120-Hz Screen Blade Stealth 13 Laptop With Fall Accessories Lineup (venturebeat.com) 9

Razer has unveiled its fall lineup of gaming products for its base of hardcore gamers, including new laptops and an ergonomic gaming chair. The company unveiled the gear at its first annual RazerCon, a weekend-long virtual festival filmed at the company's Las Vegas store. The event includes concerts with artists such as Deadmau5, DragonForce and Friends, Sabaton, and Speaker Honey. From a report: More than a million people were watching at the outset of RazerCon as Razer CEO Min-Liang Tan took the stage. He noted that Razer donated more than a million masks during the pandemic and that fans donated 75,000 more. He said Razer also created a $15 million COVID-19 relief fund, and the company is supporting green product design with its products and packaging. Tan also announced a partnership with Conservation International to fund the protection of trees worldwide.

Razer announced the latest version of its Razer Blade Stealth 13 laptop with an "ultrabook" design. It has an 11th Gen Intel Core i7-1165G7 processor running at up to 4.7GHz (with a base clock of 2.8GHz), a full HD OLED touch display option, and THX Spatial Audio. It also has an Nvidia GeForce GTX 1650 Ti graphics chip (the same GPU as the prior model, with 4GB of GDDR6 memory) that Razer says gives it 10% faster graphics performance, and the company claims the laptop is 2.7 times better at content creation than the previous version. The display has an option for operating at a 120-hertz refresh rate. Razer marketing manager Eugene Kuo said in a press briefing that the laptop is the company's first to combine the OLED screen with the faster refresh rate. The laptop runs at 28 watts, and its OLED panel can produce deeper blacks and higher contrast ratios. Razer acquired THX, which was founded by filmmaker George Lucas, and is now adding the THX Spatial Audio technology to its peripherals and computers. For gaming, Razer claims the spatial audio offers a competitive edge, as you can hear which direction enemies are coming from.
The Razer Blade Stealth 13 will be available this month at $1,800 on Razer.com, as well as through select retailers this fall.
Linux

Linux 5.9 Boosts CPU Performance With FSGSBASE Support (phoronix.com) 75

FSGSBASE support in Linux "has the possibility of helping Intel/AMD CPU performance especially in areas like context switching that had been hurt badly by Spectre/Meltdown and other CPU vulnerability mitigations largely on the Intel side," Phoronix wrote back in August. As it started its journey into the kernel, they provided a preview on August 10: The FSGSBASE support that was finally mainlined a few days ago for Linux 5.9 is off to providing a nice performance boost for both Intel and AMD systems... FSGSBASE support for the Linux kernel has been around a half-decade in the making and finally carried over the finish line by one of Microsoft's Linux kernel engineers...

FSGSBASE particularly helps context-switching-heavy workloads like I/O, and it allows user-space software to write to the x86_64 GS base without kernel interaction. That in turn has been of interest to Java and others... On Linux 5.9, where FSGSBASE is finally mainlined, it's enabled by default on supported CPUs. FSGSBASE can be disabled at kernel boot time via the "nofsgsbase" kernel option.
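Both halves of that are visible from user space on a running system: whether the CPU advertises the capability (the "fsgsbase" flag in /proc/cpuinfo) and whether it was disabled at boot. A small sketch:

```python
# Sketch: verify FSGSBASE on a Linux 5.9+ system. The CPU advertises
# the capability as the "fsgsbase" flag in /proc/cpuinfo, and the
# kernel honors it unless booted with the "nofsgsbase" option.

def file_contains(path: str, token: str) -> bool:
    with open(path) as f:
        return token in f.read()

cpu_ok = file_contains("/proc/cpuinfo", " fsgsbase")
disabled = file_contains("/proc/cmdline", "nofsgsbase")
print("CPU supports FSGSBASE:", cpu_ok)
print("Disabled on kernel command line:", disabled)
print("In use (on 5.9+):", cpu_ok and not disabled)
```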

Today on the Linux kernel mailing list, Linus Torvalds announced the release of Linux 5.9: Ok, so I'll be honest - I had hoped for quite a bit fewer changes this last week, but at the same time there doesn't really seem to be anything particularly scary in here. It's just more commits and more lines changed than I would have wished for.
And Phoronix reported: Linux 5.9 has a number of exciting improvements including initial support for upcoming Radeon RX 6000 "RDNA 2" graphics cards, initial Intel Rocket Lake graphics, NVMe zoned namespaces (ZNS) support, various storage improvements, IBM's initial work on POWER10 CPU bring-up, the FSGSBASE instruction is now used, 32-bit x86 Clang build support, and more. See our Linux 5.9 feature overview for the whole scoop on the many changes to see with this kernel.
Graphics

Nvidia CEO Anticipates Supply Shortages For the RTX 3080 and 3090 To Last Until 2021 (theverge.com) 98

Nvidia CEO Jensen Huang announced today that the company expects shortages of the Nvidia RTX 3080 and 3090 graphics cards to continue for the remainder of the year. The Verge reports: During a Q&A with press to cover its GTC announcements, Huang responded to the continuous shortages for both graphics cards. "I believe that demand will outstrip all of our supply through the year," Huang said. The RTX 3080 and 3090 had extremely rough launches, with both cards selling out within minutes of preorders going live, but Huang says the issue is not with supply but rather demand for both GPUs. "Even if we knew about all the demand, I don't think it's possible to have ramped that fast," Huang said. "We're ramping really really hard. Yields are great, the product's shipping fantastically, it's just getting sold out instantly." Nvidia has apologized for the launch of the RTX 3080 and the limited supply of the cards. The company plans to launch the $499 RTX 3070, but the release date has been pushed to October 29th "in the hopes that the company can work with retailers to get the cards to more customers on launch day," reports The Verge.
X

Graphical Linux Apps Are Coming to Windows Subsystem for Linux (zdnet.com) 89

ZDNet reports: At the Microsoft Build 2020 virtual developers' conference, CEO Satya Nadella announced that Windows Subsystem for Linux (WSL) 2.0 would soon support Linux GUIs and applications. That day is closer now than ever before. At the recent X.Org Developers Conference, Microsoft partner developer lead Steve Pronovost revealed that Microsoft has made it possible to run graphical Linux applications within WSL.

It's always been possible to run Linux graphical programs such as the GIMP graphics editor, Evolution e-mail client, and LibreOffice on WSL. But it wasn't easy. You had to install a third-party X Window display server, such as the VcXsrv Windows X Server in Windows 10, and then do some tuning with both Windows and Linux to get them to work together smoothly. The X Window System underlies almost all Linux graphical user interfaces. Now, Microsoft has ported a Wayland display server to WSL. Wayland is the modern successor to the X Window System (X applications run on it via XWayland). In WSL2, it connects the graphical Linux applications via a Remote Desktop Protocol (RDP) connection to the main Windows display. This means you can run Linux and Windows GUI applications simultaneously on the same desktop screen....
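For comparison, the fiddliest piece of the old third-party X server route was pointing DISPLAY at the Windows host, whose address WSL2 publishes as the nameserver entry in /etc/resolv.conf. A sketch of that one tuning step, assuming a VcXsrv-style X server listening on display :0 on the Windows side:

```python
# Sketch: the classic WSL2 "tuning" step the built-in server removes.
# WSL2 writes the Windows host's IP into /etc/resolv.conf as the
# nameserver; an X server on Windows (e.g. VcXsrv) listens on :0.
import os

def windows_host_ip(path: str = "/etc/resolv.conf") -> str:
    with open(path) as f:
        for line in f:
            if line.startswith("nameserver"):
                return line.split()[1]
    raise RuntimeError("no nameserver entry found")

os.environ["DISPLAY"] = f"{windows_host_ip()}:0"
print("DISPLAY =", os.environ["DISPLAY"])
# An X client launched from this process, e.g. os.system("gimp &"),
# would now render on the Windows-side X server.
```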

Craig Loewen, Microsoft WSL Program Manager, added in a Twitter thread that the key difference between using a third-party X server and the built-in Wayland server is that "You don't need to start up or start the server, we'll handle that for you." In addition, it comes with "Lovely integration with Windows," such as drop shadows and Linux icon support. Loewen also said you can run a Linux web browser in it. "We haven't tested it extensively with a full desktop environment yet, as we want to focus on running often asked for apps first, and primarily IDEs [integrated development environments] so you can run those in a full Linux environment," he said.

Don't get too excited about it just yet, though. Loewen continued, "We don't yet have an ETA for the beta channel, however, this work will be available in general for Insiders to try within the next couple of months."

Graphics

GeForce RTX 3090 Launched: NVIDIA's Biggest, Fastest Gaming GPU Tested (hothardware.com) 62

MojoKid writes: NVIDIA's GeForce RTX 3090, which just launched this morning, is the single most powerful graphics card money can buy currently (almost). It sits at the top of NVIDIA's product stack, and according to the company, it enables new experiences like smooth 8K gaming and seamless processing of massive content creation workloads, thanks in part to its 24GB of on-board GDDR6X memory. A graphics card like the GeForce RTX 3090 isn't for everyone, however. Though its asking price is about $1,000 lower than its previous-gen, Turing-based Titan RTX counterpart, it is still out of reach for most gamers. That said, content creation and workstation rendering professionals can more easily justify its cost.

In performance testing fresh off the NDA lift, versus the GeForce RTX 3080 that arrived last week, the more powerful RTX 3090's gains range from about 4% to 20%. Versus the more expensive previous generation Titan RTX though, the GeForce RTX 3090's advantages increase to approximately 6% to 40%. When you factor in complex creator workloads that can leverage the GeForce RTX 3090's additional resources and memory, however, it can be many times faster than either the RTX 3080 or Titan RTX. The GeForce RTX 3090 will be available in limited quantities today but the company pledges to make more available directly and through OEM board partners as soon as possible.

Intel

Intel Details Chips Designed For IoT and Edge Workloads (venturebeat.com) 14

Intel today announced the launch of new products tailored to edge computing scenarios like digital signage, interactive kiosks, medical devices, and health care service robots. From a report: The 11th Gen Intel Core processors, Atom x6000E Series, and Pentium and Celeron N and J Series bring new AI, security, functional safety, and real-time capabilities to edge customers, the chipmaker says, laying the groundwork for innovative future applications. Intel expects the edge market to be a $65 billion silicon opportunity by 2024. The company's own revenue in the space grew more than 20% to $9.5 billion in 2018. And according to a 2020 IDC report, up to 70% of all enterprises will process data at the edge within three years. To date, Intel claims to have cultivated an ecosystem of more than 1,200 partners, including Accenture, Bosch, ExxonMobil, Philips, Verizon, and ViewSonic, with over 15,000 end customer deployments across "nearly every industry."

The 11th Gen Core processors -- which Intel previewed in early September -- are enhanced for internet of things (IoT) use cases requiring high-speed processing, computer vision, and low-latency deterministic processing, the company says. They bring up to a 23% performance gain in single-threaded workloads, a 19% gain in multithreaded workloads, and up to a 2.95-times gain in graphics workloads versus the previous generation. New dual video decode engines allow the processors to ingest up to 40 simultaneous 1080p video streams at up to 30 frames per second and output four channels of 4K or two channels of 8K video. According to Intel, the combination of the 11th Gen's SuperFin process improvements, miscellaneous architectural enhancements, and Intel's OpenVINO software optimizations translates to 50% faster inferences per second compared with the previous 8th Gen processor using CPU mode, or up to 90% faster inferences using the processors' GPU-accelerated mode.
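Those stream counts translate into a large aggregate pixel rate, which is easy to sanity-check; note that the announcement gives no frame rate for the output channels, so 30 fps is assumed below:

```python
# Back-of-the-envelope pixel rates for Intel's stated decode/output
# figures. The output frame rate is NOT given in the announcement;
# 30 fps is an assumption for illustration.

def gpixels_per_s(streams: int, width: int, height: int, fps: int) -> float:
    return streams * width * height * fps / 1e9

ingest = gpixels_per_s(40, 1920, 1080, 30)  # 40 x 1080p30 decode
out_4k = gpixels_per_s(4, 3840, 2160, 30)   # 4 x 4K output (assumed 30 fps)
print(f"Ingest: {ingest:.2f} Gpixel/s")     # ~2.49 Gpixel/s
print(f"4K out: {out_4k:.2f} Gpixel/s")     # ~1.00 Gpixel/s
```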

Data Storage

Samsung's Fast, PCIe 4.0-ready 980 Pro SSD Can Future-Proof Your PC Build (theverge.com) 78

Samsung has unveiled its next high-performance NVMe 2280-sized M.2 drive, the 980 Pro. So far, it comes in three capacities shipping this month: 250GB for $89.99, 500GB for $149.99, and 1TB for $229.99. A 2TB model will arrive later this year, but Samsung didn't share a price. From a report: The standout feature of this drive is its compatibility with M.2 slots over the PCIe 4.0 interface. If you have a compatible motherboard, Samsung says the 980 Pro can go on a tear with sequential read / write speeds of up to 7,000MB/s and 5,000MB/s, respectively. It claims that this is two times faster performance than PCIe 3.0 SSDs and nearly 13 times faster than the more affordable but slower SATA SSDs. Of course, to get the best speeds out of this Samsung M.2 drive, you'll need a compatible motherboard with a PCIe 4.0 M.2 slot. Adoption of the tech is starting to ramp up, including mainstream computing products like AMD's third-generation Ryzen CPUs, its Radeon RX 5700 and 5700XT GPUs, and more recently, Nvidia's RTX 3080 graphics card. Sony and Microsoft are also using the technology for their custom SSD technologies in the PS5 and Xbox Series S / X consoles.
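That 7,000MB/s figure sits close to the interface ceiling, which you can verify from the PCIe numbers: each PCIe 4.0 lane runs at 16 GT/s with 128b/130b encoding, so an x4 link tops out just under 8GB/s before protocol overhead. A quick sketch of the math:

```python
# Why ~7,000 MB/s is near the PCIe 4.0 x4 ceiling: per-lane rates are
# 16 GT/s (Gen4) and 8 GT/s (Gen3), both with 128b/130b encoding.
# Protocol overhead (TLP headers, flow control) eats a bit more.

def pcie_gbps(lanes: int, gt_per_s: float, encoding: float = 128 / 130) -> float:
    """Usable link bandwidth in GB/s, before protocol overhead."""
    return lanes * gt_per_s * encoding / 8

print(f"PCIe 4.0 x4: {pcie_gbps(4, 16):.2f} GB/s ceiling")  # ~7.88 GB/s
print(f"PCIe 3.0 x4: {pcie_gbps(4, 8):.2f} GB/s ceiling")   # ~3.94 GB/s
```

The halved Gen3 ceiling also explains the "two times faster than PCIe 3.0 SSDs" claim.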
Hardware

'Huang's Law Is the New Moore's Law' (wsj.com) 55

As chip makers have reached the limits of atomic-scale circuitry and the physics of electrons, Moore's law has slowed, and some say it's over. But a different law, potentially no less consequential for computing's next half century, has arisen. WSJ: I call it Huang's Law, after Nvidia chief executive and co-founder Jensen Huang. It describes how the silicon chips that power artificial intelligence more than double in performance every two years. While the increase can be attributed to both hardware and software, its steady progress makes it a unique enabler of everything from autonomous cars, trucks and ships to the face, voice and object recognition in our personal gadgets. Between November 2012 and this May, performance of Nvidia's chips increased 317 times for an important class of AI calculations, says Bill Dally, chief scientist and senior vice president of research at Nvidia. On average, in other words, the performance of these chips more than doubled every year, a rate of progress that makes Moore's Law pale in comparison.
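That 317x figure can be sanity-checked directly: over the roughly 7.5 years between November 2012 and May 2020, it implies about 2.15x growth per year, comfortably ahead of the ~1.41x per year that a strict two-year doubling (the classic Moore's Law cadence) would deliver:

```python
# Check the "more than doubled every year" claim: 317x growth over
# the ~7.5 years from November 2012 to May 2020.

years = 7.5
factor = 317
annual = factor ** (1 / years)
print(f"Implied growth: {annual:.2f}x per year")                # ~2.15x
print(f"Two-year doubling pace: {2 ** (1 / 2):.2f}x per year")  # ~1.41x
```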

Nvidia's specialty has long been graphics processing units, or GPUs, which operate efficiently when there are many independent tasks to be done simultaneously. Central processing units, or CPUs, like the kind Intel specializes in, are much less efficient at that but better at executing a single, serial task very quickly. You can't chop up every computing process so that it can be efficiently handled by a GPU, but for the ones you can -- including many AI applications -- you can perform them many times as fast while expending the same power. Intel was a primary driver of Moore's Law, but it was hardly the only one. Perpetuating it required tens of thousands of engineers and billions of dollars in investment across hundreds of companies around the globe. Similarly, Nvidia isn't alone in driving Huang's Law -- and in fact its own type of AI processing might, in some applications, be losing its appeal. That's probably a major reason it moved this month to acquire chip architect Arm Holdings, another company key to the ongoing improvement in AI speed, for $40 billion.

Classic Games (Games)

Thieves' Guild: a BBS Game With the Best 1990s Pixel Graphics You've Never Seen (breakintochat.com) 55

"The sky is clear, the breeze is strong. A perfect day to make the long sea voyage to Mythyn," writes BBS history blogger Josh Renaud. "You prepare your galley, hire a crew of sailors, and cast off. But a few hours into your trip, the dreaded words appear: 'Thou seest rippling waters...'"

He's describing the beginning of a 27-year-old game that he'd been searching for since 2013. Slashdot reader Kirkman14 explains why the game is so special — and so rare: Thieves' Guild is a BBS door game for the Atari ST that came out in 1993. [A "door" connected the software running the dial-up Bulletin Board System to an external application.] What made Thieves' Guild unique was its graphical front-end client, which features dozens of eye-popping pixel art vignettes, along with simple animated sprites, sampled speech, and sound effects.

Because it was a BBS door game (strike 1) for the Atari ST (strike 2), not many people played Thieves' Guild or saw its front-end in the '90s. But it's worth re-discovering.

The game was created by Paul Witte and Herb Flower, who teamed up again in the early 2000s to produce the MMORPG "Linkrealms."

The Pascal source code for several versions of Thieves' Guild, including an unreleased 1995 port for PC BBSes, has been rescued and published on GitHub.

Intel

First Intel Tiger Lake Benchmarks Show Big CPU and Graphics Performance Gains (hothardware.com) 46

MojoKid writes: Intel formally announced its 11th Gen Core mobile processor family, known by the code name Tiger Lake, a few weeks back and made some bold performance claims for it as well. The company even compared its quad-core variant to AMD's 8-core Ryzen 7 4800U in gaming and content creation. Today Intel lifted the benchmark embargo on its Core i7-1185G7 Tiger Lake CPU with on-board Iris Xe graphics, and there's no question Tiger Lake is impressive. Intel indeed achieved single-threaded performance gains north of 20%, with even larger deltas for multithreaded throughput in some cases. In addition, Tiger Lake's integrated Iris Xe graphics put up over 2X the gaming performance of the company's 10th Gen Ice Lake processors, and it looks to be the fastest integrated graphics solution for laptops currently on the market, besting AMD's Ryzen 4000 series as well. Battery life measurements are still out, however, as retail-ready products have yet to hit the channel. Intel notes Tiger Lake-powered laptops from OEM partners should be available in the next month or so.
