Supercomputing

University of Texas Announces Fastest Academic Supercomputer In the World (utexas.edu) 31

On Tuesday the University of Texas at Austin launched the fastest supercomputer at any academic facility in the world.

The computer -- named "Frontera" -- is also the fifth most-powerful supercomputer on earth. Slashdot reader aarondubrow quotes their announcement: The Texas Advanced Computing Center (TACC) at The University of Texas is also home to Stampede2, the second fastest supercomputer at any American university. The launch of Frontera solidifies UT Austin among the world's academic leaders in this realm...

Joined by representatives from the National Science Foundation (NSF) -- which funded the system with a $60 million award -- UT Austin, and technology partners Dell Technologies, Intel, Mellanox Technologies, DataDirect Networks, NVIDIA, IBM, CoolIT and Green Revolution Cooling, TACC inaugurated a new era of academic supercomputing with a resource that will help the nation's top researchers explore science at the largest scale and make the next generation of discoveries.

"Scientific challenges demand computing and data at the largest and most complex scales possible. That's what Frontera is all about," said Jim Kurose, assistant director for Computer and Information Science and Engineering at NSF. "Frontera's leadership-class computing capability will support the most computationally challenging science applications that U.S. scientists are working on today."

Frontera has been supporting science applications since June and has already enabled more than three dozen teams to conduct research on a range of topics from black hole physics to climate modeling to drug design, employing simulation, data analysis, and artificial intelligence at a scale not previously possible.

Here are more technical details from the announcement about just how fast this supercomputer really is.
Amiga

Ask Slashdot: What Would Computing Look Like Today If the Amiga Had Survived? 221

dryriver writes: The Amiga was a remarkable machine at the time it was released -- 1985. It had a multitasking-capable, GUI-driven OS and a mouse. It had a number of cleverly designed custom chips that gave the Amiga amazing graphics and sound capabilities far beyond the typical IBM/DOS PCs of its time. The Amiga was the multimedia beast of its time -- you could create animated and still 2D or 3D graphics on it, compose sophisticated electronic music, develop 2D or 3D 16-bit games, edit and process digital video (using Video Toaster), and of course, play some amazing games. And after the Amiga -- as well as the Atari ST, Archimedes and so on -- died, everybody pretty much had to migrate to either the PC or Mac platforms. If Commodore and the Amiga had survived and thrived, there might have been four major desktop platforms in use today: Windows, OSX, AmigaOS and Linux. And who knows what the custom chips (ASICs? FPGAs?) of an Amiga in 2019 might have been capable of -- the Amiga could possibly have been the platform that made nearly life-like games and VR/AR a reality, giving Nvidia's and AMD's GPUs a run for their money.

What do you think the computing landscape in 2019 would have looked like if the Amiga and AmigaOS as a platform had survived? Would Macs be as popular with digital content creators as they are today? Would AAA games target Windows 7/8/10 by default or tilt more towards the Amiga? Could there have been an Amiga hardware-based game console? Might AmigaOS and Linux have had a symbiotic existence of sorts, with AmigaOS co-existing with Linux on many enthusiasts' Amigas, or even becoming compatible with each other over time?
AMD

New Stats Suggest Strong Sales For AMD (techspot.com) 32

Windows Central reports: AMD surpassed NVIDIA when it comes to total GPU shipments according to new data from Jon Peddie Research (via Tom's Hardware). This is the first time that AMD ranked above NVIDIA in total GPU shipments since Q3 of 2014. AMD now has a 17.2 percent market share compared to NVIDIA's 16 percent according to the most recent data. Jon Peddie Research also reports that "AMD's overall unit shipments increased 9.85% quarter-to-quarter."

AMD gained 2.4 percentage points of market share over the last year while NVIDIA lost 1 point. Much of AMD's growth came in the last quarter, in which AMD's share changed by 1.5 percentage points compared to NVIDIA's 0.1.

The Motley Fool points out that "NVIDIA doesn't sell CPUs, so this comparison isn't apples-to-apples."

But meanwhile, TechSpot reports: German hardware retailer Mindfactory has published their CPU sales and revenue figures, and they show that for the past year AMD had sold slightly more units than Intel -- until Ryzen 3000 arrived. When the new hardware launched in July, AMD's sales volume doubled and their revenue tripled, going from 68% to 79% volume market share and 52% to 75% revenue share -- this is for a single major PC hardware retailer in Germany -- but the breakdown is very interesting to watch nonetheless...

Full disclaimer: German markets have historically been more biased towards Ryzen than American ones, and AMD's sales will fall a bit before stabilizing, while Intel's appear to have already plateaued.

Businesses

Ask Slashdot: Who Are the 'Steve Wozniaks' of the 21st Century? 155

dryriver writes: There are some computer engineers -- working in software or hardware, or both -- who were true pioneers. Steve Wozniak needs no introduction. Neither do Alan Turing, Ada Lovelace or Charles Babbage. Gordon Moore and Robert Noyce started Intel decades ago. John Carmack of Doom is a legend in realtime 3D graphics coding. Aleksey Pajitnov created Tetris. Akihiro Yokoi and Aki Maita invented the Tamagotchi. Jaron Lanier is the father of VR. Palmer Luckey hacked together the first Oculus Rift VR headset in his parents' garage in 2011. To the question: Who in your opinion are the 21st Century "Steve Wozniaks," working in either hardware or software, or both?
Science

Graphics That Seem Clear Can Easily Be Misread (scientificamerican.com) 54

An anonymous reader shares a report: "A picture is worth a thousand words." That saying leads us to believe that we can readily interpret a chart correctly. But charts are visual arguments, and they are easy to misunderstand if we do not pay close attention. Alberto Cairo, chair of visual journalism at the University of Miami, reveals pitfalls in an example diagrammed here. Learning how to better read graphics can help us navigate a world in which truth may be hidden or twisted. Say that you are obese, and you've grown tired of family, friends and your doctor telling you that obesity may increase your risk for diabetes, heart disease, even cancer -- all of which could shorten your life. One day you see this chart. Suddenly you feel better because it shows that, in general, the more obese people a country has (right side of chart), the higher the life expectancy (top of chart). Therefore, obese people must live longer, you think. After all, the correlation (red line) is quite strong.

The chart itself is not incorrect. But it doesn't really show that the more obese people are, the longer they live. A more thorough description would be: "At the national level -- country by country -- there is a positive association between obesity rates and life expectancy at birth, and vice versa." Still, this does not mean that a positive association will hold at the local or individual level or that there is a causal link. Two fallacies are involved. First, a pattern in aggregated data can disappear or even reverse once you explore the numbers at different levels of detail. If the countries are split by income levels, the strong positive correlation becomes much weaker as income rises. In the highest-income nations (chart on bottom right), the association is negative (higher obesity rates mean lower life expectancy). The pattern remains negative when you look at the U.S., state by state: life expectancy at birth drops as obesity rises. Yet this hides the second fallacy: the negative association can be affected by many other factors. Exercise and access to health care, for example, are associated with life expectancy. So is income.
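To make the first fallacy concrete, here is a minimal, self-contained Python sketch -- using invented numbers, not the article's chart data -- showing how an association that is negative inside every income group can still come out strongly positive once all the countries are pooled:

    import numpy as np

    # Purely synthetic, illustrative data: x = national obesity rate (%),
    # y = life expectancy at birth (years), grouped by income level.
    groups = {
        "low income":    (np.array([ 8.0,  9.0, 10.0, 11.0, 12.0]),
                          np.array([62.0, 61.5, 61.0, 60.5, 60.0])),
        "middle income": (np.array([18.0, 19.0, 20.0, 21.0, 22.0]),
                          np.array([72.0, 71.5, 71.0, 70.5, 70.0])),
        "high income":   (np.array([28.0, 29.0, 30.0, 31.0, 32.0]),
                          np.array([82.0, 81.5, 81.0, 80.5, 80.0])),
    }

    # Within every group, higher obesity goes with lower life expectancy...
    for name, (x, y) in groups.items():
        print(f"{name:13s}  r = {np.corrcoef(x, y)[0, 1]:+.2f}")   # -1.00 each

    # ...yet pooling all countries yields a strong positive correlation,
    # because richer groups have both higher obesity rates and longer lives.
    x_all = np.concatenate([x for x, _ in groups.values()])
    y_all = np.concatenate([y for _, y in groups.values()])
    print(f"all countries  r = {np.corrcoef(x_all, y_all)[0, 1]:+.2f}")  # about +0.97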

Intel

Intel's Line of Notebook CPUs Gets More Confusing With 14nm Comet Lake (arstechnica.com) 62

Intel today launched a new series of 14nm notebook CPUs code-named Comet Lake. Going by Intel's numbers, Comet Lake looks like a competent upgrade to its predecessor, Whiskey Lake. The interesting question -- and one largely left unanswered by Intel -- is why the company has decided to launch a new line of 14nm notebook CPUs less than a month after launching Ice Lake, its first 10nm notebook CPUs. From a report: As of this month, both the Comet Lake and Ice Lake notebook CPU lines consist of a full range of i3, i5, and i7 mobile CPUs in both high-power (U-series) and low-power (Y-series) variants. This adds up to a total of 19 Intel notebook CPU models released in August, and we expect to see a lot of follow-on confusion. During the briefing call, Intel executives did not want to respond to questions about differentiation between the Comet Lake and Ice Lake lines based on either performance or price, but the technical specs lead us to believe that Ice Lake is likely the far more attractive product line for most users.

Intel's U-series CPUs for both Comet Lake and Ice Lake operate at a nominal 15W TDP. Both lines also support a "Config Up" 25W TDP, which can be enabled by OEMs who choose to provide the cooling and battery resources necessary to support it. Things get more interesting for the lower-powered Y-series -- Ice Lake offers 9W/12W configurable TDP, but Comet Lake undercuts that to 7W/9W. This is already a significant drop in power budget, which Comet Lake takes even further by offering a new Config Down TDP, which is either 4.5W or 5.5W, depending on which model you're looking at. Comet Lake's biggest and meanest i7, the i7-10710U, sports 6 cores and 12 threads at a slightly higher boost clock rate than Ice Lake's 4C/8T i7-1068G7. However, the Comet Lake parts are still using the older UHD graphics chipset -- they don't get access to Ice Lake's shiny new Iris+, which offers up to triple the onboard graphics performance. This sharply limits the appeal of the Comet Lake i7 CPUs in any OEM design that doesn't include a separate Nvidia or Radeon GPU -- which would in turn bump the real-world power consumption and heat generation of such a system significantly.

The Internet

The Truth About Faster Internet: It's Not Worth It (wsj.com) 253

Americans are spending ever more for blazing internet speeds, on the promise that faster is better. Is that really the case? For most people, the answer is no. From a report: The Wall Street Journal studied the internet use of 53 of our journalists across the country, over a period of months, in coordination with researchers at Princeton University and the University of Chicago. Our panelists used only a fraction of their available bandwidth to watch streaming services including Netflix, Amazon Prime Video and YouTube, even simultaneously. Quality didn't improve much with higher speeds. Picture clarity was about the same. Videos didn't launch quicker. Broadband providers such as Comcast, Charter and AT&T are marketing speeds in the range of 250, 500 or even 1,000 megabits a second, often promising that streaming-video bingers will benefit. "Fast speeds for all of your shows," declares one online ad from Comcast. But for a typical household, the benefits of paying for more than 100 megabits a second are marginal at best, according to the researchers. That means many households are paying a premium for services they don't need.

To gauge how much bandwidth, or speed capacity, households need, it helps to look at an extreme scenario. Our users spent an evening streaming up to seven services simultaneously, including on-demand services like Netflix and live-TV services like Sling TV. We monitored the results. Peter Loftus, one of our panelists, lives outside Philadelphia and is a Comcast customer with a speed package of 150 megabits a second. Peter's median usage over 35 viewing minutes was 6.9 Mbps, 5% of the capacity he pays for. For the portion when all seven of his streams were going at once, he averaged 8.1 Mbps. At one point, for one second, Peter reached 65% of his capacity. Did his video launch faster or play more smoothly? Not really. The researchers said that to the extent there were differences in video quality such as picture resolution or the time it took to launch a show, they were marginal.
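A quick back-of-the-envelope check of those numbers (the per-stream bitrates below are ballpark figures commonly cited for HD and 4K streaming, not the Journal's measurements):

    # Peter's measured usage as a share of his 150 Mbps plan.
    plan_mbps = 150
    median_mbps = 6.9          # median over 35 viewing minutes
    seven_streams_mbps = 8.1   # average while seven streams ran at once

    print(f"median use: {median_mbps / plan_mbps:.1%} of the plan")                  # ~4.6%
    print(f"seven simultaneous streams: {seven_streams_mbps / plan_mbps:.1%}")       # ~5.4%

    # Even generous per-stream budgets leave a 100 Mbps connection with headroom.
    hd_mbps, uhd_mbps = 5, 25  # rough per-stream targets often cited for HD and 4K
    print(f"seven HD streams: ~{7 * hd_mbps} Mbps; three 4K streams: ~{3 * uhd_mbps} Mbps")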

AI

Cerebras Systems Unveils a Record 1.2 Trillion Transistor Chip For AI (venturebeat.com) 67

An anonymous reader quotes a report from VentureBeat: New artificial intelligence company Cerebras Systems is unveiling the largest semiconductor chip ever built. The Cerebras Wafer Scale Engine has 1.2 trillion transistors, the basic on-off electronic switches that are the building blocks of silicon chips. Intel's first 4004 processor in 1971 had 2,300 transistors, and a recent Advanced Micro Devices processor has 32 billion transistors. Samsung has actually built a flash memory chip, the eUFS, with 2 trillion transistors. But the Cerebras chip is built for processing, and it boasts 400,000 cores on 46,225 square millimeters. It is 56.7 times larger than the largest Nvidia graphics processing unit, which measures 815 square millimeters and has 21.1 billion transistors. The WSE also contains 3,000 times more high-speed, on-chip memory and has 10,000 times more memory bandwidth.
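A quick sanity check on that scale comparison, using only the figures quoted above (the 815 mm^2 part is simply the largest Nvidia GPU cited in the report):

    # Ratios implied by the quoted figures.
    wse_area_mm2, wse_transistors = 46_225, 1.2e12   # Cerebras Wafer Scale Engine
    gpu_area_mm2, gpu_transistors = 815, 21.1e9      # largest Nvidia GPU cited

    print(f"die area:    {wse_area_mm2 / gpu_area_mm2:.1f}x larger")      # ~56.7x
    print(f"transistors: {wse_transistors / gpu_transistors:.1f}x more")  # ~56.9x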
Graphics

Can JPEG XL Become the Next Free and Open Image Format? (jpeg.org) 106

"JPEG XL looks very promising as a next gen replacement for JPEG, PNG and GIF," writes icknay (Slashdot reader #96,963): JPEG was incredibly successful by solving a real problem with a free and open format. Other formats have tried to replace it, notably HEIF which will never by universal due to its patent licensing.

JPEG XL combines all the modern features, replacing JPEG, PNG and GIF, and has free and open licensing. The linked slides from Jon Sneyers review the many other attempts at replacing JPEG, plus the obligatory XKCD standards joke.

Microsoft

June Windows Security Patch Broke Many EMF Files (microsoft.com) 12

reg (Slashdot user #5,428) writes: A Windows security patch in June broke the display of many Windows Metafile graphics across all supported versions of Windows. As a result, figures in many old PowerPoint files and Word documents no longer display, nor do graphics from some popular applications, including at least some ESRI GIS products and files created using the devEMF driver in R. This likely also impacts EMF files created with open source office suites. While the problem can be fixed by recreating the files using a newer set of options, or by resorting to bitmaps, it means that presentations or documents that used to display perfectly no longer do. Microsoft promised a fix in July, but there is still no news of when it will be available.
Google

Nvidia CEO Says Google Is the Company's Only Customer Building Its Own Silicon At Scale (cnbc.com) 20

An anonymous reader quotes a report from CNBC: Nvidia's CEO, Jensen Huang, has reason to be concerned about other chipmakers, like AMD. But he's not worried about Nvidia's own big customers turning into competitors. Amazon, Facebook, Google and Tesla are among the companies that buy Nvidia's graphics cards and have kicked off chip-development projects. "There's really one I know of that have silicon that's really in production," Huang told CNBC in an interview on Thursday. That company would be Google, he said. "But our conversation with large customers is intensifying," Huang said. "We're talking to more large customers."

Google first announced its entrance into the data center AI chip-making world in 2016. As it came up with new versions, the web company pointed to performance advantages over graphics cards that were available at the time. Google hasn't started selling data center chips for training AI models to other companies, though. (Google has started offering various products that use its Edge tensor processing unit chips, but those chips aren't as powerful as the TPU chips for training AI models in Google's cloud.)

Intel

Intel Reveals 10th Gen Core Lineup For Laptops and 2-in-1s (venturebeat.com) 64

Intel's new generation of processors is nigh upon us, and it promises to be a doozy in several respects. VentureBeat: The Santa Clara chipmaker today launched 11 new 10-nanometer 10th Gen Intel Core processors (code-named Ice Lake) designed for slim laptops, 2-in-1s, and other high-end mobile form factors. In addition to capable new integrated graphics and enhanced connectivity courtesy of Wi-Fi 6 and Thunderbolt 3, the chips feature tweaks intended to accelerate task-specific workloads like AI inference and photo editing, as well as gaming. Intel expects the first 35 or so systems sporting Ice Lake-U and Ice Lake-Y chips to ship for the holiday season. Several that passed the chipmaker's Project Athena certification were previewed at Computex in Taiwan, including the Acer Swift 5, Dell XPS 13-inch 2-in-1, HP Envy 13, and Lenovo S940.

No matter which processor in the 10th Gen portfolio your future PC sports, its four cores (eight logical cores) paired with a 6MB or 8MB cache will support up to 32GB of LP4/x-3733 (or up to 64GB of DDR4-3200), and they'll sip 9W, 15W, or 28W of power while clocking up to 4.1GHz at maximum Turbo Boost frequency. Each chip has 16 PCIe 3.0 lanes for external use, and their memory controllers allow for idle power states for less intensive tasks. With respect to AI and machine learning, every laptop-bound 10th Gen processor -- whether Core i3, Core i5, or Core i7 -- boasts Sunny Cove cores with Intel AVX-512 Deep Learning Boost, a new instruction set that speeds up automatic image enhancements, photo indexing, media postprocessing, and other AI-driven tasks.

First Person Shooters (Games)

'Doom' Celebrates 25th Anniversary By Re-Releasing Three Classic Games (theverge.com) 102

To celebrate the 25th anniversary of Doom, there are now mobile versions in the Google Play Store, reports Android Police, "and since this is a 25th-anniversary release, it includes the fourth expansion Thy Flesh Consumed. It's the complete package folks, and it's finally available on Android as an official release."

And in addition, three Doom re-releases are now available for the Nintendo Switch, Xbox One, and PlayStation 4, reports the Verge -- though there was one little glitch: Bethesda says it'll get rid of the strange requirement that players must log into an online account before they play the newly re-released versions of Doom, Doom II, and Doom 3, which went live yesterday. Players quickly criticized Bethesda for the seemingly ridiculous limitation -- the first of these games was released more than 25 years ago, at a time when there was obviously no internet requirement. The online login will be made optional in a coming update, Bethesda said today.

The re-releases were part of QuakeCon 2019, reports IGN, noting that Bethesda also showcased Doom Eternal's multiplayer, "revealing new details about the unique 1v2 Battle Mode."

Forbes hails the re-releases as "id Software's fast-paced, ultra-violent...classic shooters," adding that "It appears the re-releases are actually Unity remakes, though whether much has changed beyond resolution support remains to be seen." But they may also have some other minor differences, Engadget reports: There have been a few other complaints as well, such as the addition of copy protection, graphical changes (such as filtering that softens those 1993-era graphics) and apparent music tempo slowdowns on the Switch. That's not including the removal of downloads for the old PS3 and Xbox 360 versions. It's not a fiasco, but these clearly weren't the straightforward ports some were expecting.
AI

Microsoft Invests $1 Billion in OpenAI To Develop AI Technologies on Azure (venturebeat.com) 28

Microsoft today announced that it would invest $1 billion in OpenAI, the San Francisco-based AI research firm cofounded by CTO Greg Brockman, chief scientist Ilya Sutskever, Elon Musk, and others, with backing from luminaries like LinkedIn cofounder Reid Hoffman and former Y Combinator president Sam Altman. From a report: In a blog post, Brockman said the investment will support the development of artificial general intelligence (AGI) -- AI with the capacity to learn any intellectual task that a human can -- with "widely distributed" economic benefits. To this end, OpenAI intends to partner with Microsoft to jointly develop new AI technologies for the Seattle company's Azure cloud platform and will enter into an exclusivity agreement with Microsoft to "further extend" large-scale AI capabilities that "deliver on the promise of AGI." Additionally, OpenAI will license some of its technologies to Microsoft, which will commercialize them and sell them to as-yet-unnamed partners, and OpenAI will train and run AI models on Azure as it works to develop new supercomputing hardware while "adhering to principles on ethics and trust."

According to Brockman, the partnership was motivated in part by OpenAI's continued pursuit of enormous computational power. Its researchers recently released analysis showing that from 2012 to 2018 the amount of compute used in the largest AI training runs grew by more than 300,000 times, with a 3.5-month doubling time, far exceeding the pace of Moore's Law. Perhaps exemplifying the trend is OpenAI's OpenAI Five, an AI system that squared off against professional players of the video game Dota 2 last summer. On Google's Cloud Platform -- in the course of training -- it played 180 years' worth of games every day on 256 Nvidia Tesla P100 graphics cards and 128,000 processor cores, up from 60,000 cores just a few years ago.
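As a crude check on how those two headline figures relate -- treating the window as a full six years and the growth factor as exactly 300,000x, both rougher than OpenAI's own analysis:

    import math

    growth = 300_000     # ">300,000 times" growth in largest-run training compute
    months = 6 * 12      # 2012-2018, treated as a full six years

    doublings = math.log2(growth)                              # ~18.2 doublings
    print(f"{doublings:.1f} doublings over {months} months")
    print(f"~{months / doublings:.1f} months per doubling")    # ~4, near the quoted 3.5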

Graphics

'Fortnite' Creator Epic Games Supports Blender Foundation With $1.2 Million (blender.org) 43

Long-time Slashdot reader dnix writes: Apparently having a lot of people playing Fortnite is good for the open source community too. Epic Games' MegaGrants program just awarded the Blender Foundation $1.2 million over the next three years...to further the success of the free and open source 3D creation suite.

It's part of the company's $100 million "MegaGrants" program, according to the announcement. "Open tools, libraries and platforms are critical to the future of the digital content ecosystem," said Tim Sweeney, founder and CEO of Epic Games. "Blender is an enduring resource within the artistic community, and we aim to ensure its advancement to the benefit of all creators."
Portables (Apple)

Apple Lowers Prices on the MacBook Air and MacBook Pro and Adds New Features (cnbc.com) 65

Apple today announced updates to the MacBook Air and 13-inch MacBook Pro. The MacBook Air price is being lowered to $1,099, but it will be offered to college students for $999. From a report: It will be sold in the same configurations as before, starting with 128GB of storage, but Apple updated the screen with new True Tone technology. That means it sets the colors on the screen to match the lighting of the room for a more comfortable viewing experience. It also includes the updated keyboard design that Apple first launched in updated MacBook Pros back in May. It should help to prevent some of the sticky key problems experienced in Apple's MacBooks. But this is not the full keyboard refresh that's rumored to ship with an entirely new keyboard configuration. The new 13-inch Retina MacBook Pro starts at $1,299 (or $1,199 for college students) and includes a quad-core processor in the entry-level model for the first time and improved graphics performance. Like the refresh in May, the entry-level models now also come with new keyboard materials to help prevent sticking keys.
AMD

In New Benchmark Tests, AMD Challenges Both Intel And Nvidia (hothardware.com) 130

"AMD is unleashing an arsenal of products today," writes Slashdot reader MojoKid.

Hot Hardware writes: The Zen 2-based AMD Ryzen 3000 series is easily one of the most anticipated product launches in the PC space in recent memory. AMD has essentially promised to address virtually all of the perceived shortcomings of the original Zen-based Ryzen processors, with the Ryzen 3000 series, while continuing to aggressively challenge Intel on multiple fronts -- performance, power, price, you name it.

MojoKid summarizes their analysis: In the benchmarks, performance has been improved across the board. The AMD Ryzen 9 3900X and Ryzen 7 3700X offered superior single and multi-thread performance versus their second-gen counterparts, and better latency characteristics that allowed them to occasionally overtake processors with more cores / threads in a few multi-threaded tests. On a couple of occasions, the 12-core / 24-thread Ryzen 9 3900X even outpaced the 16-core / 32-thread Threadripper 2950X. Performance versus Intel is more of a mixed bag, but the Ryzen 3000 series still looks strong. Single-thread performance is roughly on par with Intel's Coffee Lake-based Core i9-9900K, depending on the workload. Multi-threaded scaling is a dogfight strictly in terms of absolute performance, but because AMD offers more cores per dollar, the Ryzen 3000 series is the clear winner here.

Meanwhile, AMD's Radeon RX 5700 and Radeon RX 5700 XT Navi-powered graphics cards are set to take on NVIDIA's GeForce RTX offerings in the midrange.

There are more details in the original submission, and PC World writes that AMD's Radeon RX 5700 and Radeon RX 5700 XT graphics cards "represent a fresh start and a bright future for AMD, brimming with technologies that have never been seen in GPUs before." But they're not the only site offering a detailed analysis.

Forbes tested the chips on five high-workload games (including World of Tanks and Shadow of the Tomb Raider) and shared their results: As usual, things are very title and resolution dependent, but in general, [AMD's] RX 5700 XT proved to be a slightly better option at 1080p with the RTX 2060 Super mostly matching it above this... However, the 2060 Super was cooler-running and much quieter than its AMD counterpart, plus I'd argue it's better-looking too... You also get the option of Ray Tracing and DLSS, but even discounting those, the Nvidia card is a slightly better buy overall.

But CNET argues that AMD's new graphics cards "are very quiet. They are bigger and do require more power than the RTX 2060...but the 2060 Super has increased power requirements as well."

TL;DR: There's a chip war going on.
XBox (Games)

Microsoft's Cloud-Only Xbox Still In Development, Report Says (vg247.com) 36

According to Thurrott's Brad Sams, Microsoft is still developing a low-cost, cloud-based Xbox console. "Sams suggests the low-power box will be just capable enough to allow a player to 'move around in a virtual environment,' but crucially, game elements like NPCs, interactables, text and even graphics won't be there," reports VG247. From the report: This is obviously not playable, but the idea is that having movement calculations run locally reduces input lag compared to a 100% streamed game. Though this might make technical sense, it's hard to imagine the company pushing this hard unless the difference is really perceptible. Of course, there's a lot we still don't know about the streaming market, and some segment of that audience may opt to pay $80 or so to get an experience better than running the game through a web browser.
AMD

NVIDIA Launches GeForce RTX 2080 Super, RTX 2070 Super and RTX 2060 Super GPUs, Aims To One-Up AMD With More Power For the Same Price (hothardware.com) 63

MojoKid writes: NVIDIA just launched three new GeForce RTX gaming GPUs to battle against AMD's forthcoming Radeon RX 5700 series. The GeForce RTX 2080 Super, GeForce RTX 2070 Super and GeForce RTX 2060 Super will all be shipping this month. GeForce RTX 2070 Super and RTX 2060 Super cards are out making the rounds in benchmark reviews, while the RTX 2080 Super will arrive in a couple of weeks. The GeForce RTX 2070 Super is more than just an overclocked RTX 2070; it's actually based on the GeForce RTX 2080's TU104 NVIDIA Turing GPU with 40 active SMs, for a total of 2,560 CUDA cores at 1,605MHz and 1,770MHz base and boost clocks, respectively. The RTX 2060 Super is still based on the original TU106 GPU, but it has four additional SMs enabled, which brings the CUDA core count up to 2,176 (from 1,920) at a somewhat higher 1,470MHz base clock and a boost clock 30MHz lower, at 1,650MHz.

There is an additional 2GB of GDDR6 memory on the RTX 2060 Super as well, for a total of 8GB. Performance-wise, both cards are significant upgrades over the originals, with roughly 10 to 23 percent gains, depending on the resolution or application. The GeForce RTX 2070 Super is often faster than the pricier AMD Radeon VII, especially at 1440p. At 4K, however, the Radeon VII's memory bandwidth advantage often gives it an edge. The new GeForce RTX 2060 Super is faster than a Radeon RX Vega 64 more often than not. It will be interesting to see how these cards compete with AMD's Radeon RX 5700 Navi-based cards when they arrive later this month. NVIDIA could have just thrown a wrench in the works for AMD.
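The SM and CUDA-core counts above line up with Turing's 64 CUDA cores per SM, which is where the 2,560 and 2,176 totals come from:

    # Turing GPUs pack 64 CUDA cores into each SM.
    CORES_PER_SM = 64

    rtx_2070_super_sms = 40                    # TU104 with 40 active SMs
    rtx_2060_sms = 1920 // CORES_PER_SM        # original RTX 2060: 30 SMs
    rtx_2060_super_sms = rtx_2060_sms + 4      # four additional SMs enabled

    print(rtx_2070_super_sms * CORES_PER_SM)   # 2560
    print(rtx_2060_super_sms * CORES_PER_SM)   # 2176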

Open Source

Tech Press Rushes To Cover New Linus Torvalds Mailing List Outburst (zdnet.com) 381

"Linux frontman Linus Torvalds thinks he's 'more self-aware' these days and is 'trying to be less forceful' after his brief absence from directing Linux kernel developers because of his abusive language on the Linux kernel mailing list," reports ZDNet.

"But true to his word, he's still not necessarily diplomatic in his communications with maintainers..." Torvalds' post-hiatus outburst was directed at Dave Chinner, an Australian programmer who maintains the Silicon Graphics (SGI)-created XFS file system supported by many Linux distros. "Bullshit, Dave," Torvalds told Chinner on a mailing list. The comment from Chinner that triggered Torvalds' rebuke was that "the page cache is still far, far slower than direct IO" -- a problem Chinner thinks will become more apparent with the arrival of the newish storage-motherboard interface specification known as Peripheral Express Interconnect Express (PCIe) version 4.0. Chinner believes page cache might be necessary to support disk-based storage, but that it has a performance cost....

"You've made that claim before, and it's been complete bullshit before too, and I've called you out on it then too," wrote Torvalds. "Why do you continue to make this obviously garbage argument?" According to Torvalds, the page cache serves its correct purpose as a cache. "The key word in the 'page cache' name is 'cache'," wrote Torvalds.... "Caches work, Dave. Anybody who thinks caches don't work is incompetent. 99 percent of all filesystem accesses are cached, and they never do any IO at all, and the page cache handles them beautifully," Torvalds wrote.

"When you say the page cache is slower than direct IO, it's because you don't even see or care about the *fast* case. You only get involved once there is actual IO to be done."

"The thing is," reports the Register, "crucially, Chinner was talking in the context of specific IO requests that just don't cache well, and noted that these inefficiencies could become more obvious as the deployment of PCIe 4.0-connected non-volatile storage memory spreads."

Here's how Chinner responded to Torvalds on the mailing list. "You've taken one single statement I made from a huge email about complexities in dealing with IO concurrency, the page cache and architectural flaws in the existing code, quoted it out of context, fabricated a completely new context and started ranting about how I know nothing about how caches or the page cache work."

The Register notes their conversation also illustrates a crucial difference from closed-source software development. "[D]ue to the open nature of the Linux kernel, Linus's rows and spats play out in public for everyone to see, and vultures like us to write up about."
