AMD Unveils 2nd Gen Ryzen and Threadripper CPUs, 7nm Vega Mobile GPUs At CES

MojoKid writes: AMD unveiled a number of upcoming chip products for the new year at CES 2018, including updated next-generation Ryzen and Threadripper desktop processors covering every market segment from mobile to HEDT, and an array of Vega-based graphics products. AMD will be releasing a pair of Ryzen 3-branded mobile APUs for mainstream notebooks. The quad-core / quad-thread Ryzen 3 2300U has base and boost clocks of 2.0GHz and 3.4GHz, respectively, while the dual-core / quad-thread Ryzen 3 2200U clocks in at 2.5GHz and 3.4GHz, base and boost. Desktop Ryzen APUs, codenamed Raven Ridge, are inbound for the AM4 platform as well. Launching on February 12 are the upcoming Ryzen 5 2400G and Ryzen 3 2200G. The Ryzen 5 chip is a quad-core / eight-thread machine with an on-die, 11 CU Vega graphics core, priced at $169. The Ryzen 3 2200G is a quad-core / quad-thread chip with an 8 CU Vega-based graphics engine for only $99. CPU core frequencies on the Ryzen 5 2400G range from 3.6GHz to 3.9GHz (base / boost), and the Ryzen 3 2200G clocks in at 3.5GHz to 3.7GHz. 2nd-generation Ryzen desktop processors are on the way as well and will be manufactured using an advanced 12nm+ lithography process, leveraging the Zen+ architecture, which is fundamentally unchanged from current Zen-based processors, save for a few tweaks and fixes that improve cache and memory speeds and latency. 2nd-Generation Ryzen processors are NOT based on the Zen 2 architecture. AMD also mentioned that these new processors will be used in a new line-up of 2nd-Generation Threadripper processors. Finally, the company disclosed two new Vega-based GPUs: a Vega Mobile part with a svelte 1.7mm Z-height, and a second Vega-based chip, manufactured at 7nm, that specifically targets machine learning applications. The low-profile Vega Mobile GPU will find its way into ultra-thin notebooks and mobile workstations, but speeds and feeds weren't disclosed.
AMD also announced that it will be supporting variable refresh rate over HDMI 2.1 in the future as well.

Nvidia Wants To Prohibit Consumer GPU Use In Datacenters

The Register reports: Nvidia has banned the use of its GeForce and Titan gaming graphics cards in data centers -- forcing organizations to fork out for more expensive gear, like its latest Tesla V100 chips. The chip-design giant updated its GeForce and Titan software licensing in the past few days, adding a new clause that reads: "No Datacenter Deployment. The SOFTWARE is not licensed for datacenter deployment, except that blockchain processing in a datacenter is permitted."
Long-time Slashdot reader Xesdeeni has a few questions: Is this really even legal? First, because it changes use of existing hardware, already purchased, by changing software (with potentially required bug fixes) agreements retroactively. Second, because how can a customer (at least in the U.S.) be told they can't use a product in a particular place, unless it's a genuine safety or security concern (i.e. government regulation)!?
Nvidia expects that "working together with our user base on a case-by-case basis, we will be able to resolve any customer concerns," the company told CNBC, adding that "those who don't download new drivers won't be held to the new terms."

Intel with Radeon RX Vega Graphics: Core i7-8809G with 3.1 GHz Base, 100W Target TDP, Overclockable

An anonymous reader shares a report: To begin the year, Intel's Indian website has published a small number of details regarding Intel's first 'Intel with Radeon RX Vega Graphics' processor. Within the table of overclockable processors, accompanying the Core i9, Core i7-X, and 8th Generation K processors, is the Intel Core i7-8809G, a quad-core processor with two sets of graphics options listed. The Core i7-8809G is not a part that Intel has formally announced in a press release, but on Intel's overclocking webpage it is listed as a quad-core processor with hyperthreading, supporting a 3.1 GHz base frequency, an 8 MB L3 cache, a 100W 'Target' TDP, and two channels of DDR4-2400. Intel lists both sets of graphics: the integrated graphics (iGPU, or IGP) as 'Intel HD Graphics 630', and the package graphics (pGPU) as 'Radeon RX Vega M GH Graphics'.

Nvidia To Cease Producing New Drivers For 32-Bit Systems

An anonymous reader quotes a report from Ars Technica: While most people have probably made the switch by now, yet another reason to drop 32-bit operating systems and move to 64-bit is coming. Version 390 of Nvidia's graphics drivers, likely to arrive in January, will be the last to contain support for 32-bit versions of Windows (7, 8/8.1, and 10), Linux, and FreeBSD. There will be another year of security updates for 32-bit drivers, but all new features, performance enhancements, and support for new hardware will require the use of a 64-bit operating system and 64-bit drivers. Reasons to stick with 32-bit Windows are at this point few and far between. 64-bit Windows has superior security to 32-bit, and while it varies with workload, 64-bit applications can run somewhat faster than their 32-bit counterparts; for workloads that won't fit within the constraints of 32-bit software, the difference is of course enormous. Generally, those who continue to use the 32-bit operating system tend to be subject to some kind of legacy constraint. 32-bit drivers won't work in 64-bit Windows, so obscure but mission-critical hardware can extend the life of 32-bit systems.

Why Linux HDCP Isn't the End of the World

"There is no reason for the open-source community to worry..." writes Daniel Stone, who heads the graphics team at open-source consultancy Collabora. mfilion quotes his post: Recently, Sean Paul from Google's ChromeOS team submitted a patch series to enable HDCP support for the Intel display driver. HDCP is used to encrypt content over HDMI and DisplayPort links, which can only be decoded by trusted devices... However, if you already run your own code on a free device, HDCP is an irrelevance and does not reduce freedom in any way....

HDCP support is implemented almost entirely in the hardware. Rather than adding a mandatory encryption layer for content, the HDCP kernel support is dormant unless userspace explicitly requests an encrypted link. It then attempts to enable encryption in the hardware and informs userspace of the result. So there's the first out: if you don't want to use HDCP, then don't enable it! The kernel doesn't force anything on an unwilling userspace.... HDCP is only downstream facing: it allows your computer to trust that the device it has been plugged into is trusted by the HDCP certification authority, and nothing more. It does not reduce user freedom, or impose any additional limitations on device usage.
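The opt-in flow Stone describes can be sketched as a toy state machine. This is a hedged illustration of the semantics only, not the real kernel interface; the three state names are assumptions modeled on the content-protection states the patches are reported to use:

```python
# Toy model of the opt-in HDCP flow described above. Hypothetical
# illustration only -- NOT the real kernel API.
UNDESIRED, DESIRED, ENABLED = "Undesired", "Desired", "Enabled"

class Link:
    """A display link whose encryption is dormant by default."""

    def __init__(self, sink_supports_hdcp):
        self.sink_supports_hdcp = sink_supports_hdcp
        self.state = UNDESIRED  # nothing happens unless userspace asks

    def request_encryption(self):
        # Userspace explicitly opts in; the kernel attempts the
        # hardware handshake and reports the result.
        self.state = DESIRED
        if self.sink_supports_hdcp:
            self.state = ENABLED
        return self.state
```

A link starts out dormant; nothing is encrypted until `request_encryption()` is called, and a sink that can't do HDCP simply never reaches the enabled state, mirroring the "kernel doesn't force anything on an unwilling userspace" point above.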


Chrome 64 Beta Adds Sitewide Audio Muting, Pop-Up Blocker, Windows 10 HDR Video

Chrome 64 is now in beta and it has several new features over version 63. In addition to a stronger pop-up blocker and support for HDR video playback when Windows 10 is in HDR mode, Chrome 64 features sitewide audio muting to block sound when navigating to other pages within a site. 9to5Google reports: An improved pop-up blocker in Chrome 64 prevents sites with abusive experiences -- like disguising links as play buttons and site controls, or transparent overlays -- from opening new tabs or windows. Meanwhile, as announced in November, other security measures in Chrome will prevent malicious auto-redirects. Beginning in version 64, the browser will counter surprise redirects from third-party content embedded into pages. The browser now blocks redirects from third-party iframes unless a user has directly interacted with them. When a redirect attempt occurs, users will remain on their current page with an infobar popping up to detail the block. This version also adds a new sitewide audio muting setting. It will be accessible from the permissions dropdown by tapping the info icon or green lock in the URL bar. This version also brings support for HDR video playback when Windows 10 is in HDR mode. It requires the Windows 10 Fall Creators Update, an HDR-compatible graphics card, and an HDR display. Meanwhile, on Windows, Google is currently prototyping support for the operating system's native notification center. Other features include a new "Split view" feature available on Chrome OS. Developers will also be able to take advantage of the Resize Observer API to build responsive sites with "finger control to observe changes to sizes of elements on a page."

NVIDIA Titan V Benchmarks Show Volta GPU Compute, Mining and Gaming Strength

MojoKid shares a report from Hot Hardware: Although NVIDIA officially unveiled its Volta-based GV100 GPU a few months ago, the NVIDIA TITAN V featuring the GV100 began shipping just this past week. The card targets a very specific audience and is designed for professional and academic deep learning applications, which partly explains its lofty $3,000 price tag. Unlike NVIDIA's previous-gen consumer flagship, the TITAN Xp, the TITAN V is not designed for gamers. However, since it features NVIDIA's latest GPU architecture, it potentially foreshadows next year's consumer-targeted GeForce cards that could be based on Volta. The massive 21.1 billion transistor GV100 GPU powering the TITAN V has a base clock of 1,200MHz and a boost clock of 1,455MHz. The card has 12GB of HBM2 memory on-board that is linked to the GPU via a 3072-bit interface, offering up 652.8 GB/s of peak bandwidth, which is about 100GB/s more than a TITAN Xp. Other features of the GV100 include 5,120 single-precision CUDA cores, 2,560 double-precision FP64 cores, and 630 Tensor cores. Although the card is not designed for gamers, the fact remains that the TITAN V significantly outpaces every other graphics card in a variety of games with the highest image quality settings. In GPU compute workloads, the TITAN V is much more dominant and can offer many times the performance of a high-end NVIDIA TITAN Xp or AMD Radeon RX Vega 64. Finally, when it comes to Ethereum mining, NVIDIA's Titan V is far and away the fastest GPU on the planet currently.
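The quoted bandwidth figure can be sanity-checked with quick arithmetic, and a peak FP32 estimate follows from the core count and boost clock. Note the 1.7 Gbps per-pin HBM2 data rate below is inferred from the quoted 652.8 GB/s, not stated in the report:

```python
# Sanity-checking the quoted Titan V numbers.
bus_width_bits = 3072
data_rate_gbps = 1.7  # per-pin rate inferred from the 652.8 GB/s figure
bandwidth_gbs = bus_width_bits / 8 * data_rate_gbps   # 652.8 GB/s

# Peak FP32 throughput: cores x 2 FLOPs per cycle (FMA) x boost clock.
cuda_cores = 5120
boost_ghz = 1.455
fp32_tflops = cuda_cores * 2 * boost_ghz / 1000       # ~14.9 TFLOPS
```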

CIA Captured Putin's 'Specific Instructions' To Hack the 2016 Election, Says Report

An anonymous reader quotes a report from The Daily Beast: When Director of National Intelligence James R. Clapper Jr., CIA Director John Brennan and FBI Director James B. Comey all went to see Donald Trump together during the presidential transition, they told him conclusively that they had "captured Putin's specific instructions on the operation" to hack the 2016 presidential election, according to a report in The Washington Post. The intel bosses were worried that he would explode but Trump remained calm during the carefully choreographed meeting. "He was affable, courteous, complimentary," Clapper told the Post. Comey stayed behind afterward to tell the president-elect about the controversial Steele dossier, however, and that private meeting may have been responsible for the animosity that would eventually lead to Trump firing the director of the FBI.

AMD Is Open-Sourcing Their Official Vulkan Linux Driver

An anonymous reader writes: While many of you have likely heard of the "RADV" open-source Vulkan driver, it has been a community-written driver up to this point, in the absence of AMD's official, cross-platform Vulkan driver being open-source. That has now changed, with AMD open-sourcing its official Vulkan driver. The code drop is imminent, and AMD is encouraging its use for quick support of new AMD hardware, access to the Radeon GPU Profiler, easy integration of AMD Vulkan extensions, and enabling third-party extensions. For now at least it does provide better Vulkan performance than RADV, but the RADV developers have indicated they plan to continue development of their Mesa-based Vulkan driver.

Apple iMac Pro Goes on Sale December 14th

Apple vowed to ship the iMac Pro in December, and it's making good on that promise. From a report: The company has confirmed that its workstation-grade all-in-one will be available on December 14th. It has yet to reveal the exact configuration options, but the $4,999 'starter' model ships with an 8-core Xeon processor, 32GB of RAM, 1TB of solid-state storage and a Radeon Vega graphics chipset with 8GB of RAM. You can option it with up to an 18-core Xeon, 128GB of RAM, a 4TB SSD and a 16GB Vega chipset, although video creator Marques Brownlee notes that you'll have to wait until the new year for that 18-core beast.

Nvidia Announces 'Nvidia Titan V' Video Card: GV100 for $3000

Nvidia has announced the Titan V, the "world's most powerful PC GPU." It's based on Nvidia's Volta, the same architecture as the Nvidia Tesla V100 GPUs behind Amazon Web Service's recently launched top-end P3 instances, which are dedicated to artificial-intelligence applications. From a report: A mere 7 months after Volta was announced with the Tesla V100 accelerator and the GV100 GPU inside it, Nvidia continues its breakneck pace by releasing the GV100-powered Titan V, available for sale today. Aimed at a decidedly more compute-oriented market than ever before, the 815 mm2 behemoth die that is GV100 is now available to the broader public. [...] The Titan V, by extension, sees the Titan lineup finally switch loyalties and start using Nvidia's high-end compute-focused GPUs, in this case the Volta architecture based V100. The end result is that rather than being Nvidia's top prosumer card, the Titan V is decidedly more focused on compute, particularly due to the combination of the price tag and the unique feature set that comes from using the GV100 GPU. Which isn't to say that you can't do graphics on the card -- this is still very much a video card, outputs and all -- but Nvidia is first and foremost promoting it as a workstation-level AI compute card, and by extension focusing on the GV100 GPU's unique tensor cores and the massive neural networking performance advantages they offer over earlier Nvidia cards.

Cryptocurrency Miners Are Using Old Tires to Power Their Rigs

Christopher Malmo, writing for Motherboard: An entrepreneurial cryptocurrency mining company has just announced an unusual deal: it has partnered with a tire-based waste-to-energy company in the United States to power its mining computers. Standard American Mining and PRTI, a tire "thermal demanufacturing" company based in North Carolina, are powering graphics-card-based mining equipment to earn a range of alternative cryptocurrencies like Ethereum. Basically, they take used tires and heat them to a precise temperature, resulting in components like steel (from belted tires), carbon black, and a burnable fuel. That fuel is the energy source driving turbines to make electricity, which powers an onsite cryptocurrency mining farm. Taking advantage of an underutilized electricity source to run computers isn't groundbreaking, but the unusual set-up shows that cryptocurrency mining is now profitable enough to justify finding quite unconventional sources of cheap or new energy generation.

AMD Quietly Made Some Radeon RX 560 Graphics Cards Worse

Brad Chacos: When the Radeon RX 560 launched in April it was the only RX 500-series card with a meaningful under-the-hood tech boost compared to the RX 400-series. The graphics processor in the older RX 460 cards packed 14 compute units and 896 stream processors; the upgraded Radeon RX 560 bumped that to 16 CUs and 1,024 SPs. Now, some -- but not all -- of the Radeon RX 560s you'll find online have specs that match the older 460 cards, and sometimes run at lower clock speeds to boot. AMD's Radeon RX 560 page was also quietly altered at some point to include the new configurations. The last snapshot of the page by the Internet Archive's Wayback Machine occurred on July 7 and only lists the full-fat 16 CU version of the card, so the introduction of the nerfed 896 SP model likely occurred some time after that. Sifting through all of the available Radeon RX 560s on Newegg this morning reveals a fairly even split between the two configurations, all of which are being sold under the same RX 560 name. In a statement, AMD acknowledged the existence of 14 Compute Unit (896 stream processors) and 16 Compute Unit (1,024 stream processors) versions of the Radeon RX 560. "We introduced the 14CU version this summer to provide AIBs and the market with more RX 500 series options. It's come to our attention that on certain AIB and etail websites there's no clear delineation between the two variants. We're taking immediate steps to remedy this: we're working with all AIB and channel partners to make sure the product descriptions and names clarify the CU count, so that gamers and consumers know exactly what they're buying. We apologize for the confusion this may have caused."
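The gap between the two variants is easy to quantify; the arithmetic below assumes GCN's standard ratio of 64 stream processors per compute unit, which is consistent with the figures in the story:

```python
# Quantifying the cut-down RX 560 variant.
SPS_PER_CU = 64  # stream processors per GCN compute unit
full_sps, cut_sps = 1024, 896
full_cus = full_sps // SPS_PER_CU        # 16 CUs
cut_cus = cut_sps // SPS_PER_CU          # 14 CUs
shader_deficit = 1 - cut_sps / full_sps  # 0.125 -> 12.5% fewer shaders
```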

HDMI 2.1 Is Here With 10K and Dynamic HDR Support

Swapna Krishna reports via Engadget: Back in January, the HDMI Forum unveiled its new specifications for the HDMI connector, called HDMI 2.1. Now, that HDMI specification is available to all HDMI 2.0 adopters. It's backwards compatible with all previous HDMI specifications. The focus of HDMI 2.1 is on higher video bandwidth; it supports 48 gigabits per second with a new backwards-compatible ultra high speed HDMI cable. It also supports faster refresh rates for high video resolution -- 60 Hz for 8K and 120 Hz for 4K. The standard also supports Dynamic HDR and resolutions up to 10K for commercial and specialty use. This new version of the HDMI specification also introduces an enhanced refresh rate that gamers will appreciate. VRR, or Variable Refresh Rate, reduces, or in some cases eliminates, lag for smoother gameplay, while Quick Frame Transport (QFT) reduces latency. Quick Media Switching, or QMS, reduces the amount of blank-screen wait time while switching media. HDMI 2.1 also includes Auto Low Latency Mode (ALLM), which automatically sets the ideal latency for the smoothest viewing experience.
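To see why the extra bandwidth matters, a quick calculation of uncompressed video payload (ignoring blanking intervals and link-level encoding overhead) shows how close the new modes come to the 48 Gbps ceiling:

```python
def raw_video_gbps(width, height, fps, bits_per_pixel=24):
    """Uncompressed payload for a video mode in Gbit/s, ignoring
    blanking intervals and encoding overhead (8-bit RGB assumed)."""
    return width * height * fps * bits_per_pixel / 1e9

fourk_120 = raw_video_gbps(3840, 2160, 120)  # ~23.9 Gbit/s
eightk_60 = raw_video_gbps(7680, 4320, 60)   # ~47.8 Gbit/s
```

Even before overhead, 8K60 at 8 bits per channel nearly saturates the 48 Gbps link, which is why the higher-end modes lean on compression and why HDMI 2.0's roughly 18 Gbps was not enough.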

Tesla Owners Are Mining Bitcoins With Free Power From Charging Stations

dmoberhaus writes: Someone claimed to use their Tesla to power a cryptocurrency mine to take advantage of the free energy given to Tesla owners. But even with free energy, does this scheme make sense? Motherboard ran the numbers.

From the report: "...If we assume that each of the GPUs in this rig draws around 150 watts, then the 16 GPUs have a total power draw of 2.4 kilowatts, or 57.6 kilowatt-hours per day if they ran for a full 24 hours. According to Green Car Reports, a Tesla Model S gets about 3 miles per kilowatt hour, meaning that running this mining rig for a full day is the equivalent of driving nearly 173 miles in the Tesla. According to the Federal Highway Administration, the average American drives around 260 miles a week. In other words, running this cryptocurrency mine out of the trunk of a Tesla for a day and a half would use as much energy as driving that Tesla for a full week, on average. Moreover, drivers who are not a part of Tesla's unlimited free energy program are limited to 400 kilowatt hours of free electricity per year, meaning they could only run their rig for a little over 7 days on free energy.

Okay, but how about the cost? Let's assume that this person is mining Ethereum with their GPUs. Out of the box, an average GPU can do about 20 megahashes per second on the Ethereum network (that is, performing a math problem known as hashing 20 million times per second). This Tesla rig, then, would have a total hashrate of about 320 megahashes. According to the Cryptocompare profitability calculator, if the Tesla rig was used to mine Ethereum using free electricity, it would result in about .05 Ether per day -- equivalent to nearly $23, going by current prices at the time of writing. In a month, this would result in $675 in profit, or about the monthly lease for a Tesla Model S. So the Tesla would pay for itself, assuming the owner never drove it or used it for anything other than mining Ethereum, Ethereum doesn't drop in value below $450, and the Tesla owner gets all of their energy for free."
Motherboard also notes that this conclusion "doesn't take into account the price of each of the mining rigs, which likely cost about $1,000 each, depending on the quality of the GPUs used." TL;DR: Mining cryptocurrency out of your electric car is not worth it.
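Motherboard's arithmetic checks out; the same back-of-the-envelope math as a script, with every input taken from the article's own assumptions:

```python
# Motherboard's Tesla-mining math, reproduced step by step.
gpus = 16
watts_per_gpu = 150                           # assumed per-card draw
power_kw = gpus * watts_per_gpu / 1000        # 2.4 kW total draw
kwh_per_day = power_kw * 24                   # 57.6 kWh per day
miles_per_day = kwh_per_day * 3               # ~173 mi at 3 mi/kWh

free_kwh_per_year = 400                       # non-unlimited owners
free_mining_days = free_kwh_per_year / kwh_per_day   # ~7 days

eth_per_day = 0.05                            # article's estimate
usd_per_eth = 450
usd_per_month = eth_per_day * usd_per_eth * 30       # ~$675
```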

First AMD Ryzen Mobile Laptop Tested Shows Strong Zen-Vega Performance

MojoKid writes: AMD Ryzen Mobile processors are arriving now in retail laptops from the likes of HP, Lenovo and Acer. With the first CPUs to hit the market, AMD took quad-core Ryzen and coupled it with 8- or 10-CU Vega GPUs on a single piece of silicon in an effort to deliver a combination of strong Ryzen CPU performance along with significantly better integrated graphics performance over Intel's current 8th Gen Kaby Lake laptop chips. AMD Ryzen 7 2700U and Ryzen 5 2500U chips have 4MB of shared L3 cache each, but differ with respect to top-end CPU boost clock speeds, number of integrated Radeon Vega Compute Units (CUs), and the GPU's top-end clocks. Ryzen 7 2700U is more powerful with 10 Radeon Vega CUs, while Ryzen 5 2500U sports 8. Ryzen 7 2700U also boosts to 3.8GHz, while Ryzen 5 2500U tops out at 3.6GHz. In the benchmarks, Ryzen Mobile looks strong, competing well with Intel quad-core 8th Gen laptop CPUs, while offering north of 60 percent better performance in graphics and gaming. Battery life is still a question mark, however, as some of the very first models to hit the market from HP have inefficient displays and hard drives instead of SSDs. As more premium configurations hit the market in the next few weeks, hopefully we'll get a better picture of Ryzen Mobile battery life in more optimized laptop builds.

Microsoft Confirms Surface Book 2 Can't Stay Charged During Gaming Sessions

The Verge mentioned in their review that the Surface Book 2's power supply can't charge the battery fast enough to prevent it from draining in some cases. Microsoft has since confirmed that "in some intense, prolonged gaming scenarios with Power Mode Slider set to 'best performance' the battery may discharge while connected to the power supply." Engadget reports: To let you choose between performance and battery life, the Surface Book has a range of power settings. If you're doing video editing or other GPU intensive tasks, you can crank it up to "best performance" to activate the NVIDIA GPU and get more speed. Battery drain is normally not an issue with graphics apps because the chip only kicks in when needed. You'll also need the "best performance" setting for GPU-intensive games, as they'll slow down or drop frames otherwise. The problem is that select titles like Destiny 2 use the NVIDIA chip nearly continuously, pulling up to 70 watts of power on top of the 35 watt CPU. Unfortunately, the Surface Book comes with a 102-watt charger, and only about 95 watts of that reaches the device, The Verge points out. Microsoft says that the power management system will prevent the battery from draining completely, even during intense gaming, but it would certainly mess up your Destiny 2 session. It also notes that the machine is intended for designers, developers and engineers, with the subtext that it's not exactly marketed as a gaming rig.
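The power-budget math behind the drain is simple to sketch. The battery capacity below is an assumed round number for illustration, not an official spec:

```python
# Surface Book 2 power budget under a worst-case gaming load.
gpu_watts = 70                # Destiny 2 can pull up to this on the GPU
cpu_watts = 35
delivered_watts = 95          # of the 102 W adapter, per The Verge
deficit_watts = gpu_watts + cpu_watts - delivered_watts  # 10 W shortfall

battery_wh = 90               # assumed capacity, for illustration only
hours_to_drain = battery_wh / deficit_watts              # ~9 hours
```

Even a modest 10 W shortfall means the battery slowly discharges while plugged in, which matches Microsoft's "may discharge while connected to the power supply" statement, though the drain is slow enough that the firmware can stop it before the battery empties.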

Google Cloud Platform Cuts the Price of GPUs By Up To 36 Percent

In a blog post, Google product manager Chris Kleban announced that the company is cutting the price of using Nvidia's Tesla GPUs through its Compute Engine by up to 36 percent. The older K80 GPUs will now cost $0.45 per hour while the more powerful P100 machines will cost $1.46 per hour (all with per-second billing). TechCrunch reports: The company is also dropping the prices for preemptible local SSDs by almost 40 percent. "Preemptible local SSDs" refers to local SSDs attached to Google's preemptible VMs. You can't attach GPUs to preemptible instances, though, so this is a nice little bonus announcement -- but it isn't going to directly benefit GPU users. As for the new GPU pricing, it's clear that Google is aiming this feature at developers who want to run their own machine learning workloads on its cloud, though there also are a number of other applications -- including physical simulations and molecular modeling -- that greatly benefit from the hundreds of cores that are now available on these GPUs. The P100, which is officially still in beta on the Google Cloud Platform, features 3594 cores, for example. Developers can attach up to four P100 and eight K80 dies to each instance. Like regular VMs, GPU users will also receive sustained-use discounts, though most users probably don't keep their GPUs running for a full month.
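With per-second billing, short GPU jobs are priced proportionally to the hourly rate; a minimal sketch of that arithmetic using the rates quoted above:

```python
def gpu_cost_usd(hourly_rate, seconds):
    """Per-second billing: prorate the hourly rate."""
    return hourly_rate / 3600 * seconds

K80_RATE, P100_RATE = 0.45, 1.46   # USD per hour, per the post
one_hour_p100 = gpu_cost_usd(P100_RATE, 3600)   # exactly $1.46
short_k80_job = gpu_cost_usd(K80_RATE, 90)      # ~$0.011 for a 90 s job
```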

NVIDIA Launches Modded Collector's Edition Star Wars Titan Xp Graphics Card

MojoKid writes: NVIDIA just launched its fastest graphics card yet and this GPU is targeted at Star Wars fans. In concert with EA's official launch today of Star Wars Battlefront II, NVIDIA unveiled the new Star Wars Titan Xp Collector's Edition graphics card for enthusiast gamers. There are two versions of the cards available -- the Galactic Empire version and a Jedi Order version. Both of the cards feature customized coolers, shrouds, and lighting, designed to mimic the look of a lightsaber. They also ship in specialized packaging that can be used to showcase the cards if they're not installed in a system. The GPU powering the TITAN Xp Collector's Edition has a base clock of 1,481MHz and a boost clock of 1,582MHz. It's packing a fully-enabled NVIDIA GP102 GPU with 3,840 cores and 12GB of GDDR5X memory clocked at 5.5GHz for an effective data rate of 11Gbps, resulting in 547.2GB/s of peak memory bandwidth. At those clocks, the card also offers a peak texture fillrate of 379.75 GigaTexels/s and 12.1TFLOPs of FP32 compute performance, which is significantly higher than a GeForce GTX 1080 Ti. In the benchmarks, it's the fastest GPU out there right now (it better be for $1200), but this card is more about nostalgia and the design customizations NVIDIA made to the cards that should appeal to gamers and Star Wars fans alike.
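The quoted compute and fillrate figures follow directly from core counts and clocks. The 240 texture-unit count below is an assumption inferred from the quoted fillrate (it is not stated in the story):

```python
# Deriving the quoted Titan Xp Collector's Edition throughput figures.
cuda_cores = 3840
boost_ghz = 1.582
# Peak FP32: cores x 2 FLOPs per cycle (FMA) x boost clock.
fp32_tflops = cuda_cores * 2 * boost_ghz / 1000   # ~12.1 TFLOPS

texture_units = 240  # assumed for a full GP102, inferred from fillrate
fillrate_gtexels = texture_units * boost_ghz      # ~379.7 GTexel/s
```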

America's 'Retail Apocalypse' Is Really Just Beginning

An anonymous reader quotes a report from Bloomberg: The so-called retail apocalypse has become so ingrained in the U.S. that it now has the distinction of its own Wikipedia entry. The industry's response to that kind of doomsday description has included blaming the media for hyping the troubles of a few well-known chains as proof of a systemic meltdown. There is some truth to that. In the U.S., retailers announced more than 3,000 store openings in the first three quarters of this year. But chains also said 6,800 would close. And this comes when there's sky-high consumer confidence, unemployment is historically low and the U.S. economy keeps growing. Those are normally all ingredients for a retail boom, yet more chains are filing for bankruptcy and rated distressed than during the financial crisis. That's caused an increase in the number of delinquent loan payments by malls and shopping centers. The reason isn't as simple as Amazon taking market share or twenty-somethings spending more on experiences than things. The root cause is that many of these long-standing chains are overloaded with debt -- often from leveraged buyouts led by private equity firms. There are billions in borrowings on the balance sheets of troubled retailers, and sustaining that load is only going to become harder -- even for healthy chains. The debt coming due, along with America's over-stored suburbs and the continued gains of online shopping, has all the makings of a disaster. The spillover will likely flow far and wide across the U.S. economy. There will be displaced low-income workers, shrinking local tax bases and investor losses on stocks, bonds and real estate. If today is considered a retail apocalypse, then what's coming next could truly be scary.
