AMD

AMD Details High Bandwidth Memory (HBM) DRAM, Pushes Over 100GB/s Per Stack 98

Posted by timothy
from the lower-power-higher-interest dept.
MojoKid writes: Recently, a few details of AMD's next-generation Radeon 300-series graphics cards have trickled out. Today, AMD has publicly disclosed new info regarding their High Bandwidth Memory (HBM) technology that will be used on some Radeon 300-series and APU products. Currently, a relatively large number of GDDR5 chips are necessary to offer sufficient capacity and bandwidth for modern GPUs, which means significant PCB real estate is consumed. On-chip integration is not ideal for DRAM because it is not size or cost effective with a logic-optimized GPU or CPU manufacturing process. HBM, however, brings the DRAM as close to the logic die (GPU) as possible. AMD partnered with Hynix and a number of companies to help define the HBM specification and design a new type of memory chip with low power consumption and an ultra-wide bus width, which was eventually adopted by JEDEC in 2013. They also developed a DRAM interconnect called an "interposer," along with ASE, Amkor, and UMC. The interposer allows DRAM to be brought into close proximity with the GPU and simplifies communication and clocking. HBM DRAM chips are stacked vertically, and "through-silicon vias" (TSVs) and "bumps" are used to connect one DRAM chip to the next, and then to a logic interface die, and ultimately the interposer. The end result is a single package on which the GPU/SoC and High Bandwidth Memory both reside. 1GB of GDDR5 memory (four 256MB chips) requires roughly 672mm2. Because HBM is vertically stacked, that same 1GB requires only about 35mm2. The bus width on an HBM chip is 1024 bits, versus 32 bits on a GDDR5 chip. As a result, the High Bandwidth Memory interface can be clocked much lower yet still offer more than 100GB/s for HBM versus 25GB/s with GDDR5. HBM also requires significantly less voltage, which equates to lower power consumption.
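To see where the headline number comes from, here is a quick back-of-the-envelope sketch in Python: peak bandwidth is simply bus width times per-pin data rate. The per-pin rates below are illustrative first-generation HBM and typical GDDR5 figures, not numbers from the summary.

# Peak bandwidth of a single memory device: bus width (bits) x per-pin rate (Gbit/s) / 8
def bandwidth_gb_s(bus_width_bits, per_pin_gbit_s):
    """Return peak bandwidth in GB/s for one memory device."""
    return bus_width_bits * per_pin_gbit_s / 8

# Illustrative figures: a first-gen HBM stack drives its 1024-bit bus at roughly
# 1 Gbit/s per pin, while a 32-bit GDDR5 chip runs at roughly 6 Gbit/s per pin.
hbm_stack  = bandwidth_gb_s(1024, 1.0)   # ~128 GB/s per stack
gddr5_chip = bandwidth_gb_s(32, 6.0)     # ~24 GB/s per chip

print(f"HBM stack:  ~{hbm_stack:.0f} GB/s")
print(f"GDDR5 chip: ~{gddr5_chip:.0f} GB/s")

Despite the much lower clock, the 32x wider interface is what pushes a single HBM stack past the 100GB/s mark.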
Graphics

Oculus Rift Hardware Requirements Revealed, Linux and OS X Development Halted 227

Posted by Soulskill
from the sad-penguin dept.
An anonymous reader writes: Oculus has selected the baseline hardware requirements for running their Rift virtual reality headset. To no one's surprise, they're fairly steep: NVIDIA GTX 970 / AMD 290 equivalent or greater, Intel i5-4590 equivalent or greater, and 8GB+ RAM. It will also require at least two USB 3.0 ports and "HDMI 1.3 video output supporting a 297MHz clock via a direct output architecture."

Oculus chief architect Atman Binstock explains: "On the raw rendering costs: a traditional 1080p game at 60Hz requires 124 million shaded pixels per second. In contrast, the Rift runs at 2160×1200 at 90Hz split over dual displays, consuming 233 million pixels per second. At the default eye-target scale, the Rift's rendering requirements go much higher: around 400 million shaded pixels per second. This means that by raw rendering costs alone, a VR game will require approximately 3x the GPU power of 1080p rendering." He also points out that PC graphics can afford a fluctuating frame rate — it doesn't matter too much if it bounces between 30-60fps. The Rift has no such luxury, however.
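The pixel-rate figures Binstock quotes are straightforward to check; here is a quick sketch using plain arithmetic and only the resolutions and refresh rates given above:

def shaded_pixels_per_second(width, height, hz):
    """Pixels that must be shaded each second at a given resolution and refresh rate."""
    return width * height * hz

desktop_1080p = shaded_pixels_per_second(1920, 1080, 60)   # ~124 million
rift_native   = shaded_pixels_per_second(2160, 1200, 90)   # ~233 million

print(f"1080p @ 60Hz: {desktop_1080p / 1e6:.0f} Mpix/s")
print(f"Rift  @ 90Hz: {rift_native / 1e6:.0f} Mpix/s")

# At the default eye-target scale the render targets are larger than the panel,
# which is where the ~400 Mpix/s figure (roughly 3x the 1080p workload) comes from.
print(f"Eye-target scale vs. 1080p: {400e6 / desktop_1080p:.1f}x")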

The last requirement is more onerous: Windows 7 SP1 or newer. Binstock says their development for OS X and Linux has been "paused" so they can focus on delivering content for Windows. They have no timeline for going back to the less popular platforms.
AMD

AMD Outlines Plans For Zen-Based Processors, First Due In 2016 166

Posted by samzenpus
from the check-it-out dept.
crookedvulture writes: AMD laid out its plans for processors based on its all-new Zen microarchitecture today, promising 40% higher performance-per-clock from the x86 CPU core. Zen will use simultaneous multithreading to execute two threads per core, and it will be built using "3D" FinFETs. The first chips are due to hit high-end desktops and servers next year. In 2017, Zen will combine with integrated graphics in smaller APUs designed for desktops and notebooks. AMD also plans to produce a high-performance server APU with a "transformational memory architecture" likely similar to the on-package DRAM being developed for the company's discrete graphics processors. This chip could give AMD a credible challenger in the HPC and supercomputing markets, and it could also make its way into laptops and desktops.
DRM

Microsoft, Chip Makers Working On Hardware DRM For Windows 10 PCs 304

Posted by Soulskill
from the just-what-users-wanted dept.
writertype writes: Last month, Microsoft began talking about PlayReady 3.0, which adds hardware DRM to secure 4K movies. Intel, AMD, Nvidia, and Qualcomm are all building it in, according to Microsoft. "Older generations of PCs used software-based DRM technology. The new hardware-based technology will know who you are, what rights your PC has, and won’t ever allow your PC to unlock the content so it can be ripped. ... Unfortunately, it looks like the advent of PlayReady 3.0 could leave older PCs in the lurch. Previous PlayReady technology secured content up to 1080p resolution using software DRM—and that could be the maximum resolution for older PCs without PlayReady 3.0." Years back, a number of people got upset when Hollywood talked about locking down "our content." It looks like we may be facing it again for 4K video.
Graphics

NVIDIA Quadro M6000 12GB Maxwell Workstation Graphics Tested Showing Solid Gains 66

Posted by samzenpus
from the check-it-out dept.
MojoKid writes: NVIDIA's Maxwell GPU architecture has been well-received in the gaming world, thanks to cards like the GeForce GTX Titan X and the GeForce GTX 980. NVIDIA recently took the time to bring that same Maxwell goodness over to the workstation market as well, and the result is the Quadro M6000, NVIDIA's new highest-end workstation platform. Like the Titan X, the M6000 is based on the full-fat version of the Maxwell GPU, the GM200. Also like the GeForce GTX Titan X, the Quadro M6000 has 12GB of GDDR5, 3072 GPU cores, 192 texture units (TMUs), and 96 render outputs (ROPs). NVIDIA says the M6000 will beat out the previous-gen Quadro K6000 in a significant way in pro workstation applications, as well as in GPGPU, rendering, and encoding applications that can be GPU-accelerated. One thing that's changed with the launch of the M6000 is that AMD no longer trades shots with NVIDIA for the top pro graphics performance spot. Last time around, there were some benchmarks that still favored team red. Now, the NVIDIA Quadro M6000 puts up pretty much a clean sweep.
AMD

AMD Publishes New 'AMDGPU' Linux Graphics Driver 88

Posted by Soulskill
from the doing-it-right dept.
An anonymous reader writes: AMD has made available its new AMDGPU Linux graphics driver, comprising a brand new DRM/KMS kernel driver, a new xf86-video-amdgpu X11 driver, and modifications to libdrm and Gallium3D. This new AMDGPU driver is designed to support AMD's next-generation hardware; currently supported Radeon GPUs will see no change in support. While not yet officially released, this new AMDGPU driver is the critical piece of the new unified driver strategy with Catalyst, under which the high-performance proprietary driver will be limited to a user-space binary component that uses this open-source kernel driver.
AMD

AMD Withdraws From High-Density Server Business 133

Posted by samzenpus
from the stop-the-bleeding dept.
An anonymous reader sends word that AMD has pulled out of the market for high-density servers. "AMD has pulled out of the market for high-density servers, reversing a strategy it embarked on three years ago with its acquisition of SeaMicro. AMD delivered the news Thursday as it announced financial results for the quarter. Its revenue slumped 26 percent from this time last year to $1.03 billion, and its net loss increased to $180 million, the company said. AMD paid $334 million to buy SeaMicro, which developed a new type of high-density server aimed at large-scale cloud and Internet service providers."
China

IBM and OpenPower Could Mean a Fight With Intel For Chinese Server Market 85

Posted by timothy
from the round-the-mulberry-bust dept.
itwbennett writes: With AMD's fade-out from the server market and the rapid decline of RISC systems, Intel has stood atop the server market all by itself. But now IBM, through its OpenPOWER Foundation, could give Intel and its server OEMs a real fight in China, which is a massive server market. As the investor group Motley Fool notes, OpenPOWER is a threat to Intel in the Chinese server market because the government has been actively pushing homegrown solutions over foreign technology, and many of the Foundation members, like Tyan, are from China.
AMD

Gaming On Linux With Newest AMD Catalyst Driver Remains Slow 178

Posted by samzenpus
from the molasses-in-the-winter dept.
An anonymous reader writes: The AMD Catalyst binary graphics driver has made a lot of improvements over the years, but it seems that NVIDIA is still leading in Linux gaming with its shared cross-platform driver. Tests by Phoronix of the Catalyst 15.3 Linux beta on Ubuntu 15.04 show that NVIDIA continues to lead AMD Catalyst with several different GPUs in BioShock Infinite, a game finally released for Linux last week. With BioShock Infinite on Linux, years-old mid-range GeForce GPUs were clobbering the high-end Radeon R9 290 and other recent AMD GPUs tested. The poor showing wasn't limited to BioShock Infinite, though: the Metro Redux games were re-tested on the new drivers as well, and NVIDIA's graphics still ran significantly faster, a very different story from the results under Windows.
Displays

First AMD FreeSync Capable Gaming Displays and Drivers Launched, Tested 63

Posted by timothy
from the play-on-this dept.
MojoKid writes: Soon after NVIDIA unveiled its G-SYNC technology, AMD announced that it would pursue an open standard, dubbed FreeSync, leveraging technologies already available in the DisplayPort specification to offer adaptive refresh rates to users of some discrete Radeon GPUs and AMD APUs. AMD's goal with FreeSync was to introduce a technology that offered similar end-user benefits to NVIDIA's G-SYNC, that didn't require monitor manufacturers to employ any proprietary add-ons, and that could be adopted by any GPU maker. Today, AMD released its first FreeSync-capable set of drivers, and this first look at the sleek ultra-widescreen LG 34UM67, an IPS panel with a native resolution of 2560x1080 and a max refresh rate of 75Hz, showcases some of the benefits. To fully appreciate how adaptive refresh rate technologies work, it's best to experience them in person. In short, the GPU scans a frame out to the monitor, where it's drawn on-screen, and the monitor doesn't update until a frame is done drawing. As soon as a frame is done, the monitor will update again as quickly as it can with the next frame, in lockstep with the GPU. This completely eliminates tearing and jitter issues that are common in PC gaming. Technologies like NVIDIA G-SYNC and AMD FreeSync aren't a panacea for every PC gaming anomaly, but they do ultimately enhance the experience, improving image quality and reducing eye strain.
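To make that lockstep behavior concrete, here is a minimal timing sketch in Python. It is an illustrative model, not AMD's or LG's implementation; the frame times and the 75Hz maximum refresh are assumptions for the example. It compares when finished frames actually reach the screen on a fixed-refresh display versus an adaptive-refresh one.

frame_times_ms = [14.0, 22.0, 17.0, 30.0, 12.0]   # hypothetical GPU render times

def fixed_refresh_display(frame_times, refresh_hz=60):
    """Each finished frame waits for the next fixed scanout boundary."""
    period = 1000.0 / refresh_hz
    t, shown = 0.0, []
    for ft in frame_times:
        t += ft                              # moment the frame finishes rendering
        scanout = period * -(-t // period)   # next fixed refresh boundary (ceiling)
        shown.append(round(scanout, 1))
    return shown

def adaptive_refresh_display(frame_times, max_hz=75):
    """The monitor redraws as soon as a frame is done, but no faster than its max refresh.
    (Behaviour below the panel's minimum refresh rate is ignored for simplicity.)"""
    min_period = 1000.0 / max_hz
    t, last, shown = 0.0, 0.0, []
    for ft in frame_times:
        t += ft
        last = max(t, last + min_period)     # present immediately, in step with the GPU
        shown.append(round(last, 1))
    return shown

print("fixed 60Hz scanout times (ms):", fixed_refresh_display(frame_times_ms))
print("adaptive scanout times (ms):  ", adaptive_refresh_display(frame_times_ms))

With the fixed display, frames appear only at rigid 16.7ms boundaries, so uneven render times turn into judder and added latency; with adaptive refresh, each frame is scanned out essentially the moment it finishes.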
AMD

AMD Enters Virtual Reality Fray With LiquidVR SDK At GDC 23

Posted by Soulskill
from the buzzword-ascending dept.
MojoKid writes: AMD jumped into the virtual reality arena today by announcing that its new LiquidVR SDK will help developers customize VR content for AMD hardware. "The upcoming LiquidVR SDK makes a number of technologies available which help address obstacles in content, comfort and compatibility that together take the industry a major step closer to true, life-like presence across all VR games, applications, and experiences," AMD representatives said in a statement. Oculus is one of the VR companies that will be working with AMD's LiquidVR SDK, and likes what it's seen so far. "Achieving presence in a virtual world continues to be one of the most important elements to delivering amazing VR," said Brendan Iribe, CEO of Oculus. "We're excited to have AMD working with us on their part of the latency equation, introducing support for new features like asynchronous timewarp and late latching, and compatibility improvements that ensure that Oculus' users have a great experience on AMD hardware."
AMD

AMD Unveils Carrizo APU With Excavator Core Architecture 114

Posted by Soulskill
from the trying-to-catch-up dept.
MojoKid writes: AMD just unveiled new details about their upcoming Carrizo APU architecture. The company is claiming the processor, which is still built on GlobalFoundries' 28nm 28SHP node like its predecessor, will nonetheless deliver big advances in both performance and efficiency. When it was first announced, AMD detailed support for next-generation Radeon Graphics (DX12, Mantle, and Dual Graphics support), H.265 decoding, full HSA 1.0 support, and ARM TrustZone compatibility. But perhaps one of the biggest advantages of Carrizo is the fact that the APU and Southbridge are now incorporated into the same die, not just two separate dies built into an MCM package.

This not only improves performance, but also allows the Southbridge to take advantage of the 28SHP process rather than older, more power-hungry 45nm or 65nm process nodes. In addition, the Excavator cores used in Carrizo have switched from a High Performance Library (HPL) to a High Density Library (HDL) design. This allows for a reduction in the die area taken up by the processing cores (23 percent, according to AMD), which lets Carrizo pack in 29 percent more transistors (3.1 billion versus 2.3 billion in Kaveri) in a die that is only marginally larger (250mm2 for Carrizo versus 245mm2 for Kaveri). When all is said and done, AMD is claiming a 5 percent IPC boost for Carrizo and a 40 percent overall reduction in power usage.
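Using only the figures quoted above, a quick back-of-the-envelope calculation shows how much the High Density Library design raises effective transistor density despite the nearly unchanged die size (a rough sketch; the summary's numbers are taken at face value):

# Approximate transistor density implied by the die sizes and transistor counts above.
chips = {
    "Kaveri":  {"transistors": 2.3e9, "die_mm2": 245},
    "Carrizo": {"transistors": 3.1e9, "die_mm2": 250},
}

density = {}
for name, chip in chips.items():
    density[name] = chip["transistors"] / chip["die_mm2"] / 1e6   # million transistors per mm^2
    print(f"{name}: ~{density[name]:.1f} million transistors/mm^2")

print(f"Density increase: ~{(density['Carrizo'] / density['Kaveri'] - 1) * 100:.0f}%")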
Encryption

New Encryption Method Fights Reverse Engineering 215

Posted by Soulskill
from the with-many-obfuscations,-all-bugs-are-deep dept.
New submitter Dharkfiber sends an article about the Hardened Anti-Reverse Engineering System (HARES), which is an encryption tool for software that doesn't allow the code to be decrypted until the last possible moment before it's executed. The purpose is to make applications as opaque as possible to malicious hackers trying to find vulnerabilities to exploit. It's likely to find work as an anti-piracy tool as well. To keep reverse engineering tools in the dark, HARES uses a hardware trick that’s possible with Intel and AMD chips called a Translation Lookaside Buffer (or TLB) Split. That TLB Split segregates the portion of a computer’s memory where a program stores its data from the portion where it stores its own code’s instructions. HARES keeps everything in that “instructions” portion of memory encrypted such that it can only be decrypted with a key that resides in the computer’s processor. (That means even sophisticated tricks like a “cold boot attack,” which literally freezes the data in a computer’s RAM, can’t pull the key out of memory.) When a common reverse engineering tool like IDA Pro reads the computer’s memory to find the program’s instructions, that TLB split redirects the reverse engineering tool to the section of memory that’s filled with encrypted, unreadable commands.
Graphics

GeForce GTX 980 and 970 Cards From MSI, EVGA, and Zotac Reviewed 66

Posted by Soulskill
from the price-vs.-performance-vs.-really-loud-fans dept.
MojoKid writes: In all of its iterations, NVIDIA's Maxwell architecture has proven to be a good-performing, power-efficient GPU thus far. At the high end of the product stack is where some of the most interesting products reside, however. When NVIDIA launches a new high-end GPU, cards based on the company's reference design trickle out first, and then board partners follow up with custom solutions packing unique cooling hardware, higher clocks, and sometimes additional features. With the GeForce GTX 970 and GTX 980, NVIDIA's board partners were ready with custom solutions very quickly. These three custom GeForce cards, from enthusiast favorites EVGA, MSI, and Zotac, represent optimization at the high end of Maxwell. Two of the cards are GTX 980s, the MSI GTX 980 Gaming 4G and the Zotac GeForce GTX 980 AMP! Omega; the third is a GTX 970 from EVGA, the GeForce GTX 970 FTW with ACX 2.0. Besides their crazy long names, all of these cards are custom solutions that ship overclocked from the manufacturer. In testing, NVIDIA's GeForce GTX 980 was the fastest single-GPU card available, and the custom, factory-overclocked MSI and Zotac cards cemented that fact. Overall, thanks to a higher default GPU clock, the MSI GTX 980 Gaming 4G was the best-performing card. EVGA's GeForce GTX 970 FTW was also relatively strong, despite its alleged memory bug. As expected, it couldn't quite catch the higher-end GeForce GTX 980s, but it occasionally outpaced AMD's top-end Radeon R9 290X.
Graphics

Ask Slashdot: GPU of Choice For OpenCL On Linux? 110

Posted by timothy
from the discriminating-tastes dept.
Bram Stolk writes: So, I am running GNU/Linux on a modern Haswell CPU, with an old Radeon HD 5xxx from 2009. I'm pretty happy with the open-source Gallium driver for 3D acceleration. But now I want to do some GPGPU development using OpenCL on this box, and the old GPU will no longer cut it. What do my fellow technophiles from Slashdot recommend as a replacement GPU? Go NVIDIA, go AMD, or just use the integrated Intel GPU instead? Bonus points for open-sourced solutions. Performance is not really important, but OpenCL driver maturity is.
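Whichever vendor you choose, a quick sanity check is whether the installed driver actually exposes a working OpenCL platform at all. A minimal enumeration sketch using the pyopencl bindings (assuming pyopencl and a vendor OpenCL ICD are installed; the device names are whatever the driver reports) looks like this:

import pyopencl as cl

# List every OpenCL platform (vendor driver) and the devices it exposes.
for platform in cl.get_platforms():
    print(f"Platform: {platform.name} ({platform.version})")
    for device in platform.get_devices():
        print(f"  Device: {device.name}")
        print(f"    Type:          {cl.device_type.to_string(device.type)}")
        print(f"    Compute units: {device.max_compute_units}")
        print(f"    Global memory: {device.global_mem_size // (1024 ** 2)} MB")

Running this on a candidate setup shows immediately whether the driver stack you'd be relying on even detects the GPU, before committing to new hardware.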
Input Devices

Samsung's Advanced Chips Give Its Cameras a Big Boost 192

Posted by Soulskill
from the welcome-to-the-bigs dept.
GhostX9 writes: SLR Lounge just posted a first look at the Samsung NX1 28.1 MP interchangeable-lens camera. They compare it to Canon and Sony full-frame sensors. Spoiler: The Samsung sensor seems to beat the Sony A7R sensor up to ISO 3200. They attribute this to Samsung's chip foundry. While Sony is using 180nm manufacturing (Intel Pentium III era) and Canon is still using a 500nm process (AMD DX4 era), Samsung has gone with 65nm with copper interconnects (Intel Core 2 Duo "Conroe" era). Furthermore, Samsung's premium lenses appear to be as sharp or sharper than Canon's L line and Sony's Zeiss line in the center, although the Canon 24-70/2.8L II is sharper at the edge of the frame.
AMD

AMD Catalyst Is the Broken Wheel For Linux Gaming 160

Posted by Soulskill
from the didn't-squeek-enough-to-get-the-grease dept.
An anonymous reader writes: Tests of the AMD Catalyst driver with the latest AAA Linux games and engines have shown what poor shape the proprietary Radeon driver is currently in for Linux gamers. Phoronix, which traditionally benchmarks with open-source OpenGL games and other long-standing tests, has recently taken a special interest in adapting newer Steam-based titles for automated benchmarking. With last month's Linux release of Metro Last Light Redux and Metro 2033 Redux, NVIDIA's driver did great while AMD Catalyst was miserable. Catalyst 14.12 delivered extremely low performance and a major bottleneck, with the Radeon R9 290 and other GPUs running slower than NVIDIA's midrange hardware. In Unreal Engine 4 Linux tests, the NVIDIA driver again was flawless, but the same couldn't be said for AMD: Catalyst 14.12 wouldn't even run the Unreal Engine 4 demos on Linux with AMD's latest-generation hardware, only with the older HD 6000 series. Tests last month of Civilization: Beyond Earth on Linux with the newest drivers also showed crippled performance for AMD compared to NVIDIA.
AMD

Tiny Fanless Mini-PC Runs Linux Or Windows On Quad-core AMD SoC 180

Posted by samzenpus
from the getting-small dept.
DeviceGuru writes: CompuLab has unveiled a tiny 'Fitlet' mini-PC that runs Linux or Windows on a dual- or quad-core 64-bit AMD x86 SoC (with integrated Radeon R3 or R2 GPU), clocked at up to 1.6GHz, and offering extensive I/O along with modular internal expansion options. The rugged, reconfigurable 4.25 x 3.25 x 0.95 in. system will also form the basis of a pre-configured 'MintBox Mini' model, available in Q2 in partnership with the Linux Mint project. To put things in perspective, CompuLab says the Fitlet is about one third the size of the Celeron-based Intel NUC.
AMD

AMD, Nvidia Reportedly Tripped Up On Process Shrinks 230

Posted by Soulskill
from the stupid-physics-getting-in-the-way dept.
itwbennett writes: In the fierce battle between CPU and GPU vendors, it's not just about speeds and feeds, but also about process shrinks. Both Nvidia and AMD have had their moves to 16nm and 20nm designs, respectively, hampered by the limited capacity of both nodes at manufacturer TSMC, according to the enthusiast site WCCFTech.com. While AMD's CPUs are produced by GlobalFoundries, its GPUs are made at TSMC, as are Nvidia's chips. The problem is that TSMC only has so much capacity, and Apple and Samsung have sucked most of it up. The only other manufacturer with 14nm capacity is Intel, and there's no way Intel will sell them any of it.