AMD Details High Bandwidth Memory (HBM) DRAM, Pushes Over 100GB/s Per Stack

MojoKid writes: Recently, a few details of AMD's next-generation Radeon 300-series graphics cards have trickled out. Today, AMD has publicly disclosed new info regarding their High Bandwidth Memory (HBM) technology that will be used on some Radeon 300-series and APU products. Currently, a relatively large number of GDDR5 chips are necessary to offer sufficient capacity and bandwidth for modern GPUs, which means significant PCB real estate is consumed. On-chip integration is not ideal for DRAM because it is not size or cost effective with a logic-optimized GPU or CPU manufacturing process. HBM, however, brings the DRAM as close to the logic die (GPU) as possible. AMD partnered with Hynix and a number of other companies to help define the HBM specification and design a new type of memory chip with low power consumption and an ultra-wide bus width, which was eventually adopted by JEDEC in 2013. They also developed a DRAM interconnect called an "interposer" along with ASE, Amkor, and UMC. The interposer allows the DRAM to be brought into close proximity with the GPU and simplifies communication and clocking. HBM DRAM chips are stacked vertically, and "through-silicon vias" (TSVs) and "bumps" are used to connect one DRAM chip to the next, then to a logic interface die, and ultimately to the interposer. The end result is a single package on which the GPU/SoC and High Bandwidth Memory both reside. 1GB of GDDR5 memory (four 256MB chips) requires roughly 672mm2 of PCB area; because HBM is vertically stacked, that same 1GB requires only about 35mm2. The bus on an HBM stack is 1024 bits wide, versus 32 bits on a GDDR5 chip. As a result, the High Bandwidth Memory interface can be clocked much lower while still offering more than 100GB/s per stack, versus roughly 25GB/s for a GDDR5 chip. HBM also runs at a significantly lower voltage, which equates to lower power consumption.
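
To put the summary's numbers in perspective: peak bandwidth is just bus width times per-pin data rate. A back-of-envelope sketch in Python, where the per-pin rates are typical illustrative figures rather than AMD-published specs:

    # Peak bandwidth (GB/s) = bus width in bits * per-pin data rate in Gbit/s / 8
    def peak_bandwidth_gbs(bus_width_bits, gbps_per_pin):
        return bus_width_bits * gbps_per_pin / 8

    print(peak_bandwidth_gbs(32, 7.0))    # one GDDR5 chip at ~7 Gbps/pin -> 28.0 GB/s
    print(peak_bandwidth_gbs(1024, 1.0))  # one HBM stack at ~1 Gbps/pin -> 128.0 GB/s
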
  • This will make password cracking that much easier. My understanding is that the space-time tradeoff changes drastically with increased RAM bandwidth. I am not smart enough to derive a formal proof, but I think a number of key-stretching functions just bit the dust.
    • by Anonymous Coward

      They just went from 330GB/s on the 290X to ~500GB/s. This will do nothing at all to security. Also we still have no idea how memory latency is impacted (shorter paths, but also lower clocks). If they can scale down to APUs and lower cost GPUs then there is some really great potential.

      • by Agripa ( 139780 )

        Also we still have no idea how memory latency is impacted (shorter paths, but also lower clocks).

        Latency is dominated by reading the DRAM array itself rather than by the interface frequency, which is why latency, when specified in clocks, is roughly proportional to interface frequency.
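
        In other words, absolute latency in nanoseconds is just cycles divided by clock rate. A rough illustration with made-up round numbers (not datasheet values for any real part):

            # latency in ns = latency in clock cycles / clock rate in GHz
            def latency_ns(cycles, clock_ghz):
                return cycles / clock_ghz

            print(latency_ns(15, 1.5))  # hypothetical GDDR5-style interface -> 10.0 ns
            print(latency_ns(5, 0.5))   # hypothetical HBM-style interface   -> 10.0 ns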

    • Moore's law covers this security concern. Expect computation to keep doubling, so a key that takes a year to crack now will take somewhere between a few hours and a few days a decade from now.

      • Expect computation to keep doubling, so a key that takes a year to crack now will take somewhere between a few hours and a few days a decade from now.

        Not to worry; I'll change my key several times before then.

    • by suutar ( 1860506 )

      I'm not sure how increased bandwidth could make a difference beyond sheer number of tries per second. On that scale, it's (very) roughly a factor of 2 difference, so add a bit and move on, no?
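
      For what it's worth, the key-stretching functions the parent is worried about deliberately tie their cost to memory size and bandwidth. A minimal sketch with Python's hashlib.scrypt, where the parameter values are purely illustrative and not a security recommendation:

          import hashlib, os

          # scrypt's working set is roughly 128 * n * r bytes, so raising n or r
          # forces an attacker to spend RAM (and RAM bandwidth), not just raw compute.
          salt = os.urandom(16)
          key = hashlib.scrypt(b"hunter2", salt=salt, n=2**14, r=8, p=1, dklen=32)
          print(len(key))  # 32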

  • Repeated links? (Score:1, Insightful)

    by valinor89 ( 1564455 )
    What is the point of providing more than one link if all of them point to the same page?
  • by Andrio ( 2580551 ) on Tuesday May 19, 2015 @11:18AM (#49726873)

    They're a substantially less evil force than either Intel or NVIDIA.

    • by Anonymous Coward

      AMD Vs NVIDIA! Asian CEO kung fu showdown!

    • Re: (Score:3, Insightful)

      by Adriax ( 746043 )

      I buy AMD because it's cheaper. I have yet to notice a real performance difference.
      The fact that my money doesn't go towards bribing benchmark/review sites and prebuilt manufacturers to lock competition out of the market is just a bonus.

      • I kept buying AMD to save a penny and my experience was iffy drivers (even on Windows) and two dead fans. Now I go NVidia and have had far fewer problems.

        • by Adriax ( 746043 )

          Neither AMD nor nVidia makes the actual card, just the chips and design spec. Busted fans would be the fault of the card manufacturer. And it's a good bet the same manufacturer makes crappy nVidia cards too.

          Drivers are a valid point. Though not one I've experienced myself.
          Worst I can say about drivers is nVidia won't let you easily uninstall the support software if you replace a burnt-out card with an AMD. The uninstaller wouldn't run without a valid nVidia card installed.

  • Power savings (Score:5, Interesting)

    by asliarun ( 636603 ) on Tuesday May 19, 2015 @11:23AM (#49726931)

    One has to give it to AMD. Despite their stock and sales taking a battering, they have consistently refused to let go of cutting edge innovation. If anything, their CPU team should learn something from their GPU team.

    On the topic of HBM, the most exciting thing is the power saving. This would potentially shave off 10-15W from the DRAM chip and possibly more from the overall implementation itself - simply because this is a far simpler and more efficient way for the GPU to address memory.

    To quote:
    "Macri did say that GDDR5 consumes roughly one watt per 10 GB/s of bandwidth. That would work out to about 32W on a Radeon R9 290X. If HBM delivers on AMD's claims of more than 35 GB/s per watt, then Fiji's 512 GB/s subsystem ought to consume under 15W at peak. A rough savings of 15-17W in memory power is a fine thing, I suppose, but it's still only about five percent of a high-end graphics cards's total power budget. Then again, the power-efficiency numbers Macri provided only include the power used by the DRAMs themselves. The power savings on the GPU from the simpler PHYs and such may be considerable."

    http://techreport.com/review/2... [techreport.com]
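
    Working the quoted figures through as a back-of-envelope sketch (the 512 GB/s and 35 GB/s-per-watt numbers are AMD's claims as quoted above, not measurements):

        gddr5_power = 320 / 10   # ~1 W per 10 GB/s on a ~320 GB/s R9 290X -> ~32 W
        hbm_power   = 512 / 35   # claimed >35 GB/s per watt at 512 GB/s   -> ~14.6 W
        print(round(gddr5_power - hbm_power, 1))  # roughly 17 W saved on the DRAMs alone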

    For high end desktop GPUs, this may not be much, but this provides exciting possibilities for gaming laptop GPUs, small formfactor / console formfactor gaming machines (Steam Machine.. sigh), etc. This kind of power savings combined with increased bandwidth can be a potential game changer. You can finally have a lightweight thin gaming laptop that can still do 1080p resolution at high detail levels for modern games.

    I know Razer etc already have some options, but a power efficient laptop GPU from the AMD stable will be a very compelling option for laptop designers. And really, AMD needed something like Fiji - they really have to dig themselves out of their hole.

    • by Kjella ( 173770 )

      One has to give it to AMD. Despite their stock and sales taking a battering, they have consistently refused to let go of cutting edge innovation. If anything, their CPU team should learn something from their GPU team.

      Well unlike their CPU division the GPU division hasn't been the one bleeding massive amounts of cash, at least not until the GTX 970/980 generation from nVidia. Though with the R300 OEM series being an R200 rebrand they seem to be running out of steam, and one limited-quantity HBM card won't fix their lineup.

      This kind of power savings combined with increased bandwidth can be a potential game changer. You can finally have a lightweight thin gaming laptop that can still do 1080p resolution at high detail levels for modern games.

      You still need power for the shaders, which account for about 80-90% of a GPU's power consumption. In fact, AMD's problem is that even if they could swap out the GDDR5 for HBM today, they would still lose on performance/watt to nVidia.

      • by Agripa ( 139780 )

        Well unlike their CPU division the GPU division hasn't been the one bleeding massive amounts of cash, at least not until the GTX 970/980 generation from nVidia.

        Their CPU division has also been bleeding people since the K8 was released. Apple has more ex-K8 employees than AMD. I have been told that this started under, and was primarily caused by, Hector Ruiz.

    • by Anonymous Coward

      "On the topic of HBM, the most exciting thing is the power saving. This would potentially shave off 10-15W from the DRAM"

      Umm, going to have to disagree with you there. We create very high end real time visual simulations that are inherently bandwidth bound, i.e. the limit to model complexity is generally how fast vertices and textures can be moved around.

      Most of my customers are going to be way more excited about potentially doubling or tripling the fidelity of their visualization than saving a few $ on power.

    • I don't know why you're attributing this to AMD. One of the leading firms for 3D stacking is Tezzaron, and they've been around for more than a decade. You can give credit to AMD for taking on the risk of 3D memory, but not for innovating it -- other companies did most of that work.
  • by davydagger ( 2566757 ) on Tuesday May 19, 2015 @11:27AM (#49726961)

    I've been an nVidia fanboy since switching to Linux. But with the combination of AMD releasing their new unified driver, the latest nVidia chips being notoriously hard for nouveau, and now this, I think my next card is going to come from AMD.

  • All the graphics I have seen show the HBM memory being much taller or thicker than the GPU core part. Is that how it will be? And if so, how are they going to make the lid make good contact with the hot parts when they are "recessed"?
    • Go back to using separate heatsinks instead of one gigantic one?

    • As far as cooling, note that the ram is also clocked "much lower" than traditional gddr and uses less power. In practical terms this means "this ram solves heat issues."
    • They could use separate heatsinks or mill a recess into a bigger one for the memory.

  • The good: next year AMD starts making APUs with HBM. The only thing that was holding back the iGPU was memory bandwidth. So, now they can put a 1024-shader GPU on the die and not have it starved by bandwidth. That will have interesting applications: powerful gaming laptops much cheaper than those with a discrete GPU, and HPC (especially considering HSA applications)
    The bad: this year AMD is only releasing one new GPU, Fiji. The rest are rebadges. And there is no new architecture. Even Fiji is making do with G

    • Not earth-shattering, but Tonga, which is GCN 1.2, is the GPU that most needs a re-release, and I think they are releasing one more new GPU this year, Iceland. That one is low-end 128-bit, and is about what you need to run the new AMD Linux driver - which sadly requires GCN 1.2 (?) from what I think I've read.

    • by 0123456 ( 636235 )

      The good: next year AMD starts making APUs with HBM. The only thing that was holding back the iGPU was memory bandwidth. So, now they can put a 1024-shader GPU on the die and not have it starved by bandwidth.

      Yeah, now you just need to get 500W out of that chip that combines a power-hungry AMD CPU with a power-hungry AMD GPU.

  • which was eventually adopted by JEDEC in 2013

    I hope the organizers were careful not to invite any patent trolls to that round.

  • They're moving the RAM modules closer to the controlling chip? Why can't they simply make the copper path that connects them slightly thicker and out of purer copper if they want mad bandwidth? Isn't that how all electronic modulated pulses work?
    • by Anonymous Coward

      I'm glad you brought that up. Clearly, the AMD engineers spent all this time working on the wrong solution.

    • Re:easier solution (Score:4, Informative)

      by matfud ( 464184 ) <matfud@yahoo.com> on Tuesday May 19, 2015 @01:41PM (#49728395) Homepage

      No. Driving an external bus requires a lot of silicon area to handle the capacitance, resistance, and distance. It also requires a lot of power.

      Stacked chips require far smaller drivers. The distance is measured in millimetres rather than decimetres. The insulators are far better (as the current and voltage can be far smaller). Capacitance is also far lower. And you do not need to run 1024-bit data paths plus address and signaling across the motherboard, which makes motherboards far simpler and cheaper to make. That's not counting the problems with signal propagation along different-length paths on a motherboard (handled inside the package in this case) or interactions between the multilayer PCB traces.

      So yes there are very good reasons to do this.

      • by matfud ( 464184 )

        Oh yes, when you get into the hundreds-of-MHz to GHz range, the signaling properties and distance become very important. Electrical signals in copper do not travel at the speed of light. Even if they did, you would still have timing issues due to the different lengths of the copper traces on the MB. It takes a lot of skill to negate this single effect, and the wider the bus, the harder it is to achieve.

        Unless you get Monster cables, of course!
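
        To put a rough number on it, assume signals in a PCB trace propagate at about half the speed of light (a common rule of thumb for FR-4, not a figure for any particular board):

            PROP_CM_PER_NS = 15.0  # ~0.5c expressed in cm per nanosecond

            def skew_ns(length_mismatch_cm):
                return length_mismatch_cm / PROP_CM_PER_NS

            # A 3 cm mismatch between traces adds ~0.2 ns of skew, a big chunk of a
            # 1 ns clock period at 1 GHz, which is why traces get length-matched.
            print(round(skew_ns(3.0), 2))  # 0.2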

  • And if you stagger the memory addresses round-robin across the stacks and maybe offset the clock of each stack appropriately, might there be a performance gain as well?
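
    What I have in mind is something like plain cache-line interleaving, if the memory controller supports it. A hypothetical sketch, where the stack count and line size are made-up parameters:

        NUM_STACKS = 4    # hypothetical number of HBM stacks
        LINE_BYTES = 64   # hypothetical interleave granularity

        def stack_for_address(addr):
            # Consecutive lines land on different stacks, so streaming
            # accesses keep every stack busy at once.
            return (addr // LINE_BYTES) % NUM_STACKS

        print([stack_for_address(a) for a in range(0, 256, 64)])  # [0, 1, 2, 3]
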
  • Googling the JEDEC document number JESD235 from the article turns up several references to NVIDIA talking about this for two years now, for their Pascal series of chips after Maxwell.

    future-nvidia-pascal-gpus-pack-3d-memory-homegrown-interconnect [enterprisetech.com]
    http://en.wikipedia.org/wiki/High_Bandwidth_Memory [wikipedia.org]
    http://www.cs.utah.edu/thememoryforum [utah.edu]
    • by Xenx ( 2211586 )
      From the summary: AMD partnered with Hynix and a number of companies to help define the HBM specification and design a new type of memory chip with low power consumption and an ultra-wide bus width, which was eventually adopted by JEDEC in 2013

      AMD started this with Hynix 4 years ago, so obviously the tech itself isn't brand new. The news is the pending release of hardware using said technology.
  • Control-F "heat"

    [No Results]

    My thoughts exactly.

    That's not to say this isn't great stuff, but thermal issues have always been one of the two biggest problems with stacked memory, the other being fabrication and routing. You can't put a heatsink on a die when the die is wedged between 5 other dies. So you're hoping a heatsink on the top and bottom is enough for the middle wafers, or you're running some sort of tiny heat exchanger system.
  • How about they work on the drivers? I love AMD cards and the new HBM is super, but every 1-2 years I fall for the AMD/ATI video card hype and fall into the driver hell trap. The 260X and 285X had horrid Linux/Windows support. I can live with heat or noise issues, but driver crashes on a black desktop with icons... come on. Before they break out a new GPU, how about fixing the drivers? NVIDIA gets hate for their drivers, but they work and don't go bonkers.

