Samsung Unveils Industry's First 32Gbit DDR5 Memory Die (anandtech.com)

Samsung today revealed the world's first 32 Gb DDR5 DRAM die. From a report: The new memory die is made on the company's 12 nm-class DRAM fabrication process and not only offers increased density, but also lowers power consumption. The chip will allow Samsung to build record-capacity 1 TB RDIMMs for servers as well as lower the cost of high-capacity memory modules. "With our 12nm-class 32 Gb DRAM, we have secured a solution that will enable DRAM modules of up to 1 TB, allowing us to be ideally positioned to serve the growing need for high-capacity DRAM in the era of AI (Artificial Intelligence) and big data," said SangJoon Hwang, executive vice president of DRAM product & technology at Samsung Electronics.

32 Gb memory dies not only enable Samsung to build a regular, single-rank 32 GB module for client PCs using only eight single-die memory chips, but they also allow for higher-capacity DIMMs that were not previously possible. We are talking about 1 TB memory modules using forty 8-Hi 3DS memory stacks, each built from eight 32 Gb memory devices. Such modules may sound like overkill, but for artificial intelligence (AI), Big Data, and database servers, more DRAM capacity can easily be put to good use. Eventually, 1 TB RDIMMs would allow for up to 12 TB of memory in a single-socket server (e.g. AMD's EPYC 9004 platform), something that cannot be done now.
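To sanity-check those headline figures, here is a rough back-of-the-envelope sketch in Python (not from the article): the 80-bit RDIMM bus, i.e. 64 data plus 16 ECC bits, and one 1 TB RDIMM on each of EPYC 9004's twelve memory channels are assumptions used only to show how the numbers line up.

    # Back-of-the-envelope check of the capacities quoted above.
    GBIT_PER_DIE = 32

    # Client module: one rank built from eight single-die chips.
    client_module_GB = 8 * GBIT_PER_DIE / 8        # 32 GB

    # Server RDIMM: forty 8-Hi 3DS stacks, each holding eight 32 Gb dies.
    raw_rdimm_GB = 40 * 8 * GBIT_PER_DIE / 8       # 1280 GB of raw DRAM
    usable_rdimm_GB = raw_rdimm_GB * 64 / 80       # 1024 GB once the assumed ECC share is set aside

    # Assumed: twelve DDR5 channels per EPYC 9004 socket, one RDIMM each.
    per_socket_TB = 12 * usable_rdimm_GB / 1024    # 12 TB

    print(client_module_GB, usable_rdimm_GB, per_socket_TB)   # 32.0 1024.0 12.0

The 64/80 factor is the assumption doing the work here: it maps the 1280 GB of raw DRAM on the module onto the 1 TB of usable capacity the article quotes.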


Comments Filter:
  • I thought... (Score:4, Interesting)

    by Anonymous Coward on Friday September 01, 2023 @05:44PM (#63815887)

    No one would ever need more than 640K.

    This is 2^16 *times* that amount! (in 8-bit words, non-parity). Move to 64-bit ECC and we're looking at around 2^24 times the number of bits.

    My Ghod - it's full of bits!

    • That was a notion from a more innocent time. A time before man had even dreamt of multiple tabs, let alone the Chrome web browser. I don't even think 1 TB will be enough, but it will certainly allow for more tabs.
    • We only have another 6 orders of magnitude in memory to go and then we'll be needing 128-bit processors.
      • by dargaud ( 518470 )
        Or maybe we can extend the old 16-bit memory extenders? What were their names? Pharlap and others?
  • At the very minimum, in server applications you are going to want at least 8 channels, which means 8 TB of RAM. Probably overkill for many. Access to that 8 TB would be about 3 times slower than on present-day high-end consumer GPUs (see the rough scan-time sketch after the comments).

    For AI applications especially, there needs to be a reasonable ratio of capacity to the time required for the controller to scan the chip. 1 TB is as far as you can possibly get from that. Unless you are batching a lot of concurrent operations from the processor's cache, the CPU is just ...

    • HBM is usually used for anything that's bandwidth-sensitive. NVidia's H100 accelerator card has about 4x the effective bandwidth of their L40 card, which uses GDDR6 memory. AMD is also using HBM in their MI300X card, which has over 5 TB/s of bandwidth. It'd be great if they could get the cost of that down to where it could be used in consumer parts, but at a certain point it doesn't really add much for consumer workloads.

      AMD seems to be tackling the bandwidth limits on the server side by offering processors with ...
  • and charging an arm and a leg for memory upgrades.
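
As a rough illustration of the capacity-versus-bandwidth point raised in the thread above, here is a small scan-time sketch in Python. The figures are ballpark public numbers, not taken from the comments: 38.4 GB/s per DDR5-4800 channel, roughly 1 TB/s for a high-end consumer GPU, and about 3.35 TB/s of HBM3 on an H100-class accelerator; the capacities are just illustrative.

    # Rough time to stream through the whole memory pool once.
    configs = {
        # name: (capacity in GB, bandwidth in GB/s)
        "8-channel DDR5-4800, 8 TB":  (8 * 1024, 8 * 38.4),
        "consumer GPU, 24 GB GDDR6X": (24, 1000.0),
        "H100-class HBM3, 80 GB":     (80, 3350.0),
    }

    for name, (capacity_gb, bw_gbps) in configs.items():
        print(f"{name}: ~{capacity_gb / bw_gbps:.3f} s to read every byte once")

The DDR5 pool needs roughly half a minute (~27 s) to stream through once, while both GPU memories turn over in a few hundredths of a second; the bandwidth gap (about 307 GB/s versus ~1 TB/s) is also where the "about 3 times slower" figure above comes from.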
