
Intel Unveils Optical Compute Interconnect Chiplet: Adding 4 Tbps Optical Connectivity To CPUs or GPUs (tomshardware.com)

Intel has introduced an advanced optical input/output chiplet, marking what it claims to be a significant leap in data center technology. The optical compute interconnect (OCI) chiplet, unveiled at the Optical Fiber Communication Conference 2024, is designed for integration with CPUs and GPUs and boasts 64 PCIe 5.0 channels transmitting 4 Tbps over 100 meters using fiber optics. Tom's Hardware adds: The chiplet uses dense wavelength division multiplexing (DWDM) and consumes only five pico-Joules per bit, making it significantly more energy-efficient than pluggable optical transceiver modules, which consume about 15 pico-Joules per bit, according to Intel. This device is crucial for next-generation data centers and AI/HPC applications. It will enable high-performance connections for CPU and GPU clusters, coherent memory expansion, and resource disaggregation. These features will be handy for operating supercomputers for large-scale AI models and machine learning tasks that require tremendous data bandwidth.
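Quick math on the quoted figures (a minimal sketch in Python; it treats the 4 Tbps as the aggregate bidirectional rate across all 64 channels, which is an assumption the summary does not state outright):

```python
# Power implied by the energy-per-bit figures quoted above.
BANDWIDTH_BPS = 4e12        # 4 Tbps, per the summary
OCI_PJ_PER_BIT = 5          # Intel's OCI chiplet
PLUGGABLE_PJ_PER_BIT = 15   # typical pluggable optical transceiver

def link_power_watts(bits_per_second: float, pj_per_bit: float) -> float:
    """Power = data rate * energy per bit (pJ converted to J)."""
    return bits_per_second * pj_per_bit * 1e-12

oci_w = link_power_watts(BANDWIDTH_BPS, OCI_PJ_PER_BIT)
plug_w = link_power_watts(BANDWIDTH_BPS, PLUGGABLE_PJ_PER_BIT)
print(f"OCI: {oci_w:.0f} W vs pluggable: {plug_w:.0f} W "
      f"-> {plug_w - oci_w:.0f} W saved per 4 Tbps link")   # 20 W vs 60 W

# 4 Tbps over 64 channels is 62.5 Gbps per channel, consistent with
# 32 GT/s PCIe 5.0 lanes counted in both directions (again, an assumption).
print(f"{BANDWIDTH_BPS / 64 / 1e9:.1f} Gbps per channel")
```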


Comments Filter:
  • Good move (Score:4, Interesting)

    by Fons_de_spons ( 1311177 ) on Thursday June 27, 2024 @01:04PM (#64582889)
    Yep... having a fab has its strengths... Well played, Intel. I guess it's a lot harder for AMD to develop something like this. Interesting times.
  • by jenningsthecat ( 1525947 ) on Thursday June 27, 2024 @01:42PM (#64582963)

    Traditional electrical I/O systems, which rely on copper traces for connectivity, offer high bandwidth density and low power but are limited to short distances of about one meter.

    With optical connections getting physically closer and closer to the CPUs and GPUs, are they likely to replace copper connections on the same PCB? If so, that could have significant advantages.

    Matching the delays and impedances of copper traces could be dispensed with, and RF emissions could be reduced. There would be more room both for copper devoted to power distribution and for more bypass caps; that could result in lower emissions as well as more stable voltages at the chips. Overall board real estate might be reduced, although possibly at the expense of additional height for the optical interconnects. And the changes in component layout might allow for more effective cooling.

    I'm guessing that the optical transceivers would increase power demands, but that increase might be offset by other factors. Does anybody here know anything about that?

    • by Tailhook ( 98486 )

      Likely? No. There could be real benefits for data conversion applications (ADC/DAC) where noise can be mitigated using optics. Otherwise, PCBs with optical traces are likely to cost a great deal more. Fabricating working fiber optics, particularly finishing and aligning the ends of the fibers, is not trivial. It's hard to see how that would be compatible with low-cost mass PCB manufacturing.

      • It definitely seems like it'd be more expensive... however, fiber isn't exactly expensive itself.
        Being in the networking business, we literally have kilometers of the shit in the warehouse.
        Whatever non-triviality is involved in its manufacture, I've got it lying on my office floor as throwaway clutter used in experiments.

        Certainly from the perspective of "moving all things over to optical fibers" (which nobody is suggesting), I could see how the price begins to matter... but this is for ultra-high-bandwidth links.
        • by Shaitan ( 22585 )

          If fiber is going to 'hurt' there, it isn't in the dollar cost of the fiber but in the real-estate cost, conversion cost, and the cost of added complexity [which increases failures]. If Intel has found a way for the benefits to outweigh those costs, then good on them, but I suspect it is just a matter of time before someone manages to shift those scales back again.

    • Do any chips leverage the fact that lasers going in different directions can pass through each other? That'd be cool if the core of the chip was a vacuum with a layer of transceivers sending information all throughout a 3d structure. Like the galactic senate.
      • Re: Just wondering (Score:4, Interesting)

        by Fons_de_spons ( 1311177 ) on Thursday June 27, 2024 @02:44PM (#64583193)
        I once joined a discussion about a proof-of-concept paper. They added a mirror on top of the chip. Instead of using long interconnect wires, they shot light from one side of the chip and bounced it off the mirror to receive it at the other end. The idea was to do this in a directional way, allowing multiple point-to-point connections. It of course had issues, but that was roughly two decades ago. I found it a nice idea back then.
        • by Tailhook ( 98486 )

          This discussion has me recalling HP's "Machine" [wikipedia.org] architecture. The idea was to interconnect CPUs and large amounts of NVRAM with optics.

          Seems like this Intel OCI technology would be an enabler for this sort of work. Not that HP isn't the hidebound, rent-seeking operation we know it to be, incapable of actually pulling it off. But someone probably can.

        • by Kaenneth ( 82978 )

          That could be interesting combined with DLP mirror technology

          https://en.wikipedia.org/wiki/... [wikipedia.org]

          Each light beam shines up at its own mirror, which could direct the beam wherever else the data is being sent. I think DLP is pretty much on/off, but if each mirror could be aimed using something like a GPU bump texture map...
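          A toy geometry for that idea (purely hypothetical; real DLP mirrors are binary-tilt as noted, so this just illustrates what continuously aimable mirrors would have to do, not any actual device interface):

```python
import math

# Hypothetical steerable-mirror router: each transmitter fires a beam
# straight up at its own micromirror, mounted a fixed height above the
# die; tilting the mirror by alpha deflects the reflection by 2*alpha,
# steering the beam to a receiver elsewhere on the die.
MIRROR_HEIGHT_MM = 5.0  # made-up height, for illustration only

def mirror_tilt_deg(tx_x_mm: float, rx_x_mm: float) -> float:
    """Tilt (degrees from horizontal) that bounces a vertical beam
    launched at tx_x down onto a receiver at rx_x."""
    dx = rx_x_mm - tx_x_mm
    # Reflected beam leaves at 2*alpha from vertical, so tan(2a) = dx/h.
    return math.degrees(math.atan2(dx, MIRROR_HEIGHT_MM)) / 2.0

# Route three beams across a notional 20 mm die:
for tx, rx in [(0.0, 20.0), (5.0, 5.0), (20.0, 0.0)]:
    print(f"tx@{tx:4.1f} mm -> rx@{rx:4.1f} mm: "
          f"tilt {mirror_tilt_deg(tx, rx):+6.2f} deg")
```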

    • Sending many bits quickly over wires requires high speed pin drivers and high frequency, low jitter synchronization (PLLs).

      On-silicon optics side-steps those fantastically complicated technologies with a different fantastically complicated technology.
      PLL and line-driver technology is closely tied to silicon performance. If optics on the same silicon can push data faster, it's a win.
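      Rough numbers on why (a sketch using the published PCIe line rates; the 0.3 UI jitter budget is a common rule of thumb, not a figure from this thread):

```python
# The timing window per bit (the unit interval, UI) shrinks linearly
# with line rate, which is why serial links need low-jitter PLLs.
RATES_GTPS = [2.5, 8.0, 16.0, 32.0]   # PCIe 1.0, 3.0, 4.0, 5.0

for rate in RATES_GTPS:
    ui_ps = 1e12 / (rate * 1e9)       # one unit interval, in picoseconds
    # Rule-of-thumb assumption: total jitter must stay well under ~0.3 UI
    # for the receiver to recover the data reliably.
    budget_ps = 0.3 * ui_ps
    print(f"{rate:5.1f} GT/s: UI = {ui_ps:6.2f} ps, "
          f"~{budget_ps:5.2f} ps total jitter budget")
# At 32 GT/s the UI is ~31 ps, so the clock path must hold jitter to a
# few picoseconds -- the "fantastically complicated" part.
```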

    • by AmiMoJo ( 196126 )

      It seems a bit pointless for short-distance connections like between a CPU and GPU. The transceivers add their own latency, and would increase power consumption and cost. For longer distances, between servers or beyond, fibre makes sense.
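      For a sense of scale, propagation delay alone (textbook physics rather than anything from the article: light in silica fiber travels at roughly c/1.47, about 4.9 ns per meter):

```python
# Fiber propagation delay for the distances being discussed.
C_M_PER_S = 299_792_458
FIBER_INDEX = 1.47              # typical refractive index of silica fiber

ns_per_m = FIBER_INDEX / C_M_PER_S * 1e9    # ~4.9 ns/m

for dist_m in (1, 10, 100):     # 1 m ~ copper reach; 100 m ~ the OCI claim
    print(f"{dist_m:4d} m of fiber: {dist_m * ns_per_m:6.1f} ns one-way")
# SerDes and electrical/optical conversion latency (not modelled here)
# comes on top of this, which is the point about short hops.
```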

  • I just remember being disappointed after finding out Fibre Channel hard drives did not have fiber optic connections.

    • There there ;-)
    • Heh. No, they do not. But Fibre Channel does use optical connections for packet forwarding past the drives.
      I have several FC chassis with optical FC switches.

      These days, it's cheaper to whip up a fat SCSI chassis and just use iSCSI across a cheap Nexus, though. We won't be replacing those devices with FC.
      • by Shaitan ( 22585 )

        Quite. For a hot second I was able to use cheap used adapters for point-to-point FC connections between my NAS and SAN [at home], because datacenter phase-outs dump gear in such quantities and 10Gb adapters remained expensive for an insanely and inappropriately long time. The 10Gb market is finally cracking, though: adapters are cheap, and there are a couple of switches offering multiple 10Gb ports to prosumers for what a single adapter cost a few years ago.

  • Didn't some PowerPC CPUs have an optical interconnect at some point? That was before the concept of cores took off, and it was mainly for CPU-to-CPU communication.
