
PCI SIG Releases PCIe 2.0

symbolset notes that The Register is reporting that PCI SIG has released version 2.0 of the PCI Express base specification: "The new release doubles the signaling rate from 2.5Gbps to 5Gbps. The upshot: a x16 connector can transfer data at up to around 16GBps." The PCI-SIG release also says that the electromechanical specification is due to be released shortly.
  • by RuBLed ( 995686 ) on Tuesday January 16, 2007 @03:41AM (#17625694)
    The signalling rates are measured in GT/s, not Gbps (correct me if I'm wrong). The new release doubles the current 2.5 GT/s to 5 GT/s. As a comparison, 2.5 GT/s is about 500 MB/s of bandwidth per lane, thus 16 GB/s in a 32-lane configuration.

    I tried to do the math but I just can't get it right with Gbps instead of GT/s.

    http://www.intel.com/technology/itj/2005/volume09issue01/art02_pcix_mobile/p01_abstract.htm [intel.com]
  • by Kjella ( 173770 ) on Tuesday January 16, 2007 @03:51AM (#17625738) Homepage
    It's 2.5 and 5.0Gbps, but with 10 bits used to encode each byte (8 bits), so net 250MB/s to 500MB/s per lane, which works out to 16GB/s in a 32-lane config. "The upshot: a x16 connector can transfer data at up to around 16GBps." in the article is simply wrong.
  • by grindcorefan ( 959282 ) on Tuesday January 16, 2007 @05:52AM (#17626308) Homepage
    Well, one x16 link can transmit 8 gigabytes per second. As PCIe is full-duplex, stupid salesdroids and marketingdwarves can be expected to simply add both directions together and use that figure instead. But I agree with you, it is misleading.

    PCIe 1.0 does 2,500,000,000 transfers per second per lane in each direction. Each transfer transmits one bit of data.
    It uses an 8b/10b encoding, so you need 10 transfers to transmit 8 bits of payload data.
    Disregarding further protocol overhead, the best rate you can get is 250,000,000 bytes of payload data per second per lane.
    16 * 250 * 10^6 = 4 * 10^9 bytes/s = 4 gigabytes/s on a x16 link in each direction.

    With PCIe 2.0 the data rate doubles, so the maximum transfer rate on a x16 link rises to 8 gigabytes per second in each direction, again disregarding protocol overhead.
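The derivation in the comment above can be sketched numerically. This is just the comment's own arithmetic (raw transfer rate, 8b/10b overhead, lane count), not an official PCI-SIG formula; the helper name is made up for illustration:

```python
def pcie_payload_bytes_per_sec(transfers_per_sec, lanes):
    """Raw transfer rate -> payload bytes/s per direction, with 8b/10b encoding.

    Each transfer carries one raw bit, and 10 raw bits encode 8 payload bits
    (one byte), so payload bytes/s = transfers/s / 10, times the lane count.
    Further protocol overhead is disregarded, as in the comment.
    """
    return transfers_per_sec / 10 * lanes

GEN1 = 2_500_000_000  # PCIe 1.0: 2.5 GT/s per lane
GEN2 = 5_000_000_000  # PCIe 2.0: 5.0 GT/s per lane

print(pcie_payload_bytes_per_sec(GEN1, 1))   # 250 MB/s per lane
print(pcie_payload_bytes_per_sec(GEN1, 16))  # 4 GB/s on a x16 link, per direction
print(pcie_payload_bytes_per_sec(GEN2, 16))  # 8 GB/s on a x16 link, per direction
print(pcie_payload_bytes_per_sec(GEN2, 32))  # 16 GB/s on a x32 link, per direction
```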

  • by kinema ( 630983 ) on Tuesday January 16, 2007 @05:53AM (#17626314)
    Intel is scheduled to start shipping its X38 (aka "Bearlake") chipset in Q3 of this year. The final v2 spec may have only just been released, but it has been in development for some time, allowing engineers to at least rough out designs. Also, much of the logic from previous v1.x chipsets can be reused, as v2 is an evolution, not a completely new interconnect standard.
  • Re:Why 'PCI'? (Score:5, Informative)

    by Andy Dodd ( 701 ) <atd7NO@SPAMcornell.edu> on Tuesday January 16, 2007 @09:01AM (#17627334) Homepage
    It has more to do with PCI than you think.

    While the electrical interface has changed significantly, the basics of the protocol have not changed much at all, at least at a certain layer.

    The end result is that at some layer of abstraction, a PCI Express system appears identical to a PCI system to the operating system (as another poster mentioned). BTW, with a few small exceptions (such as the GART), AGP was the same way. Also, in theory the migration path from PCI to PCI Express is simple for a peripheral vendor: a PCI chipset can be interfaced to a PCI Express bus with some "one size fits all" glue logic, although that peripheral will of course suffer a bandwidth penalty compared to a native PCIe design.

    Kind of similar to PATA vs. SATA: vastly different signaling schemes, but with enough protocol similarities that most initial SATA implementations involved PATA-to-SATA bridges.
  • Re:Math??? (Score:3, Informative)

    by onemorechip ( 816444 ) on Tuesday January 16, 2007 @02:23PM (#17632192)
    80 Gb/s would be the half-duplex bandwidth. Full duplex is 160 Gb/s (if you can find an application to utilize all of both directions). PCIe uses an encoding of 10 bits to the byte, for numerous technical reasons but primarily to maintain a DC balance (50% ones, 50% zeros) and to ensure maximum run lengths so that the clock (embedded in the serial stream) can be recovered at the receiving end. 160/10 = 16 GB/s.
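The half-duplex/full-duplex figures in the comment above can be checked with a few lines. A sketch only, assuming (as the comment and the article's marketing numbers do) that "full duplex" means summing both directions of a x16 PCIe 2.0 link:

```python
lanes = 16
raw_gbps_per_lane = 5  # PCIe 2.0 signaling rate, Gb/s per lane

half_duplex_gbps = lanes * raw_gbps_per_lane  # one direction: 80 Gb/s
full_duplex_gbps = 2 * half_duplex_gbps       # both directions summed: 160 Gb/s

# 8b/10b encoding uses 10 raw bits per payload byte, so divide Gb/s by 10
# to get payload GB/s.
full_duplex_GBps = full_duplex_gbps / 10      # 16 GB/s

print(half_duplex_gbps, full_duplex_gbps, full_duplex_GBps)
```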
