Upgrades Hardware

PCI SIG Releases PCIe 2.0

symbolset notes that The Register is reporting that PCI SIG has released version 2.0 of the PCI Express base specification: "The new release doubles the signaling rate from 2.5Gbps to 5Gbps. The upshot: a x16 connector can transfer data at up to around 16GBps." The PCI-SIG release also says that the electromechanical specification is due to be released shortly.
  • Yay (Score:4, Funny)

    by Anonymous Coward on Tuesday January 16, 2007 @02:24AM (#17625610)
    Now I can play games at 600fps- I've so been needing the boost- 200fps just doesn't cut it.

    But seriously- the data acquisition and video rendering markets should benefit from this. Cool.
    • Re: (Score:3, Funny)

      by killa62 ( 828317 )
      yes, but will it run duke nukem forever?
      • Re: (Score:2, Funny)

        by Anonymous Coward
        Well, this [jjman.org] card might be up to the task.
      • No, you need an Infinium Labs Phantom to do that. It's a launch title.
      • yes, but will it run duke nukem forever?


        Yes, provided there are no ill side effects from the vapor.
      • by HTH NE1 ( 675604 )
        It will, though the game will have to be delayed a bit more so that it can be rewritten to take advantage of this advance... and the next one... and the next one... and....
      • Not forever, but for at least a few minutes before it crashes.
    • I never understood the need for 180fps in a game. Anything higher than the monitor's refresh rate is a waste. I mean, if you're running an LCD monitor (typically refreshes 60 times per second) and you've got a video card that's rendering 120 frames per second - where's that every-other-frame going? The monitor can only display 60/s.

      Now, if they offloaded the physics to the GPU to use the extra capacity...
      • Re:Yay (Score:4, Interesting)

        by jandrese ( 485 ) <kensama@vt.edu> on Tuesday January 16, 2007 @09:16AM (#17627922) Homepage Journal
        The thing about 120FPS is that when someone is quoting you that number, they're talking about the framerate when you're sitting in the entrance room after first booting up the game. In complex environments where you have lots of monsters and particle effects on the screen, that number can quickly drop down into the 30-60 range. While this is still more than playable, if you'd only started at 60 or 45FPS the game would bog down in the difficult sections (and those sections are typically where you need that extra accuracy and quicker response time).
        • by afidel ( 530433 )
          Which is why you need MINIMUM framerate numbers, not average. In NWN2 I get around 40fps with fairly minimal settings, but in difficult scenes I bog down as low as 8fps and average around 11fps; to me that is unplayable.
        • by Ant P. ( 974313 )
          Variable framerates are one thing I hate in games. The crap level/model design that causes it is another. When you only have a low-end card and 30fps is as good as it gets, even having something walk in front of you on screen can be fatal. Not to mention bloody annoying.
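          The minimum-versus-average point is easy to see with a small sketch (the frame times below are invented for illustration, not measurements from any particular game):

          # Average fps can look fine even when a few slow frames ruin the feel.
          frame_times_ms = [8, 9, 8, 10, 9, 8, 120, 130, 9, 8]  # two heavy frames mid-fight

          avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
          worst_fps = 1000 / max(frame_times_ms)

          print(f"average: {avg_fps:.0f} fps, worst frame: {worst_fps:.1f} fps")
          # average: 31 fps, worst frame: 7.7 fps -- the spikes are what you actually feel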
    • Don't worry, the Unreal 3 Engine running at 2560x1600 can take full advantage of this stuff... just wait until Vanguard: Saga of Heroes goes Unreal 3 :)
    • You may want to separate your sentences with periods (or possibly colons), so people don't mistake 200fps with -200fps.

      Just some -lame advice? Cool-

  • Outstanding (Score:1, Funny)

    by cmdtower ( 1051808 )
    After these many years of reading slashdot.org (ala 1998) - This is my first post. Boy am I glad I'm waiting for the nextgen motherboard. 16 GB/s. Yikes!
    • After 5 minutes of being registered I'm already testing the new discussion system. PCIe 2.0 ftw!
      • by GFree ( 853379 )
        Make sure to provide at least one pro-Linux comment during your time here, or you will be forever considered a pariah of the Slashdot community.

        Extra karma for one good "Soviet Russia" joke.
        • by PSXer ( 854386 )
          It's easier to just put it in your sig. That way you won't forget.
        • In soviet russia, slashdot comments YOU!!

          oh wait... a good Soviet Russia joke you say? I'll be back...

          • In soviet russia, slashdot comments YOU!!

            oh wait... a good Soviet Russia joke you say? I'll be back...

            No, no you won't.
  • by scwizard ( 941758 ) on Tuesday January 16, 2007 @02:31AM (#17625646) Homepage Journal
    Slower than they get easier to create.
    • Re: (Score:1, Funny)

      by Anonymous Coward
      I'd rather put a network card in that 16 Gbps slot. Imagine how fast one could download porn!
    • by bky1701 ( 979071 )
      Not really. As systems become more capable, the need to optimize (a major part of modeling) starts to go down. Also, major improvements can be made to things like lighting that don't take a whole lot of work but make the whole model look better. Real-time ray-traced lighting? I wish, but hopefully it's not going to be that much longer.
  • by RuBLed ( 995686 ) on Tuesday January 16, 2007 @02:41AM (#17625694)
    The signalling rates are measured in GT/s not Gbps (correct me if I'm wrong). The new release doubles the current 2.5 GT/s to 5 GT/s. As a comparison, the 2.5 GT/s is about 500 MB/s of bandwidth per lane, thus 16 GB/s in a 32-lane configuration.

    I tried to do the math but I just can't get it right with Gbps instead of GT/s.

    http://www.intel.com/technology/itj/2005/volume09issue01/art02_pcix_mobile/p01_abstract.htm [intel.com]
    • by Kjella ( 173770 ) on Tuesday January 16, 2007 @02:51AM (#17625738) Homepage
      It's 2.5 and 5.0Gbps, but with 10 bits used to encode each byte (8 bits), so net 250MB/s to 500MB/s per lane, which works out to 16GB/s in a 32-lane config. "The upshot: a x16 connector can transfer data at up to around 16GBps." in the article is simply wrong.
      • Re: (Score:2, Informative)

        Well, one 16x link can transmit 8 gigabytes per second. As PCIe is full-duplex, stupid salesdroids and marketingdwarves can be expected to simply add both directions together and use that figure instead. But I agree with you, it is misleading.

        PCIe 1.0 does 2,500,000,000 transfers per second per lane in each direction. Each transfer transmits one bit of data.
        It uses an 8b/10b encoding, therefore you need 10 transfers in order to transmit 8 bits of payload data.
        Disregarding further protocol overhead, the best rate
      • No, the 16GBps on an x16 connector is TRUE. They just count both ways: PCIe lanes are one-way, point-to-point links with one lane per direction (so an x16 slot has 16 lanes up AND 16 lanes down). This way, you get 16GBps total transfer: 8GBps in each direction.

        A bit confusing and slightly misleading, but true.
    • The signalling rates are measured in GT/s not Gbps (correct me if I'm wrong).

      I'd have to know what a GT/s was first. Gross Tonnes per second? Gigatonnes per second? Gigatexels per second? Gran Turismos per second?
      • Gigatransfers/second. I think somebody on the committee thought Gigabits/second was underselling the bandwidth, since it only represents the bandwidth of one lane in a multilane link. So a "transfer" is the number of bits that can be moved simultaneously over all the lanes in a UI (unit interval, which is 0.2 ns at the new rate); consequently the size of a transfer will vary with the link width.
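        As a rough sketch of the arithmetic in this subthread (only the figures quoted above are used; protocol overhead beyond 8b/10b is ignored):

        def pcie_payload_gb_per_s(gt_per_s, lanes, both_directions=False):
            # 8b/10b line coding: 10 transfers carry 8 bits of payload
            payload_bits_per_lane = gt_per_s * 1e9 * 8 / 10
            total_bytes = payload_bits_per_lane / 8 * lanes   # bits -> bytes, all lanes
            total_gb = total_bytes / 1e9                      # one direction only
            return total_gb * 2 if both_directions else total_gb

        print(pcie_payload_gb_per_s(2.5, 16))                        # PCIe 1.x x16: 4.0 GB/s each way
        print(pcie_payload_gb_per_s(5.0, 16))                        # PCIe 2.0 x16: 8.0 GB/s each way
        print(pcie_payload_gb_per_s(5.0, 16, both_directions=True))  # 16 GB/s if both directions are summed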
  • Now I can type at 16GBpS! Imagine how fat and poorly-designed our operating systems can be now! Th1s r0cks[shift one]
  • Sigh (Score:4, Interesting)

    by Umbral Blot ( 737704 ) on Tuesday January 16, 2007 @02:52AM (#17625746) Homepage
    I know this is news, and actually relevant to /. (for once), but I find it hard to care. Sure the specification is out, but it will take a long time I suspect to find its way into computers (since the existing version is so entrenched), and even longer for cards to be made that take full advantage of it. Is there something I am missing that will make this new standard magically find its way into computers in the next few months? Do I have to turn in my geek card now?
    • Re: (Score:1, Interesting)

      by Anonymous Coward
      It took long enough for the first version of PCIe to start displacing PCMCIA. At least now the slot's the correct size... It might be comparable to going from PS/2 to USB 1.x to USB 2.0.
    • Re:Sigh (Score:5, Insightful)

      by Indy1 ( 99447 ) on Tuesday January 16, 2007 @03:14AM (#17625858)
      It's a good thing. By getting the standard approved way before it's needed, it gives everyone (hardware OEMs, OS developers, etc.) plenty of time to integrate support for it. Rushing critical standards like this leads to nasty problems (think VLB) or outright non-adoption.

      When the ATA standards 33, 66, 100, etc, were adopted, everyone was saying the same thing - why in the hell is it needed. But by getting it adopted and published before it was needed, it gave all the chipset and motherboard vendors time to build it into their products. Result - in the past 10 years hard drives have NOT been bottlenecked transferring data between the drive and motherboard. You can get a screaming fast hard drive, stick it in an older motherboard (say, within 2-3 years of the hard drive's date), and it almost always works without issues.

      PCIe 1.0 took too long to come out. The PCI bus has been overwhelmed by modern video cards (which led to the AGP hack, which fortunately worked fairly well), SCSI and RAID controllers, ethernet cards (PCI can't even give a single gigabit NIC enough bandwidth), USB 2.0, FireWire 400 and 800, etc etc etc. PCI-X was complex, expensive, and not widely available. It also ate up too much of the motherboard real estate.

      By getting on the ball with PCIe 2.0, we won't see the same problem again for a while. Now if only FireWire 800 and eSATA could be more common...
      • Can someone explain why it's called 'PCI' Express when it doesn't have much to do with PCI, and the slots aren't backwards compatible? Is it a variant of the marketing rule that any successful network transport must be called Ethernet?
        • by vadim_t ( 324782 )
          It's backwards compatible to the OS.

          Any OS that supports PCI automatically supports PCI Express without any modifications.
        • Because PCI stands for "Peripheral Component Interconnect", and PCI Express still connects peripheral components to the computer.

          So the name makes perfect sense, doesn't it?
        • Re:Why 'PCI'? (Score:5, Informative)

          by Andy Dodd ( 701 ) <atd7@@@cornell...edu> on Tuesday January 16, 2007 @08:01AM (#17627334) Homepage
          It has more to do with PCI than you think.

          While the electrical interface has changed significantly, the basics of the protocol have not changed much at all, at least at a certain layer.

          The end result is that at some layer of abstraction, a PCI-Express system appears identical to a PCI system to the operating system (as another poster mentioned). BTW, with a few small exceptions (such as the GART), AGP was the same way. Also, (in theory) the migration path from PCI to PCI Express for a peripheral vendor is simple - A PCI chipset can be interfaced with a PCI Express bus with some "one size fits all" glue logic, although of course that peripheral will suffer a bandwidth penalty compared to being native PCIe.

          Kind of similar to PATA vs. SATA - Vastly different signaling schemes, but with enough protocol similarities that most initial SATA implementations involved PATA-to-SATA bridges.
          • by Micah ( 278 )
            Hi,

            since you seem to know what you're talking about ... :)

            I've been trying to figure out if an ExpressCard eSATA interface would work with Linux and, if so, with just the in-kernel SATA driver or would it require something additional?

            Since ExpressCard supposedly uses PCIe internally, my strong hunch is that it would work just fine with the in-tree driver, but even extensive Googling did not come up with anyone who had actually tried it.

            Just trying to find some confirmation before plunking down hundreds of d
        • Several people have answered this already, but I'll just add a few small points. Configuration space in PCI Express is a superset of PCI's configuration space. So a BIOS or OS that can talk to config registers on a PCI device can talk to config registers on a PCIe device. Also, the transaction layer mode (split transactions) is based on the PCI-X (not conventional PCI) model. You can think of the relationship between PCI and PCIe as similar to that between parallel SCSI and the serial interfaces that imp
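          To make the configuration-space point concrete, here is a minimal sketch of decoding the first few standard header fields (offsets 0x00/0x02 for vendor/device ID, 0x08-0x0B for the class bytes, 0x0E for header type); the byte values below are made up, and on Linux the real bytes can be read from /sys/bus/pci/devices/<address>/config for PCI and PCIe devices alike:

          import struct

          def parse_pci_config_header(cfg: bytes):
              """Decode a few fields that PCI and PCIe functions expose identically."""
              vendor_id, device_id = struct.unpack_from("<HH", cfg, 0x00)
              _rev, _prog_if, subclass, base_class = struct.unpack_from("<BBBB", cfg, 0x08)
              header_type = cfg[0x0E] & 0x7F
              return {"vendor": hex(vendor_id), "device": hex(device_id),
                      "class": hex(base_class), "subclass": hex(subclass),
                      "header_type": header_type}

          fake_cfg = bytes([0x86, 0x80, 0x3E, 0x29]) + bytes(12)  # invented 16-byte header start
          print(parse_pci_config_header(fake_cfg))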
      • The PCI bus has been overwhelmed by modern video cards (which led to the AGP hack, which fortunately worked fairly well), SCSI and RAID controllers, ethernet cards (PCI can't even give a single gigabit NIC enough bandwidth), USB 2.0, FireWire 400 and 800, etc etc etc. PCI-X was complex, expensive, and not widely available. It also ate up too much of the motherboard real estate.

        PCI-X is PCI, just clocked a little higher, and it is fairly common on server hardware where the increased bandwidth is actually useful.

        • by Indy1 ( 99447 )
          you forget one thing. PCI is a shared bus. So you end up having usb 2, ethernet, etc, all fighting for that same bandwidth. Also, the 133MB/s is peak bandwidth. In reality, the sustained bandwidth is quite a bit less, depending on how well the southbridge chipset is designed (early VIA chipsets often had problems here).

          PCIe gives each slot dedicated bandwidth, which is the biggest advantage of the technology.
          • by afidel ( 530433 )
            So what? Good PCI implementations have multiple buses; for instance, my HP DL585 has two PCI-X 133 slots, each one on a dedicated PCI bus, and it also has six PCI-X slots on three more buses. In total there are 8 PCI buses in the system to deal with internal connections to the built-in peripherals.
          • you forget one thing. PCI is a shared bus.

            thank you, I do know that

            So you end up having usb 2, ethernet, etc, all fighting for that same bandwidth.

            Sure, assuming that your machine has only one bus. Even low-end desktops at least have separate buses for video, integrated peripherals, and add-in cards. (You seem to have missed that part of the email you replied to.)

            Also, the 133MB/s is peak bandwidth. In reality, the sustained bandwidth is quite a bit less, depending on how well the southbridge chipset is

        • There is an AM2 board with SLI and PCI-X.
      • When the ATA standards 33, 66, 100, etc, were adopted, everyone was saying the same thing - why in the hell is it needed.

        Meanwhile, "everyone else" knew why it was needed and had been using SCSI for several years because of the performance advantages.

        The PCI bus has been overwhelmed by modern video cards (which led to the AGP hack, which fortunately worked fairly well),

        Probably because it wasn't a hack, but a well-documented and planned industry standard.

      • The standard is needed now, and could be used now. Graphics cards are already at the limits of PCI-E 1. It's not a problem the ordinary user sees because released games and applications are carefully built around the PCI-E bottleneck, but for game developers and people doing physics or scientific calculations on graphics cards it's a big issue.
    • Re: (Score:3, Insightful)

      by evanbd ( 210358 )
      Well, it's backwards compatible with PCIe 1.0 and 1.1, so aside from price there's no disadvantage to including it. Whether there's demand for it is another question, but I'm sure the graphics card makers will find something to do with it. Think back to AGP 2x/4x -- those made it onto cards and motherboards fairly quickly, iirc.
    • Re:Sigh (Score:5, Insightful)

      by mabinogi ( 74033 ) on Tuesday January 16, 2007 @03:36AM (#17625966) Homepage
      I'd assume it'd be backwards compatible, similar to the AGP standards - in most cases you could stick any AGP card in an AGP 8x slot (as long as the motherboard still supported the voltages used by the older AGP versions, which was true in most cases).

      If that's the case, then there's no barrier to adoption and manufacturers can just start cranking them out as soon as they're ready. It's only when a technology requires a completely new platform at multiple levels that adoption is slow, and that was why PCIe took so long.
    • by be-fan ( 61476 )
      Since PCIe 2.0 is electrically compatible, the transition is going to be more like the one between AGP versions (which happened quite quickly), rather than the transition from PCI to PCIe.
    • I think you just canceled your geek card.

      Some interest in the next generation of technology, rather than just what you can buy in the local store today, is required for membership.
    • Intel is scheduled to start shipping their X38 (aka "Bearlake") chipsets in Q3 of this year. The final v2 spec may have just been released, but it's been in development for some time, allowing engineers to at least rough out designs. Also, much of the logic from previous v1.x chipsets can be reused, as v2 is an evolution, not a completely new interconnect standard.
    • by ozbird ( 127571 )
      Dammit! I was about to upgrade my computer, now I have to wait for PCIe 2.0...

      Will somebody please think of the procrastinators?
    • Sure the specification is out, but it will take a long time I suspect to find its way into computers
      Actually, the article states that:

      Intel is expected to release its first PCIe 2.0 supporting chipsets, members of the 'Bearlake' family, next quarter.
    • Is there something I am missing that will make this new standard magically find its way into computers in the next few months?

      Multi-channel RAID adapters like those made by 3ware could benefit from the larger bandwidth.
      • by archen ( 447353 )
        That depends. For a simple mirror configuration, I believe I worked out that a single lane was enough bandwidth by itself. For more aggressive RAID 5 setups you will need more than one lane. That's why you could get a 4-lane card from a vendor like 3ware.

        Here's the hitch though, many mainboard manufacturers are cutting corners and giving you a 4 lane slot with 1 lane bandwidth. So in a way this could help because of the crap the Chinese give us, the upgrade to PCIe2 will at least force the 1 lane t
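        A quick sketch of the lane arithmetic behind that (the per-drive figure is an assumed round number, not a benchmark; the lane figure is PCIe 1.x payload per direction):

        LANE_MB_PER_S = 250        # PCIe 1.x payload per lane, per direction
        DRIVE_MB_PER_S = 70        # assumed sustained throughput of a single drive

        def lanes_needed(n_drives):
            total = n_drives * DRIVE_MB_PER_S
            return -(-total // LANE_MB_PER_S)   # ceiling division

        print(lanes_needed(2))     # two-drive mirror: 1 lane covers it
        print(lanes_needed(8))     # eight-drive array: 3 lanes, hence the x4 cards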
    • Actually, Intel will have chipsets supporting it out next quarter (Q2). The preliminary spec has been pretty stable, so most companies should be well on their way to having the chips ready. I don't know how long before companies like nVidia sell cards for it, mind you -- but since it's backwards compatible, having the slots on your next motherboard has no downside.
    • by ossie ( 14803 )
      Probably not in the next few months, but vendors will probably have products out in 2008.

      I know the /. crowd is primarily concerned with video performance, but there is a lot more to PCIe than just video. The new speeds will probably be more beneficial for switch-to-switch PCIe connections.

      There is a lot of cool stuff going on in the PCI-SIG, the SR and MR (single root and multi root) specifications for I/O virtualization are especially cool. SR allows an endpoint (PCIe device) to export virtual functions
    • Yes, no and maybe, but not necessarily in that order.

      I attended the PCI SIG conference on virtualization for the new spec. There are two forms of virtualization that will (eventually) be supported - multiple operating systems on the same machine having access to their own private virtual PCI bus, and multi-mastered PCI busses where you can have multiple mainboards driving a virtual PCI bus that spans multiple machines.

      The latter is a godsend for cluster builders - why bother with having tightly-coupled

  • by suv4x4 ( 956391 ) on Tuesday January 16, 2007 @02:55AM (#17625754)
    It'll be interesting to compare the performance of the built-in GPU unit in the new Fusion AMD processors, and the latest PCIe.

    That said, of course PCIe has more applications than hosting a GPU card.
    • High-end GPUs are so large there's no room to fit a processor on the same die. So Fusion will inevitably have lower performance than the best discrete GPUs.
    • I wonder if you can even play games at a decent frame rate on Fusion, as others have noted that the CPU die real estate is somewhat limited, and modern GPUs are large.

      What I wonder even more, though, is if you install a PCIe video card (which will supersede Fusion's GPU) could you then still directly address the on-die GPU to run non-video but parallelizable (is that a word?) code? Could that be more efficient than parallelizing across multiple cores or CPUs? Even if that were so, I wonder if any program
  • the electromechanical specification is due to be released shortly.

    I hope this means they will release the specification to the public unlike the AGP spec.
    • No. PCI specs are only available to members of PCI SIG; to get them or to develop hardware you need to be a member. If you thought AGP was secret, wait until you try to get technical details on PCIe.

      Even then it isn't easy - my company is a member but it's easier for me to go to the store and buy a copy of the Anderson book on PCIe [amazon.com] than to get the official spec.

  • Who cares! (Score:3, Interesting)

    by winchester ( 265873 ) on Tuesday January 16, 2007 @03:35AM (#17625958)
    2.5 to 5 Gbps is still "only" 250 to 500 MB/s (roughly). My SGI Octanes could do that 7 years ago! (And still do it regularly, for the record.) So what's the fuss?
    • by prefect42 ( 141309 ) on Tuesday January 16, 2007 @04:33AM (#17626232)
      Anyone who pretends octanes are high performance (or even *were*) needs help. And I've got a pair in the cupboard.
    • by cnettel ( 836611 )
      That's for one lane... 4x is quite normal for slots not intended for graphics, and these days you can find motherboards with 2 true 16x slots and one 16x physical/8x electrical slot.
    • Each slot may have up to 32 lanes. This means the maximum speed is going from 8000MB/s to 16000MB/s. Does this sound a bit more impressive?
  • Radio-frequency, Bluetooth-style bus architectures with decent range, set up to share resources with nearby devices, creating mainframe-style computers for all to use. This should be the new standard for bus architectures, whaddaya think?
    • the what if machine says it's a horrible idea.
    • you cannot get the bandwidth out of it - if you could, wireless monitors would be all the rage at the moment, but unless you want to run 640x480 it isn't going to happen. Also, interference is a problem, and while errors can be corrected out at low data transmission rates, if you try to pump massively more data through it, you will get problems.

      Otherwise, I think it's a great idea. I think it's perfect for cars: get rid of the wiring loom and replace much of it with cabling only for critical parts, and all th
      • You can get 300MB/s over 802.11n. This is more than enough for a lot of peripherals. My monitor runs at 1920x1200. In 32-bit colour, this is 9216000 bytes per frame, or 8MB. I would need 480MB/s to run the monitor at 60Hz (standard for TFTs) if the data is not compressed at all. Even doing some simple lossless compression it would be quite easy to get the data rate down to under 300MB/s.
        • Remember the difference between 300 megaBITS per second over 802.11n, and 480 megaBYTES that your calculations show you need for your display.

          I'm not sure you can get 300Mbps over 802.11n; all the web pages I've just googled say 100Mbps, possibly up to 200Mbps in real-world situations. But even if we assume your monitor isn't that far from the transmitter and you can get 200Mbps, you're still quite a way under what you need (for 640x480x16 @ 60Hz = 350Mbps)

          While these new wireless standards appear to offer
          • Remember the difference between 300 megaBITS per second over 802.11n, and 480 megaBYTES that your calculations show you need for your display.
            Yes, quite right. I can't count up to eight.

            On the plus side, 100Mb/s is enough for X11 at a decent resolution to be usable, even for playing Quake remotely with GLX. It would be very easy to put an X server in a screen and have it connect to clients that were close by...
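            For the numbers being argued over here, a quick sketch (raw pixel data only; blanking intervals, protocol overhead, and compression are ignored, and whatever 802.11n rate you believe in goes on the other side of the comparison):

            def display_mbit_per_s(width, height, bits_per_pixel, refresh_hz):
                """Uncompressed video bandwidth in megabits per second."""
                return width * height * bits_per_pixel * refresh_hz / 1e6

            print(display_mbit_per_s(1920, 1200, 32, 60))   # ~4424 Mbps, i.e. ~553 MB/s
            print(display_mbit_per_s(640, 480, 16, 60))     # ~295 Mbps for raw pixels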

  • This is just another device for which us Linux users will have to pay an extra fee for useless Vista compatibility
    • by dave420 ( 699308 )
      Yes. That's what happens when you are in a niche market. Capitalism is a bitch, huh?
      • It aint capitalism (Score:1, Insightful)

        by Anonymous Coward
        until you get rid of copyrights, patents and incorporation.

        Oh, and laws like DMCA and the newer laws on mandatory DRM.

        They aren't capitalist. They're socialist.
        • by dave420 ( 699308 )
          No, if it costs too much to port an application to another platform (development costs > predicted market taking), it isn't ported. The decision has nothing to do with socialism, copyright or DRM - the only factor is the price. If it's too expensive, it doesn't happen. Linux's market share is TINY compared to windows, so if it costs similar amounts of money to develop an application for Windows and Linux, and Windows's market share is 10x that of Linux, Linux gets ignored. I don't know how you can at
    • by Andy Dodd ( 701 )
      How is that?

      Other than being a "bump" of PCI Express, it is no different from PCI Express. It is most definitely no different in terms of licensing and implementation in the OS.

      Actually, the nice thing is that even PCI Express was no different from PCI at the OS level. To an operating system, PCI Express peripherals just appear as really fast PCI peripherals - at that level of abstraction they are the same.

      PCIe 1.0 and 1.1 are perfectly supported under Linux, why would 2.0 be any different?
  • by waynemcdougall ( 631415 ) <slashdot@codeworks.gen.nz> on Tuesday January 16, 2007 @04:06AM (#17626104) Homepage
    This is a firmware upgrade, right?
  • Why does this post not come from any department?

    How am I able to see how trustworthy posts like this are when I don't know where they are from? ;)
  • In your article, you say "The upshot: a x16 connector can transfer data at up to around 16GBps." (gigabytes per second). Everyone else is reporting it as 16Gbps (gigabits per second). I realize it is only a factor of eight off, but I expect better from /.
    • 16 Gigabits per second (Gb/s or Gbps, not gbps) on an x16 would be 1 Gb/s per lane. This spec goes to 5 Gb/s per lane (per direction), so your figure is off by a factor of 5 (or 10, if you consider the case of both directions simultaneously saturated).

      The reason it's 16 (for full duplex) and not 20 is that 8b10b encoding requires 10 bits on the serial link to encode 1 byte of data.

    • Both are incorrect. The current PCIe spec allows for 250 megabytes/sec per lane, or 8 gigabytes/sec over all 32 lanes. The new PCIe 2.0 spec doubles that to 500 megabytes/sec per lane, or 16 gigabytes/sec over all 32 lanes.

      So a x16 connector can transfer data at up to 8 gigabytes per second, not 16 gigabits or gigabytes per second.

      See Ars Technica article [arstechnica.com]

      Quote:

      Each x1 lane of PCIe 1.1 offers a 250MB/s transfer rate, which puts a x16 link like the ones that host some GPUs at 4GB/s. The new PCIe 2.0 spec will

  • I did not get the following from the article:

    Does this mean it will work on my computer now if I get a firmware upgrade, or do I need to replace the part on the motherboard with a newer one to get this new speed?

    If I need a firmware upgrade, will I get it from Windows, from the motherboard manufacturer, or will just any site do?

    If I do need to buy it, how long before any cards are made, and what price can we expect to pay?

    I can get a P4 used, with all the bells and whistles for about 200$ CND -minus th
  • How long until we start to see HTX video and other cards?
    • Never. The marginal benefit of HTX is outweighed by the cost of making two different variants of the GPU.

      HTX should really be renamed to the PathScale slot, since they're the only ones who use it (and probably the only ones who ever will).
  • What is the SIG doing to make it 2.0 besides doubling the signal rate - putting in DDR? PICMG might be fine even without a physical layer provided. I hope in 3.0 they take advantage of the lanes and point-to-point links to do interesting things like mesh + BT to boost throughput. With InfiniBand and 10GbE having made an impact and PCI-X 1066 backplanes left on the BBQ grill, which will come out first: 100GbE on copper, or this?
