Sandy Bridge Chipset Shipments Halted Due To Bug
J. Dzhugashvili writes "Early adopters of Intel's new Sandy Bridge processors, beware. Intel has discovered a flaw in the 6-series chipsets that accompany the new processors. The flaw causes Serial ATA performance to 'degrade over time' in 'some cases.' Although Intel claims 'relatively few' customers are affected, it has stopped shipments of these chipsets and started making a revised version of the silicon, which won't be ready until late February. Intel expects to lose $300 million in revenue because of the problem, and it's bracing for repair and replacement costs of $700 million."
Intel caught this one first? (Score:5, Insightful)
I don't recall seeing any complaints online about degraded SATA performance, so it looks like Intel caught this internally and took the appropriate action before the issue became widespread in the wild. The bug sucks but it just goes to show how difficult it can be to test complex hardware under all situations. Kudos to Intel for being proactive... they have learned from the FDIV bug fiasco, and some other companies with fruity logos might learn from the example.
Re:Given Intel's reaction... (Score:5, Insightful)
What makes the hair on the back of my neck stand up about this one is the "may gradually degrade" stuff. That makes it sound much less like the "100% of people who do X get bitten/0% of others do" logical bugs and more like the "component degradation in the field can be unpredictable, except at a population level" type of bug that, say, happened to Nvidia not too long back...
Re:Intel caught this one first? (Score:2, Insightful)
integration can be evil.
now, if they had separate chips on the mobo for sata then the damage would have been contained and you simply disable the onboard controller and install a pci-e card instead. intel would make new mobos but NOT chipsets. chipsets are a big undertaking.
also, I wonder if there were engineers at intel who said 'hey, what's with this too-fast churn of new socket types and chipsets? didn't we JUST release sockets for i3/i5 not long ago? what's wrong with using them again?'
then some intel product mgr probably spoke up 'but we can get users to rebuy ALL new hardware instead of just a new cpu!'.
money won out over logic and reason.
well, intel LOST big-time on this cash grab move. now, since they did NOT leverage the older chipsets like a normal thinking company might, they can't even sell bare cpus right now. haha!
lesson: don't put all your eggs in one basket. sometimes ultra integration will bite you in the arse.
The most reasonable explanation (Score:5, Insightful)
My educated guess is that the SATA input/output pads have a digital timing compensation circuit that tries to center the data sampling window (i.e., the clock edge where data is sampled). Since the data sampling window that won't cause a setup/hold violation changes with process variation and temperature, the circuit needs lots of potential settings across a large window, and may need automatic tracking.
Probably someone didn't design that window large enough to center the data sampling timing offset (or the step size isn't small enough, or the auto-adjustment circuit that tracks temperature and adjusts the window has an algorithmic flaw in some cases, etc.). It might be okay now (in early production tests), but as the part ages, the required data sampling window can shift significantly, and if the chip can't adjust the sampling window appropriately, then data errors are inevitable.
As a silly example, let's say a hw engineer put in a clock trim circuit that could adjust +-100ps in steps of 10ps. No driver update can make that adjustment -110ps.
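To make that limit concrete, here's a toy model in Python (the ±100ps range and 10ps step are the hypothetical numbers from the example above, not anything known about Intel's actual circuit). A fixed-range trim circuit quantizes and clamps whatever offset is requested, so an aging-induced shift past the range simply can't be compensated, no matter what the driver asks for:

```python
# Toy model of a clock trim circuit with a hard range limit
# (hypothetical +/-100 ps range, 10 ps steps, per the example above).

TRIM_MIN_PS = -100
TRIM_MAX_PS = 100
STEP_PS = 10

def apply_trim(requested_ps: float) -> int:
    """Return the trim the hardware can actually apply:
    quantized to the step size and clamped to the range."""
    quantized = int(round(requested_ps / STEP_PS)) * STEP_PS
    return max(TRIM_MIN_PS, min(TRIM_MAX_PS, quantized))

# Within range: the request is honored.
print(apply_trim(-90))   # -90
# Aging shifts the required offset past the range: the hardware
# saturates at -100 ps; no software update can reach -110 ps.
print(apply_trim(-110))  # -100
```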
Conversely, if the hw control algorithm that tracks temperature and adjusts the window assumes a positive coefficient over time (say, the circuit gets slower), but the actual I/O circuit has a negative coefficient over time (say, it gets faster), then after a while that feedback loop may become unstable. That might not be fixable with a driver update either (if the control algorithm is in hw).
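A minimal sketch of that sign-mismatch failure (all numbers invented for illustration): if the compensation logic was designed assuming drift in one direction while the circuit actually drifts the other way, each "correction" pushes the sampling point further off center instead of back toward it:

```python
# Toy feedback loop with a sign mismatch (hypothetical values).
# The controller steps the trim in the direction it *assumes* the
# circuit drifts; if the real drift has the opposite sign, every
# correction doubles the error instead of cancelling it.

def track(drift_per_step_ps: float, assumed_sign: int, steps: int) -> float:
    """Return the residual sampling-point error after `steps` iterations."""
    error_ps = 0.0
    for _ in range(steps):
        error_ps += drift_per_step_ps              # actual aging drift
        correction = assumed_sign * abs(drift_per_step_ps)
        error_ps -= correction                     # hw "compensation"
    return error_ps

# Correct sign assumption: drift is cancelled, error stays at zero.
print(track(drift_per_step_ps=-2.0, assumed_sign=-1, steps=10))  # 0.0
# Wrong sign assumption: error grows without bound over time.
print(track(drift_per_step_ps=-2.0, assumed_sign=+1, steps=10))  # -40.0
```

Since the loop runs in hardware, there is no software knob to flip the sign after the fact, which is consistent with Intel respinning the silicon rather than shipping a driver fix.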
Of course, I have no real information, but it's my guess, having designed high-speed I/Os in the past...