
Exploiting the DRAM Rowhammer Bug To Gain Kernel Privileges

New submitter netelder sends this excerpt from the Project Zero blog: 'Rowhammer' is a problem with some recent DRAM devices in which repeatedly accessing a row of memory can cause bit flips in adjacent rows. We tested a selection of laptops and found that a subset of them exhibited the problem. We built two working privilege escalation exploits that use this effect. One exploit uses rowhammer-induced bit flips to gain kernel privileges on x86-64 Linux when run as an unprivileged userland process. When run on a machine vulnerable to the rowhammer problem, the process was able to induce bit flips in page table entries (PTEs). It was able to use this to gain write access to its own page table, and hence gain read-write access (PDF) to all of physical memory.
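
For readers who want a concrete picture of the access pattern described above, here is a minimal, hypothetical sketch of the hammering kernel in C. It assumes an x86-64 machine with SSE2 (for _mm_clflush), and that addr1 and addr2 happen to map to different rows of the same DRAM bank; choosing such address pairs, and then steering the resulting bit flips into page table entries, is the hard part that the Project Zero post actually describes.

```c
#include <emmintrin.h>  /* _mm_clflush, _mm_mfence (SSE2) */
#include <stdint.h>

/* Illustrative only: repeatedly activate two DRAM rows by reading two
 * addresses and flushing them from the cache each time, so that every
 * iteration goes all the way to DRAM. Bit flips, if the module is
 * vulnerable, show up in rows adjacent to the hammered ones. */
static void hammer(volatile uint64_t *addr1, volatile uint64_t *addr2,
                   long iterations)
{
    for (long i = 0; i < iterations; i++) {
        (void)*addr1;                        /* read -> row activation */
        (void)*addr2;
        _mm_clflush((const void *)addr1);    /* evict so the next read misses */
        _mm_clflush((const void *)addr2);
        _mm_mfence();
    }
}
```
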
  • Impressive (Score:5, Insightful)

    by Anonymous Coward on Monday March 09, 2015 @11:28PM (#49222255)

    Don't have much more to say than that's an impressive exploit.

    • I'll second that, and add that I suspect this could also be used for hypervisor/sandbox escapes on practically *any* platform that doesn't use ECC memory.

      • Re:Impressive (Score:5, Informative)

        by Shinobi ( 19308 ) on Tuesday March 10, 2015 @04:48AM (#49223189)

        And, if you had read the actual paper, you'd see that ECC isn't proof against it either

        • so when they said:

          We also tested some desktop machines, but did not see any bit flips on those. That could be because they were all relatively high-end machines with ECC memory. The ECC could be hiding bit flips.

          they actually said that ECC doesn't matter?

          I guess we read differently.

          • Re:Impressive (Score:4, Informative)

            by MachineShedFred ( 621896 ) on Tuesday March 10, 2015 @08:32AM (#49224219) Journal

            I see - you were looking at the PDF link, which is more theoretical. Yes, if you can get two or more bits to shift inside a 64-bit chunk, then ECC doesn't help. There's got to be a low probability of that actually happening though - the Google Project Zero team wasn't able to make it happen with ECC at all.

            • Two bits would trigger double-error detection.
              • Yes, it would be detected, but it could not be corrected. It would likely crash the software with a memory fault, in which case you would have a denial of service attack rather than defeating protected memory and allowing something access from outside the box.

                Not good, but better than complete privilege escalation.

                • by Agripa ( 139780 )

                  It would generate a report listing the error and may or may not halt the machine depending on how that is set.

            • by Shinobi ( 19308 )

              That depends on how you do it though. Through some smart row walking you could probably increase the frequency a lot. And then you have ASLR complicating things from both ends (though I think it could actually be used to help in an attack).

            • ECC != parity check. It can detect two errors and correct one.

            • by Agripa ( 139780 )

              if you can get two or more bits to shift inside a 64-bit chunk, then ECC doesn't help.

              Reporting on the event or halting the machine upon detection of a double-bit error *is* better than missing it completely, and some triple-bit errors would be detected as well. If chipkill ECC, which is commonly available, were being used, then up to 4 bits within a nibble boundary would be detected and corrected.

          • by Shinobi ( 19308 )

            Nowhere did I say that ECC didn't matter. I just said that ECC isn't guaranteed to protect you. It just reduces probability.

            • by AaronW ( 33736 )

              ECC makes this attack completely ineffective. Unless three bits flip at once, the error will be reported, logged, and probably blasted out on every console. The likelihood of this attack being detected well before it is exploited is extremely high. If it's two bits, then either the application will be killed or the machine will halt, most likely the latter.

        • by AaronW ( 33736 )

          You must have read a different paper than me.

          ECC on a decent machine would make this attack almost impossible.

          For one thing, the memory controller will likely support memory scrubbing, which will detect and correct single memory errors long before enough bits are flipped for ECC to no longer detect the corruption. ECC will typically correct one bit error and detect two bit errors. Since the chance of a bit flipping is random and takes thousands of operations, the chance of having three bits flip and not be

    • Re:Impressive (Score:4, Insightful)

      by twistedcubic ( 577194 ) on Tuesday March 10, 2015 @12:53AM (#49222621)
      Double bonus if this result gets manufacturers of laptops to FINALLY include ECC memory.
      • by gnupun ( 752725 )

        I would be happy to give a single bonus if some electrical engineers or physicists could explain how this exploit works. Right now we only know what it does -- flip bits in some memory locations by repeatedly accessing other memory locations.

  • by muphin ( 842524 ) on Monday March 09, 2015 @11:33PM (#49222273) Homepage
    is this possible to exploit on an iPhone?
    • by jandrese ( 485 )
      There is a definite maybe. One caveat of the process is that it requires access to special instructions to flush the cache constantly (hundreds of thousands of times per second), and a processor fast enough to pound the memory controller. Those could be handicaps to running this exploit on a smartphone platform. It looks like modern phones use DDR3-derived memory (older phones like the iPhone 4 are DDR2-style, and this won't work on them), so it's not impossible.
  • Geez, who knew that writing 'NSA' to 0xdeadbeef over and over would give you kernel access? Those NSA guys really broke into everything.

  • ECC Memory (Score:5, Interesting)

    by Frobnicator ( 565869 ) on Monday March 09, 2015 @11:35PM (#49222287) Journal

    Yet another reason to push shared-hosting providers toward ECC memory. Error-correcting memory is so far not vulnerable to this attack; the researchers who have tried it report that ECC memory identifies and corrects the corruptions. Of course some attackers may have found a way, but ECC minimizes the risk.

    Amazon says it uses ECC in its AWS machines [amazon.com], but other big hosts like Equinix say that ECC memory is "available". Be careful about your hosting, folks.

    • Yes, you beat me to it. A correctly-configured ECC motherboard with real ECC memory would defeat this. Watch out for fake ECC memory that just simulates the correction bits.

      Once memory starts being vulnerable to row interference, having a machine without ECC becomes much more dangerous, regardless of this exploit.

      • Do you have an example of this fake ECC memory that interested parties should avoid?

        • Re: ECC Memory (Score:5, Interesting)

          by Macman408 ( 1308925 ) on Tuesday March 10, 2015 @03:05AM (#49222947)

          I hadn't heard of this either, but a quick google turned up a description of false parity RAM: http://en.wikipedia.org/wiki/R... [wikipedia.org]
          TLazy;DR: To save cost where parity RAM was required by the hardware but not by the operator, modules existed that would calculate the parity bit upon reading the RAM, rather than storing the parity bit. I don't see any evidence that this type of module ever existed for ECC though.

          To make sure memory is ECC, it's probably sufficient to count the memory chips on a DIMM. If there are 9 or 18 (or even 36, if it's a particularly large DIMM) identically-marked chips, that's ECC. If there are 4, 8, 16, or 32 chips, then it's probably not. If one of the chips is marked differently than the others, it's a little more complicated: it might be a different memory chip (e.g. if there are 4 x16 memory chips, you'd only need one x8 to get a x72 ECC DIMM, so that last chip would be different), but it's also possible that it's buffered/registered memory, and the different chip is the buffer/register. (A software-side cross-check using the Linux EDAC interface is sketched after this comment.)

          And an aside on the topic of buying RAM for yourself:
          In general, I'm not a fan of cheaping out on memory. I did computer repair for a while, and it shocked me how many problems were caused by bad RAM - from the obvious ("my computer crashes every time I boot it") to less obvious ("every few days, an application crashes") to the rather insidious ("it was running fine, and now I can't mount my hard drive any more"). It got to the point where, when a computer came in with nonspecific symptoms like that, I'd open up the computer and peek at the RAM chips first. If they had no recognizable manufacturer, they were certainly garbage. If they were recognizable but not top-tier, they probably needed some stress testing on our RAM tester. And if they were the good stuff (Samsung always had my vote there, though it's hard to find because they don't sell directly to consumers), then it was probably something else.

          That's also where I learned that things like memtest86 or other software diagnostic tools were basically useless too. Only the absolute worst memory would fail a test, even a looped test run for days. Most bad RAM was marginal - after all, it probably passed some manufacturing tests. We had a rather expensive (~$4k-8k) box that would test memory, doing things like varying the supply voltage or self-heating the RAM. When RAM is installed in your PC, you're still limited by the hardware - i.e. the voltage regulator and the memory controller - which probably keep the memory as close to nominal conditions as possible. Obviously, those machines are rather hard to come by, so you have to make do with software tests instead - but a pass on those just means I can't prove it's bad; it doesn't mean the memory is good. Even if I pass all memory testing, I'll still swap/remove/replace DIMMs in an attempt to find which one is bad, because it's often not obvious.
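
As a software-side complement to counting chips: on Linux, the kernel's EDAC (error detection and correction) subsystem registers a memory controller when ECC reporting is active, which shows up under sysfs. A minimal, assumption-laden sketch in C; the exact path and driver availability vary by platform, and absence of the directory does not prove the DIMMs lack ECC.

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>

/* Rough heuristic: if the Linux EDAC subsystem has registered a memory
 * controller, ECC error reporting is active. Absence of the directory
 * does not prove the DIMMs are non-ECC (the EDAC driver may simply not
 * be loaded), so treat this as a hint, not a verdict. */
int main(void)
{
    struct stat st;
    /* Path used by mainline EDAC drivers; may differ on some systems. */
    if (stat("/sys/devices/system/edac/mc/mc0", &st) == 0 && S_ISDIR(st.st_mode))
        puts("EDAC memory controller present: ECC reporting appears active");
    else
        puts("No EDAC memory controller found: ECC may be absent or unreported");
    return 0;
}
```
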

          • If there are 9 or 18 (or even 36, if it's a particularly large DIMM) identically-marked chips, that's ECC. If there are 4, 8, 16, or 32 chips, then it's probably not

            In the days of the AMD586, it was common for motherboards to be sold with fake ECC. There was actually a "fake ECC" chip soldered where the ECC should be. Often, these boards had defective RAM in them too, but would pass the BIOS fake memory test! Memtest86 was written because of these boards.

            I bought one myself and was astonished that it was cos

            • by pla ( 258480 )
              I bought one myself and was astonished that it was cost effective to deliberately engineer defective machines.

              16GB (as 2x8GB) of ECC will cost you at least $160 for the absolute bottom of the barrel. The same 16GB of non-ECC goes for just about $100. That gives you a 60% markup for only 12% more chips. Really, it surprises me we don't see more fraud like that.
              • by jandrese ( 485 )
                Except that the ECC memory only costs so much because so few people buy it. It's a "business part". I don't think most consumer mobos are equipped to handle ECC memory either. It's a shame too because if the costs were in line with the actual hardware (it cost $112 instead of $160) and it was supported by the mobo manufacturers then I think a lot of system builders would go for ECC memory. $12 is not a bad price to pay to know when it is a faulty memory chip that's causing your system to crash and not s
                • by tlhIngan ( 30335 )

                  Except that the ECC memory only costs so much because so few people buy it. It's a "business part". I don't think most consumer mobos are equipped to handle ECC memory either. It's a shame too because if the costs were in line with the actual hardware (it cost $112 instead of $160) and it was supported by the mobo manufacturers then I think a lot of system builders would go for ECC memory.

                  It's chipset and processor support, actually.

                  Intel, for example, typically mandates ECC on the Xeon line (modern CPUs ha

                  • by jandrese ( 485 )
                    I'd rather most systems came with ECC memory by default and "enthusiasts" could special-order non-ECC memory to try to eke out another couple of FPS in the benchmark. It would be treated like overclocking: you trade off some system life and maybe a little stability to get a few percentage points more performance.
              • That is not fraud.
          • by gTsiros ( 205624 )

            can you think of a way to do those tests with less cost?

            • The place I used to work used to offer it as a service for $5-10 per SIMM/DIMM. If you can find a local shop that has such a tester, maybe they'd do the same. Otherwise, it's probably not terribly likely to get a low-cost solution, since the volume of such testers is pretty small.

            • by Agripa ( 139780 )

              can you think of a way to do those tests with less cost?

              The problem with a software-only test is that it does not verify the operating margin in timing, voltage, and temperature. If the motherboard supports it, raising the memory operating frequency by, say, 10% and lowering the DRAM operating voltage by, say, 10%, and *then* running a software-only test like memtest86 or maybe the Prime95 stress test, should work to detect marginal DRAM.

      • Re: ECC Memory (Score:4, Informative)

        by Shinobi ( 19308 ) on Tuesday March 10, 2015 @05:04AM (#49223257)

        In the paper, they actually state that ECC isn't entirely proof against it either, because their testing shows you can get multiple-bit errors. SECDED will be defeated by that. Chipkill might work.

      • Re: ECC Memory (Score:4, Interesting)

        by bluefoxlucid ( 723572 ) on Tuesday March 10, 2015 @09:42AM (#49224813) Homepage Journal

        Wouldn't it be better design to put ECC into the memory controller, and arrange the chips to support ECC? That is: physically wire the memory chips and the memory bus to write far from each other (64 chips = interleave every 1 bit across all chips; 32 chips = interleave 2 bits per chip, starting on high bits in addressing so you put them half a chip's distance away from each other), physically protecting against chip-local anomalies. Have the MMU perform a single rotation, logically reserving an amount of RAM on each slot to carry ECC for the previous slot. Write ECC bits to those areas.

        Doing it in this way provides zero-cost physical isolation of single-chip memory errors (it's just the wiring layout), while also isolating the ECC from its corresponding module (avoiding RAS/CAS thrashing, allowing simultaneous access to the ECC bits and the corresponding RAM). It lets you pop in an ECC chip and use ECC, or pop in a non-ECC chip and sacrifice 12.5% of your RAM to error correction. That's a lot of RAM: on an 8GB system, it's almost 1GB.

    • I can't see how it would be possible to defeat ECC.

      The attacker would have to construct a write that affects the desired bits in the row-to-be-hammered and has check bits that affect the row-to-be-hammered's check bits such that the altered row is validated. This is probably nigh impossible to do in all but a select few constrained cases.
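
For readers unfamiliar with how SECDED ECC behaves under one versus two flipped bits, here is a toy sketch in C: an extended Hamming(8,4) code over 4 data bits. Real DIMM ECC uses 8 check bits per 64 data bits, but the property the thread is arguing about is the same: a single flipped bit is silently corrected, two flipped bits are detected but not correctable, and only a larger, carefully aligned pattern of flips could alias to another valid codeword. This is an illustrative, assumption-laden example, not the actual code used by any memory controller.

```c
#include <stdio.h>
#include <stdint.h>

/* Toy SECDED: extended Hamming(8,4). Bit layout follows Hamming positions
 * 1..7 in bits 0..6 (p1 p2 d0 p3 d1 d2 d3), plus an overall parity bit in
 * bit 7. */

static int parity8(uint8_t v)
{
    v ^= v >> 4; v ^= v >> 2; v ^= v >> 1;
    return v & 1;
}

static uint8_t encode(uint8_t d)            /* d holds 4 data bits */
{
    int d0 = d & 1, d1 = (d >> 1) & 1, d2 = (d >> 2) & 1, d3 = (d >> 3) & 1;
    uint8_t c = 0;
    c |= (uint8_t)((d0 ^ d1 ^ d3) << 0);    /* p1 covers positions 3,5,7 */
    c |= (uint8_t)((d0 ^ d2 ^ d3) << 1);    /* p2 covers positions 3,6,7 */
    c |= (uint8_t)(d0 << 2);
    c |= (uint8_t)((d1 ^ d2 ^ d3) << 3);    /* p3 covers positions 5,6,7 */
    c |= (uint8_t)(d1 << 4) | (uint8_t)(d2 << 5) | (uint8_t)(d3 << 6);
    c |= (uint8_t)(parity8(c) << 7);        /* overall parity */
    return c;
}

/* Returns 0 = clean, 1 = single-bit error corrected, 2 = double-bit error
 * detected but uncorrectable (what a real system would log or halt on). */
static int decode(uint8_t c, uint8_t *data_out)
{
    int s1 = parity8(c & 0x55);             /* recheck p1: bits 0,2,4,6 */
    int s2 = parity8(c & 0x66);             /* recheck p2: bits 1,2,5,6 */
    int s3 = parity8(c & 0x78);             /* recheck p3: bits 3,4,5,6 */
    int syndrome = s1 | (s2 << 1) | (s3 << 2);   /* Hamming position 1..7 */
    int overall = parity8(c);               /* 0 when the codeword is clean */

    if (syndrome && overall)
        c ^= (uint8_t)(1 << (syndrome - 1));     /* single error: flip back */
    else if (syndrome && !overall)
        return 2;                                /* two errors: detect only */
    else if (!syndrome && overall)
        c ^= 0x80;                               /* overall parity bit flipped */

    *data_out = (uint8_t)(((c >> 2) & 1) | (((c >> 4) & 1) << 1) |
                          (((c >> 5) & 1) << 2) | (((c >> 6) & 1) << 3));
    return (syndrome || overall) ? 1 : 0;
}

int main(void)
{
    uint8_t word = encode(0xB), out;
    printf("one flip  -> %d (1 = corrected)\n", decode(word ^ 0x10, &out));
    printf("two flips -> %d (2 = detected only)\n", decode(word ^ 0x11, &out));
    return 0;
}
```
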

      • by thogard ( 43403 )

        ECC might be able to help the attack. If you know the state of memory and the ECC values you want, and can calculate a bit pattern whose ECC matches, you may be able to get the ECC hardware to flip the bit for you while you hammer bits that matter less.

        Hammering memory to induce writes where they shouldn't happen has been done for decades. It was used back in the days when you needed high voltages to do writes in EEPROMs, when people found out that

      • by tlhIngan ( 30335 )

        I can't see how it would be possible to defeat ECC.

        The attacker would have to construct a write that affects the desired bits in the row-to-be-hammered and has check bits that affect the row-to-be-hammered's check bits such that the altered row is validated. This is probably nigh impossible to do in all but a select few constrained cases.

        It's possible, but very unlikely.

        Rowhammer is not new - it's been known since the 90s since it affects NAND flash memory as well (the same stuff in an SSD) - here there are

      • ECC is able to correct one error, but only detect more than one. If by some strange probability you were able to flip 2+ bits in the same 64-bit chunk of DRAM, ECC would detect it, but just mark it as errored rather than accept the value.

        It would more likely be a denial-of-service attack rather than a way to manipulate values. The fact that Google's Project Zero guys couldn't make it happen on ECC systems speaks to the probabilities though - ECC may be enough of a protection until the problem gets s

    • Re: ECC Memory (Score:5, Insightful)

      by Bruce Perens ( 3872 ) <bruce@perens.com> on Monday March 09, 2015 @11:50PM (#49222357) Homepage Journal

      It has yet to be established whether hammer techniques can result in a correct data+ECC pattern. If so, it should be possible to permute the memory in a way that defeats this, either on the memory module or the memory controller.

      That would make a good research paper for someone.

    • other big hosts like Equinix say that ECC memory is "available"

      I suspect they'd change that policy in a hurry if people started using this for hypervisor escapes.

    • by Rich0 ( 548339 )

      I was looking at motherboards and even finding ones that support ECC is difficult, unless this is one of those situations where any motherboard works as long as the CPU supports it.

      As far as I understand it, most AMD processors do support ECC, and Intel only supports it in their upper-end products (artificial restriction to segment the market - i7/Xeon/etc). What I don't understand is where the motherboard fits in.

      Heck, Newegg doesn't even track that as an option on their product selector.

  • by PassMark ( 967298 ) on Monday March 09, 2015 @11:48PM (#49222339) Homepage

    It is worth noting that the row hammer issue isn't new. It has been known about for some time, including this old Slashdot post:
    http://hardware.slashdot.org/s... [slashdot.org]

    There has been an implementation of row hammer testing in MemTest86 V6.0 for over 6 months now as well. MemTest86 implements just the single-sided hammer, whereas Google used a double-sided hammer.
    http://www.memtest86.com/ [memtest86.com]
    While the double-sided hammer might produce more RAM errors, that pattern of memory accesses isn't very likely to occur in real-life software, so it is of limited use as a RAM reliability test. (A sketch of both access patterns follows at the end of this comment.)

    What is new in this report is the fact that they manipulated the RAM bit flips to turn them into an exploit. Something that was previously speculated on but considered too hard to implement.

    What they didn't show, however, is any results from desktop machines. All their testing was on laptops. In fact they state, "We also tested some desktop machines, but did not see any bit flips on those". So the problem isn't as grave as it might at first appear. They speculate that ECC RAM blocks the bit flips, and this has also been the experience with MemTest86: most (but not all) of the flips are single-bit flips, which ECC would correct.

    Disclaimer: I'm one of the MemTest86 developers.
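
As a hypothetical illustration of the single-sided versus double-sided patterns mentioned above: the sketch below only shows how aggressor rows are chosen relative to a victim, under the simplifying assumption of a fixed, made-up row stride (ROW_BYTES). Real tools have to reverse the channel/rank/bank mapping to find addresses that genuinely share a bank, which is most of the work. Either pair of addresses would then be fed to a hammering loop like the one sketched after the story summary.

```c
#include <stdint.h>

/* Hypothetical row stride; the real value is platform-dependent, and the
 * physical-to-DRAM address mapping is usually not a simple linear one. */
#define ROW_BYTES (128 * 1024)

/* Single-sided (what MemTest86 implements): one aggressor adjacent to the
 * victim, paired with some other row in the same bank so that each access
 * forces the aggressor row to be re-activated. */
static void aggressors_single_sided(uintptr_t victim, uintptr_t other_same_bank_row,
                                    uintptr_t out[2])
{
    out[0] = victim + ROW_BYTES;
    out[1] = other_same_bank_row;
}

/* Double-sided (what the Project Zero post used): the rows immediately
 * above and below the victim are hammered alternately, which flips bits
 * in the victim row more readily. */
static void aggressors_double_sided(uintptr_t victim, uintptr_t out[2])
{
    out[0] = victim - ROW_BYTES;
    out[1] = victim + ROW_BYTES;
}
```
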

    • Multi-threaded programs really do need those cache flushes to implement their interprocessor communications, don't they? It seems to me that they would be the ones most likely to hit this problem.

      • by Pelam ( 41604 )

        I'm not sure. The locked instructions, compare-and-exchange, and mfence ensure cache coherency, so in my experience the flushes are not necessary.

        Maybe driver code needs the flushes: a driver needs to know the data is really in RAM before hardware doing DMA can get at it.

        Cache flush instructions seem to be a late addition, arriving with SSE2.

        • Compare-and-exchange and mfence would force a cache flush all the way to RAM and a global cache-line invalidation, wouldn't they? So they could potentially be used to hammer too.

          • Re:Multiprocessing (Score:4, Interesting)

            by TheRaven64 ( 641858 ) on Tuesday March 10, 2015 @05:30AM (#49223335) Journal

            Nope, no cache flush for compare-and-exchange. Modern CPUs use a modified version of the MESI protocol, where each cache line has a state associated with it (Modified, Exclusive, Shared, Invalid in MESI; a few more in modern variants). When you do a compare-and-exchange, you move your copy of the cache line into the exclusive state and everyone else's into invalid. Before this, you must have the line in the shared state (where multiple caches can have read-only copies). When another core wants access to the memory, it will request the line in the shared state. If another cache has it in its exclusive state, then the exclusive line will be downgraded to shared and a copy of its contents sent to the requester.

            If atomic operations had to go via main memory then they would be significantly slower than they are and would be a huge bottleneck for multicore systems.
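
To make the state transitions concrete, here is a heavily simplified, hypothetical MESI transition function in C (real protocols such as MESIF or MOESI add states, and whether a read miss lands in Exclusive or Shared depends on snoop responses). The relevant point for this thread is that a locked compare-and-exchange only needs the line in the Modified/Exclusive state locally; nothing is written back to DRAM, so it does not repeatedly activate DRAM rows.

```c
/* Simplified MESI cache-line states and transitions (illustrative only). */
typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } mesi_t;
typedef enum { LOCAL_READ, LOCAL_WRITE, REMOTE_READ, REMOTE_WRITE } event_t;

static mesi_t mesi_next(mesi_t s, event_t e, int other_sharers)
{
    switch (e) {
    case LOCAL_READ:   /* miss: take Exclusive if nobody else holds the line */
        return (s == INVALID) ? (other_sharers ? SHARED : EXCLUSIVE) : s;
    case LOCAL_WRITE:  /* e.g. lock cmpxchg: invalidate others, go Modified */
        return MODIFIED;
    case REMOTE_READ:  /* another core wants a copy: downgrade to Shared */
        return (s == INVALID) ? INVALID : SHARED;
    case REMOTE_WRITE: /* another core takes ownership: our copy dies */
        return INVALID;
    }
    return s;
}
```
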

            • I suspect that we could persuade those caches to flush to RAM, simply by exhausting the number of possible lines for that address - if the cache is set-associative. Of course modern processors have multiple levels of cache, so that makes it harder.

              • I don't think I understand what you think you're trying to do. You can't make the cache flush a line to RAM while you're modifying it with an atomic operation, because atomic ops require the value to be in cache. Given an N-way set-associative cache, however, you can typically force cache flushes (without requiring special cache flush instructions) by repeatedly writing N+1 values at addresses that alias to the same cache set (strides of the cache size divided by the associativity, rather than simply X, X+64, X+128, ...). This probably wouldn't trigger the rowhammer issues though, because it'
      • Re:Multiprocessing (Score:4, Interesting)

        by TheRaven64 ( 641858 ) on Tuesday March 10, 2015 @05:27AM (#49223321) Journal
        They don't flush, no. They will add memory fences, which will generate cache coherency bus traffic, but won't trigger a write back to main memory (modern CPUs can snoop the cache of other cores, so the data will be sent cache to cache).

        The main reasons for flushing the cache are:

        • If you have some non-volatile DRAM and want to ensure consistency.
        • If you're doing DMA on anything other than the latest Intel chips, so that the DMA controller will see the data that you've flushed from the cache.
        • If you're writing a JIT compiler or some other form of self-modifying code (including a run-time linker) and need to ensure that i-cache and d-cache are consistent (I think x86 does this automatically, but I could be wrong).
        • If you're writing a crypto algorithm and want to make side-channel attacks via the cache difficult.
    • What is new in this report is the fact that they manipulated the RAM bit flips to turn them into an exploit.

      From the paper:

      Left unchecked, disturbance errors can be exploited by a malicious program to breach memory protection and compromise the system. With some engineering effort, we believe we can develop Code 1a into a disturbance attack that injects errors into other programs, crashes the system, or perhaps even hijacks control of the system. We leave such research for the future since the primary objective in this work is to understand and prevent DRAM disturbance errors.

      They have demonstrated the bit flip but not the exploit. An exploit would be much more difficult as you would need access to memory right next to the location you need to flip. Then flip it in just the right pattern to not crash. They have done the easy part and left the hard part to someone else.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Similarly, this is a relatively well documented issue in the embedded world as well...

      ARM have errata for half a dozen IP revisions over the past few decades, as do many IHVs that make SoCs based on them.
      Similar deals in the MIPS and PowerPC realms.

      Some architectures thankfully have a rather locked down MMU/MPU, where unprivileged code simply can't even attempt to rowhammer (some ARMs are like this) - alas, most do not - thus you're at the mercy of all the other hardware vulnerabilities in your system (not

      • by Shinobi ( 19308 )

        It's a matter of speed and density too. The vast majority of embedded stuff operates at significantly lower speeds/throughput/memory size requirements, which means there's less vulnerability. Also, embedded stuff tends to be aimed at running very specific applications, which also reduces the vulnerability.

    • What is new in this report is the fact that they manipulated the RAM bit flips to turn them into an exploit.

      That's bigger than being able to corrupt memory in the first place. What it means is that every computer (laptop I guess, without ECC) is vulnerable to a privilege escalation exploit, and the difference between root and a normal user is meaningless.

      Next all we need is a way to exploit this from javascript. :)

    • by TCM ( 130219 )

      What about servers that employ data scrambling? From the sound of it, this should completely defeat the Hammer exploit.

      http://en.wikipedia.org/wiki/M... [wikipedia.org] /scrambling
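
For context on what data scrambling means here: memory controllers XOR data with a pseudorandom pattern derived from the address before it reaches the DRAM, mainly to avoid worst-case electrical patterns rather than as a security feature. Below is a hypothetical toy version in C; real controllers use LFSR-based generators whose details are not public, and the open question in this subthread is whether scrambling merely obscures, rather than prevents, rowhammer-style corruption.

```c
#include <stdint.h>

/* Toy address-seeded scrambler (illustrative only). A real memory
 * controller derives a pseudorandom mask from the row/column address
 * with an LFSR seeded at boot; here we just hash the address. */
static uint64_t scramble_mask(uint64_t addr, uint64_t boot_seed)
{
    uint64_t x = addr ^ boot_seed;
    x ^= x >> 33; x *= 0xff51afd7ed558ccdULL;   /* murmur-style mixing */
    x ^= x >> 33; x *= 0xc4ceb9fe1a85ec53ULL;
    x ^= x >> 33;
    return x;
}

/* Data is XORed with the mask on the way out and back, so software never
 * sees the scrambled form. A bit flipped *in the DRAM array* comes back
 * flipped after descrambling too, which is why scrambling makes the
 * corrupted value unpredictable but does not stop the flip itself. */
static uint64_t to_dram(uint64_t data, uint64_t addr, uint64_t seed)
{
    return data ^ scramble_mask(addr, seed);
}

static uint64_t from_dram(uint64_t raw, uint64_t addr, uint64_t seed)
{
    return raw ^ scramble_mask(addr, seed);
}
```
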

  • Deja vu... (Score:5, Interesting)

    by sotweed ( 118223 ) on Monday March 09, 2015 @11:49PM (#49222347)

    This problem is remarkably similar to one I encountered in the memory of a 7094 (an old IBM computer) that had a core memory storing 36-bit words. The memory was supposed to operate on 6 bits at a time, at 200-nanosecond intervals, to avoid creating a magnetic field that was too strong. The problem occurred when a component failed, the timing was off, and two of the intervals overlapped: when one attempted to store a word with 35 ones, the field created was strong enough to store 36 ones. We wrote a diagnostic to demo the problem, and with that the engineers were able to isolate and fix the problem in short order.

    • by ArcadeMan ( 2766669 ) on Tuesday March 10, 2015 @12:02AM (#49222403)

      Alright, alright! We're getting off your lawn!

    • Re: (Score:2, Offtopic)

      Words do not mean anything today, as each CPU has a different number of bytes representing a word.

      How much RAM is that?

      • Re:Deja vu... (Score:5, Interesting)

        by sotweed ( 118223 ) on Tuesday March 10, 2015 @12:42AM (#49222571)

        I was describing something that happened in a machine that was built before the world settled on 8-bit bytes. The machine had 36-bit words, and each word had an address; the 6-bit groups within a word were not addressable. It was 32,768 (2**15) words of 36 bits, equivalent to about 144K bytes!

        • Wow! That was HUGE for the day! I grew up on PDP-8's which were limited to 4096 12-bit words (unless you used bank selection extensions - some machines had more) and soon migrated to 64K 16-bit words on the PDP-11, but I did my time with IBM's 360 and Control Data's 6xxx series (with its odd 60-bit word), too. Christ on a crutch, how impoverished the world is as far as computer architectures go. And, I'm getting old. Time to go lawn-yell.

        • I was describing something that happened in a machine that was built before the world settled on 8-bit bytes

          I know, you'd implement ECC by visually inspecting the pins.

      • Clearly you don't work in the embedded world, where VLIW processors, using word lengths that are not a multiple of a byte, are common.
    • We wrote a diagnostic to demo the problem, and with that the engineers were able to isolate and fix the problem
      in short order.

      That claim alone is enough to date your story back at least 30 years.

  • by AaronW ( 33736 ) on Tuesday March 10, 2015 @02:12AM (#49222811) Homepage

    In reality this would be difficult to exploit on a server, since servers typically use ECC memory. ECC memory can typically detect two-bit errors and correct one-bit errors, and will likely catch this unless you can flip enough bits, in exactly the right places, that the ECC remains correct. Doing that means you would need to know the contents of those memory locations prior to flipping the bits.

    I don't know about x86, but the CPUs my company makes support hardware address randomization, so that the address lines going out to memory are scrambled and finding adjacent rows or columns is very difficult.

    It's a shame that Intel only supports ECC memory with their Xeon processors rather than all of their processors. ECC does not add much in terms of cost: only 12.5% more DRAM chips, a few more traces, and a few other miscellaneous parts (resistors, capacitors). Even the lowest-end processors my company makes (for things like small routers, wireless access points, etc.) support ECC and address line randomization.

    • by hottoh ( 540941 )
      Modern i3s support ECC, not just XEONs. See below.

      http://ark.intel.com/products/77480/Intel-Core-i3-4130-Processor-3M-Cache-3_40-GHz

      I picked the first ARK link & the i3 supports ECC RAM.

  • So I'm not (quite) a paranoid nutcase for running server-class hardware, including always using ECC DIMMs. Current desktops are older Dell T3500s, with nearly top-bin Xeons, upgraded power supplies and graphics, plus, of course, 24 GB of ECC RAM.

    First big splurge on a desktop had a Tyan mainboard with the ServerWorks chipset (since Intel's were pathetic, at the time), dual P-IIIs, PCI-X, PLUS an AGP slot. Awesome, for its time.

    http://www.tyan.com/archive/l_chinese/html/pr01_s2567.html [tyan.com]

  • Thanks to Wang (Score:5, Informative)

    by michaelmalak ( 91262 ) <michael@michaelmalak.com> on Tuesday March 10, 2015 @03:51AM (#49223045) Homepage
    All RAM on PCs used to be parity RAM until Wang started suing RAM manufacturers in the 90s over its patents on parity SIMMs.
    • by fuzzyfuzzyfungus ( 1223518 ) on Tuesday March 10, 2015 @04:18AM (#49223101) Journal
      That sounds like a real dick move on their part.
      • Calling their global support service Wang Care wasn't a great move.

        (the story goes that the European head had to answer directly to Dr. Wang about why the name had been changed from Wang Care to whatever it ended up being)

        Opening an office in Cologne, Germany wasn't a successful one either - no one wanted to go to Wang Cologne ;-)

    • How was that patent even slightly valid? Parity was known about before WW1 - and by implication probably before An Wang was born!
      • Because it was using parity ... in a computer!

        Though surely those patents have expired by now. Time for parity ram to make a comeback?

    • [citation needed] (Score:2, Interesting)

      by Anonymous Coward

      I've followed the PC market since the 1970s. I was paying a good bit of attention to memory cost. As far as I could tell, people noticed that parity RAM hardly ever caught errors, and that non-parity RAM was cheaper than parity RAM. Parity became optional, then eventually went away entirely, simply because it was marginally more expensive.

      Patent issues came and went, but I don't think they were a major driver away from parity.

      • The Wang patent was actually for having nine chips on a SIMM. When Wang started enforcing its patent, competitors switched to putting three chips on a SIMM instead. During that transition, parity RAM was scarce and expensive -- 9-chip because it was being phased out and 3-chip because quantities weren't available at first. It got people to reconsider whether parity was necessary, and it became "socially acceptable" to have non-parity RAM.

        Back in the days of discrete RAM chips [altervista.org], they were always installed in

    • Re: Thanks to Wang (Score:4, Informative)

      by hottoh ( 540941 ) on Tuesday March 10, 2015 @12:37PM (#49226515)
      All RAM on PCs used to be parity RAM until Wang started suing RAM manufacturers in the 90s over its patents on parity SIMMs.

      Not so.

      Many had ECC. AAPL computers skipped ECC, to save money and look stupid at the same time (I made the stupid part up).

      AAPL = Apple Inc., FYI
  • Weird how most bug exploits result in pretty much every OS reacting with: "I don't know what you want, so here's the keys to the kingdom! (escalation)" instead of: "I don't know what you want, so I'm jumping out of the window and taking you with me! (system crash)."
    • by ledow ( 319597 )

      Why? Because you cannot make code run in different sections. Here, the physical hardware is PROVIDING the facility to access a table which is normally privileged, which determines whether a program is allowed to access ANY AND ALL RAM.

      The privilege is not normally available, and would normally block almost all such attacks. This is a complete way around all the hardware features that are supposed to stop this kind of access and so, of course, the kernel can do NOTHING about it.

      The problem is that most so

      • by mlts ( 1038732 )

        I wonder if there is -any- way to mitigate this in software, similar to how the Linux kernel intercepted the instructions to prevent the FDIV bug from happening in early Pentium chips. The only way I see would be to use a Bochs-style emulator and accept the immense performance hit of that style of emulation (where hardware virtualization hooks are not used).

        • Of course there is. The question is how quickly it could be implemented, and what type of performance overhead it would take, and if it is worthwhile.

          • The sensible response is just to run the test and find out if your DRAM has this bug. If not, then the attack already fails: no amount of coding is going to make you more resistant. If you do have the bug, return the defective product for replacement. Again, no amount of coding is going to make you more resistant. Why do people persist in thinking that everything should be fixed in software?
            • by mlts ( 1038732 )

              Easier said than done in a lot of cases. For example, if a newer MacBook has this bug, the only way to fix the problem is to toss the entire thing.

              Having some form of software remedy, even if it is something that merely detects the hammering and does a hard reset or a power-down, may be better than a compromise in some environments, especially with regard to virtualization, where getting ring 0 on the bare metal can be an incredible catastrophe.
