Intel Security

'Next Generation' Flaws Found on Computer Processors (reuters.com) 144

An anonymous reader shares a report: Researchers have found eight new flaws in computer central processing units that resemble the Meltdown and Spectre bugs revealed in January, a German computing magazine reported on Thursday. The magazine, called c't, said it was aware of Intel's plans to patch the flaws, adding that some chips designed by ARM Holdings, a unit of Japan's Softbank, might be affected, while work was continuing to establish whether Advanced Micro Devices chips were vulnerable. Meltdown and Spectre bugs could reveal the contents of a computer's central processing unit -- designed to be a secure inner sanctum -- either by bypassing hardware barriers or by tricking applications into giving up secret information.
This discussion has been archived. No new comments can be posted.


  • by FudRucker ( 866063 ) on Thursday May 03, 2018 @01:59PM (#56548782)
    Until the CPU manufacturers resolve this issue, I'll scour Craigslist and second-hand PC shops and buy used junk for cheap if necessary. No more high dollars spent on new desktops & laptops & tablets & phones until this CPU vulnerability issue is resolved in a proper, long-term way.
    • by Anonymous Coward on Thursday May 03, 2018 @02:04PM (#56548824)

      Nothing will ever be 100% secure, so just give up.

      • Z80 is 100% immune. Time to dust off the old TRS-80.

        • by skids ( 119237 )

          Grab a solar panel and as many old MIPS WRT boxes as you can carry and run for the hills!

          • The original MIPS processors are immune, because they had a 3-stage pipeline and a branch delay slot and so always had the branch destination available by the time that they needed to fetch the target instruction. Almost all later MIPS cores are vulnerable to variations of Spectre, though I believe some of the Cavium ones aren't because they have lots of hardware threads and simply pause a thread on each branch and execute another until they get the branch target.
    • Comment removed based on user account deletion
    • by ravenshrike ( 808508 ) on Thursday May 03, 2018 @02:23PM (#56549018)

      Except they won't. At least not until quantum computers actually become usable by the regular consumer. Until then, all processors will be vulnerable to some extent to Spectre-class attacks (not, however, Meltdown; that was purely Intel's fuckup), because you lose far too much performance by dropping speculative execution entirely. There will merely be mitigations in place to make exploiting such attacks as difficult as possible.

    • by Anonymous Coward

      Why, are you running a public virtualization service?

      I'm not, so I'm not really worried about Meltdown/Spectre attacks on my infrastructure.

    • by imgod2u ( 812837 )

      You realize this flaw exists in almost every CPU built in the past 2.5 decades, right? The newer CPUs are actually less susceptible...

    • Comment removed based on user account deletion
  • by FeelGood314 ( 2516288 ) on Thursday May 03, 2018 @02:22PM (#56549002)
    CPUs have always had flaws, and as a developer there was always an errata sheet you had to read and understand. The problem today is cloud computing and, to some extent, JavaScript. People are now running untrusted code on the same systems as their trusted programs. It was assumed that as long as the sandbox for these programs was secure and well defined, this was safe. Spectre and Meltdown proved this wasn't true.
    • Comment removed based on user account deletion
      • This is why I have been saying for years that JavaScript has GOT to go.

        Would you prefer a form submission and full page reload for every action that you perform in a web application?

        IDK if we should go to a locked sandbox with very limited tools

        That's what JavaScript was supposed to be.

      • Comment removed based on user account deletion
        • by FeelGood314 ( 2516288 ) on Thursday May 03, 2018 @06:30PM (#56550950)
          It's not the language, it's the CPU instruction pipeline. On your old 8-bit computer it would take 4 ticks to fetch the instruction, fetch the arguments, do the calculation, and store the result. Then we got a pipeline where on each tick you would do all 4 things: fetch instruction 4, get the arguments for instruction 3, do the calculation for instruction 2, and store the result of instruction 1. Over the years pipelines got longer and more complex.

          An inefficiency in pipelines occurs when you hit a branch and then have to wait for the pipeline to refill. The solution to this is to fetch both instructions and speculatively do both until you know which way the branch went. Unfortunately, there were two security problems with this: first, Intel wasn't checking whether you had permission to read the arguments until after they were fetched, and second, some effects of following the branch that wasn't taken could be seen by the branch that was. So the trick was to get the speculative branch, the one your code won't take in the end, to fetch something you shouldn't have access to, and then in the other branch look at that data.

          It is actually very easy to exploit Meltdown and Spectre in assembly and C, and much harder in JavaScript. However, my web browser doesn't regularly download and run binary files; it does regularly load JavaScript and automatically run it.
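          The cache trick described above can be sketched as a toy simulation. This is purely illustrative (the "cache" below is just a Python set, and "timing" an access is a membership test; a real Flush+Reload attack measures load latency in CPU cycles, and the names here are invented for the sketch):

```python
# Toy model of the cache side channel behind Meltdown/Spectre.
# Illustrative only: the "cache" is a set, and checking whether a line
# is cached stands in for timing a load (cached = fast, uncached = slow).

STRIDE = 1  # real exploits use a page-sized stride to defeat prefetching

class ToyCache:
    def __init__(self):
        self.lines = set()

    def flush(self):
        self.lines.clear()

    def access(self, addr):
        self.lines.add(addr)       # touching an address caches its line

    def is_cached(self, addr):
        return addr in self.lines  # stand-in for a fast (cached) load

def speculative_victim(cache, secret_byte, probe_base=0x1000):
    # The speculatively executed gadget: a load whose *address* depends
    # on the secret. The architectural result is discarded, but the
    # cache line it touched stays warm.
    cache.access(probe_base + secret_byte * STRIDE)

def attacker_recover(cache, probe_base=0x1000):
    # Probe all 256 candidate lines; the warm one reveals the secret.
    for guess in range(256):
        if cache.is_cached(probe_base + guess * STRIDE):
            return guess
    return None

cache = ToyCache()
cache.flush()
speculative_victim(cache, secret_byte=0x42)
print(hex(attacker_recover(cache)))  # 0x42
```

          The essential point the comment makes survives the simplification: the secret never crosses a privilege boundary directly; only its side effect on the cache does.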
          • Unfortunately, even if it's "harder," it's still possible to exploit in JavaScript, and with development of portable assembly language variants, it'll be easier. And once written as POC, it's easy to deploy in a vast variety of contexts.

          • by complete loony ( 663508 ) <Jeremy.Lakeman@NoSPAM.gmail.com> on Friday May 04, 2018 @02:07AM (#56552292)

            Right, each of the variants uses that same model: code that is executed speculatively reads from memory, your code can see some side effect, and you work out what values are in that memory. To extend that simple description slightly to the currently known variants:

            Meltdown (CVE-2017-5754). Speculatively executed code can bypass features of Intel CPUs that would normally prevent you from reading the kernel memory of the operating system. The workaround for this problem required changes to how the kernel swaps between "user mode" and "kernel mode", making that transition much slower.

            Spectre-V1 (CVE-2017-5753). Untrusted code, like JIT-compiled JavaScript, running inside the same process as trusted code, speculatively executes a read from an array that's out of bounds. This can read any memory that the trusted process can normally read. The Linux kernel includes a JIT compiler, so you could use this flaw to read any memory from the kernel. A work-around for this is specific to each program that combines trusted and untrusted code, and would probably make every array read slower.

            Spectre-V2 (CVE-2017-5715). This one is hard to explain in a simple way, but I'll try. For some types of branch instructions, you can train the CPU into speculatively branching somewhere the program wouldn't normally go. You use this to trick a trusted program into speculatively reading its own secrets from memory (which it does normally have permission to do). Then your program can see the effects of this execution. The trusted program could be any other program, the OS kernel, or even code running in another VM. It just has to be running on the same physical CPU. A work-around can be built into every compiler, by avoiding these branch instructions in every trusted program.

            Note that you can combine Meltdown & Spectre-V1 so that JavaScript can read from kernel memory. Lots of discussion of these issues has been very murky and confusing, often getting the specific details mixed up, like which issue can be used to read from the kernel, and which of Intel and AMD is vulnerable.
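            The Spectre-V1 work-around mentioned above can be made concrete. One real technique is branchless index clamping, as in the Linux kernel's array_index_nospec: instead of trusting the bounds *branch* (which can be speculated past), the index arithmetic itself is made incapable of producing an out-of-bounds address. A Python sketch of the idea (function names are mine; real code does this in C or assembly):

```python
# Sketch of branchless index clamping, modeled on the Linux kernel's
# array_index_nospec idea: compute an all-ones mask when the index is
# in bounds and an all-zeros mask otherwise, using only arithmetic,
# so even a mispredicted path can't form an out-of-bounds address.

M64 = (1 << 64) - 1  # simulate 64-bit unsigned wrap-around

def index_mask(index, size):
    # All-ones if 0 <= index < size, else 0. If index >= size, the
    # subtraction (size - 1 - index) wraps and sets the top bit.
    x = (index | ((size - 1 - index) & M64)) & M64
    return ((x >> 63) - 1) & M64

def safe_read(arr, idx):
    # Out-of-bounds indexes are clamped to 0 instead of being trusted
    # to a bounds-check branch that speculation could ignore.
    return arr[idx & index_mask(idx, len(arr))]

table = [10, 20, 30, 40]
print(safe_read(table, 2))   # 30 (in bounds, index unchanged)
print(safe_read(table, 99))  # 10 (out of bounds, clamped to index 0)
```

            The design point is that the mask is data, not control flow, so there is no prediction for the CPU to get wrong.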

          • by Agripa ( 139780 )

            The solution to this is to fetch both instructions and speculatively do both until you know which way the branch went.

            Speculative execution relies on the branch prediction to execute *one* of the paths after the branch. Eager execution executes both sides of the branch in which case branch prediction is not required. Only research processors use eager execution as it is incredibly inefficient compared to branch prediction which has gotten very good.
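            The predictor being discussed is easy to sketch. A classic design is the 2-bit saturating counter: it takes two consecutive mispredictions to flip its prediction, and that trainability is exactly what Spectre-style attacks abuse. A toy model (Python, illustrative; real predictors are per-branch tables with history bits):

```python
# Classic 2-bit saturating-counter branch predictor.
# States 0-1 predict "not taken", states 2-3 predict "taken".
# A single misprediction only weakens the state; it takes two in a
# row to flip the prediction, which is what makes it trainable.

class TwoBitPredictor:
    def __init__(self):
        self.state = 1  # weakly not-taken

    def predict(self):
        return self.state >= 2  # True = predict taken

    def update(self, taken):
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

p = TwoBitPredictor()
for _ in range(3):          # "train" it with taken branches
    p.update(True)
print(p.predict())          # True: saturated at strongly-taken
p.update(False)             # one not-taken branch...
print(p.predict())          # True: ...does not flip the prediction
```

            An attacker trains such a predictor with its own branches, then lets the victim's branch inherit the poisoned prediction.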

  • WTF? That's not new at all and has nothing to do with "cloud" computing! Multiple users on multi-user systems used to be the rule, and of course those users were running untrusted code from their accounts. There used to be Unix and Linux boxes online everywhere. It also used to be easier to pwn computers unless the sysadmin knew what he was doing.
  • Direct article link (Score:5, Informative)

    by isj ( 453011 ) on Thursday May 03, 2018 @02:24PM (#56549028) Homepage

    here [heise.de] (German)

  • Next generation (Score:5, Insightful)

    by 110010001000 ( 697113 ) on Thursday May 03, 2018 @02:27PM (#56549056) Homepage Journal
    Very possibly, the next generation of Intel processors is going to be slower than the previous generation once they have to fix these architectural issues.
    • I don't follow your logic.
      • by Anonymous Coward

        The short version is that Intel used BlackMagic(TM) called 'Speculative Execution' to enhance performance but it turns out that this introduces unwanted security problems. (Actually, this was known, but those who knew thought no one cared.)

        The slightly longer version is that Intel took (performance) shortcuts in their implementation of speculative execution. A better (more secure) implementation will not have those (performance) shortcuts. This is generally understood to result in slower performance. AMD us

  • contents of a computer's central processing unit -- designed to be a secure inner sanctum --

    All these nerds who have been using computers since they were toddlers would find this description of the CPU really, really fresh, novel and, eh, yes, news.

  • The process of reserving CVE numbers clearly discloses timing of discovery of vulnerabilities. The CVE numbering authority should close that potential security hole.

    I'm at least half serious about this. Arguably, knowing that vulnerability disclosures are coming reduces the value of current and upcoming products and can even have an effect on stock prices. It may also embolden black-hat researchers to step up their efforts, knowing that vulnerabilities exist to be found, and encourage them to try to subvert the secrecy measures meant to hold disclosure until patches are available.

    • Arguably, knowing that vulnerability disclosures are coming reduces the value of current and upcoming products and can even have an effect on stock prices.

      That would be the best thing to happen in security in the last two decades (since ssh was invented, basically). Companies would suddenly start caring about security.

  • by crow ( 16139 ) on Thursday May 03, 2018 @03:02PM (#56549368) Homepage Journal

    It is likely that there are other bugs related to speculative execution that can leak data. For example, you could have code that leaks data through timing instead of through direct cache impact: you write clever code that consumes one cycle more or less depending on a bit of restricted data, then measure the number of cycles.

    • by imgod2u ( 812837 )

      That's what Spectre (and Meltdown) did. They timed cache accesses before and after speculative loads, using secret data as the "forwarding address".

      Another variant (BranchScope, if you're interested) uses a similar technique, except it trains the branch predictor using secret data bits and then times the execution.

  • I bet this is why the new line of Intel processors was delayed significantly. Anyone else suspect that?
  • by duke_cheetah2003 ( 862933 ) on Thursday May 03, 2018 @04:46PM (#56550220) Homepage

    Maybe the entire architecture paradigm needs a start-from-scratch perspective?

    We've been doctoring and hacking the PC architecture for what, 30 years now? Under the hood, everything is still basically laid out the same as it was on the first 286 and 386 machines. Not much has changed. Maybe it's time to redo everything?

  • It's clear that it's time to grab the old 6502 design and modernize it; let's call it the 656464. A 7nm, 64-core, 64-bit version (basically changing nothing else other than the needed glue between the chips, the memory linkages, and the instruction width), with a decent cache attached, would not take up all that much die space, and would be really interesting, albeit slow in many ways due to a good number of modern tricks not being in place. But without those tricks, many of the security issues they cause could be avoided.
    • by nester ( 14407 )

      UltraSPARC T1

    • by Agripa ( 139780 )

      Well, let's see. With modern high performance caches executing at 3 GHz with a 4 cycle load-to-use penalty, a 6502 would come out at like 375 MHz. There are faster processors than that which do not employ speculative execution including many current ARM cores.

  • Why don't we hear anything about him being prosecuted? I can't think of a more obvious case ...
