Security Businesses Google The Internet

Google NativeClient Security Contest 175

An anonymous reader writes "You may remember Google's NativeClient project, discussed here last December. Don't be fooled into calling this ActiveX 2.0 — rather than a model of trust and authentication, NaCl is designed to make dangerous code impossible by enforcing a set of rules at load time that guarantee hostile code simply cannot execute (PDF). NaCl is still in heavy development, but the developers want to encourage low-level security experts to take a look at their design and code. To this end Google has opened the NativeClient Security Contest, and will award prizes topping out at $2^13 to top bug submitters. If you're familiar with low-level security, memory segmentation, accurate disassembly of hostile code, code alignment, and related topics, do take a look. Mac, Linux, and Windows are all supported."
This discussion has been archived. No new comments can be posted.

  • by iamacat ( 583406 ) on Monday March 02, 2009 @07:12PM (#27046483)

    Simply has to be taken with a grain of salt!

    • Just wait till the KDE project gets their hands on this concept; we'll be seeing a new SourceForge project for KCl any day now.
    • by gravos ( 912628 ) on Monday March 02, 2009 @07:18PM (#27046539) Homepage
      I'm sorry, I just don't buy this whole thing. x86 in the browser? Ugh... Because all that we need is to further promote an archaic instruction set that won't die because of all the pre-existing code compiled for it. An instruction set that was finally starting to loosen its grip as the industry worked toward more abstract solutions.
      • by pohl ( 872 )

        I was thinking the same thing. The least they could do is use a nice neutral intermediate representation like LLVM bc and JIT compile it to whatever.

        • Re: (Score:2, Insightful)

          by Anonymous Coward

          I was thinking the same thing. The least they could do is use a nice neutral intermediate representation like LLVM bc and JIT compile it to whatever.

          But what would they call this wonderful new concept? I suggest Java.

          • Re: (Score:3, Informative)

            by pohl ( 872 )

            Amusing joke, but entirely dissimilar. It seems to me that if you want to prove that code doesn't do anything nasty, then a single-static-assignment IR could be very useful. JVM bytecode could never pull that trick. Also, LLVM imposes no runtime requirements whatsoever. None. It and Java are at opposite ends of that spectrum.

      • Re: (Score:3, Insightful)

        by tlhIngan ( 30335 )

        I'm sorry, I just don't buy this whole thing. x86 in the browser? Ugh... Because all that we need is to further promote an archaic instruction set that won't die because of all the pre-existing code compiled for it. An instruction set that was finally starting to loosen its grip as the industry worked toward more abstract solutions.

        Better yet, we don't have any existing code for this system yet. It won't run ActiveX, so there's no code loss there. And now we're going to have to put the equivalent of Bochs o

        • Well, you're close to hitting the nail on the head but not quite there.

          You need to consider the difference between running on a non-x86 device, and running well on a non-x86 device. Let's say you write a Photoshop killer as a web app using NaCl. Do you really lose because NaCl is x86-specific? No. Even if there was some platform independent bytecode layer it would not help you. It'd just convert a webapp that didn't run at all on your iPhone to one that ran unusably slowly.

          Phones and desktops/laptops are j

      • Personally, I look forward to the day when I can use *any* language as a web language- at native speed, no less. Sounds like a great reason to embed x86 to me.
      • Your "abstract solutions" all take orders of magnitude more memory than C, and still suffer garbage collection pauses.

        Why isn't your web browser written using these "abstract solutions"? The answer is the same reason that having real machine code on the client is a win, for those who want to go to the trouble.

    • by neomunk ( 913773 )

      So this contest will be a salt assault?

  • NaCl? (Score:2, Insightful)

    by Lumpy ( 12016 )

    Who else was confused how Salt was going to help software security?

    I shook my head and said "What???" as I read it for the third time, before realizing that they'd simply chosen a poor acronym.

  • by thermian ( 1267986 ) on Monday March 02, 2009 @07:23PM (#27046581)

    I doubt that. More likely they intend to make its detection and negation easier.

    After all, the best language man can devise can only work as well as the coders who utilise it. If they are forced to cut corners in order to meet deadlines, errors will creep in, and we all know the urge to be first to profit is a prime reason for such things.

  • by michaelhood ( 667393 ) on Monday March 02, 2009 @07:26PM (#27046609)

    Don't be fooled into calling this ActiveX 2.0 — rather than a model of trust and authentication, NaCl is designed to make dangerous code impossible by enforcing a set of rules at load time that guarantee hostile code simply cannot execute (PDF).

    So what you're saying is..

    Using just one half of NaCl could be poisonous, but when sprinkled atop the web as one all is well?

  • by Anonymous Coward on Monday March 02, 2009 @07:30PM (#27046637)

    Beware of bugs in the above code; I have only proved it correct, not tried it.
    - Knuth

  • by Sloppy ( 14984 ) on Monday March 02, 2009 @07:31PM (#27046651) Homepage Journal

    where the scientist is saying he's covered all the bases, and nothing can go wrong.

  • 2^13? (Score:5, Insightful)

    by Moraelin ( 679338 ) on Monday March 02, 2009 @07:40PM (#27046693) Journal

    Admittedly, it's past 1 AM, so maybe my maths has stopped working by now, but isn't 2^13 about 8000 dollars for the grand prize? It seems a bit low for all the work of basically reviewing their code and concepts.

    Hostile code disassembly? If it were that simple to disassemble someone else's code and automatically prove that it can't do anything wrong -- including by having security holes exploitable by a third party -- forget the browser, we'd have it standard in the OS or in the last step of make/ant/whatever. We could all stop worrying about antiviruses (who, in turn, would stop needing signatures and heuristics updated all the time anyway), reviewing code by hand to see if all buffers are checked, etc. Just run the magic utility and it'll tell you.

    I'm willing to bet that at least the antivirus makers have tried that before, you know, what with all of them offering some forms of heuristics by now, and none of them got it past the level of hit-and-miss. More miss than hit, in fact.

    Not saying that Google couldn't have got some genius that actually made it work, but at the very least it's not going to be a trivial job digging through all their cases to check if they really checked all possible attack vectors.

    And 8192 dollars doesn't really seem to be much incentive for doing that work.

    • Re:2^13? (Score:5, Funny)

      by cjfs ( 1253208 ) on Monday March 02, 2009 @07:45PM (#27046737) Homepage Journal

      Admittedly, it's past 1 AM, so maybe my maths has stopped working by now, but isn't 2^13 about 8000 dollars for the grand prize?

      I contacted Google and their reply [google.ca] confirms your approximate amount.

      • by Alsee ( 515537 )

        My PC says the exact amount is $8,191.997425203.

        What? Why are you looking at me that way?
        So what if my PC is a Pentium?

        -

    • Re:2^13? (Score:4, Insightful)

      by Cyberax ( 705495 ) on Monday March 02, 2009 @07:49PM (#27046761)

      NaCl doesn't check that there are no buffer overflows; instead, it isolates the program so that buffer overflows can't cause problems outside the sandbox.

      I.e. you can overflow buffers, use dangling pointers and cause all sorts of access violations to your heart's content inside the NaCl sandbox, but it won't cause a security breach.

    • by White Flame ( 1074973 ) on Monday March 02, 2009 @07:55PM (#27046815)

      They're asking for people who are familiar with low-level x86 security to point out any issues from their experience that could compromise their sandbox.

      • by White Flame ( 1074973 ) on Monday March 02, 2009 @09:40PM (#27047407)

        Okay, they do preemptive code analysis inside the sandbox, too. Relevant portion of the PDF (section 2.2):

        The inner-sandbox uses static analysis to detect security defects in untrusted x86 code. Previously, such analysis has been challenging for arbitrary x86 code due to such practices as self-modifying code and overlapping instructions. In Native Client we disallow such practices through a set of alignment and structural rules that, when observed, insure that the native code module can be disassembled reliably, such that all reachable instructions are identified during disassembly. With reliable disassembly as a tool, our validator can then insure that the executable includes only the subset of legal instructions, disallowing unsafe machine instructions.

        This then happens inside a sandbox where CPU segments are strictly enforced and any OS calls are trapped.
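
        A rough sketch of what that kind of validation loop could look like, in C (purely illustrative, not Google's actual validator; decode_length() and opcode_allowed() are stand-ins for a real x86 decoder and for the paper's instruction whitelist):

          #include <stdio.h>
          #include <string.h>

          #define BUNDLE 32  /* indirect jump targets must be 0 mod 32 */

          /* Stand-ins for a real x86 decoder and the allowed-instruction list. */
          static int decode_length(const unsigned char *p) { return p[0] == 0x90 ? 1 : 2; }
          static int opcode_allowed(const unsigned char *p) { return p[0] != 0xCD; }  /* e.g. ban "int" */

          /* Fall-through disassembly from the start of the module: every instruction
             must decode cleanly, be on the whitelist, and not straddle a 32-byte
             bundle boundary. (The real validator also checks direct jump targets and
             the indirect-jump pseudo-instruction, omitted here.) */
          static int validate(const unsigned char *code, size_t size) {
              size_t off = 0;
              while (off < size) {
                  int len = decode_length(code + off);
                  if (!opcode_allowed(code + off)) return 0;              /* illegal instruction */
                  if (off / BUNDLE != (off + len - 1) / BUNDLE) return 0; /* crosses a bundle boundary */
                  off += len;
              }
              return off == size;  /* decoding must end exactly at the module's end */
          }

          int main(void) {
              unsigned char module[32];
              memset(module, 0x90, sizeof module);   /* one bundle full of NOPs */
              printf("module %s\n", validate(module, sizeof module) ? "accepted" : "rejected");
              return 0;
          }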

    • Re:2^13? (Score:5, Interesting)

      by Tyger ( 126248 ) on Monday March 02, 2009 @08:09PM (#27046917)

      The PDF was an interesting read, though I agree that the money they are dishing out is pretty paltry for all the free review they are trying to garner. Furthermore, I think they are taking platform neutrality in the wrong direction by locking the idea in to the x86 architecture.

      As for how it would work: they are basically enforcing strict limits on how the code can be structured. The limits are designed to make the code easy to analyze, and anything that falls outside the strict requirements is rejected. The same approach doesn't work for antivirus because antivirus has to deal with whatever code comes in, without restrictions.

      As to why it doesn't work for OS... There is no reason the basic concept wouldn't, aside from the performance penalty and increased code size. (Though further compiler optimization could minimize or eliminate some of that).

      However, if you want to go the route of making an OS do it, you might as well pick up a decent modern RISC architecture, because you're already breaking compatibility with any past program for any OS on the x86 CPU. Most of what they are doing is basically taking something that is standard on RISC and shoehorning it into the CISC architecture of the x86, namely that instruction boundaries can be reliably determined so jump targets can be checked. They enforce that by requiring jumps only to 32-byte boundaries, and then verifying each 32-byte block for correctness. Combined with disallowing self-modifying code and eliminating the stack completely, all code that executes can be properly analyzed ahead of time.
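
      To make the 32-byte rule concrete, here is a toy C illustration of why masking computed jump targets is enough once every 32-byte boundary is known to hold the start of a validated instruction. (In the real design the region bound comes from the x86 segment limit rather than a modulo, and the mask is a compiler-emitted instruction pair; this only shows the arithmetic idea.)

        #include <stdio.h>

        #define BUNDLE        32u              /* jump targets are forced to 0 mod 32 */
        #define SANDBOX_SIZE  (1u << 20)       /* pretend the code region is 1 MiB    */

        /* Conceptually what the inserted sequence before an indirect jump does
           (on x86 it is an "and" of the target register followed by the jump). */
        static unsigned int sanitize_target(unsigned int target) {
            target &= ~(BUNDLE - 1);           /* can only land on a bundle start */
            return target % SANDBOX_SIZE;      /* and only inside the code region */
        }

        int main(void) {
            unsigned int evil = 0x12347;       /* attacker-chosen, mid-instruction address */
            printf("wanted 0x%x, actually lands at 0x%x\n", evil, sanitize_target(evil));
            return 0;
        }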

      The concept looks sound to me (I have experience working at a low level with the x86 architecture), but the security still relies on the implementation. Off the top of my head I can think of several ways to break the sandbox depending on how it is implemented. However, the PDF is quite short on the details needed to evaluate the implementation: namely, what exactly qualifies as an allowed x86 instruction; for the syscalls that are checked, what the check is; the potential for bugs in the syscall handler for what would otherwise be valid calls; and even the state of the OS or process when the protected code is executed.

      Overall, I don't think this is the right direction for the web platform. Theoretically, interpreted byte code should be more secure because it can't do anything the interpreter doesn't explicitly allow (JavaScript, Java, Flash, etc.), and we see where that got us.

      • Re: (Score:3, Informative)

        As to why it doesn't work for OS... There is no reason the basic concept wouldn't, aside from the performance penalty and increased code size. (Though further compiler optimization could minimize or eliminate some of that).

        Java generally runs ~30% slower than C. Unless NaCl can do better than that, there's no point to it. The demo talk shows it running Quake at ~40 fps; Java Quake runs at 200 fps [bytonic.de] on a much slower computer.

    • Re: (Score:3, Insightful)

      by uid8472 ( 146099 )
      The idea is that programs are written in a heavily restricted subset of x86 that can be proven safe, where "safe" here basically boils down to not being able to make system calls or otherwise interact with the world without going through the sandbox library. The program might still be compromised by a third party if it's badly written, but then the attacker won't be able to escape the sandbox either.
      • by gutnor ( 872759 )

        And then we will get the same problem we have now.

        The attacker will not be able to escape the sandbox, but since all the things that matter to you will be running in the sandbox, that means you are still as fucked as before.

        Unless of course you are a sysadmin and the only thing that matters to you is the system, not the user data.

    • If it were that simple to disassemble someone else's code and automatically prove that it can't do anything wrong -- including by having security holes exploitable by a third party -- forget the browser, we'd have it standard in the OS or in the last step of make/ant/whatever.

      Forget that, you could solve the halting problem, prove P=NP and create true AI. Any effort towards this will be no more or less secure than the implementation of the virtual machine, simple as that.

    • by Nick Ives ( 317 )

      It could be worse, Knuth only offers 80 hex dollars [wikipedia.org] for bugs in TeX. The actual check is worth far more in bragging rights alone!

  • Oops... (Score:5, Funny)

    by TheUni ( 1007895 ) on Monday March 02, 2009 @07:41PM (#27046711) Homepage

    ...guarantee hostile code simply cannot execute (PDF)

    Hah! Was that a jab at Adobe?

    • Actually, it sounds more like a challenge to me.

      I learned a LONG time ago to NEVER, EVER give slashdot a challenge you don't want fulfilled!
      • Re: (Score:3, Funny)

        by utnapistim ( 931738 )

        NEVER, EVER give slashdot a challenge you don't want fulfilled!

        Challenge:
        1. RTFA!
        2. ???
        3. I win! (profit)

  • ...their calendar is off by about a month. It's not April just yet. Either way, even if this is more than an elaborate practical joke, what can it do that Java Applets can't do? I don't know if we need yet another framework to run binaries on computers.

  • tags: "google salt insane crap it security"

    somehow that seems to sum up all the comments above me ...

    anyway, formal software analysis is "hard"; it's what compiler developers have been trying to do forever. Proving that code cannot do a specific subset of actions is not quite so hard, but some areas of security, such as buffer overflows, are inherently run-time only, and very hard to detect at the x86 level, which doesn't quite have the concept of a data structure, only a blob of memory assigned to a process.

    IMO,

  • by ADRA ( 37398 ) on Monday March 02, 2009 @08:01PM (#27046863)

    You could beat them up for many things, but they have a working system of arbitrary code execution that generally doesn't expose your system to risk (unless you tell it to, and only when the code is signed, though self-certs are ok).

    I'm sure there have been plenty of security advisories over the years, but the general philosophy is sane.

    The problem with Java's implementation is that the code is run within Java, which itself has sandbox protection for all executed code. Unless Google is seeking an interpreted client-side language or only wants to allow execution of code from trusted signers, I think their task is probably a fruitless one.

    • by Cyberax ( 705495 )

      No, no, no.

      You can sandbox x86 code. It's been done multiple times in pre-VT virtual machines that use JIT techniques to speed up code execution. In short, the VMs rewrite the code so that 'safe' instructions are executed directly on the CPU and 'unsafe' instructions are replaced with calls into the virtualization layer.

      NaCl uses a similar approach, but it doesn't bother rewriting 'unsafe' instructions and just bans them.
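
      The contrast can be sketched with a made-up one-byte "instruction set" (purely illustrative C; none of this is real x86, VMM, or NaCl code):

        #include <stdio.h>

        /* Toy one-byte "instruction set": opcodes below 0x80 are safe, the rest are not. */
        static int is_safe(unsigned char op) { return op < 0x80; }

        /* Binary-translating VM: keep safe ops, rewrite unsafe ones into a trap
           (0x7f stands in for "call the virtualization layer"). */
        static void translate(const unsigned char *in, unsigned char *out, int n) {
            for (int i = 0; i < n; i++)
                out[i] = is_safe(in[i]) ? in[i] : 0x7f;
        }

        /* NaCl-style validator: no rewriting, the whole module is simply rejected. */
        static int validate(const unsigned char *in, int n) {
            for (int i = 0; i < n; i++)
                if (!is_safe(in[i])) return 0;
            return 1;
        }

        int main(void) {
            unsigned char code[] = { 0x01, 0xC3, 0x03 };   /* 0xC3 is "unsafe" in this toy scheme */
            unsigned char rewritten[sizeof code];
            translate(code, rewritten, sizeof code);
            printf("translated: %02x %02x %02x\n", rewritten[0], rewritten[1], rewritten[2]);
            printf("validator:  %s\n", validate(code, sizeof code) ? "load" : "reject");
            return 0;
        }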

    • Re: (Score:3, Interesting)

      by Jahava ( 946858 )

      I am, myself, curious about this. Java seems to have a decent sandbox model already implemented through the JVM and Runtime APIs. Additionally, it is (or can be) platform- and architecture-independent, which seems more conducive towards Internet usage since it doesn't require an x86 instruction set.

      While the Applet model is still viable, I think more mature alternatives (Flash, for example) obsolete it. Rather, I would wager Google is targeting two specific scenarios:

      1. Full-scale protected/sandboxed applica
      • Java has a few problems NaCl doesn't, which is why NaCl has a chance at success and Java didn't:

        • Java was massive in every aspect. It took ages to download, install, and start. It nagged the user to update itself. This led to a very poor user experience. Nobody enjoyed accidentally visiting a web page that used a Java applet because your entire system would grind to a halt for 10 seconds whilst the JVM would heave its bloated carcass into RAM. I am told the very latest versions of Java fix this, but I care

        • Tons of useful and well-debugged code is written in Java. If there's any general task, chances are very good there's a free Java library to do it. I doubt C/C++ comes anywhere close. I agree with some of your statements and disagree with others, but I'm not gonna bite; why start a flamewar?

      • Re: (Score:3, Insightful)

        by tucuxi ( 1146347 )

        Sure, Java has a great security model and will not cause buffer overflows. But you have to write it in (duh) Java.

        The fun part about NaCl is that it can eat existing (C, C++, pick-your-own compiled language) code with only minor modifications to the compile chain, as long as that code does not make weird system calls. Just make sure that the compiler does not emit any of the 'forbidden' instructions, aligns jumps to 32-byte boundaries, and uses the prescribed instruction sequences for jumps and system cal

  • I spent about 20 minutes trying to defend Google and how cool it would be to have a browser-like extension where a user can specify a sandbox and configure it (like Sandboxie), then have developers exploit it to execute in that environment. Alas, it is novel at best.
  • We're talking about something that's got to be Turing-complete in the end, gets evaluated beforehand, and is then turned loose to run directly on the processor. I can break that sandbox. A 14-year-old could break that sandbox. There is no magically unsafe instruction.
    • Comment removed based on user account deletion
    • That hasn't been true since like 1980, when processors started coming out with MMUs. Learn: http://en.wikipedia.org/wiki/Memory_management_unit [wikipedia.org]

      • Re: (Score:3, Insightful)

        Of course, Windows didn't have memory protection until Windows NT (1993), and Macs until MacOS X (2001). http://en.wikipedia.org/wiki/Memory_protection [wikipedia.org]

      • Hey, unless this is an OPERATING SYSTEM LEVEL PATCH to control the program, the MMU doesn't matter here. Considering they want to control WHAT INSTRUCTIONS EXECUTE, there's the obvious weakness that if you can get %eip pointed at an arbitrary binary stream, you bypass their entire security model. Win.
  • by Animats ( 122034 ) on Monday March 02, 2009 @11:24PM (#27047901) Homepage

    I've read Google's paper, and I'm reasonably impressed. Basically, they've defined a little operating system, with 42 system calls, the same on all platforms, and defined a subset of 32-bit x86 machine code which can't modify itself and can't make calls to the regular OS. This requires using the seldom-used x86 segmentation hardware, which is quite clever. But 64-bit mode has no segment machinery, so this approach doesn't translate to the current generation of 64-bit CPUs.

    The biggest headache with Google's model is that they have to use a sort of interpreter to check all "ret" instructions, so someone can't clobber the stack and cause a branch to an arbitrary location. What's really needed is a CPU with a separate return point stack, accessed only by "call" and "ret" instructions, so return points are stored someplace that code can't clobber. (Machines have been built with such hardware, but there was never a compelling reason for it before.) Google has to emulate that in software. This adds a few percent of overhead.

    Note that you can't use graphics hardware from Google's little OS. Conceptually, they could add a way to talk to OpenGL, which is reasonably securable. But they haven't done that. They have a Quake port, but the main CPU is doing the rendering.

    Interestingly, it would be quite possible to make a very minimal operating system which ran Google's little OS directly. You don't need the gigabytes of baggage of Windows or Linux.

    It would also be possible to develop an x86 variant which enforced in hardware the rules of Google's restricted code model. That would be useful. Most of the things Google's model prohibits, you don't want to do anyway. (I know, somebody who thinks they're "l33t" will have some argument that they need to do some of the stuff Google prohibits. Just say no.)

    The main question is whether the implementers will have the guts to say "no" to things that people really, really want to do, but are insecure. The Java people wimped out on this; Java applets could have been secure, but in practice they trust too much library code, and library bugs can be exploited.

    NSA Secure Linux has a similar problem. If you turn on mandatory security and don't put any exceptions in the rules, the security is quite good. But your users will whine and applications will have to be revised to conform to the rules.

    Incidentally, the people who talk about "undecidability" and "Turing completeness" in this context have no clue. It's quite possible to define a system which is both useful and decidable. (Formally, any deterministic system with finite memory is decidable; eventually you either halt or repeat a previous state and loop forever. For many systems, decidability may be "hard", but that's a separate issue. If termination checking is "hard" for a loop, just add a runaway loop counter that limits the number of iterations, and termination becomes decidable again. Realistically, if you have an algorithm complex enough that termination is tough to decide, you need something like that anyway. Formal methods for this sort of thing are used routinely in modern hardware design. Anyway, none of this applies to Google's approach, which merely restricts code to a specific well-formed subset with certain well-behaved properties.)
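
    For what it's worth, the runaway-counter trick looks like this in practice (a trivial C sketch; the cap is picked arbitrarily):

      #include <stdio.h>

      #define MAX_ITERATIONS 1000000UL

      int main(void) {
          unsigned long x = 27;   /* arbitrary input */
          unsigned long i;

          /* A loop whose termination is hard to prove in general (Collatz),
             made trivially terminating by the iteration cap. */
          for (i = 0; x != 1 && i < MAX_ITERATIONS; i++)
              x = (x % 2 == 0) ? x / 2 : 3 * x + 1;

          printf(x == 1 ? "halted normally\n" : "hit the runaway counter\n");
          return 0;
      }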

    • What annoys me is that by adding simple capabilities to the real operating system's syscalls, the operating system could do the same job as NaCl without having to compile programs specially. It's simple:

      1. Open a FIFO (or shared memory, or other IPC method).
      2. Fork.
      3. Close all file descriptors except the FIFO.
      4. Free up unused memory.
      5. Drop all capabilities to system calls except for sys_read, sys_write, and sys_exit.
      6. Read the code to execute from the FIFO.
      7. Execute the code.

      As long as the OS does its
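
      For reference, Linux already ships something close to step 5: seccomp "strict mode", which confines the calling process to read(), write(), exit() and sigreturn(). A minimal Linux-only sketch:

        #include <stdio.h>
        #include <unistd.h>
        #include <sys/prctl.h>
        #include <sys/syscall.h>

        int main(void) {
            /* After this call the kernel allows this process only read(), write(),
               exit() and sigreturn(); any other syscall is a SIGKILL. */
            if (prctl(PR_SET_SECCOMP, 1 /* SECCOMP_MODE_STRICT */) != 0) {
                perror("prctl(PR_SET_SECCOMP)");
                return 1;
            }
            const char msg[] = "still alive inside the sandbox\n";
            write(1, msg, sizeof msg - 1);
            syscall(SYS_exit, 0);   /* raw exit(2); glibc's _exit() would use exit_group() */
            return 0;               /* not reached */
        }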

      • That's too slow for many classes of apps though. For instance, good luck making Mirror's Edge run as a web app with that technique ...

      • by kasperd ( 592156 )

        5. Drop all capabilities to system calls except for sys_read, sys_write, and sys_exit.

        I like this idea (I even suggested it myself at some point in the past). I don't think it has been implemented, but it should be a fairly simple change to the kernel. There is one issue with memory management, though. The kernel already has the ability to restrict the amount of memory a process can use, but the process still needs to make some system call to allocate it. This leaves a few options.

        • Allocate a fixed amount of
  • I heard that much of the technique behind the design is to create x86 segments with a limit such that the sandboxed program can only access certain areas of memory within the process space. If this is true, the technique won't work at all in 64-bit Windows: Win64 doesn't have an API, documented or otherwise, to create segments in the LDT, unlike Win32. In fact, XP 64 and Vista 64 hardcode the LDT register to null. (Windows 7 64 has a limited LDT that appears to be related to SQL Server's "User-Mode Sched

  • Today's kdawson WTF is proudly brought to you by the writers guild and Microsoft.

    In an attempt to appeal to the nerds amongst us, kdawson today cited prize money amounts in power notation.

    For those of us who aren't binary nerds, and weren't able to pull the magic number straight from the subconscious regions of our brains, $2^13 = $8,192.

    I mean come on, does anyone still take this person seriously?
     

    • Hate replying to myself, but does anyone else see the irony of this statement from the summary?

      that guarantee hostile code simply cannot execute (PDF)

      Bearing in mind that PDFs are now also toxic, and very dangerous to download from anywhere.

  • It seems like the real benefit is not performance, but the ability to reuse existing C/C++ code bases on the web. A lot of people are looking at making web versions of well-established desktop apps (look at photoshop.com, for example). Currently you have to do this in Javascript/DHTML or Flash, which means throwing out all the code from your desktop app and writing something new from the ground up, which hopefully ends up looking more or less like the original system. It's a huge amount of work, and you end

  • Would you like a check for:

    $$$$$$$$$$$$$8192
