
Android Jelly Bean Much Harder To Hack 184

A reader tips this quote from an article at Ars: "The latest release of Google's Android mobile operating system has finally been properly fortified with an industry-standard defense. It's designed to protect end users against hack attacks that install malware on handsets. In an analysis published Monday, security researcher Jon Oberheide said Android version 4.1, aka Jelly Bean, is the first version of the Google-developed OS to properly implement a protection known as address space layout randomization. ASLR, as it's more often referred to, randomizes the memory locations for the library, stack, heap, and most other OS data structures. As a result, hackers who exploit memory corruption bugs that inevitably crop up in complex pieces of code are unable to know in advance where their malicious payloads will be loaded. When combined with a separate defense known as data execution prevention, ASLR can effectively neutralize such attacks."
This discussion has been archived. No new comments can be posted.

  • +1 headline (Score:5, Funny)

    by Smallpond ( 221300 ) on Monday July 16, 2012 @09:13PM (#40668725) Homepage Journal

    Android Jelly Bean Much Harder To Hack

    I can't wait to show this headline to my non-computer-type friends and watch their heads explode.

  • Here's an idea: why not just use the Unix permissions built into Linux? Give me sudo privileges by default so I don't have to hack it, and let me worry about the security.

    • There's a device [google.com] for that.
      • Not really. It has an unlocked bootloader, which is great, but I still don't have sudo privileges, and I don't have fine-grained permissions at the user/group read/write/execute level, not to mention SELinux security, which would be great. Ideally I would be able to give each app its own permissions for every resource. Does this app get contacts-read permission? I say nope, it doesn't even know there is a contacts list. Does it see my 3G Internet and use it to serve me ads and cost me minutes or count toward my data cap?

        • Re:unix permissions? (Score:4, Informative)

          by mr_exit ( 216086 ) on Monday July 16, 2012 @11:59PM (#40669627) Homepage

          Then you want a phone with CyanogenMod. It's got pretty fine-grained control for denying apps certain permissions. Take a look:

            http://www.androidpolice.com/2011/05/22/cyanogenmod-adds-support-for-revoking-and-faking-app-permissions/ [androidpolice.com]

          • by data2 ( 1382587 )

            What they really need to do is include fake data. Some apps just crash when access to certain things is revoked, while giving them fake access to, e.g., an empty phone book would be a good way to get around this.

            • They have that [xda-developers.com] too. Gave up on CyanogenMod for anything but my Nook (it's the only passable mod for it that I've found) when they made their reasoning clear on why that would never, ever be in the mod itself.

            • That's exactly why Android's permission model requires users to grant the permissions before installation: writing an app that behaves reasonably with all possible combinations of permissions either granted or denied by the user is a nightmare.

        • by f3rret ( 1776822 )

          Actually IIRC CM has support to do something like that. Or it was planning to have that feature, I forget.

          Either way, the problem was that messing with permissions like that would break some apps for whatever reason.

          • Either way, the problem was that messing with permissions like that would break some apps for whatever reason.

            It's because the applications don't catch the SecurityException that gets thrown when an application accesses an API that requires a permission that the application doesn't have. This is done on the assumption that the installer will grant all privileges that the application requests, which is true of all official Android releases (and particularly all that come with a lawfully made copy of Google Play Store).

    • What? Exploiting a flaw in a vulnerable web browser on a mobile device has little to do with standard Unix/Linux permissions. The malware inserted into the browser will run with the privileges of the browser, which is more than enough to cause a lot of grief. Even if the browser is sandboxed, the malware can steal any data put into the browser, such as credit card #s or email/banking logins. It's very useful to make this as hard as possible.
      • No, but the issue is the security of the device. Patching a memory leak in a browser is like putting bars on all of the windows of your house but leaving the garage door wide open. First shut the garage door; in this case, give me control over permissions on my devices so I can control what apps can send home. As it is, every app has permissions to everything they can think of: wifi, cellular connection, contacts, microphone, etc. Why would Joe Hacker bother trying to exploit a memory leak when he can just g

        • if we had descent permission we could stop super pony free app from phoning home our stolen information.

          For one thing, I thought only Parallax Software had Descent permission. For another, if the application weren't allowed to phone home at least every so often, it'd just error out "can't contact high score server" back to the home screen, just like the various Android app stores do while offline.

    • The Unix permissions model that is part of the Linux kernel within Android is used extensively and is central to application isolation within Android. It's just not used as you think it is. Each app runs under its own UID and each app has full permission to its own directories and resources (owner has full control) while no other apps have any permissions to those resources (by default, this can be changed by the app's developers and by you, assuming you have root-level access to your phone).
    • Here's an idea: why not just use the Unix permissions built into Linux? Give me sudo privileges by default so I don't have to hack it, and let me worry about the security.

      If you want root, go root your device. Please don't act like it's a good idea to give grandma a rooted device off the shelf.

  • by GoodNewsJimDotCom ( 2244874 ) on Monday July 16, 2012 @09:18PM (#40668755)
    From my experience with hackers, if you say your platform is more challenging to hack, it attracts more hackers to try and hack it. Never taunt happy fun hackers.

    Obfuscating your security is okay, but obfuscating the fact that you have a bunch of anti-hack measures in place seems even better.

    I've been looking into anti-hacker theory ever since Starcraft 1 maphackers ruined ladder.
    • Re: (Score:1, Funny)

      by Anonymous Coward

      I was with you right up to your sig where you revealed you've suffered repeated mental breakdowns.

    • by mjwx ( 966435 ) on Tuesday July 17, 2012 @12:45AM (#40669837)

      From my experience with hackers, if you say your platform is more challenging to hack, it attracts more hackers to try and hack it. Never taunt happy fun hackers.
       

      That's why all hackers target Linux/Unix and leave Windows alone, because we all know Windows is nowhere near as secure (nor can it be secured as well) as Linux. Therefore, according to your theory, because Windows is easy to hack it does not attract hackers.

      Oh wait...

  • by Ukab the Great ( 87152 ) on Monday July 16, 2012 @09:19PM (#40668759)

    Will be hackable with a sonic screwdriver.

  • by lennier ( 44736 ) on Monday July 16, 2012 @09:35PM (#40668825) Homepage

    Address space layout randomisation sounds like a good idea, long overdue, and I'm glad it's slowly being rolled out.

    That said - I think it's an extremely sad reflection on the state of software engineering that we simply accept that "memory corruption bugs in complex pieces of code are inevitable".

    Memory corruption has such far-reaching consequences, undermining pretty much every assumption and guarantee, that it simply shouldn't be possible, let alone inevitable, in any industrial-strength language. We don't accept that, say, integer addition should sometimes randomly fail, although that used to happen back in the days of vacuum tubes. But we found and fixed the problem, and now our hardware is (relatively) trustworthy; yet our software is worse than ever. That we just shrug and accept memory corruption as normal, with an entire ecosystem of cybercrime and cyberwarfare created because of it, and don't seem to even think about why that might be and how to fix it, but just keep slapping half-thought-out band-aid after band-aid, is shameful to our profession.

    (Insert image of Edsger Dijkstra surveying our burnt-out CloudPad 2.0 PHP/C++/Javascript cyberjungle with a single tear.)

    • by Jeremi ( 14640 )

      Isn't the impossibility of memory corruption supposed to be one of the benefits of Java (and other managed-code languages too of course)?

    • by phantomfive ( 622387 ) on Monday July 16, 2012 @10:23PM (#40669103) Journal
      Do you have a solution? Because I really want to know it. Really, writing software is something I do for a living.
    • It's because we don't have professional standards for software engineers the same way as we have for other types of engineers. Software is still in the "bridge collapse" phase. At one time, anyone could build a bridge. You know what happened? A lot of them fell down, and that was just considered normal. Eventually, society got fed up with that crap and made standards, with jail time for violators.

      I've tried suggesting that software should be held to the same standards - oh, you should have seen the looks I got.

      • by Anonymous Coward

        I've tried suggesting that software should be held to the same standards - oh, you should have seen the looks I got.

        I've worked with gizmos built for an industry where by law the software has to follow a complex development and testing process with a stack of paperwork six miles high to prove that it's certified for use. The end result is bugs don't get fixed because the manufacturers can't afford to rerun the certification process. Worse than that, even when the bugs do get fixed, end-users don't want to install the new software because they've learned to work around the old bugs and don't want to have to deal with new ones.

        • And not all hardware followed GP's model either.

          I remember a TV show about how the Russians built the Proton rockets. Instead of modeling, testing, checking and being safety conscious, they built the rocket, tested it---and it blew up. So they did it again. And it blew up, again. So they did it again. And again. And again. Until it worked. Net result was a booster more powerful than the Saturn V (AFAIK). Quite a different mode of working.

          Along the way they also learned that their observation bunkers were too close to the rocket and not as blast proof as they had hoped.

          • Not that the Soviets didn't do any modelling and testing, but your analysis of their method overall isn't wrong - except that they never accomplished anything like the Saturn V. Even a modern Proton-M is only about 1/6 as powerful. Their N1, which was Saturn V class, blew up all 4 times they tested it and then they ran out of money. The method works with small things that you can afford to build a bunch of times, but with big expensive things it starts to be cheaper to just do the testing.

            • Thanks for reminding me. Yes, the Proton is a wonderfully efficient rocket, but it was the N1 [wikipedia.org] which was the super Saturn V. However, as I recall, they did not so much run out of money as run out of Sergei Korolev [wikipedia.org], the leader of the N1 program. After his death, the N1 program never recovered and was shut down.
          • by f3rret ( 1776822 )

            And not all hardware followed GP's model either.

            I remember a TV show about how the Russians built the Proton rockets. Instead of modeling, testing, checking and being safety conscious, they built the rocket, tested it---and it blew up. So they did it again. And it blew up, again. So they did it again. And again. And again. Until it worked. Net result was a booster more powerful than the Saturn V (AFAIK). Quite a different mode of working.

            Along the way they also learned that their observation bunkers were too close to the rocket and not as blast proof as they had hoped. I'm not saying that this is necessarily the best way of working, just that there are other ways to do things if your values are somewhat different.

            I love the Soviets.

        • I think the problem is trying to standardise and certify things which can't be.

          We should start with the small things which can be standardised. For example, every programmer should know how to sanitise database inputs from the user, and check that buffers can't overflow, and always apply that knowledge if they want to be considered competent.

      • Who cares about that shit anyway, they scoffed. We're here to do low-quality work at the lowest price...you think we're building this software to last?

        Precisely. Who's still going to use programs written in Cobol in the year 2000?

      • by dkf ( 304284 )

        It's because we don't have professional standards for software engineers the same way as we have for other types of engineers. Software is still in the "bridge collapse" phase. At one time, anyone could build a bridge. You know what happened? A lot of them fell down, and that was just considered normal. Eventually, society got fed up with that crap and made standards, with jail time for violators.

        I see you're advocating a large increase to the cost of software for very little gain in actual reliability. Any proper engineer (or True Scotsman) would know that it is important to balance cost and benefit; for a significant fraction of applications, that balance is firmly towards the cheap end. Guess what? Most of the cheap software is also perfectly fine for its purpose after a few iterations. (Well, that's true if we exclude anything written in PHP, which appears to be the only language and community ever to actively hate consistency to the point of working directly against it.)

        • PHP, which appears to be the only language and community ever to actively hate consistency to the point of working directly against it

          I agree with you. However, PHP hangs on in part because entry-level web hosting plans that offer only PHP are often cheaper than plans allowing other, more consistent languages. Case in point: visit this page [securepaynet.net], click "Linux plan details", and search for "Language Support".

    • by f3rret ( 1776822 )

      That said - I think it's an extremely sad reflection on the state of software engineering that we simply accept that "memory corruption bugs in complex pieces of code are inevitable".

      I don't get what is so sad about this. It seems to me like people have realized that mistakes can and do happen, and the exact nature of a mistake in this sort of context is often unpredictable, so you can either roll out patches after the error has been discovered (and a number of devices have been hacked) or you can build in measures that make the system fault-tolerant.

      Of course the ideal situation is one where there are no bugs ever and we can fully predict every single interaction between every single line of code.

    • by tlhIngan ( 30335 )

      That said - I think it's an extremely sad reflection on the state of software engineering that we simply accept that "memory corruption bugs in complex pieces of code are inevitable".

      How is this an admission that memory corruption is inevitable?

      I see it as defense in depth - the application is the first line of defense against memory corruption (usually caused intentionally). If that wall fails, the OS has a wall of its own to help protect the system as well.

      A well-written program would check its inputs, bu

  • It seems like the big vulnerabilities in mobile platforms these days involve apps doing things they shouldn't. Android is, for the most part, way ahead of Apple in terms of technical mitigations. Android sandboxes apps with explicit permission grants. Apple just vets them, incompletely. iOS also seems vulnerable to odd things, like this [macstories.net]. Apparently executing unsigned code on iOS, if you can pull it off, sidesteps part of the sandbox. Android is based on the assumption that any app can execute unsigned code.
    • Apple is further along than Android. Sure, the app has to ask for permission, but whom does it ask? The end user. The end user is asked to agree to *a bunch of things* to get the new app that he just chose out of a gazillion other apps. At that point in time, the end user is determined to get the app on his/her device and is willing to agree to anything because they already decided they want the app. In practice, those explicit permission grants mean nothing because, for all practical purposes, over 50% of end users

      • I have to agree (and I'm an Android user, and no fan of Apple). Users want security and total convenience, and while technically the permissions side of Android is fine, the average user just wants their funny talking dog app to run - they don't care to look through the permissions list and wonder why it wants to be able to dial numbers and access all their personal data. Similarly, when Microsoft implemented UAC in Windows, people complained about the intrusive pop-ups and would automatically approve them
        • All I can think of is initially exposing devices to a curated version of the Play Store ala Apple

          AppsLib tries to do this: it ranks "approved" applications higher than other applications. Amazon Appstore is also a curated store and the default store on Kindle Fire tablets.

  • A quick look at Oberheide's site shows a talk from a week ago at SummerCon detailing problems with Google's 'Bouncer' system, designed to detect malicious apps before they enter the Android Market:

    http://jon.oberheide.org/blog/2012/06/21/dissecting-the-android-bouncer/ [oberheide.org]

    http://jon.oberheide.org/files/summercon12-bouncer.pdf [oberheide.org]

    The executive summary:

      • Bouncer doesn't have to be perfect to be useful
        • It will catch crappy malware
        • It won't catch sophisticated malware
        • Same as AV, IDS, etc.
      • How much does Bouncer raise the bar?
        • Currently: not much
        • Future: hopefully more?

  • by 93 Escort Wagon ( 326346 ) on Monday July 16, 2012 @10:39PM (#40669167)

    I'm reading through this thread, and the standard response made by anyone who disagrees with a post is to either call them a moron, idiot, motherfucker, or to insinuate they are gay.

    How about this? If you guys think that a post is inaccurate or simplistic - consider responding and explaining why the post is wrong. If you can't do that, then maybe your level of understanding on this topic is lower than you think it is.

    I mean, come on. I realize this is Slashdot, and there are always a few people like that hanging around - but this story seems to be attracting an inordinate number of guys that have nothing to offer but anger and venom.

    • by f3rret ( 1776822 )

      I'm reading through this thread, and the standard response made by anyone who disagrees with a post is to either call them a moron, idiot, motherfucker, or to insinuate they are gay.

      How about this? If you guys think that a post is inaccurate or simplistic - consider responding and explaining why the post is wrong. If you can't do that, then maybe your level of understanding on this topic is lower than you think it is.

      I mean, come on. I realize this is Slashdot, and there are always a few people like that hanging around - but this story seems to be attracting an inordinate number of guys that have nothing to offer but anger and venom.

      Dude, is this your first time on the internet?

      It's pretty much an immutable fact that any argument that does not involve profanity and/or the questioning of the original poster's sexuality is not going to be read or replied to.

      Welcome to the internet, where everyone is an asshole.

  • by NicknamesAreStupid ( 1040118 ) on Monday July 16, 2012 @10:47PM (#40669201)
    Whatever happened to processors designed to keep data and code execution spaces separate? It was done in the 1980s on processors with far, far fewer gates. While it made application design a bit more 'thoughtful', I don't remember any designers complaining about it. Maybe I'm old-fashioned, but aren't buffer/stack overruns/underruns a hardware architecture issue? If so, why don't they fix the hardware?
    • by Anonymous Coward

      Typed registers are more useful for this problem. Harvard machines use two memory interfaces, which precludes any multiplexing gains in both bandwidth and capacity.

      I guess you could divide the address space, but the X bit does that perfectly well.

    • by wiredlogic ( 135348 ) on Monday July 16, 2012 @11:25PM (#40669421)

      Harvard architecture parts are still around but largely confined to microcontrollers and the simpler DSPs at this point. The separation doesn't fix the software problem of buffer over/underruns. It just means you can't easily spill over into a code segment and do nasty things as a byproduct of that. You can still do dirty things in the data segment, though.

    • JIT compilers are necessary for today's workloads. These pretty much require that you are allowed to execute code you just wrote to your data segment.

      • iOS works fine without JIT compilers outside of JavaScript in Safari.
        • Are most iOS applications written in a Java-like language?

          • Are most iOS applications written in a Java-like language?

            No. iOS applications don't need recompilation because they are written in the unmanaged language Objective-C, perhaps with some C++, except for those applications that run mostly inside a UIWebView which are written in JavaScript. The W^X enforced on applications other than Safari's JavaScript interpreter is stricter than that on platforms allowing general recompilation: no page designated for writing can ever switch to being designated for execution, nor vice versa, without being cleared first.

  • memory corruption bugs that inevitably crop up in complex pieces of code...

    Well, if they didn't use an OS written in C, or they used a static verifier, they wouldn't have that problem.

    Address space randomization only protects against inept attackers. If the attacker can get anything running at a low privilege level, and there's an overflow exploit that lets them look into the address space they're attacking, they can find whatever is being moved around.

    Address space randomization at best turns attacks into system crashes.

    • But it may be hard to look around the memory space using the code you can fit into a specific buffer under/overrun.

      Turning attacks into system crashes is a good thing, because it means the admin will notice something is wrong, and it'll take ages for the attacker to succeed (which means the attacker will probably realise it's pointless and not try, which means no crashes).

    • Address space randomization at best turns attacks into system crashes.

      right, and after a newly installed app crashes, how many times do you keep re-starting it? or do you just uninstall it and install one of the other 1k apps that do the same thing?

      seems like a pretty useful result to me.
