
"Jekyll" Test Attack Sneaks Through Apple App Store, Wreaks Havoc 206

An anonymous reader writes "A malware test app sneaked through Apple's review process disguised as a harmless app, then re-assembled itself into an aggressive attacker even while running inside the iOS 'sandbox' designed to isolate apps and data from each other. The app, dubbed Jekyll, was helped along by the brevity of Apple's review process. The malware designers, a research team from Georgia Institute of Technology's Information Security Center, were able to monitor their app during the review: they discovered Apple ran the app for only a few seconds before ultimately approving it. That wasn't anywhere near long enough to discover Jekyll's deceitful nature."

"Jekyll" Test Attack Sneaks Through Apple App Store, Wreaks Havoc

  • by Anonymous Coward on Monday August 19, 2013 @01:25PM (#44609007)

    BUT MACS DON'T GET VIRUSES.
     
    Unless they're too slow.

    • by Immerman ( 2627577 ) on Monday August 19, 2013 @01:34PM (#44609139)

      Why waste your time with viruses when people will pay to run your Trojan?

    • But Macs DON'T GET VIRUSES.

      Except when they do.

      Fixed that for you.

  • by Anonymous Coward on Monday August 19, 2013 @01:28PM (#44609053)

    There is no point to the closed system if you let just anyone come in.

    • by Anonymous Coward on Monday August 19, 2013 @01:33PM (#44609123)

      There is no point to the closed system if you let just anyone come in.

      Of course there is, silly! It's called "style". More specifically, "illusion of security", which is a style. Apple's big on that sort of thing, you know.

    • by Anonymous Coward on Monday August 19, 2013 @01:33PM (#44609129)
      I found it shocking that they ran it for only a few seconds. I would have expected them to at least run through all screens/features of the app to ensure that it does what it claims to do. This is a classic case of prioritising volume over quality.
      • by stewsters ( 1406737 ) on Monday August 19, 2013 @01:41PM (#44609219)
        I know some people who were working on an MMO, and during the testing phase someone created an account, logged into the server, walked about 10 feet, opened an escape menu and left, and the app was approved. I assume they have some sort of automated scans too, but it doesn't seem like the walled garden provides much security, only an additional chance to charge people.
        • by Sarten-X ( 1102295 ) on Monday August 19, 2013 @01:50PM (#44609327) Homepage

          Checklist for approval:

          • Does the app crash on our profiler?
          • Does the app look like it does something useful?
          • Will users feel like they've been lied to by the App Store listing?

          Note that Apple's motivation is not to ensure that only quality apps get into the store. Rather, they just want to make sure that the store itself isn't tarnished. If 30% of your downloaded apps are just shells around scam-laden videos, you'll stop using the store, so they just test each app long enough to make sure that it kinda-sorta does what's claimed. Any problems after that are going to be blamed on the developer, not Apple.

          • by tlhIngan ( 30335 )

            Checklist for approval:
            Does the app crash on our profiler?
            Does the app look like it does something useful?
            Will users feel like they've been lied to by the App Store listing?

            Note that Apple's motivation is not to ensure that only quality apps get into the store. Rather, they just want to make sure that the store itself isn't tarnished. If 30% of your downloaded apps are just shells around scam-laden videos, you'll stop using the store, so they just test each app long enough to make sure that it kinda-sorta does what's claimed.

            • by Bogtha ( 906264 )

              Granted, perhaps some of the things it does it shouldn't have access to (e.g., contacts and such), but that's something that's changing in iOS7 anyways.

              Apple already prompts the user for address book access in iOS 6. iOS 7 adds camera and microphone I think.

          • by AmiMoJo ( 196126 ) *

            So as long as the hardcore porn in the Colouring Fun For Kids app only kicks in after 11 seconds, Apple will approve it. Good job.

          • by Immerman ( 2627577 ) on Monday August 19, 2013 @10:43PM (#44614187)

            Don't forget the disapproval checklist:
            • Does the app compete with any of our own current or future products in any way?
            • Does the app violate the sensibilities of the reviewer, his boss, or her mother-in-law's cat?
            • Is my coffee cold?

        • by PIBM ( 588930 ) on Monday August 19, 2013 @01:50PM (#44609335) Homepage

          I've had a game published which was never even launched during review, or approved while displaying only 'an internet connection is required to proceed'. It's hard to be checked out less than that.

      • by Anonymous Coward on Monday August 19, 2013 @01:55PM (#44609387)

        Without knowing much about the setup, I'm kind of doubtful that they can have a high level of confidence that it really ran for a few seconds. If I were testing apps like this, I'd run a good bit of my testing on a disposable VM with a faked network. That way it couldn't send connections out and any self-modification it did while in the test harness would be ignored, so nobody but me would have any way of knowing what went on in the harness.

        • by Pieroxy ( 222434 ) on Monday August 19, 2013 @05:35PM (#44611813) Homepage

          Without knowing much about the setup, I'm kind of doubtful that they can have a high level of confidence that it really ran for a few seconds. If I were testing apps like this, I'd run a good bit of my testing on a disposable VM with a faked network. That way it couldn't send connections out and any self-modification it did while in the test harness would be ignored, so nobody but me would have any way of knowing what went on in the harness.

          In other words, you would reject any app relying on a webservice somewhere on the internet. Good policy, I guess. Nobody needs Instagram, Facebook or Twitter apps.

        • by AmiMoJo ( 196126 ) *

          Sure they can, just make the app require an active internet connection and do nothing other than display a message saying "no internet" otherwise. Only launch the attack when you know you can communicate with the researcher's server.

          Remember, the goal here was to probe the vetting process, so it's reasonable to assume they anticipated common techniques like isolated VMs.
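          A minimal Objective-C sketch of that gating idea (the beacon URL and the "HTTP 200 means go" convention are assumptions for illustration, not details from the paper):

              #import <Foundation/Foundation.h>

              // Stay inert unless the (hypothetical) command server answers, so
              // nothing interesting happens during a short review session or on
              // an isolated test VM with a faked network.
              static BOOL serverSaysGo(void) {
                  NSURL *url = [NSURL URLWithString:@"https://example.com/beacon"]; // assumed URL
                  NSMutableURLRequest *req = [NSMutableURLRequest requestWithURL:url];
                  req.timeoutInterval = 5.0;
                  NSURLResponse *resp = nil;
                  [NSURLConnection sendSynchronousRequest:req returningResponse:&resp error:NULL];
                  return [(NSHTTPURLResponse *)resp statusCode] == 200;
              }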

        • If I were testing apps like this, I'd run a good bit of my testing on a disposable VM with a faked network.

          Meaning any application that required web services wouldn't work...great plan!

      • TARGETS (Score:5, Insightful)

        by war4peace ( 1628283 ) on Monday August 19, 2013 @02:03PM (#44609479)

        Sadly, it's a matter of expenses stripped to the bone. The "testers" have quotas to meet. Here, you have 1000 apps to test and 3 days to do it. Miss this target twice and you get fired.

        It's an approach I've seen pretty much everywhere. UAT or internal testing is considered a "money sink" and its attached expenses are minimized by all means.
        I would frankly have been surprised if the testing method were any different.

        • by Pieroxy ( 222434 )

          Did you expect Apple to decompile every one of those apps and pay an engineer 100k/year to read the decompiled code?

          • No, I didn't. If you had read my post carefully, you would have realized that.
            Apparently other people did expect it, though.

    • by Anonymous Coward on Monday August 19, 2013 @01:34PM (#44609131)

      Not true. A closed system can be used to ban competitors whose work you plan to steal.

    • by h4rr4r ( 612664 ) on Monday August 19, 2013 @01:45PM (#44609271)

      Sure there is.
      They get a cut of all software on the platform. That is the entire point.

  • by glennrrr ( 592457 ) on Monday August 19, 2013 @01:29PM (#44609065)
    Since it was just a proof of concept and was on the store for a few moments.
    • You are showing your human bias. Think in terms of clock ticks and the amount a computing device can accomplish in "a few moments", and it becomes clear that "wreaks havoc" is justifiable even if no actual harm turned up in their analysis.

      • Re: (Score:2, Interesting)

        by Anonymous Coward

        Reminds me of this scene from First Contact:

        (Picard drains the coolant, finds the Borg Queen's head and neck that is still blinking. He breaks the neck)
        DATA: Captain.
        PICARD: Data, ...are you all right?
        DATA: I would imagine that I look worse than I ...feel. ...Strange. ...Part of me is sorry she is dead.
        PICARD: She was unique.
        DATA: She brought me closer to humanity than I could have thought possible. And for a time I was tempted by her offer.
        PICARD: How long a time?
        DATA: Zero point six eight seconds, sir. For an android, that is nearly an eternity.

    • was on the store for a few moments.

      Agreed. All iOS apps claiming to be "malware" need to actually destroy something or we aren't going to believe you could actually do it.

    • by Zalbik ( 308903 )

      Since it was just a proof of concept and was on the store for a few moments.

      Yes, but it was only on the app store for a few minutes because the researchers removed it:

      "The researchers installed it on their own Apple devices and attacked themselves, then withdrew the app before it could do real harm."

      A better headline may have been:
      "Researchers demonstrate that havoc-wreaking malware can bypass Apple's app store review process"

  • by swb ( 14022 ) on Monday August 19, 2013 @01:36PM (#44609155)

    Let's say you submit an app to the app store, and like many it's designed to do something fairly idiotic that today's kids find funny, say, take a picture and then superimpose the picture onto a set of background images included with the app.

    Now, let's say the app writer has steganographically embedded "naughty" code in the background images, maybe even going so far as to spread the code across all the images, encrypt, etc. to make it difficult to find.

    Can the app modify itself by taking its hidden code from the images and actually execute it? Can you download "new" code from the internet, even if it's steganographically hidden? It seems like you shouldn't be able to do this; apps should be sandboxed from modifying their own code, just to prevent importing unapproved code.

    • by schneidafunk ( 795759 ) on Monday August 19, 2013 @01:41PM (#44609225)
      From my understanding, compiled code is reviewed once. However, in the cell phone app that I made, a lot of content was pulled from a database that I controlled, meaning product information could be updated by me without further review from Apple. We joked about replacing images with NSFW ones, but I imagine what this team did was ship a compiled app that ran code from a DB and could similarly be updated later.
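      A sketch of that server-controlled pattern (the URL and JSON keys here are invented for illustration):

          #import <Foundation/Foundation.h>

          // The reviewed binary never changes; only the server-side JSON does.
          NSData *data = [NSData dataWithContentsOfURL:
              [NSURL URLWithString:@"https://example.com/products.json"]];
          NSDictionary *config = data ? [NSJSONSerialization JSONObjectWithData:data
                                                                        options:0
                                                                          error:NULL] : nil;
          if ([config[@"mode"] isEqualToString:@"nsfw"]) {
              // swap in a different image set after approval; nothing in the
              // shipped code hints at this until the server flips the flag
          }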
    • How would you stop it? Code is just instructions: you make the app scan the image (easily concealable in an image-editing program) and then have poorly written (or obfuscated) Objective-C code execute the data concealed in the image. Without removing all inputs, it's hard to prevent.
    • by h4rr4r ( 612664 )

      Why would it need to modify its own code?
      Why not just have an interpreter in there to begin with? Or just have a simple date check: don't be evil for X days.

      Since reviewers only have the compiled program, they have no idea what it will do in the future.
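      The date-check variant is trivial (the activation date and branch contents are hypothetical):

          #import <Foundation/Foundation.h>

          // "Don't be evil for X days": an activation date baked into the
          // binary. No realistic review window will ever trip this branch.
          NSDate *goLive = [NSDate dateWithTimeIntervalSince1970:1380585600]; // assumed date
          if ([[NSDate date] timeIntervalSinceDate:goLive] > 0) {
              // quiet period over: enable the hidden behavior
          } else {
              // still under scrutiny: do exactly what the listing says
          }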

      • by cusco ( 717999 ) <brian.bixby@[ ]il.com ['gma' in gap]> on Monday August 19, 2013 @02:02PM (#44609463)
        One of the voting machine vendors (not Diebold) actually did this in order to pass approval testing. From Date 01 to Date 07 it would only run locally available code, but from Date 08 onwards it would check for scripts available on the inserted compact flash card and run them if they existed. The CF cards were only supposed to be used for recording votes, but the company was also using them to update the machine's firmware. No one knows for sure whether the scripts were used to change votes or anything else, but the possibility was certainly there.
    • by sjames ( 1099 )

      Build an interpreter into the app. No need for it to modify its own code, just the data that tells it when to do what.
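      For instance (a toy sketch; the op names and the idea of a server-supplied "script" are illustrative assumptions):

          #import <Foundation/Foundation.h>

          // The shipped binary only ever contains these fixed actions; the
          // downloaded "script" merely sequences them, so nothing the
          // reviewer can inspect changes after approval.
          NSDictionary *ops = @{
              @"show_ad": [^{ NSLog(@"benign action"); } copy],
              @"harvest": [^{ NSLog(@"action the reviewer never saw triggered"); } copy],
          };
          NSArray *script = @[@"show_ad"]; // would be fetched from a server post-approval
          for (NSString *op in script) {
              void (^action)(void) = ops[op];
              if (action) action();
          }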

    • by Bogtha ( 906264 )

      For the most part, yes, but not in the way you think. Objective-C is a very dynamic language. It's not really about sandboxing - apps can't modify their own code. What they can do is include components that do fairly generic, innocuous things, then take external input and construct messages to those existing components on the fly based on that input.
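      Concretely, something along these lines (a toy example; a real attack would fetch the class and selector names over the network rather than hard-coding them):

          #import <Foundation/Foundation.h>

          // The class and selector are built from strings at runtime, so a
          // static scan of the binary's call graph never sees this send.
          Class cls = NSClassFromString(@"NSProcessInfo");
          SEL sel = NSSelectorFromString(@"processInfo");
          if (cls && [cls respondsToSelector:sel]) {
          #pragma clang diagnostic push
          #pragma clang diagnostic ignored "-Warc-performSelector-leaks"
              id result = [cls performSelector:sel];
          #pragma clang diagnostic pop
              NSLog(@"%@", result);
          }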

    • Can the app modify itself by taking its hidden code from the images and actually execute it? Can you download "new" code from the internet, even if its steganographically hidden? It seems like you shouldn't be able to do this, like the apps should be sandboxed from modifying their own code just to prevent importing unapproved code.

      It may be quite possible that you can create code on the fly. However, the app is still sandboxed. It has permissions, and it cannot do anything that it isn't permitted to do. That _should_ be a protection against viruses (even if some virus or malware can attack your app, it cannot break out of the sandbox and do things that the app isn't allowed to do), but it also protects against the app maker doing naughty things himself.

  • Q&A (Score:5, Interesting)

    by tuo42 ( 3004801 ) on Monday August 19, 2013 @01:38PM (#44609189)
    When I read this article, it strengthens my opinion that the Q&A process for the App Store is absolutely flawed. Don't get me wrong: regardless of whether you like or hate the walled garden, I actually am of the opinion that the guidelines - especially the UI guidelines - developers have to follow to be approved for the App Store are a good thing in and of themselves. The Google Play store has similar guidelines, although - IMHO - not as focused on user experience.

    I had apps declined due to improper usage of a certain widget inside another certain widget, which was not deemed "correct" (a switch button in a table footer, for example), but I was always able to either find a similar solution or - in one rare case (the one mentioned) - explain WHY that switch button was there, and how, if you take a look at the UI, you understand what it does.

    Then again, I saw apps in the store which completely failed even the most basic guidelines - the ones described (between the lines) as: "fail these, and your app will 100% NOT be approved" - and I wondered: how did they get in there?

    Talked to other developers; same experience. Some knew they had a few things in there against the guidelines (custom springboards, views not conforming to the UI guidelines) and hoped to get through. Sometimes they managed, sometimes not, so they also got the feeling that the Q&A for the App Store is somewhat like a tax declaration. They don't seem to have enough time/resources to check everything, so if you have something that is against the guidelines, you have to hope that you are one of those who doesn't get checked thoroughly.
    • by tuo42 ( 3004801 )
      Help, I need someone to repair my brain, fast!

      Of course I meant QA! How could that go through my Q&A..... ;)
    • Talked to other developers; same experience. Some knew they had a few things in there against the guidelines (custom springboards, views not conforming to the UI guidelines) and hoped to get through. Sometimes they managed, sometimes not, so they also got the feeling that the Q&A for the App Store is somewhat like a tax declaration. They don't seem to have enough time/resources to check everything, so if you have something that is against the guidelines, you have to hope that you are one of those who doesn't get checked thoroughly.

      It sounds like someone is rubber-stamping apps and not doing his job.

      • by h4rr4r ( 612664 )

        Well, it is an impossible task, so no surprise there.

        I can easily make an app that has a good mode and an evil mode; it decides which by downloading some images from my website. Since the app is used for some image-related task, you would never notice.

        Unless Apple has solved the halting problem and they failed to tell us.

        • by Richy_T ( 111409 )

          Unless Apple has solved the halting problem and they failed to tell us.

          There's an app for that. Wait, why has my screen filled with skulls and crossbones?

    • Re:Q&A (Score:5, Insightful)

      by Bogtha ( 906264 ) on Monday August 19, 2013 @02:08PM (#44609511)

      I'm an iOS developer, and the approval process can be a real problem for me sometimes, but I still think the App Store is far better with it than without it.

      I've seen a lot of clients ask for dumb stuff. Using UI elements in confusing ways. Doing user-abusive stuff. Being generally annoying and self-serving rather than being designed with the user's best interests as a goal.

      The great thing about the approval process is that I can tell those clients "Apple won't allow it" and it instantly shuts them up. The alternative would be hours of trying to convince them not to do something horrible, which leaves everybody unhappy no matter what decision is made. And this is the best case scenario, when you've got a developer willing to go to bat for the users. There's plenty of developers out there who will blindly do whatever the client asks, no matter how shitty it makes the UX.

      It's not just bad decisions. It's QA as well. Do you have any idea how keen people are to just push stuff live and then fix it after? I don't know about you, but I don't want a dozen updates every morning as developers meddle with their apps trying to get things right. The approval process gives developers the stick necessary to perform proper QA. We don't dare push anything live if there's the possibility of a crasher, because Apple will reject it and we have to wait another week to get reviewed again.

      If the approval process wasn't there, then the quality of the apps on the App Store would plummet. You think it's bad with Android, but Android doesn't attract the worst kinds of ambulance chasers. The App Store would be 75% Geocities level quality in no time at all.

      What I do disagree with is making the App Store the only way to get applications onto the device. There's really no legitimate reason for not allowing side-loading for people willing to go into settings and agree to a disclaimer.

      • by Myopic ( 18616 ) *

        Running a closed app store with a tight approval process is fine. Preventing use of outside apps or app stores is not fine. That's where the line is, and Apple is over the line [youtube.com]. They could still have their branded kid-safe no-porn carefully-checked pre-installed app garden, and everyone would trust it and use it and they would make tons of money, but Apple has an ideology of control which means they can't abide [youtube.com] alternatives.

  • by SuperKendall ( 25149 ) on Monday August 19, 2013 @01:52PM (#44609353)

    I can totally see getting an app through the submission process that does something a bit sneaky. Sometimes the app reviewers hardly look at a thing (though sometimes they look very carefully, it just depends on the reviewer).

    But the claim the app could "wreak havoc" needs some proof. They said:

    a Jekyll-based app can successfully perform many malicious tasks, such as posting tweets, taking photos, sending email and SMS, and even attacking other apps - all without the user's knowledge

    Every single one of those requires permission from the user - posting tweets an app cannot do directly; it brings up a sheet. Same thing for email/SMS. Taking photos requires an OK from the user to access the camera. You cannot "attack other apps" because of the sandbox.

    Extraordinary claims, like a complete breaking of the sandbox, require more proof than they have presented. I would bet they are saying they THEORETICALLY could break out of the sandbox but have absolutely no actual working exploits that go outside of existing user permissions and the sandbox...

    • by h4rr4r ( 612664 )

      You can't attack other apps?
      So how does jailbreaking work?

      If you can jailbreak, you can use that to attack other apps or do any of those other wonderful things.

      Sure, you would need to use a jailbreak after being installed, but chaining attacks together is a well-known technique.

      • You can't attack other apps?
        So how does jailbreaking work?

        Jailbreaking works by attacking the device over USB, generally the update mechanism - not something you can do through an app.

        • by h4rr4r ( 612664 )

          Only the most recent one.
          There was a time you could jailbreak via pdf or just visiting a webpage. It is quite possible another such exploit will be found in the future.

          • by SuperKendall ( 25149 ) on Monday August 19, 2013 @02:41PM (#44609853)

            There was a time you could jailbreak via pdf or just visiting a webpage.

            The only reason THAT worked is because the Safari JavaScript engine has a native-code JIT that an app cannot use. And now you know why...

            So it's still true that you cannot jailbreak out of an arbitrary app - only ever from system apps that have elevated privileges, and then only once, years ago...

            I'm not saying such an attack will never exist; it's just exceedingly unlikely, and far more unlikely inside an app you deploy to the store.

            • by h4rr4r ( 612664 )

              Either way security problems will exist and pretending that their app auditing is anything more than a justification for a walled garden is simply burying your head in the sand.

              • by Bogtha ( 906264 )

                Either way security problems will exist and pretending that their app auditing is anything more than a justification for a walled garden is simply burying your head in the sand.

                The walled garden is probably one reason for the approval process, but it's certainly not the only one. Apple seem genuinely motivated to use it to raise the quality of the end-user experience.

                Here's one example: a few years ago, developers were complaining that Apple was rejecting their apps for having an icon of a phone in t

                • by h4rr4r ( 612664 )

                  Then why not allow outside markets and advertise theirs as the premium one?

                  The simplest answer is, they want their 30%.

                  • by Bogtha ( 906264 )

                    Then why not allow outside markets and advertise theirs as the premium one?

                    I'm responding to what you said here:

                    Either way security problems will exist and pretending that their app auditing is anything more than a justification for a walled garden is simply burying your head in the sand.

                    You were arguing that the promotion of a walled garden was the sole purpose for the approval process for the App Store. That's a silly thing to say, and I explained why. You've now changed your argument to someth

    • by Bogtha ( 906264 ) on Monday August 19, 2013 @02:15PM (#44609585)

      Every single one of those requires permission from the user - posting tweets an app cannot do directly; it brings up a sheet.

      Read the paper - they watched the interaction in a debugger to find the right messages to send to the right private classes in order to bypass this.

      This only worked with iOS 5 - last year Apple moved sheets like these into external processes and used a proxy view controller to show them in applications instead of embedding the functionality directly, so attacks like this aren't possible any more where this technique has been used.

      I agree that this is somewhat sensationalised, but they were able to do this without the normal user approval for the 4% or so of people still running a two-year-old version of iOS.

      • Aha (Score:3, Informative)

        by SuperKendall ( 25149 )

        I looked for the paper but could not find the link. Thanks for the extra info.

        As I thought, they did not break the sandbox at all. Attacks that don't work in iOS 6 are irrelevant at this point...

        It's totally sensationalized. It remains true that there's no way a real app can "wreak havoc", even if you inject code later.

        • by AmiMoJo ( 196126 ) *

          That's a naive view. Getting users to grant permission for just about anything is pretty easy. Everyone knows this and it works on all platforms.

      • by Zalbik ( 308903 ) on Monday August 19, 2013 @02:45PM (#44609897)

        This only worked with iOS 5

        Some items only worked in iOS 5.

        Based on Table 1 from their paper here [usenix.org], the following items could be accomplished by their app on iOS 6:
        - posting tweets
        - using the camera
        - dialing
        - using Bluetooth
        - crashing Safari
        - stealing device

        It was only sending SMS messages, sending email, and rebooting the system that were limited to iOS 5.

        • Where are my mod points when I need them? I figured that here on /. people would be modding up things that prove that ap
        • by Wovel ( 964431 )

          But they never discuss whether the app had to be given permissions. Sure, they say they did it all secretly and hidden using private APIs, but they never indicate that they did not give the app permissions when it was run.

          Hard to believe this paper has been peer reviewed.

    • by Minwee ( 522556 )

      Every single one of those requires permission from the user - posting tweets an app cannot do directly; it brings up a sheet. Same thing for email/SMS. Taking photos requires an OK from the user to access the camera. You cannot "attack other apps" because of the sandbox.

      Good point. I guess that this never happened [arstechnica.com] because of the tight limits put on app capabilities.

      Extraordinary claims, like a complete breaking of the sandbox, require more proof than they have presented. I would bet they are saying they THEORETICALLY could break out of the sandbox but have absolutely no actual working exploits that go outside of existing user permissions and the sandbox...

      Ah, the old "That vulnerability is completely theoretical" [l0pht.com] defense. It worked so well for Microsoft in 1992, and it's still working for Apple today.

      • Good point. I guess that this never happened

        Not in iOS 6 it didn't. Apple started taking user security much more seriously in iOS 6, anticipating the potential for such attacks. I always thought it was kind of nuts that prior to that you could access the address book without permission - now you cannot.

        Ah, the old "That vulnerability is completely theoretical" defense.

        And yet it turns out to be true. The vulnerability is not real, only a theoretical possibility that relies on breaking the sandbox, which they hav

      • by Wovel ( 964431 )

        Are you certain users did not give the app permission to use their twitter account? Nowhere in the article on the dictionary app does it claim they broke out of the sandbox and took control of people's twitter accounts.

        You seem to have a tendency to use completely unrelated links to try and bolster a pretty weak position. Your l0pht link is equally puzzling.

      • by Wovel ( 964431 )

        In fact, from the article you linked:
        " A notification appeared locally on the device and if the user had authorized the app to access their Twitter account, a tweet of the notification was sent out under their account with a hash tag #softwarepiracyconfession"

        See the if part of the statement? Maybe it would help to read what you link? Maybe a little?

    • by Zalbik ( 308903 )

      Extraordinary claims, like a complete breaking of the sandbox, require more proof than they have presented.

      No, they do not. Their claims require more proof than the reporter presented in the article.

      The researchers wrote a fairly in-depth paper on the attack which can be read here [usenix.org]

      In the case of tweets, they make use of "private APIs" to avoid notifying the user:

      the public API called by the app will present a tweet view to the user, and let the user decide whether to post it or not, as shown in Figure 9.

    • Every single one of those requires permission from the user - posting tweets an app cannot do directly; it brings up a sheet.

      This example isn't correct. Apps can get access to the Social framework, which allows them to talk directly to the Twitter web API, which, as far as I know, includes posting tweets. This is used by apps that want to have their own custom UIs but also have access to the user's Twitter data (for example, Twitter clients).

      Now, you are right in that this would spawn a dialog that requires user authorization. Trying to access the user's Twitter account token will cause the system to ask the user's permission.
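      For reference, the Social framework path looks roughly like this (iOS 6 era; the status text is a placeholder):

          #import <Accounts/Accounts.h>
          #import <Social/Social.h>

          // Direct Twitter API access without a compose sheet. The one-time
          // system permission dialog fires inside requestAccess...; once the
          // user grants it, posts go out with no further UI.
          ACAccountStore *store = [[ACAccountStore alloc] init];
          ACAccountType *twitter =
              [store accountTypeWithAccountTypeIdentifier:ACAccountTypeIdentifierTwitter];
          [store requestAccessToAccountsWithType:twitter options:nil
                                      completion:^(BOOL granted, NSError *error) {
              if (!granted) return; // user declined the dialog mentioned above
              ACAccount *account = [[store accountsWithAccountType:twitter] lastObject];
              SLRequest *post = [SLRequest requestForServiceType:SLServiceTypeTwitter
                                                   requestMethod:SLRequestMethodPOST
                                                             URL:[NSURL URLWithString:
                  @"https://api.twitter.com/1.1/statuses/update.json"]
                                                      parameters:@{@"status": @"hello"}];
              post.account = account;
              [post performRequestWithHandler:^(NSData *d, NSHTTPURLResponse *r, NSError *e) {
                  // no sheet is shown; the tweet posts silently after that one grant
              }];
          }];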

      • by Wovel ( 964431 )

        It's also a reminder that slashdot headlines have reached new levels of absurdity.

  • by Above ( 100351 ) on Monday August 19, 2013 @01:58PM (#44609419)

    No review process will ever catch all bad actors. I think Apple should be doing a better job with reviews in several dimensions, but that's not the prime advantage to the Apple ecosystem.

    The main advantage is that Apple can revoke the application. If this app started doing bad things, Apple could remotely prevent it from running, and in fact revoke all apps by the same developer. This central control is what scares people, but it's also what makes long-term exploitation impossible. The Google ecosystem, with no centralized control, doesn't have this feature.

    • Re: (Score:3, Insightful)

      by berj ( 754323 )

      No review process will ever catch all bad actors. I think Apple should be doing a better job with reviews in several dimensions, but that's not the prime advantage to the Apple ecosystem.

      The main advantage is that Apple can revoke the application. If this app started doing bad things, Apple could remotely prevent it from running, and in fact revoke all apps by the same developer. This central control is what scares people, but it's also what makes long-term exploitation impossible. The Google ecosystem, with no centralized control, doesn't have this feature.

      I'm pretty sure (though not 100%) that this isn't true.

      I've downloaded many apps that have since been pulled from the app store (some MAME apps and some tethering apps). They all still run. Apple can pull apps from the store so that they can't be downloaded again but once you've got them on your device they can't do anything.

      • by Richy_T ( 111409 )

        Apple can pull apps from the store so that they can't be downloaded again but once you've got them on your device they can't do anything.

        I wouldn't put money on that. There just haven't been any transgressions worth the Amazon "1984" outrage, more likely.

        • Re: (Score:2, Insightful)

          by Anonymous Coward

          There is a difference between removing an application from the store because it goes against the terms and removing an application because it is malware. Apple is certainly able to make this distinction.

          Google is able to remove applications remotely; they have done so in the past. Google it.

      • by Above ( 100351 )

        There are plenty of articles on the remote kill switch, here's one of the first: Steve Jobs confirms iPhone application "kill switch" [macworld.com]

    • -1, wrong.

      Yes, Google does have the ability. If I get an app from the Play Store and it is removed by Google, they have the ability to remove it from my phone. It's happened a couple of times with emulators. Now, if I decide to circumvent the Play Store, that is a different story.

      However, that is what Android gives: choice. With the App Store you don't have that choice; you only use what Apple lets you use. If you want to be a moron and run any old app, you can't.

  • Monitored? (Score:5, Interesting)

    by wiredlogic ( 135348 ) on Monday August 19, 2013 @02:00PM (#44609435)

    What kind of two-bit operation is Apple running if apps can phone home during the vetting process?

    • Load external content = phone home.

      There are a lot of apps whose purpose is to present external data in a useful way. That's only marginally different from phoning home - you still want to proxy the data through your own domain to cope with compatibility changes at the data provider, if it's not your own data.

    • I have one that logs into my company's server to retrieve configuration information (and has a special 'Apple' account that the Apple testers use to validate the app.)

      I can see them testing releases in real-time on the server monitoring dashboard - "Oh, look, Apple just ran the app..."

      Many business applications require this type of functionality when being tested by the App Store (as mine does).

  • by gnasher719 ( 869701 ) on Monday August 19, 2013 @02:48PM (#44609943)
    1. The only people downloading the app were the developers. No "havoc" happened.

    2. The app is sandboxed. It doesn't escape out of its sandbox. Therefore, it can only do things that it is allowed to do.

    3. The identity of the developers was known to Apple. If malware was delivered to end users, Apple could get hold of the developer.

    4. To actually attack an end user, you still have to create an app that does what it claims it does, and that does things interesting enough to make people download it.

    5. If an app did "wreak havoc", then Apple could kill it dead on all iOS devices.
  • Just gonna lie in the headlines now? Come on...
