

Android 15's Virtual Machine Mandate is Aimed at Improving Security (androidauthority.com)
Google will require all new mobile chipsets launching with Android 15 to support its Android Virtualization Framework (AVF), a significant shift in the operating system's security architecture. The mandate, reported by AndroidAuthority, which obtained Android's latest Vendor Software Requirements document, affects major chipmakers including Qualcomm, MediaTek, and Samsung's Exynos division. New processors like the Snapdragon 8 Elite and Dimensity 9400 must implement AVF support to receive Android certification.
AVF, introduced with Android 13, creates isolated environments for security-sensitive operations including code compilation and DRM applications. The framework also enables full operating system virtualization, with Google demonstrating Chrome OS running in a virtual machine on Android devices.
Re: (Score:2)
AFAICS this is basically just a standardization on a Google implementation of a hypervisor, the Android Compatibility requirements used to leave it up to the device manufacturers how to implement isolation for keystore encryption/decryption algorithms.
I assume this is mostly because of passkeys, with synced passkeys Google wants more control. Forcing their own hypervisor is a lower cost option than forcing a secure processor and also useful for DRM.
Re: (Score:3)
Besides, while you're of course not directly exposing the host's kernel ABI to guests executing in VMs, in order to get a nice integration for all your possible workloads you would still need to expose host interfaces to the guest somehow, meaning guests wouldn't be completely isolated anyways.
I've never looked into the details, but what knowledgeable people I trust tell me is that with pKVM the guests are completely isolated from the host and from one another (and the host has no access to the guests either), with a single exception: There is a way for host/guests to communicate with one another by passing messages. So as long as each VM (including the host) is careful to avoid vulnerabilities in their message parsing and processing code, a compromise in one cannot leak into others -- though of
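The message-passing caveat is where the real risk lives, so it is worth seeing what defensive parsing looks like. Below is a minimal Python sketch of length-prefixed framing with explicit bounds checks; the wire format is invented for illustration (AVF's actual host/guest transport is virtio-vsock, and its protocols differ):

```python
import struct

MAX_MSG = 64 * 1024  # refuse oversized frames instead of allocating blindly

def parse_frame(buf: bytes):
    """Parse one length-prefixed frame; return (payload, rest) or None if incomplete.

    Raises ValueError on malformed input rather than trusting the peer.
    """
    if len(buf) < 4:
        return None                      # need more data for the header
    (length,) = struct.unpack_from(">I", buf, 0)
    if length > MAX_MSG:
        raise ValueError("frame too large")
    if len(buf) - 4 < length:
        return None                      # incomplete frame, wait for more
    return buf[4:4 + length], buf[4 + length:]
```

The point is simply that every field from the other side is validated before use; a single missing check in code like this is exactly the kind of compromise path the comment describes.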
Re: (Score:2)
AFAICS they simply assume that during the early parts of booting the Android kernel is secure; it loads the KVM low-visor into EL2, and after that it can't mess with it any more even if it gets compromised down the line.
Re: (Score:2)
AFAICS they simply assume that during the early parts of booting the Android kernel is secure; it loads the KVM low-visor into EL2, and after that it can't mess with it any more even if it gets compromised down the line.
That makes perfect sense and is a lot simpler than I thought. Thanks!
And, yes, the assumption is definitely that the system is secure until the "low-visor" (I like that term) has been loaded into EL2. If the attacker can compromise anything before that, the game is over. Which does make the VMs potentially less secure than TrustZone apps, because the TCB is much larger. And I expect the really critical security stuff will actually stay in TZ, where it will benefit from moving all the rest out of TZ.
Re: (Score:2)
QubesOS is a good implementation on how it should be done on the desktop.
I'd like to see iOS do this as well. What would be really nice is a total separation of profiles, where I can have one profile for work stuff, one for general social media, one for private items (banking), and so on. This way, each VM would not just be separate, but could be encrypted separately, so if someone is on their home profile, and their phone gets stolen, it won't affect their work or other profiles, as those would be using
Probably for the DRM (Score:3)
The summary mentions DRM as just another feature, but I'd guess it's the motivating one.
Rooting, too (Score:2)
Wouldn't be surprised if AVF neutered root access as well. To be fair, this probably *is* more secure - I appreciate the ability for Aegis to extract the private keys from Authy to facilitate migrating, but readily admit that it's pretty horrifying in an adjacent context. Running each app in its own isolated VM would likely prevent the nefarious version of this, but also the legitimate one.
To your point, I'm sure Netflix and Disney and HBO are thrilled to have a more effective defense against the Widevine f
Re: (Score:3)
Curious... how does one accept that Google encourages and supports syncing those private keys (the TOTP keys, as well as passkeys), a process that involves extracting them, uploading them to their servers, storing them there long term, and sharing them with other devices (hopefully your own), while accepting and defending the practice of not allowing the end user to extract those same private keys for their own personal backup, or so they can manually add them to other devices to which they have physical access?
Oh, I don't...it's why I run /e/OS, and the reason I was moving from Authy to Aegis was because Authy started getting touchy about running on a rooted phone after their hack...I'm definitely a fan of keeping my keys where only I can access them for the very reasons you specify.
The best answer I can give to your question, though, is that the fundamental tenet behind being all-in on the Google ecosystem is the idea that Google does better data and device administration than the user...and for many, many peopl
Re: (Score:2)
But Aegis can only do it with root. Otherwise the existing jails are sufficient to prevent such things.
Re:Rooting, too (Score:5, Informative)
Wouldn't be surprised if AVF neutered root access as well.
I can't think of any way in which the AVF move will have an effect on rooting (and I'm the TL of the Android HW-backed security team, so my knowledge is pretty deep here). The AVF move is more about enabling compartmentalization of stuff that is currently all lumped together in TrustZone, as well as potentially enabling new security features which are currently too big/complex to be implemented in TrustZone.
For one example, I'd like to see the user authentication screen moved out of Android and into something like TrustZone. This would prevent remote software-based attacks that attempt to capture the user's password for later replay (e.g. to unlock KeyMint keys used to authenticate to bank accounts or whatever). But doing that requires putting a whole UI into the secure environment, which would have an enormous attack surface. Smaller than the attack surface provided by Android, of course, but much larger than what lives in TrustZone now. This is particularly true since the auth screen would need to implement accessibility features, to make it usable by people with disabilities.
It's not that such a beast can't run in TrustZone, it's that putting it in TrustZone poses an unacceptable risk to the other components in TrustZone. But if we have secure VMs, we can put it in one of those.
That's just one example, there are others. Perhaps someday app developers will even be able to spin up their own VMs for security-sensitive code. In the short term I'm arguing against that, but it doesn't seem unreasonable once we have a solid handle on how to make it available safely and efficiently.
It's also worth noting that a VM-based strategy is helpful on platforms that don't have TrustZone. VMs are how non-ARM platforms implement the required Android security components now, but there's a lot of variability in the details, and moving the entire ecosystem to a standardized architecture has many advantages.
Re: (Score:3)
I'm the TL of the Android HW-backed security team
Since you would be pretty knowledgeable on the topic, could this be used to still allow rooting, while offering a secure environment for the apps that really need it? And finally end the cat and mouse game with root by making both sides happy? I ask this, because while I certainly understand concerns about malicious access to banking or payment apps, there are still some uses for root where there isn't an alternative. For example, the app Network Signal Guru needs root to access the modem diagnostics interf
Re: (Score:3)
I'm the TL of the Android HW-backed security team
Since you would be pretty knowledgeable on the topic, could this be used to still allow rooting, while offering a secure environment for the apps that really need it?
Maybe in the long run?
It could eventually be possible for apps to create their own VMs that rely only on known-trustworthy code, enabling the wider Android system to be less trusted. But that would require a lot of work, and I don't think enabling rooting is a use case that would motivate the investment. It's not that the Android team opposes rooting, we don't. But there's just not much motivation to expend a lot of effort to support it, since it's of interest to a very small group of users, relative to
Re: (Score:2)
Passkeys are probably the biggest reason. They'd like better DRM too, but with all the older devices and PCs they have to support for the next decade or so, it will take a long time to become relevant.
Re: (Score:2)
When security weaponizes the device against the owner, it isn't a security system but malware, and that is exactly what this will be.
Re:Probably for the DRM (Score:5, Interesting)
When security weaponizes the device against the owner, it isn't a security system, but is in fact malware and that is exactly what this will be.
In what way do you think this weaponizes security against the owner? Note that if you have questions, I'm the TL of the relevant Android team and happy to answer them. This work is all being done directly in the Android Open Source Project, so there aren't any secrets. It's all there in the publicly-accessible code.
Re: (Score:2, Insightful)
DRM limits ownership rights by restricting backups, transfers, and uses of digital content. Owners can't freely use what they've bought if it conflicts with the DRM's rules.
It controls hardware and software choices, often forcing owners to use compatible, costly products, effectively locking them into a specific ecosystem.
DRM also impacts repairability, a
Re: (Score:2)
In what way do you think this weaponizes security against the owner? Note that if you have questions, I'm the TL of the relevant Android team and happy to answer them. This work is all being done directly in the Android Open Source Project, so there aren't any secrets. It's all there in the publicly-accessible code.
So why can't we have things using our own signing keys for bootloaders? Sure, give us a notification that it's not stock, that's totally fine. And don't tell me that you don't have leverage over vendors on this because I already know otherwise, mandating support for GSI proved that.
I get that you want to take away a lot of nice stuff like payment card support when we're root, so why can't we have it isolated for stuff we do want? This isn't a piracy thing either. For example, the main reason I use root is s
Re: (Score:2)
So why can't we have things using our own signing keys for bootloaders?
That would be cool, and it seems like it should even be technically possible, since ChromeOS has done it forever. We'd have to do something like the CrOS "dev mode" which disables access to secret keys in the firmware (e.g. for DRM, etc.), but it is possible. If we did it, though, it would have to be done in such a way that we can guarantee user mods can't cause device attestation to lie about the state of the device.
As for why not... because it would be a huge amount of work to do it even on Google's ow
Re: (Score:2)
If we did it, though, it would have to be done in such a way that we can guarantee user mods can't cause device attestation to lie about the state of the device.
Doesn't the bootloader already do this? Sure, the kernel could drop whatever into the PCRs (or whatever you guys use, I've only done development against the only hardware attestation mechanism that has an open specification, namely TPM) after that point, but other than that you can at least attest the base state. With TPM at least, the kernel can't alter the PCRs that hash the firmware.
Though what would be nice is if you guys could have a certification program for third party AOSP variants like GrapheneO
Re: (Score:2)
If we did it, though, it would have to be done in such a way that we can guarantee user mods can't cause device attestation to lie about the state of the device.
Doesn't the bootloader already do this? Sure, the kernel could drop whatever into the PCRs (or whatever you guys use, I've only done development against the only hardware attestation mechanism that has an open specification, namely TPM) after that point, but other than that you can at least attest the base state. With TPM at least, the kernel can't alter the PCRs that hash the firmware.
Though what would be nice is if you guys could have a certification program for third party AOSP variants like GrapheneOS that doesn't prevent stuff like Netflix from working. (The way GrapheneOS is designed doesn't even make it practical to break Netflix DRM.) Also PCI transactions.
At present, the Android bootloader is the root of trust for device lock state and system partition integrity. It is literally the software component that verifies all of that. We don't use TPMs, but if we did, the Android bootloader would be the component that fed the device state and system hash into the PCR. Assuming no change in the architecture, if you can write your own bootloader you can make the device lie about any of this.
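The extend-only behavior being relied on here is the key property: a measurement register can only be folded forward, never rewritten, so code that runs later cannot erase what the bootloader recorded. A Python sketch of TPM-style PCR extension (the stage names are hypothetical; per the TPM 2.0 spec, extend is defined as hashing the old value concatenated with the new digest):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new = SHA-256(old || SHA-256(measurement)).

    The register can only be folded forward, never set directly, so
    later code (e.g. the kernel) cannot erase or reorder what the
    bootloader already recorded.
    """
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = bytes(32)                          # registers start at all zeros
for stage in (b"bootloader", b"device-state:locked", b"system-hash"):
    pcr = extend(pcr, stage)
```

Because the final value depends on every measurement and their order, a verifier comparing it against a known-good value detects any tampering with the measured boot chain.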
TCG DICE, BTW, provides a solution, because the DICE attestation describ
Re: (Score:2)
which means it would be possible for us to allow user-signed bootloaders
Just to clarify, I mean loading our own public keys that we can ask the bootloader to trust, not signing our own firmware.
TCG DICE, BTW, provides a solution, because the DICE attestation describes the entire software stack and is rooted in the CPU's boot ROM.
Interesting, I hadn't heard of that. My interest here is mainly for device integrity for protecting systems at the company I (do security) work for. We're the kind of company that has no problem rolling our own stuff vs vendor solutions if they're inadequate, so we've rolled our own attestation framework.
As a consumer though...meh. I really hate giving such tight controls over device sta
Re: (Score:2)
which means it would be possible for us to allow user-signed bootloaders
Just to clarify, I mean loading our own public keys that we can ask the bootloader to trust, not signing our own firmware.
Ah! Pixel has always supported that. If you want to sign your own system images, buy a Pixel and enjoy.
Android verified boot has four states, based on device configuration and the result of signature verification:
Green: The system was verified by the OEM keys.
Yellow: The system was verified by user-installed keys.
Orange: The system was not verified because the bootloader is unlocked.
Red: Verification failed. The device refuses to boot in this state.
All OEMs are required to support red and green
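A simplified sketch of that state logic (real implementations also factor in rollback protection and per-partition verification results, which are omitted here):

```python
from enum import Enum

class BootState(Enum):
    GREEN = "verified with OEM keys"
    YELLOW = "verified with user-installed keys"
    ORANGE = "unverified: bootloader unlocked"
    RED = "verification failed: refuse to boot"

def boot_state(locked: bool, verified: bool, oem_key: bool) -> BootState:
    """Map device configuration to the four verified-boot states above."""
    if not locked:
        return BootState.ORANGE          # unlocked bootloader skips enforcement
    if not verified:
        return BootState.RED             # locked but signature check failed
    return BootState.GREEN if oem_key else BootState.YELLOW
```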
Re: (Score:2)
I was actually surprised to find it's not standard, or not even an option in the hotspot settings... So please count my demand. ;-)
Re: (Score:2)
Couldn't this be abused to, effectively, bring closed-source components into Android as a separate OS, without violating its license?
First, Android is licensed under Apache 2.0, so including closed-source components is not a license violation. Linux is GPL2-licensed of course, but it's well-established that that only means you have to provide source to the kernel, not anything on top of, or underneath, the kernel. Even kernel modules can be closed-source. So, there's nothing license-related that blocks closed source. And there is no Android device, AFAIK, that ships without including closed source. I'm not sure there's any desktop Lin
Re: (Score:1)
Are you sure? All these sound like any 'centralization' will be easier and more circumvention-proof:
Re: (Score:2)
Are you sure?
The move to VMs won't make DRM stronger. It's already implemented in the TEE (Trusted Execution Environment), so breaking it requires finding and exploiting a vulnerability in that constrained and isolated environment. There is other TEE attack surface, but it's small; DRM is the biggest attack surface there.
The primary security benefit of the VM move is to protect the other security components in the TEE from DRM vulnerabilities. The DRM implementations have a long history of vulnerabilities, and expl
Re: (Score:1)
Okay, but what about the other two points? Aren't these related to AVF:
Still, it would be better (from the POV of some of us).
Well, as you say, only 'primary'. And previously one could've set a vulnerable DRM against another secure one.
Re: (Score:2)
Okay, but what about the other two points? Aren't these related to AVF:
Sorry, I'm not clear what points you're referring to?
Still, it would be better (from the POV of some of us).
What would be better? If you mean more attack surface, I guess that's true if you prioritize being able to defeat DRM over being able to keep your own data safe.
Well, as you say, only 'primary'.
I'm not a lawyer, and I'm not trying to hide things behind qualifiers like "primary". I said primary because there are many reasons. DRM hardening is not among them.
And previously one could've set a vulnerable DRM against another secure one.
I'm not sure what you mean here? Set a vulnerable DRM against a secure what?
This is also a bigger deal than it sounds, because earlier, inconvenience (to the user and/or coder) would have stayed an abusive authority's hand.
Interesting... what sort of abusive authority are you
Re:Probably for the DRM (Score:4, Informative)
The summary mentions DRM as just another feature, but I'd guess it's the motivating one.
It's relevant, but probably not in the way you think. The Android security team doesn't care about DRM. However, existing DRM strategies have in the past harmed platform security, and one of the motivations for using VMs is to compartmentalize DRM to eliminate the risk.
Currently, Android video DRM is implemented in ARM TrustZone on ARM devices (and all Android devices are ARM-based, to a first approximation). Several critical security features are also in TrustZone, notably the cryptographic services component, KeyMint, and the user authentication component, GateKeeper, though there are other important bits.
Moving DRM out of TrustZone won't make DRM stronger (or weaker), but it will remove a large attack surface with a long history of vulnerabilities from TrustZone, protecting the security components from risks created by co-location with DRM. In an ideal world the TrustZone OSes would be sufficiently secure that they could protect trusted apps from one another, but there are many TZ OSes in the Android ecosystem, and many of them aren't very good. IMO, none of them provide separation as strong as what VMs will provide.
In the long run, what is perhaps the ideal from a security perspective is to move all of these components into separate VMs, so a vulnerability in one of them does not affect the security of the others. In addition to compartmentalization, VMs are a little easier to standardize and update, facilitating validation and patching. Of course, security concerns have to be balanced against performance concerns, and each VM consumes non-trivial RAM, so there will still be some grouping of functions into VMs. DRM will not be co-located with, e.g. KeyMint, however.
Android is also halfway through migrating to a remote authentication strategy based on TCG's DICE [trustedcom...ggroup.org] architecture which will make it possible to remotely verify whether a device is up to date, in a way that attackers cannot spoof unless the chip's boot ROM contains an exploitable vulnerability, and the DICE authentication will include the contents of these security-focused VMs.
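The DICE idea can be sketched as a chain of key derivations: each boot stage measures the next one and folds that measurement into the secret it hands over, so the final value attests every stage back to the boot ROM. A toy Python version, where HMAC-SHA-256 stands in for the KDF DICE actually specifies and the stage names loosely follow AVF's (both are illustrative simplifications):

```python
import hashlib
import hmac

def next_cdi(cdi: bytes, stage_image: bytes) -> bytes:
    """Derive the next stage's Compound Device Identifier (CDI).

    Each stage measures the code it is about to launch and mixes that
    measurement into the secret it passes on, so a change anywhere in
    the boot chain changes every downstream identity.
    """
    measurement = hashlib.sha256(stage_image).digest()
    return hmac.new(cdi, measurement, hashlib.sha256).digest()

cdi = b"\x00" * 32                       # stand-in for the ROM-held device secret
for image in (b"pvmfw", b"microdroid-kernel", b"payload"):
    cdi = next_cdi(cdi, image)
```

This is why, as described above, spoofing the attestation requires compromising the boot ROM itself: every later stage only ever sees a secret already bound to the code that came before it.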
Re: (Score:2)
Android is also halfway through migrating to a remote authentication strategy based on TCG's DICE [trustedcom...ggroup.org] architecture which will make it possible to remotely verify whether a device is up to date, in a way that attackers cannot spoof unless the chip's boot ROM contains an exploitable vulnerability, and the DICE authentication will include the contents of these security-focused VMs.
I appreciate you being in the thread...and I accept your answer to my earlier statement regarding rooting...but here, especially in the emphasized areas, is where I start to get a bit concerned. Remote Authentication has its use case; I certainly wouldn't want a server to be trusting a client's word on whether or not to access an e-mail account...but why should some other server be able to tell what version of the OS I'm running and then use it for...basically anything other than telling me that an upgrade
Re: (Score:3)
why should some other server be able to tell what version of the OS I'm running and then use it for...basically anything other than telling me that an upgrade is available?
If there is a policy associated with the data that the server holds such that it may not transmit data to an out-of-date device, then the server has an interest in knowing if the client is up to date on security patches. It should definitely be a distinct permission you can decline to grant to the client application/website, but then the service might not function.
An example might be a device used by physicians to access patient data. The hospital IT policy might be that all devices that access the phy
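The kind of server-side freshness policy being described could be as simple as the following sketch (the 90-day window and names are invented for illustration; a real deployment would take the patch level from a verified attestation, never from the client's own claim):

```python
from datetime import date

MAX_PATCH_AGE_DAYS = 90                  # hypothetical IT policy window

def device_allowed(attested_patch_level: date, today: date) -> bool:
    """Server-side check: only serve devices patched within the window.

    attested_patch_level is assumed to come from a cryptographically
    verified attestation of the device's security patch level.
    """
    return (today - attested_patch_level).days <= MAX_PATCH_AGE_DAYS
```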
Sure, we believe (Score:1)
Re: (Score:3)
Time to leave Android. It did well for the first 5 years or so. Now user control is taken away bit by bit.
How does AVF take away the user's control?
Re: (Score:2)
creates isolated environments
How is that different from what TrustZone does now?
Re: (Score:2)
Time to leave Android.
And use what instead? Apple's offering is even worse in that sense, and the freely available ROMs are always hopelessly incomplete and lagging far behind, assuming that you can even install one in your device of choice.
Protected mode? (Score:2)
When everything runs in a VM, could we just get rid of protected mode? It seems unnecessary when there is only one program running. We are going back to the DOS days.
Android Virtualization Framework (AVF) (Score:2)
I said as much years ago!