AI Agents 'Perilous' for Secure Apps Such as Signal, Whittaker Says
Signal Foundation president Meredith Whittaker warned that AI agents that autonomously carry out tasks pose a threat to encrypted messaging apps [non-paywalled source] because they require broad access to data stored across a device and can be hijacked if given root permissions.
Speaking at Davos on Tuesday, Whittaker said the deeper integration of AI agents into devices is "pretty perilous" for services like Signal. For an AI agent to act effectively on behalf of a user, it would need unilateral access to apps storing sensitive information such as credit card data and contacts, Whittaker said. The data that the agent stores in its context window is at greater risk of being compromised.
Whittaker called this "breaking the blood-brain barrier between the application and the operating system." "Our encryption no longer matters if all you have to do is hijack this context window," she said.
Containers (Score:4, Interesting)
I'm increasingly convinced that if you're running an AI interaction at all it needs to live in a container. Somehow the sci-fi wisdom of "no seriously, don't give an AI access to the internet" flew right out the window when AI could tell us when our boss' emails actually had something in them worth reading. I get that, but ESPECIALLY for software developers, if you're going to make use of agentic AI systems, you need to have a metaphorical (if not literal) moat around the agent before you just turn it loose.
That was true before we started talking about the security implications of an AI with privileged access coming under attack.
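The "moat" idea above can be sketched concretely. A minimal sketch, assuming Docker is available; the image name, flags, and wrapper function are illustrative assumptions, not a hardened design:

```python
def sandboxed_argv(command, image="agent-sandbox", workdir="/work"):
    """Build a `docker run` argv that would execute an agent's proposed
    command inside a throwaway container: no network, read-only root
    filesystem, and a dedicated working directory instead of the host
    home directory. Constructing the argv is separated from executing
    it, so a human (or policy layer) can inspect it first."""
    return [
        "docker", "run", "--rm",
        "--network", "none",   # the sci-fi rule: no internet for the agent
        "--read-only",         # container cannot modify its own image
        "--workdir", workdir,
        image,
        "sh", "-c", command,
    ]

# The agent's proposed command only ever reaches the host through this wrapper:
argv = sandboxed_argv("gcc -o program program.c")
```

Handing `argv` to `subprocess.run` (rather than a shell string) keeps the agent's text from being interpreted by the host shell at all.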
Re: Containers (Score:5, Informative)
Re: (Score:2)
Re: (Score:2)
The issue is that to serve any grand purpose, these "agents" require access to both data and a means to act. And you can bet that if you put actual safeguards outside the control of these "agents", say, a confirmation box listing a sequence of actions, most people will blindly click "OK", while most of the rest will look for a way to disable the confirmation.
It's not a new problem either. The attack surface provided by the human interface was always a good one; keylogger, screen c
Re: (Score:2)
While I agree on the isolation need, the problem is that AI Agents only make sense if they can actually do things. And then the isolation will probably not help.
Re: (Score:2)
Re: (Score:2)
Indeed. I mean, that is why the idea gets pushed. If people understood the risks, the whole thing would be DOA.
Re: (Score:2)
Regulation will only get so far. Imagine if a person needed a license to operate an AI Agent. It'd be almost comical.
Re: (Score:2)
Indeed. The second problem is that while utterly dumb, AI has some pseudo-intelligence and randomness-faked creativity. That means effective supervision requires significantly better skills; simple pattern-based "guardrails" will not do. And that means the critically needed supervision cannot be automated. When the "best" systems are used as agents, there are no sufficiently better systems left to make sure they do not, say, give away all your money, install malware, and set your house on fire.
Re: (Score:2)
Every sane person does this. Either the agent asks you before each command ("run gcc -o program program.c") or the shell access lives in a container. Anyone who just gives it unlimited access to their home system is volunteering to do something dumb.
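The ask-first pattern described here can be sketched in a few lines. This is a minimal illustration, not a real policy engine; the allowlist contents and prompt wording are assumptions for the example:

```python
def gated_run(command, allowlist=("gcc", "ls", "cat"), confirm=input):
    """Require human sign-off on any command an agent proposes.
    Commands whose program is not on the allowlist are refused outright;
    everything else still needs an explicit 'y' from a human before it
    would be handed to an executor."""
    program = command.split()[0]
    if program not in allowlist:
        return False  # refused without even asking
    answer = confirm(f"Agent wants to run: {command!r} [y/N] ")
    return answer.strip().lower() == "y"

# With a canned "y" standing in for the human at the keyboard:
approved = gated_run("gcc -o program program.c", confirm=lambda _: "y")
denied = gated_run("rm -rf /", confirm=lambda _: "y")
```

Note that `denied` is refused by the allowlist before the confirmation step is ever reached, which is the point: the safeguard must not depend on the user reading the prompt.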
Root? What century did they crawl out of? (Score:2)
A modern technology ecosystem doesn't really even have a notion of root. Every process has ACLs that give them access to certain things within their sandbox. An AI agent running on a modern device should not have any access to anything from Signal unless Signal deliberately exposes it to a system-wide search system like Spotlight.
The only rational solution is for apps like Signal to integrate local on-device AI with limited capabilities to assist with searching messages, provide a means for the user to e
Re: (Score:2)
If you let an AI Agent get elevated privileges, well... you get everything you deserve.
Re: (Score:2)
Problem is that, even today, there are huge numbers of Windows-based apps that require elevated privileges to operate. They shouldn't. But the app developers (who are rapidly being replaced by even less competent vibe coders) don't know any better - or don't care.
That's why I specified "modern". Windows is architecturally a crufty fossil. I meant iOS, macOS, Linux, Android, and maybe some of the *BSDs. Those are the only OS platforms anyone should take seriously for security. With Windows, sandboxed security means one app per virtual machine, which I guess kind of works, but blech.
Duh (Score:2)
Does this even need to be stated? Well, apparently it does, because most people have no clue what a massively bad idea AI agents are.