Signal president warns that hyped agentic AI bots threaten user privacy

- Meredith Whittaker, the president of Signal, said agentic AI poses serious security risks to users.
- Agentic AI refers to bots that can reason and perform tasks for humans without their input.
- But having a bot complete tasks for users means giving it access to reams of data, Whittaker said.
Signal President Meredith Whittaker is skeptical about agentic AI: AI agents that can complete tasks or make decisions without human input.
Some tech titans have touted how helpful agentic AI can be and have launched AI agents for users to try. But speaking at the SXSW 2025 Conference and Festivals in Austin on Friday, Whittaker warned of the privacy risks posed by these autonomous agents.
"I think there's a real danger that we're facing," Whittaker said, "in part because what we're doing is giving so much control to these systems that are going to need access to data."
Whittaker is the president of the nonprofit Signal Technology Foundation, which runs the end-to-end encrypted Signal app known for its digital security.
An AI agent is marketed as a "magic genie bot" that can think multiple steps ahead and complete tasks for users so that "your brain can sit in a jar, and you're not doing any of that yourself," Whittaker said.
As an example, she said agentic AI could accomplish tasks like finding a concert, booking tickets, and opening an app like Signal to message friends with concert ticket details. But at every step in that process, the AI agent would access data that the user may want to keep private, she said.
"It would need access to our browser, an ability to drive that. It would need our credit card information to pay for the tickets. It would need access to our calendar, everything we're doing, everyone we're meeting. It would need access to Signal to open and send that message to our friends," she said. "It would need to be able to drive that across our entire system with something that looks like root permission, accessing every single one of those databases, probably in the clear because there's no model to do that encrypted."
Whittaker added that an AI agent powerful enough to do that would "almost certainly" process data off-device by sending it to a cloud server and back.
"So there's a profound issue with security and privacy that is haunting this sort of hype around agents, and that is ultimately threatening to break the blood-brain barrier between the application layer and the OS layer by conjoining all of these separate services, muddying their data, and doing things like undermining the privacy of your Signal messages," she said.
Whittaker isn't the only one worried about the risks posed by agentic AI.
Yoshua Bengio, the Canadian research scientist regarded as one of the godfathers of AI, issued a similar warning while speaking to Business Insider at the World Economic Forum in Davos in January.
"All of the catastrophic scenarios with AGI or superintelligence happen if we have agents," Bengio said, referring to artificial general intelligence, the threshold at which machines can reason as well as humans can.
"We could advance our science of safe and capable AI, but we need to acknowledge the risks, understand scientifically where it's coming from, and then do the technological investment to make it happen before it's too late, and we build things that can destroy us," Bengio said.