What Does Agentic AI Mean for Interoperability, User Freedom, and Privacy?

AEIdeas

August 21, 2025

Agentic AI, or automated systems that are capable of completing tasks and making decisions without human intervention, requires interoperability to remain innovative and competitive. But what does this degree of data access mean for user privacy? And how can this technology provide us with greater agency over our lives?

In this episode of Explain to Shane, Shane Tews is joined by Matt Boulos, head of policy and safety at Imbue. Years of advocacy for user freedom and his extensive knowledge of this next generation of AI agents make for an engaging and informative conversation.

Below is a lightly edited and abridged transcript of our discussion. You can listen to this and other episodes of Explain to Shane on AEI.org and subscribe via your preferred listening platform. If you enjoyed this episode, leave us a review and tell your friends and colleagues to tune in.

Shane Tews: What’s driving the surge in interest in interoperability? And what do we need to do to get access to the data to allow that interoperability to happen?

Matt Boulos: I think the most important thing about interoperability is that it’s been a concern for a really long time. This is something people were talking about long before AI, and all of a sudden it has become active and interesting again. At a basic level, we have all this data about us, and all these digital experiences that are inferior to what they could possibly be. When people have tried to build something better, they’ve hit a wall because they can’t get the necessary data. A market for software built on interoperability just never emerged, because the data wasn’t available. But then what AI has come along and done is lower a lot of barriers. One important barrier was simply how to make sense of reams of data, personal data especially. If you recall, after Cambridge Analytica, the major platforms set up these portals where you could download your data. And if you did that, it was all useless; you’d open it up and have no idea what you were looking at. Meanwhile, an intelligent model can, without much effort, tell you exactly what’s going on in there and put it to use.
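
To make the contrast concrete, here is a minimal sketch of what Boulos describes: handing an otherwise inscrutable data export to a language model and asking it to explain what is there. It assumes the OpenAI Python SDK; the export path, file contents, and model name are illustrative placeholders, not details from the conversation.

```python
# Minimal sketch: summarizing an opaque platform data export with a language model.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set in the
# environment. The file path and model name are hypothetical examples.
import json
from openai import OpenAI

client = OpenAI()

# A platform's "download your data" portal typically yields nested JSON like this.
with open("data_export/ads_interests.json", encoding="utf-8") as f:
    export = json.load(f)

# Ask the model to explain, in plain language, what the export actually contains.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": "Explain in plain English what this personal-data export reveals about the user.",
        },
        # Truncate so a very large export stays within the model's context window.
        {"role": "user", "content": json.dumps(export)[:20000]},
    ],
)
print(response.choices[0].message.content)
```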

What if I were to create a “Shanebot”? Will there need to be lots of agreements behind the scenes with companies for me to allow the Shanebot to be smart and interact across different platforms?

So today, if you individually wanted to build a Shanebot, you actually probably couldn’t get around the terms of service. What you’re seeing instead is the very large platforms negotiating these deals. What’s kind of rich, but telling, is looking at what Salesforce has done in the last few months. There’s a company called Glean, probably too big to call a startup at this point, that does enterprise search. The idea is that if you run a company, you want your employees to be able to find information scattered across however many systems you use. You search in one spot, and Glean searches across all the platforms that contain your information. And Salesforce is saying you can’t do that. Its terms of service say you can’t just go ahead and try to index what’s living in Salesforce. The kicker, of course, is that Salesforce has negotiated interoperability deals with other platforms for its own products, so the Salesforce platform can connect to those other products. It’s a pretty clear example of how, if you have the market position and power of Salesforce, you can block out the challengers while negotiating commercial deals that give you access.

Let’s talk about the use case of a “Mattbot.” You wouldn’t be able to have access to the Epic healthcare system at this stage. Do you think there will be a market for consumer-level entry into that, so it becomes a possibility? How do we get around that barrier when it’s information that’s important to you and me?

No, I think this is exactly the exercise we need to be doing. And it’s not just a theoretical exercise; it’s the thing that startups need to start trying to build. What makes what you’re saying so exciting is that there’s a version of the world where the platforms managing this information have a huge custodial responsibility. They have to hold to their obligations, but they can also create effective interfaces so that a third party can reach, say, my calendar through an authentication layer: I’ve authenticated as me, and the agent goes and checks my appointment on my behalf.

That is, frankly, the holy grail. When you look at what OpenAI and Anthropic, and to some extent Google, are building toward, they want you to be able to say: figure out how I’m going to get to my appointment. And in that case, those sorts of available interfaces will be incredibly useful. Now, whether these companies will be driven to build them absent some sort of regulation, that’s the part that’s really unclear to me. Or, to put it less politely: I don’t think they’re going to do it, because they don’t have a real incentive at the moment.
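
To illustrate the pattern Boulos is pointing at, here is a minimal sketch of delegated, scoped access: the user authenticates once and consents, and the agent then acts with a narrow, revocable token rather than the user’s full credentials. Every URL, scope, and field name below is a hypothetical placeholder, not any real platform’s API.

```python
# Minimal sketch of delegated agent access via an OAuth-style flow.
# All endpoints, scopes, and field names are hypothetical placeholders.
import requests

AUTH_SERVER = "https://auth.example-calendar.com"
API_SERVER = "https://api.example-calendar.com"

def get_delegated_token(client_id: str, client_secret: str, user_consent_code: str) -> str:
    """Exchange the user's one-time consent code for a narrowly scoped token."""
    resp = requests.post(f"{AUTH_SERVER}/oauth/token", data={
        "grant_type": "authorization_code",
        "code": user_consent_code,      # obtained when the user approved the agent
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "calendar.read",       # read-only: the agent can look, not rewrite
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

def next_appointment(token: str) -> dict:
    """The agent checks the user's next appointment, and nothing more."""
    resp = requests.get(
        f"{API_SERVER}/v1/events/next",
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    return resp.json()
```

The point of the narrow scope is that the custodian stays in control: the agent can read the next appointment but cannot rewrite the calendar, and the token can be revoked without touching the user’s password.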

You’re talking about WeChat, which has this amazing “your world in one app” experience. You can do most things there, but that’s because it’s centralized and you have to be willing to give all of your information to the Chinese government. Is there a way to do this in a disaggregated model where you have the capability of saying yes or no to sharing information?

You’re raising a great point, which is that every time you centralize, there are huge benefits, but they come at the cost of being surveilled. And it’s a real cost. But let’s walk through the most likely scenario for AI agents with interoperability but without centralization, which is that these things are going to be really useful.

What we have learned is that when things are useful, we give them information, we talk to them, we share our lives with them. I care deeply about privacy. I don’t live in a cave; I have a phone, and I know that for the most part, most of what I’m doing online is being surveilled. I keep that in mind, but I go on about my day. An agent, for instance, will need to know more personal information if you want it to help with more personal things. It’s going to collect that information, it’s going to become very particular to you, and you’re going to share more as it knows more. So it’s very likely we’re barreling toward that WeChat-ish world anyway.

AI has been a catalyst for privacy awareness like I have not seen before. We try to explain why people should care about and protect their information. What are your thoughts?

I’m optimistic. I’ve long felt that the popular discourse around privacy has just been silly. Right? Think of the words people used when they started learning about how much information Facebook was keeping. They’d say that’s “sketchy,” it’s “creepy.” That’s not how you decide what’s appropriate. Sketchy and creepy is when your neighbor digs through your garbage bin; it’s not the right frame for constant surveillance.

Digital copies of us exist in the world, and those digital copies determine what’s possible for our digital selves, so the difference between our digital and physical selves becomes blurred. Something got triggered by AI, which is that it feels amorphous and scary in a way that humans don’t, and I think that’s operating on a purely emotional level. AI is speeding along the shift in public perception, but I also think it’s being driven by reality. You could ignore privacy concerns, you could ignore data protection concerns, when capabilities were what they were and nothing super horrible had happened; you could kind of muddle along. But horrible things can happen now.