👀 Who’s Reading Your Emails?
I was on a call with a client a few weeks ago.
They run a media company, and you'd know the name.
We are building out their AI system - connecting data sources, building skills, getting the team set up to actually use this stuff at scale.
The stuff I absolutely love.
We were just about to get to the spot where we connect services like calendar and email.
And then my client stopped me…
Their partner had asked if it was safe to connect email to the system.
Out of everything we'd done so far…
- Chat history
- Drive files
- Finances
- All the years of history and deals…
…they were asking about email.
EMAIL!? I thought.
But it makes sense when you think about it.
Their email is the business's biggest historical asset.
Every deal, relationship, decision. It's all in there.
That's not a small concern.
That's a real business question. And it deserves a real answer.
So I gave them one. And now I'm giving it to you.
As AI systems become more integrated into our inboxes, drives, CRMs, and calendars, every business owner needs to know what this means for trust and privacy.
And most people are going to get the wrong answer from someone who doesn't actually understand how this works.
Let me break it down for you…
📩 Handing Over Your Inbox to AI
The short version: connecting your data is not the same as training a model. Not even close.
Key Facts
- 🔌 Access is not absorption - a connector makes your data available on demand. It doesn't download a copy of everything into some unknown database.
- 🚫 API usage doesn't train the model - both OpenAI and Anthropic explicitly state that data processed through their commercial APIs and business products is not used to train their models by default.
- ⚠️ The real risk is agency, not leakage - the actual exposure is what you give the AI permission to do with the access, not the connection itself.
🤔 The Misconception
When most people hear "we're connecting our email to AI," their brain goes to one place:
"So the AI is learning everything about us now?"
It's a fair assumption.
But it's wrong.
True model training is a specific process.
One common form is reinforcement learning: you run the model, it produces an output, you score that output as good or bad, the model adjusts its weights, and you do it again.
Thousands of times. Millions of times.
That's how a model "learns" something permanently. It's like teaching a dog to sit. Once it's in there, it's in there.
That is not what happens when you connect a Gmail account to Claude or Manus or any other AI agent.
What actually happens is simpler.
You grant access. The AI now knows the data exists and can reach it.
But it doesn't go vacuum everything up the moment you flip the switch. It just sits there, available, until you ask it to do something that requires it.
Think of it like giving your EA access to your inbox.
They have the login and can see everything. But they're not reading every email constantly.
They only go in when you ask them to find something, draft a reply, or pull a thread.
That's the model.
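To make "access is not absorption" concrete: connecting an inbox is just an OAuth grant with specific scopes, and the scopes decide what the agent can ever do. The two Gmail scope URLs below are real, but the helper function is a hypothetical sketch, not part of any actual connector:

```python
# Illustrative sketch: a connector is an OAuth grant with specific scopes.
# The scope URLs are real Gmail API scopes; grant_allows_deletion() is a
# made-up helper to show the idea.
FULL_ACCESS = "https://mail.google.com/"                       # read, send, delete
READ_ONLY = "https://www.googleapis.com/auth/gmail.readonly"   # fetch and search only

def grant_allows_deletion(scopes):
    # Deleting mail requires the full-access scope; a read-only grant
    # can only fetch messages on demand - nothing is copied anywhere.
    return FULL_ACCESS in scopes
```

Grant the read-only scope and the question of the agent deleting or sending mail is settled before it ever runs.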
And here's the policy to back it up - straight from the source.
"By default, we do not train on any inputs or outputs from our products for business users, including ChatGPT Team, ChatGPT Enterprise, and the API."
- OpenAI
"By default, we will not use your inputs or outputs from our commercial products (e.g. Claude for Work, Anthropic API, etc.) to train our models."
- Anthropic
So the fear that your private emails are quietly feeding the global brain of ChatGPT?
That's not what's happening.
Your data is yours.
⚠️ But Low Risk Is Not No Risk
Here's where I want to be straight with you, because I told my client the same thing.
There is a risk here. Just not the risk most people are worried about.
The REAL risk is AI agency.
When you connect your email and tell an AI agent to do something, it may decide the best path to your answer runs through your inbox.
It's not model training or a data leak…
But IT IS exposure.
And the best way to set it up safely is to give it rules.
Imagine you asked Claude to clean up your inbox.
A seemingly simple request that we'd naturally read as 'get rid of the junk and the stuff that doesn't matter'.
Depending on the strength of the model, without rules, it could decide that the best way to clean up is to delete everything.
That's clean, right?
When I set up any AI system for myself or my clients with access to sensitive data, I build explicit constraints into the skills and prompts:
- You can't delete anything.
- You can't modify anything.
- You only look for what's needed for this specific task.
- You don't take more than is required.
- …and so on
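A minimal sketch of what those constraints look like in practice: the agent never touches the inbox directly - every request goes through a gatekeeper that only permits read-style actions and returns no more than the task needs. All names and data here are illustrative, not a real API:

```python
# Toy inbox standing in for a real mail connection.
INBOX = [
    {"subject": "Invoice #42", "body": "Payment due Friday."},
    {"subject": "Lunch?", "body": "Tacos at noon?"},
]

# The guardrails: read-style actions only. No delete, no modify, no send.
ALLOWED_ACTIONS = {"search", "read"}

def email_tool(action, query="", max_results=10):
    """Gatekeeper between the agent and the inbox."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"'{action}' is blocked by the guardrails")
    # Only pull what this specific task needs - never the whole mailbox.
    hits = [m for m in INBOX if query.lower() in m["subject"].lower()]
    return hits[:max_results]
```

A "clean up my inbox" request that tries to call `email_tool("delete")` fails before anything happens - the rule lives in the system, not in the model's judgment.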
If you gave your VA access to your YouTube channel to review content and pull clips, technically, their access would allow them to delete videos.
But you trust them not to.
Adding AI rules is the same, once you get clear on what's off limits.
The guardrails are the job. Not the connection itself.
📬 The Real Question: Data Leaks
When the concern came up - "what if something gets leaked?" - it wasn't really about the AI.
It was about trust & privacy.
For my client, there are emails in that inbox from before they even had a real business. Honest conversations. Early mistakes.
The kind of stuff that exists in every long-term business relationship.
And the concern was: what if some of that ends up somewhere it shouldn't?
That's legitimate.
And the answer that I keep circling back to is: you build for it.
- Tell the AI to never pull emails from a specific sender.
- Scope it to only look at your outbound messages.
- Tell it to only surface information you've already approved and acted on.
- Make the rules as tight as you need them to be.
- Build a privacy-sharing skill that acts as a filter for data
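As a toy example of that last bullet, a privacy-sharing skill can be as simple as a filter that strips anything off-limits before the agent ever sees it. The addresses below are made up for illustration:

```python
# Hypothetical privacy filter applied before any message reaches the agent.
MY_ADDRESS = "me@example.com"
BLOCKED_SENDERS = {"oldpartner@example.com"}  # never surface these threads

def privacy_filter(messages):
    """Only surface outbound mail, and never threads with blocked contacts."""
    return [
        m for m in messages
        if m["from"] == MY_ADDRESS        # scope to outbound only
        and m["to"] not in BLOCKED_SENDERS
    ]
```

The rules can be as tight as you need - the point is that they run on every request, automatically.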
The system works for you. Not the other way around.
And here's the REALITY CHECK I shared at the end of that conversation:
Google's automated systems already scan every email in Gmail.
YEP. That's right.
Every email.
That's how spam filtering, search, and features like Smart Compose work - and until 2017, it's how Gmail personalized ads.
If you're using Gmail right now and you're worried about AI seeing your emails - that ship has sailed, my friend.
If true privacy is the goal, the answer is a private mail server - but email is already one of the least secure transport methods in your business.
By design, you're pushing and pulling plain-text messages through other people's servers, which store them indefinitely.
Don't overthink it.
And within the context of what most businesses are actually doing, connecting your email to an AI agent under your own rules is consistent with how you already operate in the cloud.
💡 What's Possible
The upside of setting up an AI agent in your inbox is massive.
I had another client who was preparing for a high-stakes investor meeting.
They needed to pull every relevant email thread from the past two years - deals that fell through, follow-ups that never got sent, context on relationships that had gone quiet.
Manually, that's a full day of digging. Maybe two.
With an AI agent connected to their inbox and given a clear task with tight rules - read only, no modifications, pull threads matching these criteria - they had a complete brief in under an hour.
That's the exchange.
And that's not a one-time trick.
Once the system is set up, it works every time you need it.
👉 Sit with these, especially if you run a business with years of history in your inbox:
What would it mean for your business if an AI could instantly surface every email related to a specific client, deal, or decision - without you having to search for it?
Where does your real business knowledge live right now? In people's heads? In inboxes? In drives nobody's organized?
If you gave an AI agent access to your data tomorrow, what rules would you need in place before you'd feel comfortable?
And the big one: are you waiting for "perfect safety" before you start... or are you managing real risk with real rules like you do in every other part of your business?
💪 You Can Do This.
This stuff isn't magic.
It takes time to set up right 👉 But I have something that can help.
You have to think through your data, your rules, your access levels, and what you actually want the system to do.
But it's not out of reach for anyone reading this.
I built a Workflow Mapper Skill 🔗 as part of my Creator AI Skill Stack specifically to make this easier.
It walks you through mapping your existing workflows, identifying where AI can connect, and building the rules before you ever flip a switch.
If you want to do this yourself, that's the place to start.
If you want it done for you - the full build, the connected data, the skills, the rules, the team rollout - that's exactly what a VIP Day with me looks like.
Two spots per month. That's it.
If that's you, apply below and I'll send you the details.
P.S. Right now I'm building a track into my VIP Days that will launch and train AN ENTIRE AGENTIC SERVICE in a single day, because more people are applying and asking for this than ever before. What used to take weeks is no longer about time - it's about your outcome.
Enjoy this edition?
Get CTRL+ALT+BUILD™ delivered to your inbox every week.