We Want AI… And We Don't
81,000+ people were interviewed about AI in the largest qualitative AI‑user study to date.
The part that got me?
It's not the number of people… (although that's pretty cool).
It's how they did it, and what those people asked for when nobody boxed them into multiple-choice answers.
Anthropic basically said: "Tell us, in your own words…" and then let an AI interviewer ask follow-up questions like a human researcher would.
That matters, because humans are messsssyyyyy.
We want contradictory things.
We want speed and certainty.
Support and independence.
We want AI to take the wheel.
…and we also want our hands near the steering wheel the entire time.
This study captures that tension better than almost anything I've seen.
And we're going to jump into this straight away, because it's so good.
🧠 What 81K People Told Anthropic They Want (and Fear) From AI
Anthropic ran AI-led interviews with Claude users worldwide.
The takeaway? People want help with life, but they're worried about the cost.
Key Facts
- 🌍 Global Reach - 159 countries represented
- 💬 Method - Claude asked open questions and follow-ups
- ⚖️ Main Pattern - The same feature that excites someone often unsettles that same person
🗣️ How the Study Worked (and Why You Should Care)
Instead of a normal survey, users went through three open-ended prompts, in order:
- How do you already use AI?
- What do you wish AI could make possible?
- What are you afraid AI could do?
Then the "Anthropic Interviewer" asked follow-ups to clarify meaning and pull out details, at a scale that would be impossible with a human-only research team.
❗ A quick reality check on limitations.
This isn't perfect data, and Anthropic is pretty open about that.
→ Sample bias. These are active Claude users. People who dislike AI or quit AI aren't showing up much here.
→ Positive framing bias. The interview starts with hopes, so the overall tone leans optimistic.
→ Regional skew. More responses came from English-speaking and wealthier regions than the world as a whole.
So no, this isn't "what all humans think."
But it is a very clear picture of what engaged AI users want next, and what's keeping them from fully trusting it.
If you build products, AI or otherwise, this is a quiet reminder that research is changing.
You can run real interviews, semi-structured and full of nuance, without needing a giant budget and without reducing your users to checkbox data.
Here's a simple thing you could borrow from this immediately. Call it the "dreams vs. fears" ladder.
Ask your customers:
- "What would 'amazing' look like?"
- "What would make you stop using this instantly?"
- "What feels uncomfortable about it, even if it helps?"

Those questions are where trust is won or lost.
Ok, that's your baseline, let's get into the good stuff.
✨ What People Want From AI
Here's the part I think every business leader should sit with: most people aren't asking for a cooler chatbot.
Anthropic grouped the "visions" into categories. The biggest ones looked like this:
- Professional excellence (~19%): "Take the boring stuff off my plate so I can do the work that actually matters."
- Personal change (~14%): emotional growth, mental health support, better habits, self-improvement
- Life organization (~13–14%): schedules, logistics, planning, household admin
- More free time (~11%): time back for family, hobbies, rest
- Income freedom (~10%): building a side business, new income, less financial pressure
- Big social change (~9%): healthcare, education, climate, poverty, government services
When you zoom out, the themes get even clearer.
🕜 About one-third of people want AI to help them make room for life. Time, money, mental bandwidth.
🧑💻 About one-quarter want help doing better work.
☀️ About one-fifth want help becoming a better version of themselves.
That's the story.
A lot of the AI conversation online is framed like: "How do we get 20% faster at writing emails?"
But people, in their own words, are basically saying:
"Help me stop feeling behind."
"Help me stop feeling stuck."
"Help me handle the load."
And if you run a company, that should hit close to home. Because those are also the feelings behind burnout, turnover, customer churn, and stalled growth.
I want you to think about something here. What would your customers do if you gave them back 5 hours a week?
Would they buy more?
Stay longer?
Refer friends?
Complain less?
Learn faster?
That's why "time freedom" isn't a soft benefit.
It's a competitive advantage.
And it's one worth protecting.
🌑 What People Fear From AI (The "Shades")
Now the other side.
The most common fears weren't sci-fi. They were painfully practical.
Here are the top fears, by share:
- 26–27% flagged unreliable answers
- 22% worried about pay and jobs
- 22% listed loss of control
- 16% pointed to skill atrophy (or “brain rot” as the kids say)
⚖️ The Same Feature Creates Hope and Fear
Anthropic frames this as a "light and shade" pattern. The very thing people love is also the thing they worry about.
Here are a few examples you've probably felt yourself:
- 🧠 Cognitive help gives you more output, but brings worry about losing skills.
- ❤️ Emotional support gives you comfort, but brings worry about dependence.
- 🔁 Automation gives you time back, but brings worry about losing control.
People who use AI more like an emotional companion are more worried about dependence than average.
People who use AI for learning are more likely to worry about their thinking getting "rewired."
That's a very human response. "This helps me… and that scares me."
If you're building products, this is the part that's easy to miss.
You can't optimize for "helpful" and ignore the emotional and practical cost of that helpfulness. Because if your product becomes essential, users start asking: "What happens if I can't do this without it?"
That question deserves a real answer from you, built into your product, before the user has to go searching for it.
🧩 What This Means for Your Business
Even if you're not "an AI company," this study is basically shouting one thing:
People don't want more features. They want more capacity.
So here are a few ways to apply this without rebuilding your company.
1) Trust Is a Product Feature Now
Since unreliability is the number one fear, your AI experience needs ways to reduce "guesswork."
Practical patterns that help:
- Confidence signals in plain language, like "high confidence" or "low confidence"
- Citations or sources when available
- Fast fact-check flows, such as "verify this" buttons or side-by-side references
- Clear audit history showing what changed, why it changed, and who approved it
2) Build Skill Support, Not Skill Replacement
People want help, but they don't want to get rusty.
A strong pattern here is scaffolding:
- AI suggests structure, options, and examples
- The user supplies the key inputs and final judgment
Think about where AI should suggest and where a human should decide.
If you can answer that clearly for each use case, your users will feel safer. And they'll stick around longer because of it.
3) Keep the User in Charge (and Make "No" Easy)
Loss of control is tied for the second-biggest fear.
So if your product makes recommendations, whether schedules, messages, or workflows, build "off ramps" into the experience:
- "Accept / edit / reject" choices that are simple and visible
- An easy way to roll back changes
- A clear way to pause automation without breaking everything
Example: I had too much of a subscription product piling up, so I signed in to cancel it. Instead of losing me, they let me skip the next shipment and switch to delivery every 1, 2, or 3 months instead of every month. That off-ramp saved my sale.
Here's a good litmus test 👉 If a user is stressed, can they still understand what the system is doing? If the answer is no, trust will break exactly when they need it most.
🌍 Zooming Out
The study notes higher optimism in places like India and parts of South America, while the U.S., Europe, and Japan showed more neutral sentiment overall.
That's a reminder for anyone selling globally. "Trust" isn't universal. It's shaped by jobs, institutions, and daily life.
If your product goes across borders, spend time understanding what "safe" means in each market. What feels helpful in one country might feel invasive in another.
Share one hope and one fear you have about AI in your work or life.
One sentence each is perfect.
It helps me shape what we build and what I share next week. Your voice is part of how this newsletter gets better. And I mean that.
P.S. VIP day spots are OPEN for April! I only take 2 max per month and am LOVING working with people at this level.
Enjoy this edition?
Get CTRL+ALT+BUILD™ delivered to your inbox every week.