🤫 Your AI Can't Talk About That Anymore

Using ChatGPT for emotional support? Still the most common use.

But using it for medical, legal, or financial advice?

That’s a no-go now.

At least, as of Oct 29, 2025.

OpenAI drew a clear line. ChatGPT is now an educational helper. It won’t stand in for your doctor, lawyer, or financial planner anymore.

Let’s look at what’s happening behind the scenes and what you can do to make the most of this change.

🤐 ChatGPT Takes a Step Back

OpenAI will block ChatGPT from offering personal medical, legal, or financial guidance, affecting solo freelancers, global teams, and everyone in between.

Key Facts

  • 📉 Missed Answers: ChatGPT answered only 31% of board-exam questions completely correctly in a recent test.

  • 🙊 Mandatory Refusals: ChatGPT must decline individualized advice on health, law, and money.

  • ⚖️ A Safety Wake-Up: A user landed in the ER after the model suggested a chemical substitute for table salt.

**I’ve used ChatGPT for all of these, and it’s no surprise.**

The best example I have of why this is still risky is a fun one.

I put my 2024 Tax Returns into an AI model and tore them apart looking for more ideas, deductions & tax savings.

Now, I already have an amazing CPA and bookkeeper, but like most things, it was an experiment to see what I could find.

And it found 3 deductions, which I took straight to my finance team.

The plot twist?

Two of the three, the model completely made up. There was no tax code to support its confident claims.

Embarrassing.

However, the third was something we definitely missed.

With 33% odds, it got me thinking.

Would you trust a doctor who gets two out of three answers wrong on the hard stuff? Would you hire a lawyer with that track record?

Of course not.

The numbers back up the concern. When researchers tested ChatGPT on medical licensing scenarios, only 31% of answers were completely correct.

Think about that.

This isn’t about AI being bad at its job.

It’s about understanding what job AI should actually do. Licensed professionals carry insurance, follow regulations, and face consequences when things go wrong.

AI doesn’t have a license to lose or malpractice coverage to claim. When it guesses wrong, real people pay the price.

A 60-year-old man reportedly got very sick after following a salt substitute suggestion from ChatGPT, requiring weeks of medical care. It is a good example of how something seemingly insignificant can turn dangerous quickly.

Is this change inconvenient? Absolutely.

But it’s also honest about current limits and real-world risk.

🤖 ChatGPT Capabilities

Here’s the new reality:

Medical conversations:

✅ It can explain how migraines work and what treatments exist
❌ It won’t tell you which medication to take or what dose to use

Legal discussions:

✅ It can teach you about contract structures and legal concepts
❌ It won’t draft your lawsuit or write terms for your specific dispute

Financial topics:

✅ It can explain compound interest and investment strategies
❌ It won’t pick stocks for you or plan your taxes

🚧 The Access Problems

Here’s where things get *complicated*.

Not everyone can afford a specialist.

People have been using ChatGPT at midnight when their kid has a fever and the clinic is closed. Small business owners have been drafting basic contracts because lawyer fees would eat their entire profit margin.

This update creates a real tension between safety and access.

**We’re protecting people from bad advice, but we’re also cutting off a resource** that many relied on when professional help was out of reach.

That’s not an easy trade-off, and pretending otherwise would be dishonest.

📈 What’s Rising to Fill the Gap

While general chatbots step back,Ā specialized toolsĀ are stepping forward. These aren’t your average AI assistants. They’re built with compliance baked in, audit trails included, and professional oversight required.

In law, tools like CounselPro support document uploads with forensic reporting and privacy controls that legal teams actually trust.

For finance, AlphaSense powers market intelligence while Kasisto brings bank-grade assistance with the compliance limits and audit capabilities that regulators demand.

The pattern is clear. AI works best when paired with human expertise, not when it tries to replace it.

💡 Making This Work for Your Business

Whether you use AI for personal or professional purposes, you need to adjust your approach. Here’s what I’m telling the leaders I work with:

→ Change the question. Stop asking AI to tell you what to do. Start asking it to explain, compare, summarize, and create checklists.

→ Build in checkpoints. Every output that touches sensitive areas needs human review.

→ Keep records of everything. Log your prompts, model versions, and information sources for the most important items.

→ Rewrite your prompts. Instead of “What should I take for my headache?” try “List questions I should ask my doctor about headache treatments.”

→ Design for the refusal. When AI says no, your system should gracefully hand off to a human. Offer to save the conversation for professional review. Provide a checklist of questions to bring to an expert.
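The logging and handoff steps above can be sketched in a few lines. This is a minimal illustration, not a production system: `ask_model` stands in for whatever API you use, and the refusal phrases are assumptions you would tune for your own model.

```python
import json
from datetime import datetime, timezone

# Phrases that often signal a refusal (assumption; tune for your model).
REFUSAL_MARKERS = ("i can't provide", "consult a licensed", "i'm not able to")

def log_exchange(prompt, response, model="example-model", path="ai_audit_log.jsonl"):
    """Append the prompt, response, and model version to a JSONL audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt": prompt,
        "response": response,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

def needs_human_handoff(response):
    """Return True when the reply looks like a refusal on a sensitive topic."""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def handle(prompt, response):
    """Log every exchange; route refusals to a human instead of dead-ending."""
    log_exchange(prompt, response)
    if needs_human_handoff(response):
        # Graceful handoff: the saved log becomes the brief for an expert.
        return {"status": "handoff", "next_step": "Review saved log with a professional."}
    return {"status": "ok", "answer": response}
```

The point of the sketch is the shape, not the specifics: every output gets logged with its model version, and a refusal triggers a defined next step rather than a dead end.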

💬 My Take

I know this frustrates power users who loved quick drafts for contracts or treatment questions. I get it.

I’ve actually built a micro-startup to handle one of these use cases as a specialized tool.

**Mine is for Legal & Document signing with an AI twist.**

Interested? I haven’t announced it yet, but you can give it a try here!

šŸ‘†šŸ»šŸ‘†šŸ»šŸ‘†šŸ»

āœļø Safe, Effective Prompts for Sensitive Work

Use these to help you get the most out of AI and a professional:

  • “Explain the concept of [X] in simple terms and list 5 questions I should ask a licensed professional about my situation.”

  • “Summarize common risk factors related to [topic], cite reputable sources, and note what information a professional would likely request.”

  • “Create a neutral checklist of documents a [lawyer/doctor/financial advisor] may review for cases like [general scenario], without giving any recommendations.”

And my favorite, for building more self-sufficient critical thinking:

  • “What’s one question I haven’t yet asked myself about this situation?”

These keep you on the right side of policy while saving time and improving the quality of expert conversations.

🎯 Spend a VIP day with me in Newport Beach!

I’m now offering limited VIP days. It’s something I’ve never done before, but it has been on my heart for years.

→ Spend the day with me in gorgeous Newport Beach, CA.
→ Full-day workshop with AI strategies for your business in a private workspace, guided by me.
→ Lunch & dinner provided.
→ Full overview, recordings, and breakdown delivered in 48hrs.

**This would be the single most focused, fast-tracked way to ramp AI into your business that I can offer.**

We can even build your Digital Clone together.

Sound great?

VIP DAY DETAILS NOW AVAILABLE

āœŒļø Your Move

The companies that win in this new environment won’t be the ones that try to work around these limits. They’ll be the ones who build better handoffs between AI and human experts. They’ll create workflows that use AI’s strengths while respecting its limitations.

We’re watching AI grow up in real time.

The wild west days of “ask anything, get an answer” are ending. What’s replacing them is more thoughtful, more responsible, and ultimately more useful.

If you’re building something interesting within these new boundaries, I want to hear about it.

P.S. Our community is growing and we’re welcoming new members weekly. Join us 🔗

Enjoy this edition?

Get CTRL+ALT+BUILD™ delivered to your inbox every week.