🧨 OpenAI Keeps Hitting Delete
OpenAI just shut down three product bets in a single week.
And while all of that was happening, they had just pushed forward on a Pentagon deal and told the market they want to pull their scattered apps into one "superapp."
From the outside, it looks messy. 😅
Up close, it looks like a company clearing risks fast before a likely Q4 2026 IPO. And doing it while spending money at a level that makes even seasoned tech nerds like me blink twice.
I spent some time pulling apart each of these moves…
Because there's something here that matters for anyone running a business, building products, or making decisions about the AI tools they depend on.
Today's edition is longer than most, but it's an important one, so grab a coffee and let's get into it…
✂️ OpenAI Cuts 3 Products in a Week
A look at why OpenAI cut three products, took a Pentagon contract, and moved its tools into one platform.
Key Facts
- 💸 Cash Tension - Annual revenue sits near $2 billion, yet burn could reach $25 billion by end of 2026
- 🛒 Rapid Exits - Instant Checkout, Sora, and an adult chatbot all disappeared in one week
- ⚠️ Risk Reality - Low sales, steep compute costs, and legal exposure tipped the balance
Let me start by saying I still use Chat nearly every day.
The cool kids are all moving completely to Claude, but there's something about ChatGPT that will be hard for me to fully leave. Its understanding of my life is second to none, and I have automations running on GPT models that work perfectly and don't need to be touched.
And, as of the 2025 ChatGPT Wrapped they gave me, I'm a top 10% user globally, and in the first 0.1% of all users.
Me and Chat go way back, lol.
But this is where it gets strange, and where it's important to look at OpenAI as more than a smart friend that “gets” you.
🛒 Instant Checkout Gets Cut
OpenAI tried to make ChatGPT a place where people buy things but customers said, "No thanks."
Walmart said checkout inside ChatGPT converted 3x worse than sending shoppers to Walmart.com.
Yes, worse inside the so-called "AI future."
Only about 12 Shopify merchants ever went live, despite millions being eligible.
And the feature missed basics like:
- multi-item carts
- promo codes
- clear shipping info
- and accurate product data
You know… really important stuff for ecom.
The idea is actually a good one when you think about it.
If AI becomes where people search, ask questions, and get recommendations, then buying inside the chat window sounds like the next step.
Right?
It makes sense on paper.
But Walmart's EVP of Product and Design, Daniel Danker, basically said the experience was "unsatisfying."
That's a polite way of saying people didn't trust it, didn't enjoy it, and didn't finish the purchase.
So where did things drop?
A 3x lower conversion rate inside ChatGPT is the kind of metric that ends internal debates quickly.
It's hard to defend a shopping feature that makes customers less likely to buy.
Merchant onboarding was described as complex and error-prone. And product scraping caused wrong pricing or availability at times.
That's a HUGE error.
If your AI tells me something is in stock for $49, and I find out it's actually $79 and backordered, you didn't just lose a sale.
You lost my trust.
What makes this extra interesting is that the same week OpenAI cut Checkout, Google expanded its own commerce protocol, signing 20+ partners, including Shopify and Walmart.
And data in the space suggests AI-powered commerce can convert better when it's done the right way.
My take?
Customers punish friction instantly, and AI products don't get a pass just because they're AI.
🎬 Sora Shuts Down
Sora looked like a flagship video product, but the usage and money never caught up to the cost of running it.
Usage was down 45% month-over-month in January 2026, after already sliding in December.
Lifetime consumer spending hit only $1.4M.
That's shockingly low for something this compute-heavy.
Sora didn't fail because people don't want AI video.
People clearly want AI video.
It failed because the economics never made sense at the scale OpenAI would need.
When you see a product hit a peak of 3.3M monthly downloads in November 2025, you expect the usual story.
Refine onboarding > improve quality > raise prices > grow subscription revenue.
Instead, the next chapters were brutal.
Monthly installs had fallen to around 1.2M by that point, after the spike.
That's not "slow growth."
That's building a Ferrari and nobody wants to pay for the gas.
Then came the Disney situation…
A $1B investment was announced, plus a three-year licensing setup covering 200+ characters from Marvel, Star Wars, and Frozen.
Disney teams were actively working on projects.
Then OpenAI shuts Sora down.
Reuters sources described it as a "significant rug-pull."
The Financial Times confirmed the deal never actually landed because OpenAI changed direction.
Disney's public statement was polite, but the message was clear.
It's over.
💡 Here's the broader business lesson: **Platform risk is real, and it's getting bigger in AI.**
⚠️ Every tool you build on, every vendor you depend on, every integration you wire into your workflow carries this risk.
And the bigger the platform, the faster things can change without your input.
🔞 The Adult Chatbot Feature Gets Shelved
OpenAI paused an adult-mode feature because the safety, legal, and brand risks were too high, especially with kids in the user base.
Age verification reportedly misclassified minors at around 12% at its worst point.
Ahem - #girldad here. TWELVE PERCENT?
OpenAI has roughly 100M weekly users under 18, so even "industry standard" error rates can mean millions slip through.
That's an estimated one-tenth of all ChatGPT users under 18.
Not great odds.
And the FTC has an ongoing inquiry about child safety in chatbots, while a jury hit Meta with $375M in penalties the same week.
This one is uncomfortable, but it's important.
OpenAI reportedly had an internal feature people called "Citron mode," covered by outlets like The Verge, Engadget, Barron's, and the Financial Times.
The plan was to allow explicit conversations in a controlled way.
Then it got shelved "indefinitely."
Two things:
1️⃣ It's hard to draw clean lines once you open the door.
Once you start allowing more adult content, a model trained to avoid explicit material can also struggle to reliably block illegal and harmful behavior.
Let me be really clear. That's not a small bug. 🪲
That's the kind of risk that becomes a headline overnight.
2️⃣ The user base includes a lot of kids.
A 12% misclassification rate sounds like a "number problem" when you read it on a page.
But multiply it by 100 million weekly under-18 users, and it becomes a human problem and a legal problem at the same time.
Also, look at the timing.
The FTC opened a formal inquiry into multiple AI companies around chatbot harms to children back in September 2025.
That investigation hasn't gone away.
And the same week OpenAI shelved the adult feature, a New Mexico jury hit Meta with $375M in civil penalties tied to child exploitation exposure.
So imagine OpenAI shipping an adult chatbot feature into that moment, with that error rate. 😬
That's the type of risk that can scare off partners, advertisers, enterprise buyers, and yes, IPO investors. All at once.
💡 Here's the broader business lesson: **Risk isn't always technical. Sometimes it's legal, reputational, and timing-based.**
And the "timing" part is real.
You can do the same move in two different years and get two totally different outcomes.

And that's not all that happened with OpenAI…
👇👇👇
🪖 The Pentagon Deal
OpenAI grabbed a defense deal at high speed, then rewrote the terms after backlash, showing both its speed and its governance risk.
The deal was announced the same night federal agencies were told to stop using a competitor (Anthropic).
Early language allowed use for "any lawful activity," with no clear ban on domestic surveillance.
After backlash, OpenAI revised its terms to prohibit surveillance of U.S. persons and clarify limits around weapons autonomy.
This is the moment that feels like a tell.
A political decision hits, and a competitor gets labeled a "supply chain risk."
OpenAI announces a deal fast. People raise alarms. The terms get rewritten days later.
On one hand, yes. Defense is a huge market.
The Pentagon has signed AI deals up to $200M with major players.
Revenue and credibility are real incentives, and there's nothing wrong with pursuing them.
On the other hand, the "announce first, fix later" pattern can be exciting in consumer tech.
But in national security and public markets?
It can become a serious problem.
The kind of problem that follows you into earnings calls and regulatory hearings for years.
🖥️ The "Superapp" Pivot
OpenAI is trying to merge ChatGPT, Codex, and Atlas into one desktop app because too many separate products were slowing them down.
I feel this too. It's clunky flipping between various tools. I use Codex all the time now, including for my OpenClaw agents. But it lives in its own system, not tied to your normal usage, and that's off-putting.
It's like anyone who uses Claude now and feels confused toggling between Chat <> Cowork <> Code and seeing different things everywhere.
It's feeling harder to want to use these tools by the week.
OpenAI leadership admits fragmentation slowed execution and hurt quality.
The center bet is clear: coding through Codex becomes a main growth engine.
And the timing lines up with an IPO-style story where one platform, one narrative, and fewer loose ends make for a cleaner pitch.
The Wall Street Journal reported OpenAI is building a desktop app that merges ChatGPT, Codex, and Atlas together.
And Fidji Simo, Chief of Applications, said internally:
"We realized we were spreading our efforts across too many apps and stacks… that fragmentation has been slowing us down."
That's a rare moment of honesty you can learn from.
Because most companies don't fail from a lack of ideas.
They fail from too many half-finished ideas fighting for the same people, time, and budget.
✋🏻 Who else feels seen?
The pull to do more is constant, and the discipline to do less is where the real growth lives. I work on this personally, every day.
💡Here's what's extra relevant for anyone building with AI right now:
OpenAI's product list has been massive.
Chat, video, browsing, hardware experiments, robotics initiatives.
That can work in a "try everything" culture where the goal is exploration.
But once you're aiming at public markets, the questions get sharper.
What's the product?
What's the revenue story?
What are the risks?
What gets cut when costs rise?
That's why the "superapp" story matters. It's a signal that the company wants a cleaner, simpler narrative.
And it's also a signal that the next competitive battlefield is going to be AI for knowledge work, especially coding.
So… Is OpenAI Collapsing? 🤨
Honestly, I don't see collapse.
Cutting these products makes sense if your goal is to remove anything that creates ugly questions for investors.
→ Checkout had messy commerce liability and poor performance.
→ Sora had crushing unit economics and collapsing demand.
→ Adult mode had child safety exposure during active regulatory scrutiny.
Knowing when to stop something is just as important as knowing when to start it.
But the warning sign is also obvious.
A cleanup this aggressive suggests the machine is expensive to run.
And the public markets may be the next fuel source.
Public markets ask different questions than private investors. They want consistency, margins… a story that holds up quarter after quarter.
The next few months will tell us whether this was a smart reset or the beginning of something more difficult.
I'm 17.3% sure that Google will crush OpenAI (but not yet)
Either way, the lessons from this week apply to all of us.
Know your numbers.
Know your risks.
Know when a product is costing you more than it's earning.
And have the courage to make the call before someone else makes it for you.
👉 If you run any kind of business that sells a product, sit with these for a minute:
- Where do customers drop off because the experience feels "off," even if it's technically working?
- What part of your buying flow depends on trust, like pricing, shipping, returns, or availability?
- And if an AI agent sent buyers to you tomorrow, would your site and inventory systems even be ready for that kind of traffic?
👉 Ask yourself these, even if you're not in tech:
- If you had to cut 30% of your projects next week, what would you keep?
- What is your "one platform" inside the business, the one place people work, track, and decide?
- Where are you spread thin because everyone has their own tools and their own process?
👉 If you're building products or buying software for your team, consider these:
- If a key vendor changed direction in 30 days, what breaks in your business?
- Are you betting on tools where costs rise faster than revenue?
- Do you have a backup plan for your most important workflow?
👉 If you run a business in any industry, here's what I'd be thinking about:
- Where are you "one feature away" from a compliance or PR crisis?
- If your product has an impact on kids, directly or indirectly, what's your standard for safety?
- And if you're using AI in customer-facing ways, what's your process when the model behaves badly?
✍️ Prompt You Can Use This Week (Steal This)
If you're building anything with AI, whether that's marketing workflows, customer support, sales enablement, or internal search, run this prompt in ChatGPT or your favorite LLM with your product and service details filled in:
"Act like a CFO and a compliance lead reviewing our product for a public company filing. List the top 3 risks that could scare investors, based on the known challenges we've avoided until now. For each risk, give: (1) the worst-case headline, (2) how likely it is, (3) what data we should collect to monitor it, (4) one change we can make this month to reduce the risk, and (5) a plain-English justification of why you chose it, so we understand the hidden cost behind the explanation."
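If you want to run this prompt regularly against different products, here's a minimal sketch in Python that fills your details into a reusable template before you paste it into ChatGPT or send it through any LLM API. The template text, function name, and placeholder names (`product`, `details`) are my own illustrative choices, not an official format:

```python
# Reusable template for the CFO/compliance risk-review prompt.
# The {product} and {details} placeholders are assumptions: swap in
# your own product name and a short description of what it does.
RISK_REVIEW_PROMPT = (
    "Act like a CFO and a compliance lead reviewing {product} for a public "
    "company filing. Context: {details}. List the top 3 risks that could "
    "scare investors, based on the known challenges we've avoided until now. "
    "For each risk, give: (1) the worst-case headline, (2) how likely it is, "
    "(3) what data we should collect to monitor it, (4) one change we can "
    "make this month to reduce the risk, and (5) a plain-English "
    "justification so we understand the hidden cost behind it."
)

def build_risk_prompt(product: str, details: str) -> str:
    """Return the review prompt with your product details filled in."""
    return RISK_REVIEW_PROMPT.format(product=product, details=details)

if __name__ == "__main__":
    print(build_risk_prompt(
        product="our AI support chatbot",
        details="handles refunds and account changes for 40k customers",
    ))
```

Keeping the prompt as a template means every team runs the same review, and only the product context changes.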
Even if you're nowhere near going public, this exercise forces you to look at your business the way someone with serious money on the line would look at it.
And that perspective is always worth having.
Share your thoughts and let me know… Do you think OpenAI is making smart cuts, or are these warning signs that the cost structure is getting out of control?
P.S. VIP day spots are OPEN for April! I only take 2 max per month and am LOVING working with people at this level.
Enjoy this edition?
Get CTRL+ALT+BUILD™ delivered to your inbox every week.