🤯 Google’s Insane Upgrade

🧠 Google’s Brainchild: Deep Research

Google has unleashed its Deep Research feature, now available with Gemini Advanced, and it will blow your mind.

Key Facts

  • 📋 Autonomous Research Plans: From prompt to polished report, it handles everything.
  • 🌐 Web Browsing Powerhouse: Browses tens to hundreds of websites, learning and adapting in real time on its own.
  • 📚 Citations You Can Trust: Delivers findings with clickable references, ready to export.

Ok. 

… Stop what you’re doing.

Well, I mean you should definitely finish reading this.

AFTER you’ve finished the newsletter, however (lol), stop what you were doing and go sign up for Google Gemini Advanced 🔗 (free for a month) to try it.

It’s insane.

There’s some great tech already in Gemini, and the latest model is also fantastic, but there’s ONE feature of Gemini 1.5 I need you to try.

Gemini Deep Research.

You type in a simple prompt—for example, “What’s the market share of AI tools in 2025?”—and let it go to work.

In minutes, not seconds (I’ll get into this below), it:

  • Creates a research strategy, just like a professional consultant would.
  • Browses hundreds of relevant sites, analyzing data from reliable sources.
  • Delivers a crystal-clear report, complete with charts, citations, and action-ready insights.

Now you’ve saved HOURS of work and have actionable info ready to go. 

Ok, but let’s make this real so you can see just what I mean about why this changes things for us.

I’m going to give you a personal use case. 

By now you’ve probably heard about the devastating fires here in Los Angeles. We live here, in a suburb on the outskirts of LA County. We’ve also been thinking about upgrading our primary residence, which means either selling this home or keeping it as a rental. These fires have changed a lot for my town, but it’s all so new I don’t know where to begin.

Enter the world of making the robots do the work…

Here’s the prompt I fed to Gemini Advanced 1.5 Deep Research (nothing fancy):

> based on the current situation in Los Angeles with the devastating fires, i’m trying to think about the best and worst case scenarios for my single family home I own in Whittier. my family is thinking about upgrading our home, which means we’ll either sell this home and roll the equity into the new home or we’ll keep this and buy something else in order to rent this one. we do live in LA county. run models and scenarios for what i can expect to come in the next months and years based on the fact that these fires are so devastating to the county and consider the economy assuming all else like my income and personal scenarios only get better or at worst stay consistent.

In about 10 seconds, Gemini gave ME a plan to review, edit, and approve. That step alone rivals some of the best research minds available.

  1. Find reports on the impact of the fires on the housing market in Los Angeles County, specifically focusing on Whittier. 
  2. Find historical data on how natural disasters have impacted home prices in similar areas.
  3. Find information on the current state of the economy in Los Angeles County and how it might be affected by the fires. 
  4. Find information on the potential impact of the fires on insurance rates and property taxes in Whittier. 
  5. Find information on the current rental market in Whittier and how it might be affected by the fires.
  6. Based on the information gathered, create a best-case scenario and a worst-case scenario for the value of your home in the next months and years.
  7. Consider the potential impact of rebuilding efforts and increased demand for housing on the market. 
  8. Research any government assistance or programs available to homeowners affected by the fires.

Honestly, I was happy with that plan, so I hit go.

Gemini thought for about 6 minutes, researched 29 different sites it decided were relevant, and gave me this insanely good report 🔗 that was wildly valuable.

It even got me thinking about how much harder it will be to lock in homeowners insurance than before. And of course, we were looking at homes up in the hills. Yikes.

I then kept chatting with the report, giving it more refined data, and it kept doing its thing: thinking, planning, researching, all with NO RUSH, to be the most comprehensive it can be.

So how does it stack up?

Unlike tools like Semantic Scholar or Elicit, Gemini’s Deep Research doesn’t just provide data—it operates autonomously, refining its analysis and delivering comprehensive, actionable results. 

And those results are so good.

Let’s look at the tech:

  • Massive Context Window: Processes up to 1 million tokens at once—perfect for big datasets (see the sketch after this list).
  • Advanced Reasoning: Continuously refines its strategy based on what it learns.
  • Seamless Integration: Export reports directly to Google Docs for editing and collaboration (like the report link I shared above from my test run).
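
Deep Research itself lives inside the Gemini app, so there’s no public API for it yet. But if you want a feel for that 1-million-token context window, here’s a minimal sketch using Google’s google-generativeai Python SDK. The model name and file path are illustrative placeholders, not anything from my test run:

```python
# Minimal sketch: stuffing a large document into Gemini's long context.
# Assumes `pip install google-generativeai` and a GEMINI_API_KEY env var.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")  # the 1M-token context model

# Load a big dataset or report dump (hypothetical file).
with open("market_research_dump.txt") as f:
    big_document = f.read()

# Check how much of the 1M-token window we're using before sending.
print(model.count_tokens(big_document).total_tokens, "tokens")

response = model.generate_content(
    ["Summarize the key housing-market risks in this document:", big_document]
)
print(response.text)
```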

It saves you time: complex tasks like industry benchmarking or academic research get done in minutes.

And it goes beyond summaries: it builds strategies, refines findings, and pulls together coherent insights.

Plus, it’s available in 100+ countries, with desktop and mobile access.

Most LLMs try to get you their answer as fast as possible. This model takes all the time it needs, which is a refreshing twist when you’re working with big data.

Isn’t it curious to think about just how good the answer could be if we’re patient and let these research agents do their job?

Imagine a student prepping for a thesis, a marketer building a campaign, or an entrepreneur diving into market analysis—all with this powerhouse doing the heavy lifting.

There’s no reason NOT to give this a try.

💥 Free for a Month: Google Gemini Advanced includes Deep Research (normally $20/month).

📱 Mobile App Coming Soon: Desktop and web access today; app integration rolls out in early 2025.

Have you tried it yet? Let me know how it’s working for you.


🇨🇳 Meet DeepSeek V3: The Powerhouse from China

Switching gears, let’s talk about DeepSeek V3, an open-source AI model that’s turning heads.

Key Facts

  • 🚀 Massive Power: Boasts 671 billion parameters for impressive performance.
  • 💰 Cost-Effective Innovation: Developed for around $5 million—a fraction of what similar projects might cost.
  • 🌐 Wider Accessibility: Opens doors for smaller teams to leverage advanced AI capabilities.

Now THIS caught my attention.

DeepSeek V3 is flipping the script on traditional AI development.

Get this…

Despite U.S. hardware restrictions, it was trained in just two months using NVIDIA H800 GPUs—proof that resourceful innovation can rival well-funded projects.

By making advanced AI affordable, DeepSeek V3 opens the door for smaller organizations to adopt these tools without the hefty price tag. 

Imagine the breakthroughs we’ll see when more minds can access this level of technology.

We’ve seen upstarts like this in the past, but nothing this impressive for its speed to market.

From coding competitions to complex mathematical reasoning, DeepSeek V3 outperforms models like Meta’s Llama 3.1 405B and matches or exceeds OpenAI’s GPT models on benchmarks.

And by offering a high-performance, open-source AI model at an affordable price and simplifying access, DeepSeek is letting smaller companies, educational institutions, and individual developers leverage the power of AI without the massive financial or technical resources that were previously required.

And I’m all for the democratization of AI. 👏

While DeepSeek V3 is a major leap forward, it raises some important questions:

  • Training Data Integrity: The model occasionally identifies as ChatGPT, likely due to training on outputs from rival AI models like GPT-4. This trend of “AI training on AI” could lead to biases, hallucinations, and ethical concerns around data use.
  • Resource Needs: Its impressive architecture requires significant hardware resources to run efficiently, potentially limiting widespread adoption for smaller-scale users.

If you’re interested, give it a run at DeepSeek 🔗.
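
Or, if you’d rather poke at it from code, DeepSeek exposes an OpenAI-compatible API, so the familiar OpenAI Python client works with just a base-URL swap. A hedged sketch (the endpoint and deepseek-chat model name match DeepSeek’s docs as I write this, but double-check before you build on them):

```python
# Minimal sketch: calling DeepSeek V3 through its OpenAI-compatible API.
# Assumes `pip install openai` and a DEEPSEEK_API_KEY env var.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-chat",  # V3 sits behind this model name per DeepSeek's docs
    messages=[
        {"role": "user", "content": "Explain mixture-of-experts models in two sentences."}
    ],
)
print(response.choices[0].message.content)
```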


🏡 NVIDIA’s Project Digits: Supercomputing at Home

Imagine having the power of a high-end AI supercomputer right on your desk—for just $3,000. That’s exactly what NVIDIA’s Project Digits is all about, and it’s set to launch this May.

Key Facts

  • 💻 Supercharged Hardware: Equipped with the GB10 Grace Blackwell Superchip.
  • 🌐 Run Large AI Models Locally: No more relying solely on cloud services.
  • 🔗 Scalable Power: Connect two units to tackle even bigger challenges.

Having this kind of power at our fingertips could change EVERYTHING.

For example, have you ever had the itch to type something into ChatGPT that you really wanted its response on, but immediately had a privacy freak-out moment?

I have. More times than I’m willing to admit in writing.

I tell GPT and AI models a whooooole lot, but there are still things I won’t commit to a third-party database.

Think about the peace of mind you’d have knowing you’re chatting with a little device on your desk, and nowhere else. 👀

Or, let’s think about researchers, developers, and students who’ve had to rely on limited or expensive resources to get their jobs done.

It’s not just about speed—it’s about freedom. 🦅

Project Digits puts advanced AI computing in your hands, letting you experiment, innovate, and push boundaries without waiting for access to cloud resources or institutional supercomputers.
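
Project Digits won’t ship until May, but you can get a taste of that local-first, nothing-leaves-your-desk workflow today with smaller open models. A minimal sketch using Ollama’s local API (Ollama isn’t part of Project Digits, it’s just my stand-in for local inference; assumes you’ve installed it and pulled a model):

```python
# Minimal sketch: a fully local chat, nothing sent to a third party.
# Assumes Ollama (https://ollama.com) is installed and running, and a
# model has been pulled, e.g. `ollama pull llama3`. Standard library only.
import json
import urllib.request

payload = json.dumps({
    "model": "llama3",   # any locally pulled model
    "prompt": "Keep this between us: draft a salary negotiation email.",
    "stream": False,     # return one complete response instead of a stream
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's local endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```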

At its core is the GB10 Grace Blackwell Superchip, combining a cutting-edge NVIDIA Blackwell GPU and a 20-core NVIDIA Grace CPU. 

It’s a compact beast delivering up to 1 petaflop of AI computing power at FP4 precision.

What’s a petaflop, you might ask? A petaflop means a computer can perform one quadrillion (1,000,000,000,000,000) floating-point calculations every second.

So, if every person on Earth (about 8 billion) did one calculation per second, it would take roughly a day and a half to do what a petaflop computer can do in just 1 second!

… right? 
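
Don’t take my word for the math; it’s a two-line sanity check:

```python
# Sanity check on the petaflop analogy.
petaflop = 1e15   # floating-point operations per second
humanity = 8e9    # people on Earth, one calculation per second each
seconds = petaflop / humanity
print(seconds)               # 125000.0 seconds for humanity to match 1 second
print(seconds / 3600 / 24)   # ~1.45 days, i.e. roughly a day and a half
```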

With 128GB of unified memory and up to 4TB of NVMe storage, Project Digits runs AI models with up to 200 billion parameters.

For even greater demands, you can link two units to handle models with 405 billion parameters—a feat previously restricted to data centers or cloud environments.
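
Those figures square with some quick back-of-the-envelope memory math (my own arithmetic, not NVIDIA’s spec sheet): at FP4, each parameter takes half a byte.

```python
# Back-of-the-envelope: why 200B parameters fit in 128GB at FP4.
def weights_gb(params: float, bits: int = 4) -> float:
    """Gigabytes needed to hold the raw model weights."""
    return params * bits / 8 / 1e9

print(weights_gb(200e9))   # 100.0 GB -> fits one 128GB unit, with headroom
                           # left for the KV cache and activations
print(weights_gb(405e9))   # 202.5 GB -> needs two linked units (256GB)
```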

So here we are.

Whether you’re prototyping new AI algorithms, running predictive analytics, exploring complex models for drug discovery or climate research, or just want to talk dirty to an LLM without worrying that someone at OpenAI will get a notification about it… this tool makes it all possible, on your own terms.

Of course, it’s an investment, and it’s worth considering how it fits into your goals.

But the possibilities are pretty exciting.

And they keep getting more exciting BY THE DAY.

I run an incredible private community for leaders like you who are interested in AI development and implementing it into their own personal and professional lives.

Right now, it’s open for enrollment. 

  • 🤝 Networking & Collaborations
  • 🧠 AI Insights & Tools Showcase
  • 📆 Monthly Q&A “Office Hours” with me
  • 📞 A 1:1 Welcome Call for us
  • …and more!

Check it out here: https://fastfoundations.com/slack 🔗
