📚 NotebookLM Goes Mobile
Google’s AI-powered research and note-taking tool, NotebookLM, will officially launch its native mobile apps for Android and iOS on May 20, 2025, the opening day of Google I/O 2025.
Key Facts
- 🚀 Growing Popularity: NotebookLM had over 28 million visits in the past three months, with 9 million in January 2025 alone.
- 🔄 Stay Synced: Your notes and sources will sync between web and mobile, so you’re always up to date, no matter where you are.
- 🧠 Enhanced Import and Analysis: Supports direct import of YouTube videos (with transcription), intelligent analysis of images in PDFs, and automatic web resource suggestions.
Not gonna lie, I’ve been waiting for this one.
When I first heard about NotebookLM, I was skeptical.
It seemed like a random Google project, released quickly just to try something new. But when I took the time to test it and saw how simply they'd built this AI for consumers, I realized something much bigger was coming.
What I really like is that NotebookLM is grounded in your material.
It won't hallucinate or make things up; it cites everything directly from what you upload.
That makes it super useful for real research, onboarding new team members, or pulling insights from a messy folder of resources.
The Interactive Mind Maps are great for visualizing complex ideas quickly.
And now that it’s going mobile, this gets way more usable in day-to-day life.
If you've been here a while, you know my morning walks are kind of like a creative ritual for me. So, I love that the Audio Overviews let me listen to summaries even when I'm offline (and they're available in over 50 languages, btw).
I’m already thinking about using it to review key docs while traveling, or just during downtime between meetings when I need something complex summarized.
And the tide is shifting toward audio: 73% of Americans have listened to or watched a podcast at least once, and podcasts now account for 58% of Gen Z's spoken-word audio time, more than double the 28% share of just seven years ago.
NotebookLM also offers a premium version, NotebookLM Plus, for those who need higher usage limits, team collaboration, or more advanced features.
I haven’t personally needed the Plus version yet, but I can see it being helpful for teams who want a shared research assistant.
With over 28 million visits in the past three months, this Google project is gaining serious traction, and with how fast Gemini is growing, mobile is only going to accelerate that.
If you’re looking for AI to support you in learning and research, this is the one I’d start with.
You can pre-order on the 🔗Apple App Store or pre-register on 🔗Google Play.
🎨 Midjourney’s New Feature Can “See” Your Vision
Omni-Reference is a wild new feature that lets you use real images as a "starting point" for your AI art, and that's just the beginning.
Key Facts
- 🖼️ Start with Real Images: Use your own images as starting points, allowing you to include specific faces, objects, or styles in your AI art.
- 🎛️ Control Details: The --ow parameter lets you adjust how closely the AI follows your reference, giving you the perfect balance between inspiration and originality.
- 🔄 Mix & Match: Combine multiple references and parameters for consistent characters and objects across different scenes.
This is a big leap because, before, AI art tools mostly just guessed what you wanted based on your text. Now, you can show the AI exactly what you’re thinking of.
We're talking about midjourney.com, one of my favorite GenAI image tools.
How Does It Work?
You just add a special code to your prompt, like this:
--oref [image_URL]
You can also tell the AI how closely it should stick to your reference, using:
--ow [number]
Lower --ow values (like 25): The AI uses your reference as a springboard but adds its own creative flair.
Higher --ow values (up to 400): The AI sticks closely to your reference, keeping specific details intact.
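If you ever want to script these prompts (say, to batch-test different weights), here's a minimal Python sketch. To be clear, this is not an official Midjourney API; it's a hypothetical helper that just assembles the prompt string you'd paste into Discord or the web app, clamping --ow to the range discussed above.

```python
def build_prompt(text: str, refs: list[str], ow: int = 100) -> str:
    """Assemble a Midjourney prompt with Omni-Reference flags.

    Hypothetical helper, not an official API. The 0-400 clamp follows
    the --ow range discussed above; the default of 100 is an assumption.
    """
    ow = max(0, min(ow, 400))  # keep --ow inside the discussed range
    oref_flags = " ".join(f"--oref {url}" for url in refs)
    return f"{text} {oref_flags} --ow {ow}"

# Low weight: the reference is a springboard, the AI adds its own flair.
print(build_prompt("A portrait in the style of Van Gogh",
                   ["https://example.com/my-photo.jpg"], ow=50))
```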
Try It Out:
Replace [image_URL] with your image link to experiment with these prompts:
🎨 Turn a photo into a painting:
A portrait in the style of Van Gogh --oref [image_URL] --ow 50
🐕 Make a cartoon version of your pet:
A cartoon illustration of a cat --oref [image_URL] --ow 100
🦸‍♂️ Put your friend in a superhero costume:
A superhero standing on a city rooftop --oref [image_URL] --ow 300
🖌️ Mix styles & turn a real car into a futuristic concept:
A futuristic car design, cyberpunk style --oref [image_URL] --ow 75
🖼️ Combine two references (your face + a famous painting)
Portrait of a person in the style of the Mona Lisa --oref [your_face_URL] --oref [mona_lisa_URL] --ow 150
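Reusing the hypothetical build_prompt helper from the sketch above, that last two-reference prompt would be assembled like this (both URLs are placeholders):

```python
# Two --oref flags in one prompt, mirroring the Mona Lisa example above.
print(build_prompt(
    "Portrait of a person in the style of the Mona Lisa",
    ["https://example.com/your-face.jpg", "https://example.com/mona-lisa.jpg"],
    ow=150,
))
```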
Even though Midjourney asks for a bit more bravery than most tools when it comes to learning prompt syntax, it's well worth it.
🧠 Why It Matters
One of the biggest challenges with AI-generated images has always been consistency, especially if you're building a brand, telling a story, or working on visual content across multiple scenes.
Omni-Reference fixes that.
Now, you can reuse the same character, object, or style across dozens of prompts without starting from scratch every time.
Whether it’s a product, a person, or a visual aesthetic, Midjourney will hold onto those details and apply them wherever you need.
For content creators, designers, and marketers, that’s huge.
You can:
- 🎯 Keep characters and products consistent across campaigns
- 🎨 Mix styles and moods to match your brand
- 🔁 Embed references into any scene or scenario
- ✏️ Achieve visual accuracy that used to take hours in Photoshop
Even better… it plays well with the rest of your tools.
You can use Omni-Reference alongside style guides, moodboards, and parameters like --stylize or --exp for full creative control.
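For example, one prompt can carry a reference and a style weight at the same time (a hedged sketch; tune the numbers to taste):
A product hero shot on a marble table --oref [image_URL] --ow 100 --stylize 250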
It works in both Discord and the web app, supports external images, and slots easily into your workflow.
Basically, if you've ever wanted AI to just get what you mean, this is the closest it's come yet.
So, does this open up an idea for you?
Product mockups?
Storyboards?
Branded content that actually looks on-brand?
Give it a shot and see what you create. And hey, if you come up with something cool, I’d love to see it.