📸 Midjourney 6 is unreal

Midjourney 6 is Here & It’s The Best Yet

A lot can happen in just two years. And in Midjourney’s case, that’s an understatement.

Recently I’ve seen rumbles from people deep in the AI imagery space questioning whether Midjourney was wise to hold on to 100% equity and take no outside funding while the other players are bringing in hundreds of millions and moving faster than you can imagine. Faced with immense competition, comparable quality, and new tools like Magnific 🔗 that can upscale nearly anything to unreal levels, how does MJ keep its edge?

Then they released v6. 

We’ve reached the point where most people can’t tell the difference between an AI image and a real photograph.

I collected this series of tweet examples so you can check out some of the best I’ve seen. They’re worth looking at and remembering: this is a tool you have access to right now for your projects. Also, they added text support! Finally, you can add what you want “in quotes” to have it included in your image.
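For example (a made-up prompt, not one from Midjourney’s docs): asking v6 for a bookstore window at night with a neon sign that says “OPEN LATE” should render those exact words on the sign.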


Around the web (from X)

๐Ÿ–ฅ๏ธ ARC (The Browser Company) announces ACT II – A New Computer

💬 An Open Source Chat System

โš™๏ธ GPTEngineer Looks incredible


๐Ÿ—ฃ๏ธ Thought of the week

What comes next? Are you working on something relevant for today, or are you working on something relevant for where we’re going?


Humans + Photos + 3D Models = Realtime “HUGS”

This project (silly as the gif seems) introduces a new technique called HUGS (Human Gaussian Splats) that can turn short videos of people into animated 3D models of them. Normally it takes special cameras and hours of work to make 3D avatars, but this method can do it very quickly from regular videos people take on their phones.

It works by analyzing the video frames to figure out where the person’s body and clothes are in each picture. It then builds a 3D representation of the person out of colored, semi-transparent blobs (Gaussian splats), positioned and shaped to match the body it sees and how it moves.
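To make the “colored blobs” idea concrete, here’s a minimal sketch in Python of what a single splat might store, and how a set of splats could be moved with body poses using linear blend skinning (a standard animation technique; the names GaussianSplat and pose_splats are my own illustration, not the HUGS codebase):

from dataclasses import dataclass
import numpy as np

@dataclass
class GaussianSplat:
    mean: np.ndarray      # (3,) blob center in 3D space
    scale: np.ndarray     # (3,) blob radii along its local axes
    rotation: np.ndarray  # (4,) quaternion orienting the blob
    color: np.ndarray     # (3,) RGB color of the blob
    opacity: float        # how see-through the blob is

def pose_splats(splats, bone_matrices, skin_weights):
    """Move each blob with the body: blend the 4x4 bone transforms
    by that blob's skinning weights, then apply to its center."""
    posed = []
    for splat, weights in zip(splats, skin_weights):
        blended = sum(w * m for w, m in zip(weights, bone_matrices))
        center = blended[:3, :3] @ splat.mean + blended[:3, 3]
        posed.append((center, splat.color, splat.opacity))
    return posed

# Toy usage: one blob attached entirely to one (identity) bone.
splat = GaussianSplat(mean=np.zeros(3), scale=0.05 * np.ones(3),
                      rotation=np.array([1.0, 0.0, 0.0, 0.0]),
                      color=np.array([0.8, 0.6, 0.5]), opacity=0.9)
print(pose_splats([splat], [np.eye(4)], [[1.0]]))

Roughly speaking, the real system optimizes thousands of these blobs from the video frames and learns how each one attaches to the body, which is what lets the avatar re-pose smoothly.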

Within about 30 minutes, it can turn a 50-100 frame video into an animated 3D model. The avatar looks a bit like a cartoon, but it moves just like the real person in the video: the blobs smoothly slide and deform to match each pose.

The best part is these avatars can then be easily added into other virtual scenes. For example, you could take your HUGS avatar and place it into a virtual landscape created with a different technique called Neural Radiance Fields. This allows people and their animated 3D selves to interactively explore virtual worlds.

Imagine where this will take us in 1 year and what interactivity this opens up for connecting with others!
