
Steve Jobs is back from the dead - 🤖😋🧠 #7

Aaand we're back. Did you miss us? This is Bot Eat Brain, the best-tasting breakfast cereal on the shelf, providing you with your recommended daily serving of AI news.

Oh and by the way... Bot Eat Brain just passed 100 subscribers 🥳

Happy 3-digit day to US, and a huge thank-you to YOU for being here and reading this as one of our very earliest subscribers.

If you like today's issue please don't hesitate to share it with ten to twenty of your closest friends 😉

Now, without further delay...

Here's what's up:

  • Steve Jobs is back from the dead 🧟

  • Google jumps into the ring with text-to-video 🥊

  • Open-source text-to-3d (hate to say we told ya so) 📖

Today's vibe: ride that wave bruddah 🏄

Let's dig in.

Steve Jobs is back from the dead 🧟

And the first thing he did was go on the Joe Rogan Experience.

What you're listening to are two simulated voices having an AI-generated conversation.

The demo was produced by Play.ht, a company that converts text into natural-sounding speech.

For more on synthesized voices check out this previous issue of Bot Eat Brain where we discuss Google's low-bitrate speech codec, Lyra V2.

While this is a cool demo by itself, it also exemplifies the growing trend of synthetic media:

AI-generated content that capitalizes on famous faces and voices without the need to hire any real person.

Maybe you've seen Unreal Keanu Reeves on TikTok?

To be clear, that is not the real Keanu Reeves. It is an unlicensed deepfake that inserts Keanu's face onto a body double.

This is a trend that's already hit Hollywood, but we still think it's only just getting started.

What to look out for 👀

  • Lawsuits as the famous stake their claims on simulated versions of their likeness 🧑‍⚖️

  • Meanwhile, entrepreneurs will profit. Imagine knock-off Cameo at 1/10th the cost. 🪨

  • The rise of new companies that license likenesses for simulated media. 💽

  • Actors appearing in movies without ever stepping foot on set. 💁‍♀️

Google jumps into the ring with text-to-video 🥊

From the abstract:

"Imagen Video generates high definition videos using a base video generation model and a sequence of interleaved spatial and temporal video super-resolution models."

In other words, they use a layered approach: one model generates low-resolution video, another upscales it to HD, and further layers refine the final output.
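That cascaded data flow is easy to picture in code. Here's a toy sketch (every function is a hypothetical stand-in, not Imagen Video's actual models): a base stage produces a short low-resolution clip, then temporal and spatial "super-resolution" stages progressively enlarge it. Each stage is faked with nearest-neighbor upsampling, just to show how the pieces chain together.

```python
import numpy as np

def base_model(prompt, frames=8, size=16):
    """Stand-in base video generator: returns a (T, H, W, 3) array of noise."""
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return rng.random((frames, size, size, 3))

def temporal_superres(video, factor=2):
    """Stand-in temporal SR stage: repeat each frame `factor` times."""
    return video.repeat(factor, axis=0)

def spatial_superres(video, factor=2):
    """Stand-in spatial SR stage: repeat each pixel `factor` times in H and W."""
    return video.repeat(factor, axis=1).repeat(factor, axis=2)

def generate(prompt):
    v = base_model(prompt)      # (8, 16, 16, 3)  low-res, few frames
    v = temporal_superres(v)    # (16, 16, 16, 3) more frames
    v = spatial_superres(v)     # (16, 32, 32, 3) bigger frames
    v = spatial_superres(v)     # (16, 64, 64, 3) bigger still
    return v

video = generate("a cat surfing a wave")
print(video.shape)  # (16, 64, 64, 3)
```

The real system interleaves seven learned models this way; the point is that each stage only has to solve a small, local problem (more frames, or more pixels) rather than generating HD video in one shot.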

Open-source text-to-3d 📖

Well, it came even faster than we expected.

Just as Stable Diffusion democratized access to text-to-2d image generation, Stable-Dreamfusion is now doing the same for text-to-3d with a tunable open-source model.

We can't wait to see the absolute deluge of AI-generated genitalia which is sure to erupt from this advancement.

Wondering how the heck this Stable Diffusion nonsense actually works?

Here's a great 17-minute introduction for anyone who's using Stable Diffusion and thinking to themselves:

"What is this witchcraft???! How does it work??!"

TL;DW: Stable Diffusion transforms noise into images by iterating progressively from "pure noise" toward the target image.

How?

First, it learns to de-noise noisier and noisier versions of various inputs.

Then, this process can be run in reverse to produce novel outputs.
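The two steps above can be sketched in a few lines. This is purely illustrative (there is no trained network here): a fake "denoiser" that always predicts one known clean image stands in for the learned model, and sampling starts from pure noise and repeatedly mixes in the denoiser's prediction, so the image sharpens a little each step.

```python
import numpy as np

# The "clean image" our fake denoiser has memorized: a white square.
TARGET = np.zeros((8, 8))
TARGET[2:6, 2:6] = 1.0

def denoiser(noisy, t):
    """Stand-in for the learned network: always predicts the clean image."""
    return TARGET

def sample(steps=50, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((8, 8))  # step 0: pure noise
    for t in range(steps, 0, -1):
        pred = denoiser(x, t)
        # Move a fraction of the way toward the predicted clean image...
        alpha = 1.0 / t
        x = (1 - alpha) * x + alpha * pred
        # ...and re-inject a little noise, less and less as t shrinks.
        x += 0.1 * (t / steps) * rng.standard_normal((8, 8))
    return x

img = sample()
print(np.abs(img - TARGET).mean())  # small: the noise converged to the target
```

A real diffusion model replaces `denoiser` with a network trained on millions of noised images, and the text prompt steers which "clean image" it predicts at each step.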

Byte-sized bonus treats 😋

Until next time ✌️

Shoutout to everyone who fed us great content for today's newsletter:

🤖😋🧠

P.S. Want to write for Bot Eat Brain? Know someone who'd be perfect to join the writing team? We're looking for writers and anyone can try out.