
Putin questioned by a deepfake of himself during Q&A

PLUS: AI firms battle over who's the safest


Good morning, human brains. Welcome back to your daily munch of AI news.

Here’s what’s on the menu today:

  • Putin stares longingly into his own beautiful eyes 💅 💁‍♀️

    During his annual Q&A, a deepfake Putin questioned the real Putin.

  • Deepfake video > boring mugshot 🚨 👮‍♂️

    Pakistan’s former prime minister created a speech from prison with AI.

  • Mirror, mirror on the wall. Who’s the safest of them all? ☢️ 🦺

    OpenAI unveils its Preparedness framework to evaluate AI models’ safety.


Who wore it better? 💅 💁‍♀️

In October, we curated an in-depth recap of 2023’s AI Safety policies. We reported on everything from The Godfather of AI leaving Google, to the Biden administration’s restrictions on China’s access to AI chips.

In the same issue, we covered Reality Defender’s $15 million funding round. It’s a startup that develops deepfake detection tools.

And speaking of deepfakes… 🫣

During his annual Q&A phone-in event, Putin was questioned by a deepfake of himself.

Putin-on-Putin action?

Yes, but not in the hot way. 🥵

The deepfake Putin claimed to be a student from St. Petersburg University. He asked for Putin’s thoughts on the dangers of AI and his reported use of body doubles.

How did the real one respond?

He denied ever using body doubles, whether for health or security reasons.

In response to questions about AI, he shot back:

"I see you may resemble me and speak with my voice. But I have thought about it and decided that only one person must be like me and speak with my voice… and that will be me."

Vladimir Putin

So, it was staged propaganda?

It doesn’t appear so.

This was one of the more pleasant moments of the Q&A and even prompted some laughter from the audience.

This was part of a 4-hour event where he answered questions about the Russian economy, the Russia-Ukraine conflict, the Israel-Hamas conflict, relations with the West and China, and other serious issues.

So, he wasn’t disturbed by the deepfake?

I wouldn’t say that.

Here’s a screenshot of him interacting with the deepfake.

(Real Putin is on the left)

From Our Partners

This painting sold for $8 million and everyday investors profited.

When the painting by master Claude Monet (you may have heard of him) was bought for $6.8 million and sold for a cool $8 million just 631 days later, investors in shares of the offering received their share of the net proceeds.

All thanks to Masterworks, the award-winning platform for investing in blue-chip art. Masterworks does all the heavy lifting like finding the painting, buying it, storing it, and eventually selling it.

In just the last few years, its investors have realized annualized net returns of 17.8%, 21.5%, 35% and more.

Shares of offerings can sell out in just minutes, but Bot Eat Brain readers can skip the waitlist to join with this exclusive link.

*Investing involves risk and past performance is not indicative of future returns. See important Reg A disclosures and aggregate advisory performance.


Prison selfie = “cell-fie”🚨 👮‍♂️

Puns. 🤓

Pakistan's former prime minister, Imran Khan, has been in prison since August for allegedly leaking classified documents.


How’d he create a speech from prison?

Khan's political party used Khan's script and AI tools from ElevenLabs to mimic his speaking style.

The speech itself was AI-generated audio, which was then paired with a video blending genuine footage, stock images, and historical clips.
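Khan's team hasn't published its exact pipeline, but generating speech with ElevenLabs' public text-to-speech API boils down to a single POST request per audio clip. Here's a minimal sketch of assembling that request; the voice ID, model ID, and voice settings below are illustrative placeholders, not the ones PTI actually used.

```python
import json

API_BASE = "https://api.elevenlabs.io/v1"

def build_tts_request(voice_id: str, text: str, api_key: str):
    """Assemble the URL, headers, and JSON body for ElevenLabs'
    text-to-speech endpoint. Sending it (e.g. with requests.post)
    returns raw audio bytes."""
    url = f"{API_BASE}/text-to-speech/{voice_id}"
    headers = {
        "xi-api-key": api_key,          # your ElevenLabs API key
        "Content-Type": "application/json",
    }
    body = {
        "text": text,
        # Placeholder model/settings -- tune stability and
        # similarity_boost to trade naturalness against fidelity
        # to the cloned voice.
        "model_id": "eleven_multilingual_v2",
        "voice_settings": {"stability": 0.5, "similarity_boost": 0.8},
    }
    return url, headers, json.dumps(body)
```

A cloned voice first has to be created from reference recordings (ElevenLabs' voice-cloning feature), which is why past speeches gave Khan's team enough material to work with.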

Is it believable?


The speech received varied responses. Some people were impressed by the ingenuity of the tech, but others pointed out noticeable differences in the delivery and grammar of the speech.

Did this piss off Pakistan’s government?

You bet it did. 🤯

The video was part of a larger virtual rally that was five hours long. It drew 500,000+ views on YouTube and thousands more on other social media platforms.

Because of this, Pakistan’s government disabled access to Facebook, Instagram, X, and YouTube on Sunday evening.


This one isn’t about deepfakes ☢️ 🦺

Last Thursday, we covered OpenAI’s partnership with Axel Springer. The goal is to bring real-time news updates to ChatGPT.

On Monday, OpenAI unveiled its Preparedness Framework in beta. The goal is allegedly to address safety concerns by evaluating risks from advanced AI models.

How does it work?

The framework uses risk “scorecards” to measure frontier models at important milestones.

It pushes these models to their limits to decide if they’re safe for further development and use.

The framework sets thresholds for precautions in areas like cybersecurity, toxic content, model autonomy, and more.
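OpenAI hasn't released the scorecard logic as code, but the core idea is simple to sketch: grade each tracked risk category, then gate deployment and further development on the worst grade. The category names below match the framework's tracked categories; the grades and the gating helper are illustrative, not OpenAI's implementation.

```python
from enum import IntEnum

class Risk(IntEnum):
    """Risk grades, ordered so the worst can be found with max()."""
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

# Hypothetical post-mitigation scorecard for one frontier model.
scorecard = {
    "cybersecurity": Risk.MEDIUM,
    "CBRN": Risk.LOW,            # chemical, biological, radiological, nuclear
    "persuasion": Risk.HIGH,
    "model_autonomy": Risk.LOW,
}

def post_mitigation_gate(scorecard):
    """Apply the framework's stated rule: only models graded MEDIUM or
    below (post-mitigation) may be deployed; only models graded HIGH or
    below may be developed further."""
    worst = max(scorecard.values())
    return {
        "deployable": worst <= Risk.MEDIUM,
        "developable": worst <= Risk.HIGH,
    }
```

Under this sketch, a model with any HIGH-graded category could keep being developed but couldn't ship until mitigations brought that grade down to MEDIUM.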

What’s so great about it?

It's not just the framework itself: OpenAI designated several internal teams to work on this.

A new Preparedness team will assess dangers from new AI developments and collaborate with other groups to ensure science-based safety measures.

A new Safety Advisory Group will educate OpenAI’s leadership and board of directors on important safety decisions. They will also develop a system for handling emergency situations.

That’s it?

The framework also prioritizes collaboration with both external and internal teams to track real-world misuse and emerging AI risks.

OpenAI says it plans continuous investment in understanding how risks evolve as models scale.

Is this a response to Meta and IBM’s AI alliance?

Answering this question would just be speculative.




VoiceDual — an AI-powered voice changer.

TryHairstyle — take a photo and see what you'd look like with various hairstyles.

Xmind — an AI-powered, collaborative idea-generation and mind-mapping tool.

Live Lectures — records, transcribes, and takes notes on your lectures.

Think Pieces

AI is everywhere, but we ain’t seen nothing yet. A deep dive into AI’s progression into technology and where the future is headed.

The U.S. National Science Foundation’s new AI guidelines. How they intend to use AI and what steps they’ll go through to ensure AI safety.

How to fine-tune Mistral's 7-billion-parameter AI model. Also, why a fine-tuned version is better than ChatGPT.

Startup News

Mixtral 8x7B is available for testing. It's Mistral AI's second LLM, and it matches or outperforms GPT-3.5 on several benchmarks.

OpenAI launched a guide on prompt engineering. It includes tips to get the most out of LLMs and gives examples for real-life use cases.

ByteDance (TikTok's parent company) had its account suspended by OpenAI. Reportedly, it used ChatGPT to train a competing AI model.


ECLIPSE — a training method to make text-to-image diffusion models more resource-efficient.

Everything of Thoughts — a new thought-generation approach for LLMs that outperforms current Chain-of-Thought techniques.

Persona-based datasets — a framework for creating more personalized interactions with LLMs by creating datasets for user personas.


Ethan Mollick, renowned Wharton professor, shares an open-source small language model that reportedly outperforms GPT-3.5 and Grok. His prompt is adorable. 🐶

Spoiler alert, it’s not Google Gemini.

Tag us on Twitter @BotEatBrain for a chance to be featured here tomorrow.


Until next time 🤖😋🧠

What'd you think of today's newsletter?
