The state of AI safety in 2023

PLUS: Is GPT-4 getting faster?


Good morning, human brains. Welcome back to your daily munch of AI news.

Here’s what’s on the menu today:

  • The state of AI safety in 2023 🚀 🤖 

    An in-depth look at AI safety news headlines from May to now.

  • Will AI detectors actually work soon? 🔍 🕵️‍♂️

    Ex-Goldman Sachs VP’s AI-detection startup gets $15 million.

  • Is GPT-4 getting faster? 🏎️ 💨 

    A study shows that GPT-4 and 3.5’s response times are both around 1 millisecond per token.


An in-depth 2023 AI safety recap 🚨 🦹‍♂️

We’re going for a scenic drive through the recent history of AI policy-making. Buckle up.

On May 2, we reported on Geoffrey Hinton’s departure from Google. The Godfather of AI claimed his reason was to warn the world about AI’s danger.

A couple of days later, we covered The White House’s AI safety meeting. The CEOs of Microsoft, Google, OpenAI, and more came to discuss the short-term risks of AI.

In June, the Center for AI Safety published its official Statement on AI Safety. Over 350 prominent AI figures co-signed the statement.

The whole thing could fit on a napkin:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

- The Center for AI Safety

A couple of weeks later, the European Parliament passed the EU AI Act. This meant a ban on AI in biometric surveillance, emotion recognition, predictive policing, and more.

In July, The White House brokered a voluntary agreement between the top AI firms. The companies agreed to invest in cybersecurity, discrimination research, and AI watermarking systems.

Then we covered the popular, malicious AI tool — FraudGPT. It’s like ChatGPT, but without the safety guardrails. It’s the perfect tool for seedy cybercriminal activities.

In August, we reported The White House’s “AI Cyber Challenge” — it was a hackathon to create AI that finds and fixes security vulnerabilities. There was over $20 million in prizes for winners.

A couple of days later, the Department of Defense launched its Generative AI Task Force. Its purpose is to adopt AI to enhance national security.

In September, OpenAI launched its Red Teaming Network. The purpose was to hire security experts to evaluate and test their AI models.

Fast-forward to last Friday, we covered The Space Force’s ban on web-based AI tools. It was allegedly to protect data while AI policies are put in place.

*Stops to catch breath 🤖💨

I can’t believe I read all of that... Biden did what?

His administration’s new restrictions now require U.S. firms to obtain more licenses to sell AI chips or manufacturing equipment to China.


The Biden administration believes these measures are crucial to hinder China from enhancing its domestic chip production.

Who does this affect?

The restrictions primarily target tech with potential military applications, like hypersonic missiles and surveillance systems, but could also hamper China’s thriving commercial AI sector.


Build an AI ChatGPT for your website in five minutes.

Wonderchat helps you build your own ChatGPT-powered chatbots in 5 minutes.

Wonderchat empowers you to:

🚀 Build chatbots trained on website links and PDF files in 5 minutes.

🚀 Automate up to 70% of your support queries from site visitors.

🚀 Deploy multilingual chatbots that provide round-the-clock support in over 80 languages.

🚀 Integrate with over 5,000 apps such as Slack and Microsoft Teams via Zapier 👇



Fake videos or audio recordings that look and sound real, created using artificial intelligence.

These are typically used to spread malicious or false information.


Will money make AI detection tools work?🕵️‍♂️

RIP. 🪦

Yesterday, Reality Defender raised $15 million — it’s a startup that develops DeepFake/synthetic media detection tools. The funding will go to improve its models that identify AI-generated media.

Another AI startup got money. Why do I care?

Reality Defender, founded by ex-Goldman Sachs VP Ben Colman and partners, helps governments, big financial firms, and more identify and stop AI-generated content.

What makes this different from OpenAI’s trashy tool?

Its ensemble method uses a variety of data to lessen bias during detection. It claims a higher accuracy rate than its rivals, though it offers no evidence of this.

It also claims to utilize deep learning models to identify manipulated media through a web app, an API, and “custom solutions.”

Whatever that means.
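To make the ensemble idea concrete: the general trick (not Reality Defender’s proprietary method — the detector roles and scores below are invented for illustration) is to combine several detectors’ verdicts so that no single model’s bias dominates. A minimal sketch:

```python
from statistics import mean

def ensemble_score(scores: list[float]) -> float:
    """Combine per-detector scores (0 = likely real, 1 = likely AI-generated)
    by simple averaging -- one common way to build an ensemble."""
    if not scores:
        raise ValueError("need at least one detector score")
    return mean(scores)

# Three hypothetical detectors (visual artifacts, audio, metadata)
# each score the same clip; the average smooths out any one model's bias.
scores = [0.9, 0.7, 0.8]
print(round(ensemble_score(scores), 2))  # 0.8
```

Real systems typically weight detectors by validation accuracy instead of averaging them equally, but the principle is the same.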

Our take: OpenAI’s disabled tool labeled The Declaration of Independence as AI-generated. Despite the funding and OpenAI’s failure, Reality Defender faces an uphill battle in proving how effective its tools are against rapidly advancing generative AI.


Is GPT-4 getting faster? 🏎️ 💨

It got dumber. 🤤

On Monday, Portkey published a study that shows GPT-4’s speed is improving, almost catching up with GPT-3.5.

Portkey is a startup that helps businesses develop and improve their generative AI apps and features.

I’m not going to read it, what are the takeaways?

Latency is the time it takes for an AI to respond to a request. This study shows that GPT-4’s latencies have more than halved in three months.

The median latencies, for both GPT-3.5 and GPT-4, remain consistently under 1 millisecond per token.
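The per-token figure is simply total generation time divided by tokens generated. A minimal sketch (the numbers below are illustrative, not taken from Portkey’s study):

```python
def latency_per_token(start: float, end: float, tokens_generated: int) -> float:
    """Average latency in milliseconds per generated token,
    given start/end timestamps in seconds."""
    if tokens_generated <= 0:
        raise ValueError("tokens_generated must be positive")
    return (end - start) * 1000 / tokens_generated

# Example: a 500-token response that took 0.4 seconds end to end
ms_per_token = latency_per_token(start=0.0, end=0.4, tokens_generated=500)
print(round(ms_per_token, 2))  # 0.8 -- under the 1 ms/token figure above
```

In practice you’d take the timestamps around the API call and read the token count from the response’s usage data.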

So what’s better, 3.5 Turbo or GPT-4?

It depends on how much you’re willing to pay. Portkey claims that despite being costlier, GPT-4’s comparable speed for the majority of requests makes it a more viable option.



Think Pieces

How Toxic is ChatGPT? Apparently, GPT-4 can be made to output more toxic, biased text than other LLMs.

How do you talk to LLMs with your actual voice? Where LLMs shine as conversation partners and where they fall short.

An interview with NVIDIA’s CEO. He explains the times he bet the entire company, his early strategy with the company, and more.

Startup News

Stack Overflow fired 28% of its workforce. The company claims generative AI is to blame for its drop in traffic.

Riffusion, an AI music generator app, gets $4 million in funding. It creates music by generating images of audio spectrograms.

Nirvana, an AI insurance startup, raised $57 million. The money will go to developing effective, affordable commercial trucking insurance with AI.


6DRepNet360 — an open-source technique to more accurately estimate which way a person’s head is facing.

Ex-MCR — a new method that achieves state-of-the-art performance in audiovisual and 3D object classification tasks.

SupFusion — a technique that combines LiDAR and camera systems with AI to better detect and identify objects.


Expresso — an AI tool that monitors and improves employees’ mental health.

Stable Audio Tools — training and inference code for Stability’s audio generation models.

CapCut for Business — AI script generation, characters, and more for video.

DataGPT — a conversational AI data-analysis tool.

💡 Read Alts — Unique investment ideas for alpha-seeking investors, traders, and finance workers. Learn about exotic markets and alternative asset classes.


The Bayeux Tapestry is a 230-foot-long medieval embroidery that depicts the Norman Conquest of England in 1066. It’s one of the most famous medieval works of art in the world.

Renowned Wharton School professor Ethan Mollick tweets a modern version generated by DALL-E 3. More on DALL-E 3, here.

Tag us on Twitter @BotEatBrain for a chance to be featured here tomorrow.


Until next time 🤖😋🧠

What'd you think of today's newsletter?

Login or Subscribe to participate in polls.