Anthropic announced its Claude 3 model family

PLUS: An orca schools you in math


Good morning, human brains, and welcome back to your daily munch of AI news.

Here’s what’s on the menu today:

  • Claude pounds GPT-4 into the ground 🦾 🤖

    Anthropic announced three new, impressive AI models.

  • No one to talk to? Talk to ChatGPT 🔊 👶 

    OpenAI launched a “Read Aloud” feature for ChatGPT.

  • Is an orca better than you at math? 🐳 🧮

    Microsoft’s small, 7B model outperforms much larger math models.


Once, twice, three times, an AI 🦾 🤖

On Monday, Anthropic introduced the Claude 3 model family. It contains three new models called Opus, Sonnet, and Haiku.

Why do I care?

Claude 3 Opus demonstrates “near-human comprehension” when performing complex tasks, which Anthropic claims is a first for AI models. Opus, Sonnet, and Haiku all excel at analysis, forecasting, content creation, code generation, and multilingual conversation.

Ok, so how good is Opus?

In the demo, Opus conducted a Google search, analyzed visual and textual data, and wrote code to generate an accurate 20-year GDP graph. It outperformed GPT-4 and achieved state-of-the-art results on several benchmarks.

That’s insane. What about the others?

All three models support a 200,000-token context window. Haiku is the fastest and most cost-efficient model in its intelligence class, and Sonnet is 2x faster than the Claude 2 models.

Let me guess: I can’t use them yet.

Wrong. You can access Opus and Sonnet through Anthropic’s API, Amazon Bedrock, Google Cloud’s Vertex AI, and more. Haiku is coming “soon.”
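If you take the API route, a call to Anthropic’s Messages endpoint looks roughly like this. The sketch below only builds the JSON body you would POST to `/v1/messages` with your API key; the model IDs are the ones Anthropic published at launch, and the prompt is made up:

```python
# Sketch of a Claude 3 request payload for Anthropic's Messages API.
# This builds the JSON body only; no network call is made.
import json

payload = {
    "model": "claude-3-opus-20240229",  # or "claude-3-sonnet-20240229"
    "max_tokens": 1024,
    "messages": [
        {"role": "user", "content": "Summarize today's AI news in one line."}
    ],
}

# You would POST this to https://api.anthropic.com/v1/messages
# with your key in the "x-api-key" header.
print(json.dumps(payload, indent=2))
```

Anthropic’s official Python SDK wraps the same endpoint, so in practice you rarely build the payload by hand.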

I want to learn more about Anthropic.

I got you. Back in October, we reported on Google and Anthropic’s new partnership. Google agreed to invest up to $2 billion in the AI startup.

In November, we covered Anthropic’s Claude 2.1. It improved Claude 2 with a longer context window, more truthful responses, and more.

In January, we reported on Anthropic’s study on Sleeper Agents. It showed that deceptive behavior trained into LLMs can persist through current safety training.

Your mind is a universe. Explore it freely.

😌 Cut the clutter and claim your calm.

Channel every thought, burst of inspiration, and unfinished project into one seamless, integrated experience with MyMind.

😰 Tired of forgetting your valuable thoughts?

Cut through the information overload. Let MyMind keep every idea safe, secure, and sorted.

🚀 Revolutionize your note-taking.

Capture everything with a single click. Organize everything instantly and effortlessly.

🧠 No more tagging, filing, or stressing.

MyMind understands your content, transforming chaos into a serene sanctuary of your digital essence.

The best part? It’s 100% free.


ChatGPT, read me a bedtime story 🔊 👶

On Monday, OpenAI announced ChatGPT’s “Read Aloud” feature. It lets you hear ChatGPT’s responses read aloud in the iOS and Android apps.

Huh? What? 👴

What’s the point?

It auto-detects and speaks 37 different languages. The goal is to extend ChatGPT’s reach and make AI accessible to the blind and low-vision communities.

How does it help the blind?

OpenAI collaborated with Be My Eyes to gather feedback from people with visual impairments. They’ve used the feature to check whether their outfits match, hear their gardens described, and more.

Is it available now?

Yes, it’s now available on the web version and in the app.


What else has OpenAI been up to?

Well. In January, we reported on the GPT store. We went over how to access it, find new GPTs, share your own GPTs, and more.

Later in January, we covered OpenAI’s GPT store updates. The update added a rating system for GPTs and expanded builder profiles.

In February, we covered how to use multiple GPTs in a single chat. This lets you integrate your GPTs, have them debate each other, and more.


Looking for some good news to brighten up your morning? Then we recommend you check out The Boonly — your wholesome newsletter with a witty twist.

Spark your curiosity with inspirational insights that make self-growth enjoyable, not stressful. Delivered to you every Sunday, 100% free.


Does size matter? 🐳 🧮

On Tuesday, Microsoft unveiled Orca-Math. It’s a small, 7-billion-parameter model that excels at grade-school math problems.

Why do I care?

On the GSM8K benchmark, a dataset of 8,500 grade-school math word problems, it outperformed much larger general-purpose models like LLaMA-2-70B, Gemini Pro, and GPT-3.5. It also beat math-specific models like MetaMath-70B and WizardMath-70B.

What’s under the hood?

Orca-Math is a fine-tuned version of Mistral-7B. The base model scores 37.83% on GSM8K; Orca-Math scores 86.61%.

How is it so good at math?

It was trained on a synthetic dataset of 200,000 math word problems and then improved iteratively: the model practices solving problems, receives feedback on its attempts, and is fine-tuned on that feedback.
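That “practice and feedback” loop can be pictured as: attempt a batch of problems, check the answers, and let the results shape the next round. Here’s a toy Python sketch of the idea; the “model,” its skill number, and the feedback rule are all invented stand-ins, not Microsoft’s actual training pipeline:

```python
# Toy sketch of an iterative "practice and feedback" loop, loosely
# inspired by the process described for Orca-Math. Everything here is
# a made-up stand-in; it is NOT Microsoft's training code.

def toy_model(problem, skill):
    """Stand-in model: solves a problem iff its difficulty is within
    the model's current skill level."""
    return problem["answer"] if problem["difficulty"] <= skill else None

def practice_round(problems, skill):
    """One practice pass: attempt every problem, count how many were
    solved (the 'feedback'), and grow skill accordingly."""
    solved = sum(1 for p in problems if toy_model(p, skill) == p["answer"])
    return skill + solved, solved / len(problems)

problems = [{"difficulty": d, "answer": d * 2} for d in range(1, 6)]

skill, accuracies = 1, []
for _ in range(3):  # three rounds of practice and feedback
    skill, accuracy = practice_round(problems, skill)
    accuracies.append(accuracy)

print(accuracies)  # accuracy improves each round
```

In the real system the feedback step is far richer: generated solutions are verified against known answers, and the model is fine-tuned on the preferred attempts.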

What else has Microsoft done lately?

Let’s see. In January, we covered Microsoft’s drug development research. It reduced the time required to create drugs for infectious diseases.

In February, we reported on Microsoft’s partnership with Mistral. Microsoft invested $16.3 million in Mistral, positioning it as a leading GPT-4 competitor.

On Monday, we covered Microsoft’s Copilot for Finance. It’s a model designed to enhance the efficiency of finance professionals.


Tools

Corgea — automatically fixes code and reduces workload by 80%.

OSO AI — a censorship-free AI search engine and chat platform.

Parallel AI — create AI employees that are securely trained on your data.

Think Pieces

Political DeepFakes are getting out of hand. AI-generated misinformation is being created at an alarming rate, and little is being done about it.

Can AI help people apply for federal government benefits? A New York-based startup uses AI to streamline the social security process and more.

How good is Amazon’s Rufus chatbot? It shows a preference for sponsored products, can be stereotypical, and more.

Startup News

Perplexity AI approaches a $1 billion valuation. This puts it close to “unicorn” status.

Microsoft announced Copilot for OneDrive. It will search for your files, summarize them, and more.

Dell executives leak info on NVIDIA’s 1000-watt GPU. NVIDIA declined to comment; its GTC conference is coming up in a few weeks.

Research

Humanoid Locomotion — a training technique that allows a full-sized robot to walk around San Francisco with minimal real-world training data.

RT-Sketch — an imitation learning method that uses hand-drawn sketches to effectively train robots and build highly effective datasets.

MAGID — Amazon’s system for generating synthetic, multimodal datasets.



Elon Musk is suing OpenAI, alleging that it abandoned its founding nonprofit mission and became a de facto Microsoft subsidiary focused on maximizing profit.

Tag us on Twitter @BotEatBrain for a chance to be featured here tomorrow.


Until next time 🤖😋🧠 

What'd you think of today's newsletter?
