Amazon's new LLM

PLUS: The godfather of AI

Good morning human brains, welcome back to your daily munch of AI news.

Here’s what’s on the menu today:

  • Mind-reading AI 🧠 

Texas scientists figured out how to vaguely read thoughts by plugging fMRI scans into GPT.

  • Amazon's new LLM plays 🔤 

    Amazon’s supercharging its LLM development to make Alexa the best personal assistant out there.

  • The godfather of AI 🎅 

    One of AI’s greatest pioneers just left Google. Now, he’s warning the world about the dangers of AI.

APPETIZER

Thought-To-Text AI 🧠 

Researchers just made huge strides in translating thoughts into text.

fMRI has been used to capture snapshots of the brain for a good while, but scientists couldn’t really use it to tell what you were seeing, hearing, or thinking... until now.

Texas researchers have figured out how to combine fMRI scans with GPT-1. They fine-tuned personalized models for each participant, asked them to imagine narrating detailed stories, and then fed their brain scans through GPT’s first model.


The result is a decoder that can reproduce stories you listen to or imagine with a surprising level of accuracy (~50%). It also suggests that the same brain regions are involved in imagining something and actually experiencing it.

It’s still in its infancy, so it doesn't construct a full transcript of the words you imagine. But it’s great at capturing the general gist of thoughts with text.

So are telepathy headsets around the corner? Probably not. Brain-reading tech is still a few years out from being used in everyday life. We’ll need millions of dollars and tons more research to make it portable and more accurate, but we’re stoked to see how GPT-4 performs with these scans in the meantime.
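Curious what “plugging brain scans into GPT” might actually look like? Here’s a heavily simplified Python sketch of the general recipe - toy data, made-up numbers, and stand-in functions of our own, not the researchers’ actual code: fit a personalized encoding model that predicts brain activity from a word’s semantic features, then decode by asking which candidate word best explains the scan you measured.

# A toy sketch of the core idea (our own simplification, not the study's code):
# 1) fit a personalized "encoding model" that predicts a person's fMRI
#    response from the semantic features of a word,
# 2) propose candidate words (the real system uses GPT-1 for this),
# 3) keep the candidate whose *predicted* brain response best matches the
#    response that was actually measured.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["i", "walked", "home", "ate", "dinner", "slept"]

def semantic_features(word: str) -> np.ndarray:
    """Stand-in for GPT-style embeddings: a fixed random vector per word."""
    seed = abs(hash(word)) % (2**32)
    return np.random.default_rng(seed).normal(size=16)

# 1) Fit a (toy) linear encoding model: word features -> 50 fMRI "voxels".
true_map = rng.normal(size=(16, 50))                  # the hidden "brain"
train_words = rng.choice(VOCAB, size=200)
X = np.stack([semantic_features(w) for w in train_words])
Y = X @ true_map + 0.1 * rng.normal(size=(200, 50))   # simulated scans
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)         # personalized model

def predict_scan(word: str) -> np.ndarray:
    return semantic_features(word) @ W_hat

# 2) + 3) Decode a new scan by scoring every candidate word against it.
def decode(measured_scan: np.ndarray, candidates=VOCAB) -> str:
    errors = {w: np.linalg.norm(predict_scan(w) - measured_scan)
              for w in candidates}
    return min(errors, key=errors.get)

# Simulate a participant imagining "dinner", then try to decode it.
measured = semantic_features("dinner") @ true_map + 0.1 * rng.normal(size=50)
print(decode(measured))   # usually prints "dinner"

The real decoder works over whole word sequences with a language model proposing continuations, but the matching-predicted-to-measured-brain-activity trick is the same.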

BUZZWORD OF THE DAY

AI Model

A program or algorithm that uses training data given by its creators (e.g. online webpages & forums for ChatGPT) to recognize patterns and make predictions or decisions.
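To make that concrete, here’s a tiny made-up example using scikit-learn - our choice of library and data, not something from today’s stories. The training data is six labeled messages, the pattern the model learns is which words tend to show up in spam, and the prediction is the label it assigns to a message it has never seen.

# A minimal "AI model": learn word patterns from labeled examples,
# then predict a label for new text. (Toy data invented for illustration.)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "win a free prize now", "claim your free reward", "free cash offer",
    "meeting moved to 3pm", "see you at dinner", "notes from class",
]
labels = ["spam", "spam", "spam", "not spam", "not spam", "not spam"]

vectorizer = CountVectorizer()                 # turns text into word counts
X = vectorizer.fit_transform(texts)            # the training data, as numbers
model = LogisticRegression().fit(X, labels)    # learns the word patterns

new_message = ["claim your free prize now"]
print(model.predict(vectorizer.transform(new_message)))   # -> ['spam']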

FROM OUR PARTNERS

The results-savvy Twitter growth challenge

Everyone would like to grow a huge audience on Twitter, but only a few ever figure it out. This challenge is going to fix that for you.

Grow your followers, unlock the secrets of virality, and establish your brand on Twitter over the course of our 30-day-long Twitter challenge.

Maintain your streak to get a high score + access our month-long curriculum of content, events, and community.

Get access to:

1/ Twitter growth guide
2/ Weekly masterminds
3/ Live Q&As with Twitter pros
4/ Private Discord community of hustlers like you that’ll hold you accountable.

MAIN COURSE

Amazon enters the LLM wars 🪖 

“Hey Alexa”

On its Q1 earnings call, Amazon announced that it’s building a stronger Large Language Model to power Alexa.

Amazon’s CEO says an improved LLM will help the company work toward its goal of building the world's best personal assistant, and that Alexa gives it a great starting point - with data from hundreds of millions of devices across entertainment and smart homes.

It also revealed its competitive edge: Amazon’s been investing in LLMs for years, and while it can afford to double down and invest even further now, smaller companies can't.

Amazon wasn't the only company to bring up AI during its quarterly call with investors - Google, Microsoft, and Meta all emphasized their LLM investments this week too.

Meanwhile, Apple’s working on something similar with Siri.

What this means: with the heavy new emphasis on LLMs, all big tech players are openly acknowledging GPT as a serious threat to their legacy businesses. We’re going to see many new fragmented AI models + many more OpenAI partnerships soon.

A LITTLE SOMETHING EXTRA

The Godfather of AI 🎅 

Geoffrey Hinton, one of AI’s greatest pioneers, just left Google to warn the world about the dangers of AI.


Hinton was part of the team that created deep learning, the intellectual foundation for the AI systems that most tech companies are relying on today.

Now, he’s concerned that generative AI is dangerous and regrets helping advance it to current levels. Here’s why:

I console myself with the normal excuse: If I hadn’t done it, somebody else would have.

Geoffrey Hinton

1/ Hinton fears that AI is going to be used to spread personalized misinformation and take away a large number of jobs very soon.

2/ He wants global regulation but thinks it may not be possible because it’s impossible to know if companies or countries are working on AI in secret.

3/ Somewhere down the line, he believes AI could become a risk to humanity as a whole.

The big question: are governments and the leaders of AI innovation taking the existential threat of AI too lightly? We only get to do this once.

MEMES FOR DESSERT

YOUR DAILY MUNCH

Think Pieces

The Guardian: understanding the stupidity of AI.

AI reveals the most human parts of writing.

Startup News

OpenAI closes $300M share sale at a $27B-29B valuation: outside investors now own more than 30% of OpenAI.

A pot of gold at the intersection of DevOps and generative AI?

Research

Discover AI in Daily Life: Google’s AI literacy lesson at a middle school level.

Using negative human rights as a basis for long-term AI safety and regulation.

Tools

Mailbutler: compose, summarize, and organize emails.

CopyMate: an SEO content generator. In any language.

DocsAI: create chat support agents & integrate them with websites and Slack.

CodeDesign: launch your brand’s website with AI.


TWEET OF THE DAY

16.4% of this sample thinks GPT-4 meets the bar for AGI

Tag us on Twitter @BotEatBrain for a chance to be featured here tomorrow.

AI ART-SHOW

Until next time 🤖😋🧠