
Meta beta tests new AI in its Ray Ban smart glasses

PLUS: Is real-time ChatGPT coming?


Good morning, human brains. Welcome back to your daily munch of AI news.

Here’s what’s on the menu today:

  • Shove AI in your eyeballs 👀 🕶️

    Meta started beta testing new AI features for its Ray Ban smart glasses.

  • Is Google Gemini Ultra already irrelevant? 😵‍💫 🥊

    Microsoft’s Medprompt+ crushes Gemini Ultra on several benchmarks.

  • Will OpenAI look for love in the right places? 🫣 💔

    OpenAI partners with Axel Springer to bring real-time news to ChatGPT.

MAIN COURSE

Meta crams AI in your face 👀 🕶️

On Tuesday, we reported on Meta’s Purple Llama. It included a cybersecurity evaluation benchmark and a model that’s less likely to generate harmful outputs.

Later that day, Meta announced updates for its Ray Ban smart glasses. It began beta testing multimodal AI features that leverage the glasses’ cameras and microphone.

Why would I care about this?

These new features have real-world applications like answering questions based on what you’re seeing or your location.

The new AI can identify objects and suggest related items, like recommending which pants would go with your shirt.

Who would wear AI glasses?

Meta says these new features aim to enhance user interaction and assist with daily tasks.

The new update equips the glasses with real-time search capabilities.

You can ask Meta to take a picture, caption it, save it, and post it to Facebook or Instagram.

So it’s for social media and fashion tips?

It can also translate text, accurately describe objects, gather information about your surroundings, and answer questions about things you saw earlier while wearing the glasses.

Some real-world examples Meta suggested:

  • “What can I pair with this wine?”

  • “What does this sign say?”

  • “Is there a pharmacy close by?”

How much are these things, anyway?

They start at $299.

I don’t have anything better to do, so I’ll try it out.

Good luck.

Meta says it’s only beta testing with a small number of early adopters.

From our partners

An emotional AI voice generator

Lovo uses AI to generate lifelike human voices. No more dull, robotic voiceovers for your projects; we're talking about an arsenal of 400+ voices with support for 100+ languages.

But here's the kicker: Lovo doesn't just spit out words - it does emotions. You're not just setting the language, you're also setting the mood 😏 

  • 💵 It’s cheap. Hiring voiceover artists or maintaining a multilingual team is expensive. Lovo starts at $19/month.

  • 🏃 It’s fast. Producing a high-quality voiceover with human actors can take months. Lovo takes seconds.

  • 💌 It resonates. This is no Microsoft Sam. Lovo-generated voices have human emotion.

Imagine an AI-generated voiceover that actually sounds excited when introducing your product. It's like hiring an Oscar-winning actor for your voicemail.

Best of all? It’s free to try.

SIDE SALAD

Microsoft slams Google Gemini 😵‍💫 🥊

Last week, we reported on Microsoft’s Deep Search. It’s a new AI tool for Bing Search that provides summaries based on multiple search results.

On Tuesday, Microsoft announced Medprompt+. It’s an advanced prompting technique for GPT-4 that attains the highest score ever recorded for the MMLU benchmark.

The what?

MMLU stands for Measuring Massive Multitask Language Understanding. 

It tests AI systems on 57 diverse knowledge areas, from math to medicine.
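
For a feel of the format: each MMLU question is multiple choice with four options, and a model’s score is plain accuracy across all questions. Here’s a minimal, purely illustrative sketch in Python (the sample question and the ask_model stub are ours, not from the benchmark):

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a call to GPT-4, Gemini, etc.
    # Imagine it returns a single letter: "A", "B", "C", or "D".
    return "B"

def format_question(question: str, choices: list[str]) -> str:
    # Lay the question out in the standard four-choice style.
    letters = "ABCD"
    lines = [question]
    lines += [f"{letters[i]}. {c}" for i, c in enumerate(choices)]
    lines.append("Answer:")
    return "\n".join(lines)

# Illustrative question in the four-choice format (not from the benchmark).
questions = [
    ("Which organ produces insulin?",
     ["Liver", "Pancreas", "Kidney", "Spleen"],
     "B"),
]

# The reported MMLU score is plain accuracy over all questions.
correct = sum(ask_model(format_question(q, c)) == a for q, c, a in questions)
print(f"Accuracy: {correct / len(questions):.0%}")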

So, what is Medprompt+?

Medprompt was originally designed for medical questions. It combines techniques like dynamic few-shot example selection, chain-of-thought prompting, and answer ensembling to draw out better responses from AI models.

Medprompt+ is Microsoft Research’s latest iteration of this technique.

It uses a combination of simple and complex prompting methods to achieve state-of-the-art results on multiple benchmarks.
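
One of Medprompt’s documented ingredients is “choice-shuffle ensembling”: ask the same multiple-choice question several times with the answer options reshuffled, then take a majority vote, which cancels out the model’s bias toward particular option positions. A rough sketch of that idea (the ask_model stub is a hypothetical stand-in for a GPT-4 call; this is not Microsoft’s code):

import random
from collections import Counter

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a GPT-4 call that returns the
    # letter of the chosen option ("A", "B", ...).
    return "A"

def choice_shuffle_ensemble(question: str, choices: list[str], k: int = 5) -> str:
    # Ask the question k times with the options shuffled,
    # then majority-vote over the underlying answers.
    votes = Counter()
    for _ in range(k):
        shuffled = random.sample(choices, len(choices))
        letters = "ABCD"[:len(shuffled)]
        prompt = question + "\n" + "\n".join(
            f"{lab}. {c}" for lab, c in zip(letters, shuffled)
        ) + "\nAnswer:"
        letter = ask_model(prompt)
        # Map the letter back to the option text so votes for the
        # same answer line up across different shuffles.
        votes[shuffled[letters.index(letter)]] += 1
    return votes.most_common(1)[0][0]

print(choice_shuffle_ensemble(
    "Which planet is largest?", ["Earth", "Jupiter", "Mars", "Venus"]
))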

Didn’t Gemini Ultra just beat human expert level on the MMLU?

It did, last week. Then Medprompt+ beat it.

That’s only one benchmark.

Yeah, about that.

Microsoft’s prompting innovations pushed GPT-4’s scores beyond the results Google has reported for Gemini Ultra, which isn’t expected to launch until next year.

Here are the results of several of the most popular benchmarks:

A LITTLE SOMETHING EXTRA

OpenAI & someone else 🫣 💔

So, remember the real-life soap opera involving OpenAI last month?

Now that the dust has settled… OpenAI is ready to get hurt again. ❤️‍🩹

It has partnered with Axel Springer to integrate real-time news into AI systems like ChatGPT.

Real-time news, like Grok?

Eh, sort of.

Axel Springer is a global publishing giant that owns Politico, Business Insider, Bild, Welt, and more news outlets.

How will it work?

ChatGPT will provide summaries of news articles, including premium content previously behind paywalls.

Its answers will include links and attribution to original articles, promoting transparency and driving traffic to Axel Springer's websites.
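
Neither company has published implementation details, but the pattern reads like standard retrieval-augmented generation: fetch the relevant articles, summarize them, and append linked citations. A purely illustrative sketch, with every name and the summarize stub hypothetical:

from dataclasses import dataclass

@dataclass
class Article:
    title: str
    url: str
    body: str

def summarize(text: str) -> str:
    # Hypothetical stand-in for an LLM summarization call.
    return text[:80] + "…"

def answer_with_sources(question: str, articles: list[Article]) -> str:
    # Summarize the retrieved articles, then append a linked source list
    # so the answer attributes and drives traffic back to the publisher.
    summary = summarize(question + "\n" + " ".join(a.body for a in articles))
    sources = "\n".join(f"- {a.title}: {a.url}" for a in articles)
    return f"{summary}\n\nSources:\n{sources}"

print(answer_with_sources(
    "What happened in tech policy today?",
    [Article("Example headline", "https://example.com/story", "Lawmakers agreed…")],
))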

Won’t that mean less traffic to Axel Springer’s sites?

OpenAI claims Axel Springer will gain increased visibility and readership while ChatGPT’s knowledge will contain current, factual news.

Axel Springer claims that the partnership secures their content’s relevance in the AI era.

A win-win, apparently.

MEMES FOR DESSERT

YOUR DAILY MUNCH

Tools

Lovo — Generate lifelike human voices for podcasts or videos. [Sponsor]

notdiamond-0001 — Evaluates whether to send queries to GPT-3.5 or GPT-4.

Hexus AI — Like Spotify Wrapped, but for your ChatGPT usage this year.

BoldDesk — AI-powered workflow automation and customer service tool that’s half the price of Zendesk, Freshdesk, and more.

Think Pieces

Will AI replace lawyers for legal advice? The rapid advancement of LLMs makes this more viable than ever.

Microsoft released Phi-2. It’s a new small language model with 2.7 billion parameters.

An AI system made from human brain cells understands speech? It learned to recognize people’s voices with up to 80% accuracy.

Startup News

The University of Tokyo connected a humanoid to GPT-4. The robot, Alter3, sends prompts to GPT-4, turns the instructions it gets back into code, and then performs the actions.

Essential AI raised $56.5 million to develop an “enterprise brain.” The founders were authors of the famous “Attention Is All You Need” paper on Transformers.

Together AI and Cartesia AI released a new model. It’s trained on 600 billion tokens and built on the Mamba architecture.

Research

Sherpa3D — a text-to-3D framework that leverages both 2D and 3D diffusion models to create multi-view 3D content from text prompts.

Beyond Human Data — a method that outperforms human-trained models on MATH reasoning and APPS coding benchmarks.

FreeInit — a method that substantially enhances temporal consistency in video diffusion models.

TWEET OF THE DAY

Tesla released a teaser for Optimus Gen 2.

Tag us on Twitter @BotEatBrain for a chance to be featured here tomorrow.

AI ART-SHOW

Until next time 🤖😋🧠

What'd you think of today's newsletter?
