Project Eureka: NVIDIA's self-teaching research bot
PLUS: AI-related goodness
Good morning, human brains. Welcome back to your daily munch of AI news.
Here’s what’s on the menu today:
Casually slurp beer while AI trains your robot 🦾 📝
NVIDIA open-sourced its AI agent that creates algorithms for robots.
Would you let Meta’s robot tuck you in tonight? 💁♀️ 🤖
Meta’s released three new updates for future live-at-home robots.
What if DeepMind assessed AI’s social and ethical risks? 👨🔬 🚨
It proposed a new framework that it claims is superior to previous methods.
Why teach your robot? Just use AI 🦾 📝
In September, we reported on robot bartenders in Las Vegas. The Tipsy Robot uses two AI-powered robotic arms to pour your drinks.
Earlier this month, we covered an AI that quickly designed a walking robot. Northwestern University’s AI system designed a robot from scratch in seconds.
I want to make a robot.
You’re in luck. NVIDIA Research unveiled Eureka, an AI agent that writes algorithms to teach robots new skills.
You can use it right now: NVIDIA has open-sourced the code.
Here it’s teaching itself to spin a pen:
How does it work?
Eureka uses GPT-4 to write the code that teaches robots new skills.
The best part? Eureka does this all by itself, without any human help, and it writes rules tailored to each individual robot and task.
What does GPT-4 have to do with it?
It taps GPT-4’s code-writing abilities to generate reward functions: the scoring rules that tell a robot how well it’s doing, so it can learn from its mistakes.
Then, it plugs those reward functions into reinforcement learning to speed up the robot’s skill-learning process.
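For the curious: here’s a minimal Python sketch of that loop. This is not NVIDIA’s actual code — in Eureka, the reward-function string would come from GPT-4 (prompted with the simulator’s source and a task description), and the observation keys (`tilt`, `spin_speed`) are made up for illustration.

```python
# Sketch of Eureka's core idea: an LLM writes a reward function as
# Python source, which is compiled and dropped into an RL loop.

# In Eureka this string would be generated by GPT-4; it's hardcoded
# here, and the observation keys are hypothetical.
llm_generated_reward = """
def reward(state):
    # Reward the pen for staying upright and spinning fast.
    upright_bonus = 1.0 - abs(state["tilt"])
    spin_bonus = 0.1 * state["spin_speed"]
    return upright_bonus + spin_bonus
"""

def compile_reward(source):
    """Turn the LLM's source string into a callable reward function."""
    namespace = {}
    exec(source, namespace)
    return namespace["reward"]

reward_fn = compile_reward(llm_generated_reward)

# Eureka's outer loop then trains with the candidate reward, measures
# task success, and feeds the results back to the LLM to refine it.
def evaluate(reward_fn, rollouts):
    """Average reward over a batch of simulated observations."""
    return sum(reward_fn(s) for s in rollouts) / len(rollouts)

rollouts = [
    {"tilt": 0.1, "spin_speed": 2.0},
    {"tilt": 0.4, "spin_speed": 5.0},
]
score = evaluate(reward_fn, rollouts)
```

The key trick is that the reward function is ordinary code, so the LLM can iterate on it the same way a human engineer would.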
What can I teach it to do?
The sky’s the limit.
In tests across 29 environments and 10 types of robots, Eureka wrote better reward functions than human experts.
On 80% of tasks, robots trained with Eureka’s rewards performed over 50% better.
Why open your own drawer?
FROM OUR PARTNERS
Build an AI ChatGPT for your website in five minutes.
Wonderchat helps you build your own ChatGPT-powered chatbots in 5 minutes.
Wonderchat empowers you to:
🚀 Build chatbots trained on website links and PDF files in 5 minutes.
🚀 Automate up to 70% of your support queries from your site visitors.
🚀 Deploy multilingual chatbots that provide 24/7 support in over 80 languages.
🚀 Integrate with over 5,000 apps such as Slack and Microsoft Teams via Zapier 👇
BUZZWORD OF THE DAY
Let Meta’s robots sleep in your bed 💁♀️ 🤖
Later that day, Meta unveiled three breakthroughs for live-at-home robots: Habitat 3.0, HSSD-200, and HomeRobot.
iRobot meets The Sims.
Let’s start with Habitat 3.0.
You got it, chief.
Habitat 3.0 is Meta’s new simulator that teaches robots and humanoid avatars to work together in homes, allowing them to learn tasks like cleaning through human interaction.
Not creepy at all. So what’s HSSD-200?
It’s a massive 3D dataset with over 18,000 artist-designed objects. It helps AI agents navigate interiors more accurately while using fewer resources than previous datasets.
Delightful. What’s HomeRobot all about?
HomeRobot combines hardware and software, which enables robots to perform various tasks in both simulated and real-world environments.
Bot, turn the lights down low. 🤖 🔥
Want to learn more about AI-powered robotics? Check out these previous editions of Bot Eat Brain:
A LITTLE SOMETHING EXTRA
How does DeepMind assess AI risk? 👨🔬 🚨
Last Wednesday, we reported on 2023’s AI safety policies. We took an in-depth look at this year’s AI policy-making headlines.
The same day, Google DeepMind proposed its framework to assess AI’s risks. In particular, it tackles the social and ethical risks of AI systems.
Google wants the best for me?
Eh, DeepMind suggests a three-layered evaluation approach:
Capability: How likely is the AI to produce harmful or incorrect outputs?
Human Interaction: How do real people use the AI, and does it work as planned?
Systemic Impact: What risks arise when lots of people start using AI, especially concerning larger social structures?
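The three layers above boil down to a checklist: a system isn’t fully assessed until every layer has findings. A toy sketch — the layer names follow DeepMind’s framework, but the report logic is purely illustrative:

```python
# Toy representation of DeepMind's three-layered risk evaluation.
LAYERS = (
    "capability",         # Can the model produce harmful or wrong outputs?
    "human_interaction",  # How do real users actually use it?
    "systemic_impact",    # What risks appear at broad, societal scale?
)

def risk_report(findings):
    """Flag which evaluation layers still lack findings."""
    missing = [layer for layer in LAYERS if layer not in findings]
    return {"complete": not missing, "missing_layers": missing}

# A capability-only audit (the norm today, per DeepMind's critique)
# leaves two layers unexamined.
report = risk_report({"capability": "rarely produces harmful text"})
```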
So, this framework is better than the previous ones?
The research team looked at how people currently check if AI is safe and found some big problems. They claim there are three main issues:
Context: Most only look at what AI can do on its own, without considering how people use it or its bigger effects on society.
Risk-Specific Evaluations: Most only look for obvious problems and miss more subtle dangers.
Multimodality: Most only consider AI that works with text, ignoring models that use images, audio, or video, each of which carries its own unique risks.
MEMES FOR DESSERT
YOUR DAILY MUNCH
Is AI’s threat to humans inevitable? Meta’s chief AI scientist claims it will never happen.
How to fine-tune ChatGPT. Specifically, it covers why the popular “RAG” method is often unnecessary and counterproductive.
The U.S. government’s “Cancer Moonshot” project. It aims to utilize AI to cut cancer deaths by 50%.
OpenAI’s valuation will reach $80 billion. Thrive Capital is leading a deal to buy shares from OpenAI’s employees.
Imbue gets $12 million in funding. The money will go to creating practical AI systems that can reason and perform real-world tasks.
Microsoft’s CEO releases his annual letter to shareholders. He mentions a “new era” in AI, how other companies use Microsoft’s products, and more.
LAMP — a text-to-video generation framework to balance generation freedom and training costs.
AutoMix — a method that leverages LLMs for more cost-effective and accurate problem-solving.
Self-RAG — a new framework that enhances LLMs’ adaptive retrieval and self-reflection to improve their accuracy.
Jumble Journal — an AI-powered journal for your mental health.
Autotab — create and train AI agents directly in your browser.
Sales Sparrow — an open-source AI sales assistant for Salesforce.
Pitch Your Idea — up your sales-pitch game by practicing it with AI assistants.
If you like Bot Eat Brain there’s a good chance you’ll like this newsletter too:
👨 The Average Joe — Market insights, trends, and analysis to help you become a better investor. We like their easy-to-read articles that cut right to the meaty bits.
TWEET OF THE DAY
An interesting look at how you can unintentionally make your LLM’s “personality” more biased.
Tag us on Twitter @BotEatBrain for a chance to be featured here tomorrow.
Until next time 🤖😋🧠
What'd you think of today's newsletter?