
NVIDIA's LATTE3D makes 3D models in 400 milliseconds

PLUS: New Humane Ai Pin deets


Good morning, human brains, and welcome back to your daily munch of AI news.

Here’s what’s on the menu today:

  • Instantly make complete, robust 3D models 🤯 💠

    NVIDIA unveiled LATTE3D, which generates 3D shapes in 400 milliseconds.

  • Emad Mostaque resigned 🤦‍♂️ ✌️

    Stability AI’s CEO, co-founder, and board member left the company.

  • New Humane Ai Pin deets 🤖 📌

    Humane announced shipping details, a vision feature, and more.


Make 3D models in under a second 🤯 💠

On Thursday, NVIDIA unveiled LATTE3D. It converts text prompts into high-quality 3D shapes in milliseconds.

What does it do?

It creates robust, detailed 3D models from text prompts in 400 milliseconds.

Why would I use this?

It’s mainly designed to create animals and everyday objects, but it can be adapted to train on any data type. This allows you to create models for landscape design, home simulation, robotics training, and more.

So, what?

Last year, it took an hour to generate 3D models of this quality. LATTE3D allows you to rapidly materialize any idea in a usable, sharable 3D format.

“The current state of the art is now around 10 to 12 seconds. We can now produce results an order of magnitude faster, putting near-real-time text-to-3D generation within reach for creators across industries.”

Sanja Fidler, NVIDIA’s VP of AI Research

What’s under the hood?

It was trained on NVIDIA A100 Tensor Core GPUs, using ChatGPT to generate diverse training prompts. It uses a two-stage pipeline that combines volumetric and surface-based rendering to quickly produce detailed, textured meshes.

What are some other 3D tools?

Back in October, we reported on NVIDIA’s Masterpiece X. It’s a text-to-3D animation playground made in collaboration with Masterpiece Studio.

A week later, we covered 3D-GPT. It leverages LLMs to create 3D models and works with the popular 3D modeling software, Blender.

Last week, we reported on Stability AI’s Stable Video 3D. It leverages video diffusion models to generate multi-view videos and 3D meshes from a single image.

Curiosity is a compass. Follow it fearlessly.

🚀 Elevate your academic exploration

Say goodbye to the hassle of navigating endlessly through academic papers. Instead, streamline your research with SciSpace.

😰 Drowning in documents?

Navigate through 200M+ papers, upload your own PDFs, and get tailored summaries and explanations with SciSpace.

😌 Turn complexity into clarity.

Breeze through the most complex academic texts. Translate the most elaborate material into simple, understandable language.

📚 Loved by millions of researchers worldwide.

From Harvard to Stanford, join a global community that’s revolutionizing how research is done.

Get 40% off on an annual plan with BOB40 and 20% off on a monthly plan with BOB20.


Emad is outie 🤦‍♂️ ✌️

On Saturday, Emad Mostaque resigned from Stability AI. He was the CEO, co-founder, and a board member of the company.

What happened?

In a press release, Stability AI announced that Emad resigned and the COO and CTO would serve as interim co-CEOs.

Why did he resign?

Emad claims he stepped down to combat centralized AI.

“I am proud two years after bringing on our first developer to have led Stability to hundreds of millions of downloads and the best models across modalities. I believe strongly in Stability AI’s mission and feel the company is in capable hands. It is now time to ensure AI remains open and decentralised.”

Emad Mostaque, former CEO of Stability AI

Is that the real reason?

You be the judge. Back in June, we reported on Stability’s executive retention problem. The COO and the Head of Research both quit following lawsuits.

In July, we covered Stability’s co-founder’s lawsuit against Emad. The co-founder claims Mostaque deceived him into selling his 15% stake for $100.

Two weeks ago, we covered Midjourney’s ban on Stability AI employees. Midjourney claims Stability AI caused a 24-hour outage by attempting to scrape its data.


Want a byte-sized version of Hacker News? Try TLDR’s free daily newsletter.

TLDR covers the most interesting tech, science, and coding news in just 5 minutes. No sports, politics, or weather.


New Humane Ai Pin deets 🤖 📌

On Wednesday, Humane unveiled more information about its Ai Pin. It ships out at the end of March, features a new vision capability, and more.

When is it shipping out?

Humane claims priority orders will arrive by April, and new orders are expected to be delivered in May.

What does the vision feature do?

It captures photos and videos and allows you to gather information about objects you interact with.

Anything else?

Humane announced future updates, including gesture-based unlocking, Google Calendar integration, AI agents for web browsing, and more.

I want to learn more about this thing.

In October, we covered the Ai Pin’s debut at Paris Fashion Week. Naomi Campbell wore it at Coperni’s 2024 Spring/Summer fashion show.

In November, we reported on Humane’s Ai Pin launch announcement. They said it aims to integrate technology into daily life without a screen.

Last month, we covered the Ai Pin’s launch delay. Humane also announced a new partnership with South Korea’s largest mobile provider.



Claude Investor — an open-source investment analyst powered by Claude 3.

Glossarie — read your favorite pieces of literature to learn new languages.

No-Code Leaderboard — a worldwide, no-code development ranking platform.

Butternut AI — an intuitive, fast, versatile AI website builder.

Think Pieces

Is AI more likely to change your mind than another person? In a study, personalized chatbots achieved persuasion rates up to 81.7% higher than humans.

Why did the US propose to invest $8.5 billion in Intel? The White House aims to fund Intel’s domestic chip manufacturing.

Here’s why Apple is considering buying Baidu’s generative AI. The goal is to use Baidu’s AI in Chinese Apple devices.

Startup News

Microsoft paid Inflection $650 million. The aim is to license Inflection’s AI models for Microsoft Azure and hire the majority of Inflection’s team.

Sakana AI unveiled its Evolutionary Model Fusion method. It’s a technique for creating AI models inspired by natural selection.

Elon Musk says Neuralink has cured blindness in monkeys. He announced that Blindsight will be Neuralink’s next product.


Champ — a human image animation method whose name stands for “Controllable and Consistent Human Image Animation with 3D Parametric Guidance.”

VidLA — a method that outperforms previous video-language alignment techniques by utilizing a two-tower architecture.

SiMBA — an architecture that enhances the stability and performance of sequence modeling.



We know AI models can solve complex problems. Why and how? Even the top researchers don’t really know.

Tag us on Twitter @BotEatBrain for a chance to be featured here tomorrow.


Until next time 🤖😋🧠