
Amazon's virtual try-on tool, Diffuse To Choose

PLUS: AI gives better gifts than you


Good morning, human brains, and welcome back to your daily munch of AI news.

Here’s what’s on the menu today:

  • Waste more money on clothes from Amazon 🙋‍♀️ 👗

    Amazon’s new tool allows you to virtually try on clothing before buying.

  • Get your ChatGPT to stop slacking 🥴 💻

    OpenAI announced new models, updates, API tools, and more.

  • AI gives better gifts than you 🤤 🎁

    Etsy’s Gift Mode suggests gifts based on interests, personality, and more.


That dress is so you 🙋‍♀️ 👗

In November, we reported on Amazon’s new AI chips. It unveiled its Trainium2 and Graviton4 chips in a bid to compete with NVIDIA.

3 weeks ago, we covered Amazon’s clothes shopping tools. It released 4 tools to reduce the number of clothing items returned to Amazon.

2 weeks ago, we reported on Amazon’s Fire TV update. It allows you to create AI-generated art from your Amazon devices.

Are we going shopping?

I’m so there. On Wednesday, Amazon introduced Diffuse to Choose (DTC). It allows you to virtually try on clothing items.

What does it do?

It allows you to integrate product images into your personal photos. It uses a diffusion-based inpainting model to manipulate the details of any scene you provide.

So, it shows how I look in a shirt?

Yes, and not just clothing: it also works with furniture, decor items, and more.

What’s under the hood?

It uses a U-Net Encoder to inject details into the diffusion process, which enhances the realism of the image. It achieves state-of-the-art performance and has been validated on both proprietary and public datasets.
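Diffusion inpainting of this kind conditions the model on the original scene, a mask marking where the item should go, and a pixel-level "hint" of the product. Amazon hasn't released DTC's code yet, so here's a minimal sketch of just the hint-construction step with NumPy, using toy arrays for the scene, mask, and product crop (the function name and array shapes are illustrative assumptions, not DTC's actual API):

```python
import numpy as np

def build_inpainting_hint(scene, mask, product):
    """Paste the product crop into the masked region of the scene.

    scene:   (H, W, 3) float array, the user's photo
    mask:    (H, W) boolean array, True where the item should go
    product: (H, W, 3) float array, product image resized to the scene

    Returns the conditioning image that a diffusion inpainting model
    would then refine into a realistic composite.
    """
    hint = scene.copy()
    hint[mask] = product[mask]  # boolean indexing copies masked pixels
    return hint

# Toy 2x2 example: place one "red" product pixel into the top-left corner.
scene = np.zeros((2, 2, 3))
product = np.ones((2, 2, 3)) * np.array([1.0, 0.0, 0.0])
mask = np.array([[True, False], [False, False]])
hint = build_inpainting_hint(scene, mask, product)
```

The interesting part of DTC is what happens after this step: the U-Net encoder injects the product's fine-grained details into the denoising process so they survive into the final image.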

That’s awesome, how do I try it out?

You can’t yet, but Amazon says it will release a demo and its code soon.

Knowledge is Power. Share it Seamlessly.

Ditch the documentation drama. 😰

Instead, transform even the most complex instructions into captivating video tutorials in minutes with guidde.

🤬 Fed up with repeating the same thing over and over?

Say “sayonara” to wasted time and money; let guidde do the talking for you.

🚀 Turbocharge your training process.

Create effortless, engaging, effective guides 11x faster.

📲 Share knowledge effortlessly.

Embed or share your guides anywhere, ensuring your team stays in sync.

🌟 No cinematography experience? No worries.

guidde makes you a documentation wizard without the hassle.

And it’s a web app – no downloads needed.

The best part? It’s 100% free.


ChatGPT isn’t lazy anymore? 🥴 💻

In December, we reported on OpenAI’s Preparedness Framework. The goal was allegedly to address safety concerns by evaluating risks for advanced models.

A couple of weeks later, we covered OpenAI’s blog post on journalism. It was a direct response to its ongoing lawsuit with The New York Times.

A week after that, we reported on OpenAI’s new GPT store. We gave a step-by-step guide on how to discover and share GPTs.

Last week, we covered OpenAI’s new partnership with Arizona State University. It integrated ChatGPT into ASU’s teaching, research, and more.

More OpenAI stuff?

You got it. On Thursday, OpenAI unveiled new updates and upcoming models. It includes two new models, discounted API access, and more.

GPT-5 is here?

Dream on. OpenAI updated GPT-3.5 Turbo and cut the cost of its API by 50%. The updated model is more accurate and fixes a text-encoding bug.

What about GPT-4?

OpenAI announced a new GPT-4 Turbo preview model that’s designed to reduce its “laziness.” It has vision capabilities and will be coming out soon.

What are the new models?

Two text embedding models called text-embedding-3-small and text-embedding-3-large. Text embedding allows AI models to perform vital functions like searching, clustering, recommending, and more.
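Embeddings turn text into vectors, and tasks like search and recommendation boil down to comparing those vectors, usually with cosine similarity. In practice you'd fetch vectors from OpenAI's embeddings endpoint (e.g. with model `text-embedding-3-small`); here's a minimal sketch of just the comparison step with hand-made toy vectors, so no API key is needed (the 3-dimensional vectors are stand-ins — real embeddings have hundreds of dimensions):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity: ~1.0 for same direction, ~0.0 for orthogonal."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "embeddings" standing in for vectors returned by the API.
query = [0.9, 0.1, 0.0]
doc_about_same_topic = [0.8, 0.2, 0.0]
doc_off_topic = [0.0, 0.1, 0.9]

# The higher-scoring document wins a search or recommendation ranking.
print(cosine_similarity(query, doc_about_same_topic))  # close to 1
print(cosine_similarity(query, doc_off_topic))         # close to 0
```

Clustering works the same way: group vectors whose pairwise similarity is high.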

Anything else?

It also introduced a new, free version of its Moderation API for developers. It’s called text-moderation-007, and it’s designed to identify potentially harmful text.


Microns.io is a newsletter to help you discover profitable and bootstrapped startups for sale:


Take the thought out of thoughtful 🤤 🎁

Back in July, we reported on Shopify Sidekick. It’s an ecommerce assistant that helps you perform virtually any task within Shopify.

In September, we covered eBay’s Magical Listing Tool. It allows you to upload an image and use AI to generate a title, description, and more.

Let me guess, Facebook Marketplace?

Wrong. On Wednesday, Etsy launched Gift Mode. It’s a new feature that recommends gifts based on the recipient’s interests, personality, and more.

How does it work?

Just specify who you’re shopping for, what occasion it’s for, what they like, and more. Etsy shows you various personas, like The Cat Lover or The Adventurer, to help you select a gift.

Sweet, that’s it?

Etsy will also send a teaser email to whoever’s getting the gift. You can choose whether it will reveal the gift or not, and you can write them a custom message.

Let me guess, I have to wait to use it… 🙄

Actually, you can use it right now.



Brainner — a recruitment tool that sorts candidates, screens resumes, and more.

Steve 2.0. — a versatile text/audio-to-video tool.

Startilla — an idea validation and pitching tool for business development.

Findr — centralizes data from Slack, Notion, and more into one interface.

Think Pieces

Here’s why Microsoft is shifting its focus to smaller AI models. It aims to create SLMs (small language models) that match GPT-4’s quality.

Taylor Swift deepfakes get millions of views on X: the controversy they caused and the regulations they may bring.

What you need to know about the new Center for Generative AI. The University of Texas is building one of the most powerful GPU clusters in academia.

Startup News

NVIDIA unveiled RTX Video HDR. It leverages AI to enhance low-res elements, so you can remaster your favorite old games.

Amazon announced AI pharmacy enhancements. It plans to use Bedrock and SageMaker to create apps and make your experience more convenient.

PayPal announced new upcoming AI products. They will help merchants make more sales, streamline the checkout process, and more.


Health-LLM — MIT and Google’s framework to leverage LLMs to tackle health prediction tasks using data acquired from wearable devices.

MM-LLMs — A look at the architecture of multimodal LLMs. It includes design formulations, training techniques, and more.

UNIMO-G — a framework that enhances text-to-image models by utilizing textual and visual inputs.



An AI-powered prosthetic hand that allows you to do push-ups, pick grapes, and more.

Tag us on Twitter @BotEatBrain for a chance to be featured here tomorrow.


Until next time 🤖😋🧠

What'd you think of today's newsletter?
