
Adobe unveils AI features and model integrations for Premiere Pro.

PLUS: Meta slides into your DMs

Good morning, human brains, and welcome back to your daily munch of AI news.

Here’s what’s on the menu today:

  • Your $100,000 film school degree is useless 📹 🥴

    Adobe announced AI features and models coming to Premiere Pro.

  • Bitty Bot with a Big Brain 🐜 ⚔️

    Hugging Face released an open-source, 8B parameter model.

  • A new “friend” in your DMs 🤫 👀

    Meta is testing the Meta AI chatbot in select Instagram inboxes.

MAIN COURSE

Your vids > Hollywood productions 📹 🥴

On Monday, Adobe unveiled AI features and third-party model integrations for Premiere Pro. These give you access to Sora, Pika, and Runway directly inside the editor.

I’m Adobe-sessed with these features. 🤭

What are the video features?

  • Generative Extend allows you to add frames to video clips, smooth transitions, and more.

  • Object Addition enables you to select, track, and replace objects in videos.

  • Object Removal lets you remove unwanted items from clips.

  • Text-to-video allows you to generate new footage with text prompts or uploaded images.

Why multiple models?

You can select Adobe Firefly, OpenAI Sora, and more to generate video. Each model has its own strengths and visual style, so you can pick the output that best fits your project.

Will my viewers know it’s AI?

Yes. Adobe stated that its content credentials watermark will be attached to any content produced with AI.

Bummer. Are there any new audio features?

  • Interactive fade handles allow you to quickly adjust audio fades.

  • Essential Sound badges let you easily categorize and tag audio files.

  • Effect badges enable you to identify audio clips effortlessly.

  • Redesigned waveforms allow you to intuitively adjust audio clip visibility.

I have Premiere Pro; I don’t see any of these.

The audio features will be generally available in May. The video models are coming “later this year.”

What else has Adobe been up to?

In January, Adobe announced new AI features in Premiere Pro. It began rolling out Enhanced Speech, Audio Categorization, and more.

In February, Adobe unveiled Project Music GenAI Control. It’s a music generation/editing tool. It claims to be like Photoshop, but for music.

In March, Adobe introduced its Express mobile app. It allows you to use Adobe’s generative AI features on Android and iOS.

MaxAI.me turns every click into AI magic. Instantly summarize articles, craft emails, and search the web with AI. #1 of the day and the week on ProductHunt.

SIDE SALAD

Small Fry, Big AI 🐜 ⚔️

On Friday, Hugging Face released Idefics2. It’s a multimodal model that outperforms larger models in visual question answering, data extraction, and more.

What’s the big deal?

Idefics2 is open-source, contains 8 billion parameters, and competes with much larger models like LLaVA-NeXT-34B and MM1-30B-chat.

So… It’s good?

Yes. Idefics2 sets new benchmarks in visual question answering, visual content description, story creation from images, document information extraction, and arithmetic operations based on visual input.

What’s under the hood?

It was trained with “The Cauldron,” a collection of 50 manually curated conversational fine-tuning datasets. It’s integrated with Hugging Face’s Transformers library, which lets you easily fine-tune it for many multimodal applications.

How can I use it?

Hugging Face released it under an open Apache 2.0 license. You can check it out on the Hugging Face Hub now.

Has Hugging Face done anything else lately?

Why, yes. Back in November, Hugging Face, Meta, and Scaleway announced a partnership aimed at advancing AI development in the French tech scene.

A LITTLE SOMETHING EXTRA

Meta slides into your DMs 🤫 👀

And not in the hot way… 🥵

On Friday, Meta started testing Meta AI on Instagram. It’s a chatbot that answers your questions, generates images with text prompts, and more.

What’s the point?

Meta AI claims its role is to assist users directly on Instagram. It describes itself as a supplement to the social experience on Instagram, providing quick answers and fresh perspectives when friends are not available.

Can I disable this?

We asked it. Here’s its response:

What else has Meta been messing around with?

In December, Meta unveiled Purple Llama. Its stated purpose is to establish trust in AI development by providing tools for building responsible AI.

Yesterday, we covered Meta’s OpenEQA. It’s a benchmark that assesses how AI understands physical spaces.

YOUR DAILY MUNCH

Tools

Evelyn — an open-source AI tutor that provides flashcards and more.

Flim — a database of photos, videos, TV series, documentaries, and more.

Packify — an AI-powered product packaging design tool.

Deblank Colors — an AI design tool offering color theory explanations, mockup visualizations, and more.

Think Pieces

Here’s how Meta is handling explicit AI-generated images. Meta’s Oversight Board announced investigations into Instagram in India and Facebook in the U.S.

Are venture capitalists growing skeptical of AI? The market may be more affected by tech companies leveraging existing AI products than new ones.

How AI is changing Formula 1 racing. A machine-learning-based system gives teams data on how to make their cars more aerodynamic.

Startup News

Adobe trained Firefly with pictures from Midjourney. Adobe previously stated it only trains Firefly with content it owns or from the public domain.

Anthropic’s CEO says major AI models will cost $10 billion to train. For reference, today they cost around $100 million.

ChatGPT’s growth rate decreases as Claude 3 gains popularity. Claude saw a 161% increase in March alone.

Research

SLIP vs. FLIP vs. CLIP vs. CLIP+Data — a comparison showing that CLIP+Data can match CLIP’s performance with half the training data.

RecurrentGemma — an open language model that leverages Google’s Griffin architecture.

Pre-training Small Base LMs with Fewer Tokens — a method that involves pre-training 1.5B parameter language models with only 1 billion tokens of data.

MEMES FOR DESSERT

TWEET OF THE DAY

OpenAI opened an office in Japan on Monday.

Tag us on Twitter @BotEatBrain for a chance to be featured here tomorrow.

AI ART-SHOW

Until next time 🤖😋🧠