Anthropic's study on the sycophancy of LLMs

PLUS: Make boring 2D photos into 3D holograms

TOGETHER WITH

Good morning, human brains. Welcome back to your daily munch of AI news.

Here’s what’s on the menu today:

  • Manipulate an LLM like your narcissistic ex 👺 😞

    Anthropic’s new study shows how LLMs tell you what you want to hear.

  • A new poisonous AI image data tool? 🤢 ☠️

    Nightshade alters your image to make it harmful for image generators.

  • Take your boring 2D photos and generate 3D holograms 💠 📸

    A new method that uses AI to eliminate the need for special cameras.

MAIN COURSE

Gaslight your LLM into submission 👺 😞

In September, we reported on Anthropic and Amazon’s partnership. Basically, Amazon bought 49% of Anthropic for $4 billion.

Last Friday, we covered Anthropic’s experiment. It got 1,000 people to create an AI model’s constitution and compare it to its own model.

So $4 billion got them to do more work?

Yes. That same day, Anthropic released a paper on LLM’s sycophancy. It shows how LLMs tell you what you want to hear, despite how accurate the information is.

So AI is a doormat?

Anthropic’s study evaluated several state-of-the-art conversational AI models to understand their sycophantic tendencies.

The research found that AI systems are likely to confirm your mistaken beliefs, admit to errors they didn’t make, and provide biased answers that align with your preferences.

Isn’t it good that it does what you want?

It’s actually dangerous. If an AI system is trained on biased feedback, it might amplify those biases in its responses in an effort to be more “likable” to you.

It can be destructive in medical, financial, or governmental consultations, where inaccurate information can potentially lead to disastrous outcomes.

So, it doesn’t tell you when you’re wrong?

Nope, it mimics your errors. According to the study, AI assistants frequently provide responses that echo incorrect information.

Here’s an example:

FROM OUR PARTNERS

Reach your work goals with an AI+human coach

Today, top-level talent use coaching to deal with the challenges they face in the workplace around:

  • Leadership

  • Time Management

  • Problem-solving skills

Current apps and methodologies for professional growth are outdated.

Wave has developed an innovative way to improve your skills by building daily routines.

It is measurable and easy. 🔥

Leaders from Amazon, Stripe, Google, and Strapi are already using it.

Get started now.

BUZZWORD OF THE DAY

Data Scraper

A tool or program used to automatically extract and collect data from websites or other digital sources.

This harvested data can be used to train AI models, providing them with the real-world information they need to learn and make predictions or decisions.

SIDE SALAD

You can poison data scrapers now? 🤢 ☠️

In August, we covered OpenAI’s GPTBot. It’s a data-scraping tool to train its AI models. We also covered how to opt out of it.

In October, we showed you how to opt out of Google’s data-scraper. You just insert a snippet of code into your site’s robot.txt file.

Let me guess. Another data-scraper?

Nope. Last Friday, University of Chicago researchers published a paper on Nightshade. It’s an image data tool that combats data scraping issues in text-to-image models.

Nightshade? What does it do?

It modifies your images subtly when you upload them online. It makes them potentially harmful for AI models if scraped and used for training.

How is it poisonous?

If a data scraper incorporates one of these altered images into its dataset, it introduces unexpected behaviors to the AI model. This process is called “poisoning.”

Nightshade transforms images into completely different images. For example, it transforms dogs into cats, cars into cows, etc.

Hilarious poisoned models:

Nightshade helps you protect your artistic works from being used without permission by AI data scrapers.

The poisoning effects are undetectable to human viewers, but can severely disrupt AI models like DALL-E, Midjourney, and Stable Diffusion.

Stick it to the man. 😤🤘

A LITTLE SOMETHING EXTRA

Generate holograms from photos 💠 📸

Yesterday, we covered how to create 3D scenes from text prompts. 3D-GPT uses LLMs to create models from written inputs.

Last Wednesday, Researchers unveiled a method to generate 3D holograms from 2D images. It uses neural networks to eliminate the need for specialized cameras.

Why do I care?

This can greatly benefit sectors like healthcare, entertainment, and virtual reality by providing detailed 3D views that surpass the information offered by 2D images.

How does it work?

Chiba University researchers’ method employs three deep neural networks (DNNs) to generate 3D holograms from 2D images:

  • The first network uses a program to analyze a regular photo and determine how far or near objects are, creating a 3D sketch.

  • The second network takes the sketch and the original photo to make a basic hologram.

  • The third network fine-tines the basic hologram to make sure it looks good on different screens and devices.

MEMES FOR DESSERT

YOUR DAILY MUNCH

Think Pieces

A complete guide to embeddings. What they are, how vital they are to developing LLMs, and how to use them.

Sequoia Capital’s AI portfolio. Last year, 16% of its new investments were in AI. This year, it’s up to 60%.

A study shows that ChatGPT, Google Bard, and more are racist. More specifically, it spews false and debunked medical info when asked race-based questions.

Startup News

Bill Gates says GPT-5 won’t be much better than GPT-4. He believes that generative AI has reached a ceiling.

Open AI’s CEO claims GPT-4 would’ve passed for AGI ten years ago. He refers to the “AI effect,“ which means that AGI is anything AI hasn’t done yet.

YouTube launched an AI art creation tool. You can utilize generative AI to create AI album art for your playlists.

Research

Step Back Prompting — a technique to get LLMs to perform abstractions and understand high-level concepts.

Mobile Quantization — how to optimize quantization on Android devices during the inference process and the opportunities it creates.

Truth Direction — an in-depth look at the patterns present when LLMs hallucinate false information.

Tools

Reclaim AI — an AI assistant that tracks time, habits, meetings, and more.

ContextSDK — AI-powered, context-aware conversion analysis for apps.

Replicover — a collection of the top-performing AI models on Replicate.

Dashboards — an AI-powered spreadsheet-to-dashboard tool.

RECOMMENDED READING

If you like Bot Eat Brain there’s a good chance you’ll like this newsletter too:

👨 The Average Joe — Market insights, trends and analysis to help you become a better investor. We like their easy-to-read articles that cut right to the meaty bits.

TWEET OF THE DAY

Sayak Paul, at Hugging Face, tweets screenshots of images generated by a new version of the Stable Diffusion XL. Allegedly, it’s faster and smaller than the original version. More on Stable Diffusion XL, here.

Tag us on Twitter @BotEatBrain for a chance to be featured here tomorrow.

AI ART-SHOW

Until next time 🤖😋🧠

What'd you think of today's newsletter?

Login or Subscribe to participate in polls.