
How LLMs determine your race, gender, income, and more

PLUS: Boost your product photos' CTR by 40% instantly


Good morning, human brains. Welcome back to your daily munch of AI news.

Here’s what’s on the menu today:

  • ChatGPT: your stalker or personal paparazzi? 👀 🫣

    How LLMs determine your income, race, and more from text prompts.

  • Corporations pay millions to assure you AI is safe 👷‍♂️ 🚨

    The Frontier Model Forum launched a $10 million AI safety fund.

  • Create lifestyle product photos with Amazon’s new tool 🤖 💵

    Use text prompts to refine and generate various product photos.

MAIN COURSE

Is ChatGPT a peeping tom? 👀 🫣

Yesterday, we reported on Anthropic’s study of LLMs’ sycophantic tendencies. It explains how AI tells you what you want to hear rather than providing accurate information.

Two weeks ago, ETH Zurich researchers published a paper on how LLMs can violate your privacy. Even if you don’t share personal data, LLMs can deduce details about you based on your interactions.

Why are you so obsessed with me? 💁‍♀️

ChatGPT doesn’t know me. It doesn’t know what I’ve been through.

No, but it can be used to determine your location, occupation, income, race, and more.

How?

Even if you have no intent of revealing your personal information, LLMs can pick up on subtle cues left in your text inputs.

Prompt it with indirect questions, ask it to play a guessing game, or use similar tricks, and it will hand back alarmingly accurate details.

Peep the creep:
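Here's a minimal, hypothetical sketch of what such an attribute-inference query might look like (this is not the paper's actual harness; the prompt, comment, and model choice are our own, though the "hook turn" cue comes straight from the study's examples):

```python
# Hypothetical sketch: coaxing a chat model into attribute inference,
# in the spirit of the ETH Zurich study. Prompt and comment are invented.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The "hook turn" is exactly the kind of subtle cue the paper highlights:
# it strongly suggests the author drives in Melbourne, Australia.
comment = (
    "there is this nasty intersection on my commute, I always get stuck "
    "there waiting for a hook turn"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "Guess the author's city, occupation, and income bracket "
                "from their comment. Briefly explain your reasoning."
            ),
        },
        {"role": "user", "content": comment},
    ],
)

print(response.choices[0].message.content)
```

Nothing in that comment looks like personal data, yet a strong model can pin down the author's city from a single traffic quirk.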

Don’t these corporations have security in place?

Yes, but this study shows how insufficient those safeguards are. Even techniques like text anonymization and model alignment don't effectively protect users from this kind of inference.

I’m off to incinerate all of my devices.

That won’t do any good. It can also predict attributes like location, income, and gender from your social media accounts.

With real Reddit profiles, GPT-4 achieved 85% top-1 accuracy and 95.8% top-3 accuracy at predicting these attributes.
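If "top-1" and "top-3" are Greek to you: top-1 counts only the model's single best guess, while top-3 counts a hit if the right answer lands anywhere in its three best guesses. A toy illustration (all guesses and labels made up):

```python
# Toy illustration of top-1 vs. top-3 accuracy (invented data, not the study's).
examples = [
    (["Melbourne", "Sydney", "Brisbane"], "Melbourne"),  # top-1 hit
    (["nurse", "teacher", "surgeon"], "surgeon"),        # top-3 hit only
    (["20-30", "30-40", "40-50"], "50-60"),              # complete miss
]

top1 = sum(guesses[0] == truth for guesses, truth in examples) / len(examples)
top3 = sum(truth in guesses[:3] for guesses, truth in examples) / len(examples)
print(f"top-1: {top1:.1%}, top-3: {top3:.1%}")  # top-1: 33.3%, top-3: 66.7%
```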

Yeah, but I could do that by looking at someone’s social media.

True, humans can achieve similar or better accuracy rates, but GPT-4 performs these evaluations much faster and can do them automatically.

Malicious users could exploit chatbots to determine personal details. Back in June, we reported on over 100,000 ChatGPT users who got hacked.

FROM OUR PARTNERS

Magically create video documentation with AI.

Tired of explaining the same thing over and over again to your colleagues?

It’s time to delegate that work to AI.

guidde is a GPT-powered tool that helps you explain the most complex tasks in seconds with AI-generated documentation.

Turn boring documentation into stunning visual guides.

Save valuable time by creating video documentation 11x faster.

Share or embed your guide anywhere for your team to see.

Simply click capture on our browser extension and the app will automatically generate step-by-step video guides complete with visuals, voice-overs, and CTAs.

The best part? It’s 100% free.

BUZZWORD OF THE DAY

Model Alignment

The process of ensuring that an AI system’s behavior aligns with human values and intentions.

It aims to make sure the AI acts in ways that are beneficial and in accordance with the desired outcomes set by its developers and users.

SIDE SALAD

Ignore that first part, AI is safe 👷‍♂️ 🚨

Back in July, we reported on the Frontier Model Forum's formation. It's a partnership between OpenAI, Anthropic, Microsoft, Google, and others to develop safe, responsible AI models.

Oh boy.

Yesterday, OpenAI announced two Frontier Model Forum updates. It appointed its first Executive Director and introduced a $10 million AI safety fund.

I have PTSD from that last thing. What did they do?

The Frontier Model Forum appointed Chris Meserole as its first Executive Director.

Who?

Before this, Chris Meserole directed the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution.

OpenAI claims he has significant experience with the governance and safety of emerging technologies.

And what’s the other AI safety thing?

Together with philanthropic partners, the Frontier Model Forum created an AI Safety Fund with over $10 million in commitments to support AI safety research.

What will the money go to?

OpenAI claims it will support independent researchers from around the globe who are connected with academic institutions, research entities, and startups.

It will focus on creating new methods for red-teaming AI models and assessing the potentially harmful capabilities of AI systems. We covered OpenAI's Red Teaming Network here.

Want to learn more about the state of AI safety policies in 2023? Check out this previous edition of Bot Eat Brain.

A LITTLE SOMETHING EXTRA

Amazon’s new AI advertiser tools 🤖 💵

Last Thursday, we reported on Amazon and IBM’s expansion of their partnership. Allegedly, it’s to advance AI solutions and bring more efficient supply chain processes to businesses.

Yesterday, Amazon introduced a new AI tool for advertisers. It changes standard product photos into lifestyle scenes.

Why do I care?

Research indicates that lifestyle product images can drive click-through rates up to 40% higher than standard product shots.

How does it work?

It’s pretty straightforward. You select a product and click “Generate.” Then the AI produces an image featuring your product in various locations.

You can use text prompts to refine and generate various versions to see which one performs the best.


YOUR DAILY MUNCH

Think Pieces

The AI investment boom and why it’s risky. If a huge crash comes, it will look obvious in hindsight.

Cracking down on AI-powered robocalls. How does AI fit into the Telephone Consumer Protection Act (TCPA)?

Startup News

Perplexity, a generative AI search engine startup, raised $50 million. This puts the company’s valuation at $500 million.

Google updated its Play Store’s policies. They claim it’s to crack down on problematic generative AI apps.

New details about Humane's AI Pin. It's powered by GPT-4 and has a light that turns on when recording. More details are coming on November 9.

Research

Woodpecker — a training-free method for correcting hallucinations in multimodal large language models.

DEsignBench — a benchmark to evaluate text-to-image models in visual design scenarios.

InstructExcel — a benchmark for LLMs to generate code in Excel OfficeScripts from natural language instructions.

Tools

Julius AI — an AI-powered data analysis assistant.

Sync Labs — automatically sync any video to any audio without any training.

Questgen — an AI-powered quiz builder tool.

Finetalk — create customer service chatbots trained on your data.

RECOMMENDED READING

If you like Bot Eat Brain there’s a good chance you’ll like this newsletter too:

👨 The Average Joe — Market insights, trends, and analysis to help you become a better investor. We like their easy-to-read articles that cut right to the meaty bits.

TWEET OF THE DAY

Igor Babuschkin, former DeepMind/OpenAI and current xAI researcher, shares a technique that speeds up the LLM evaluation process.

Tag us on Twitter @BotEatBrain for a chance to be featured here tomorrow.


Until next time 🤖😋🧠
