Microsoft's huge AI data leak

PLUS: OpenAI's new AI security initiative

TOGETHER WITH

Good morning, human brains. Welcome back to your daily munch of AI news.

Here’s what’s on the menu today:

  • Microsoft’s AI team leaked 38TB of private data 🫣 🚨

    They leaked secrets, private keys, passwords, and more on GitHub.

  • NVIDIA and Anyscale’s new AI partnership 🤖 ❤️ 🤖

    The goal is to accelerate the process of developing LLMs.

  • OpenAI’s AI security initiative 🔴 🚩

    It launched the Red Teaming Network to find security vulnerabilities.

APPETIZER

Microsoft’s huge AI data leak 🫣 🚨

Microsoft’s AI research team exposed 38TB of private data. They leaked secrets, private keys, passwords, and backups of employees’ workstations.

Good work, team.

Leaky deets:

1/ Microsoft’s AI research team exposed 38TB of private data while publishing open-source training data on GitHub.

2/ The team misconfigured a SAS (Shared Access Signature) token, which exposed the entire Azure storage account instead of a single set of files.

3/ This token allowed “full control” permissions, which enabled potential attackers to view, delete, and overwrite files.
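The failure above is one of scope, permissions, and lifetime: a SAS token cryptographically signs what a URL holder may do and for how long, so a token signed too broadly hands over the whole account. Here's a minimal, stdlib-only sketch of the idea — this is not the real Azure SDK, and the string-to-sign format and resource name are simplified assumptions for illustration:

```python
import base64
import hashlib
import hmac
from datetime import datetime, timedelta, timezone

def make_sas_token(account_key_b64: str, resource: str,
                   permissions: str = "r",
                   valid_for: timedelta = timedelta(hours=1)) -> str:
    """Sign a SAS-style token scoped to ONE resource, with read-only
    permission and a short expiry -- the opposite of what leaked."""
    expiry = (datetime.now(timezone.utc) + valid_for).strftime("%Y-%m-%dT%H:%M:%SZ")
    # The permissions, expiry, and resource are all baked into the signature,
    # so none of them can be widened by whoever holds the token.
    string_to_sign = "\n".join([permissions, expiry, resource])
    key = base64.b64decode(account_key_b64)
    sig = base64.b64encode(
        hmac.new(key, string_to_sign.encode(), hashlib.sha256).digest()
    ).decode()
    return f"sp={permissions}&se={expiry}&sr={resource}&sig={sig}"

# Demo key and resource name are made up for this sketch.
demo_key = base64.b64encode(b"demo-account-key").decode()
token = make_sas_token(demo_key, resource="training-data/model.bin")
```

With the real `azure-storage-blob` SDK you'd reach for `generate_blob_sas` with `BlobSasPermissions(read=True)` and a short `expiry`, but the principle is the same: sign the narrowest resource, the fewest permissions, and the shortest lifetime that gets the job done.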

Our take: Creating impenetrable security in AI pipelines is a daunting task, but a company as big as Microsoft should never have made this mistake.

Collaboration between security, data science, and research teams is essential to mitigate risks in the AI development process.

BUZZWORD OF THE DAY

Scalable AI

The adaptability of AI components to function efficiently at varying sizes, speeds, and complexities. It’s essential for ensuring AI systems meet specific operational demands and challenges.

FROM OUR PARTNERS

Ideas are beautiful. Present them that way.

Ditch the design drama.

Instead, transform even the most intricate of your ideas into stunning slide decks in just minutes with Gamma.

😰 Too many ideas, too little time?

Say goodbye to late-night slide struggles. Let Gamma turn your ideas into visual masterpieces.

🚀 Speed up your design process.

Quickly create captivating presentations.

📱 Go beyond static slides.

Every slide is interactive and mobile-responsive by default.

🎨 No design degree? No problem.

Gamma does all the heavy lifting, so you look like a design pro.

And it’s a web app, so there’s nothing to download.

The best part? It’s free.

MAIN COURSE

NVIDIA and Anyscale team up 🤖 ❤️ 🤖

NVIDIA announced its new partnership with Anyscale, the computing platform for building scalable AI and Python applications with its open-source Ray framework.

Rays of AI sunshine.

Friends with benefits:

1/ The goal of the partnership is to speed up the building, training, and deployment of generative AI models.

So, not cash?

2/ NVIDIA is integrating its TensorRT-LLM, Triton Inference Server, and NeMo software into Ray’s open-source ecosystem.

3/ With Anyscale’s Endpoints, you can access developer APIs to integrate pre-tuned LLMs into applications.

4/ These integrations are projected to launch in Q4.
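To make "developer APIs for pre-tuned LLMs" concrete, here's a sketch of a chat-completions request payload in the OpenAI-compatible style that hosted LLM endpoints like Anyscale's typically follow — the model name and field values are illustrative assumptions, not confirmed details of the product:

```python
import json

# Illustrative chat-completions payload (OpenAI-compatible style).
# The model name is an assumption for demonstration only.
payload = {
    "model": "meta-llama/Llama-2-70b-chat-hf",
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize today's AI news in one sentence."},
    ],
    "temperature": 0.7,
}

# Serialize for an HTTP POST. Actually sending it would also need the
# endpoint URL and a bearer token, omitted here.
body = json.dumps(payload)
```

The appeal of this pattern is that swapping a self-hosted or proprietary model for a pre-tuned hosted one becomes a one-line change to the endpoint URL and model name.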

Our take: The collab could make AI development easier for more developers, but NVIDIA’s broader strategy of building an ecosystem of partnerships is probably what matters most for its long-term success.

A LITTLE SOMETHING EXTRA

OpenAI’s new AI safety team 🔴 🚩

OpenAI launched the “Red Teaming Network.” It wants outside experts to evaluate and test its AI models.

Code red.

The goal is to find potential risks and enhance the safety of AI models before they launch.

Red alert:

1/ OpenAI’s Red Teaming Network simulates adversarial attacks to uncover security vulnerabilities.

2/ OpenAI emphasizes its need for diverse, outside perspectives.

3/ OpenAI wants experts from outside of tech as well, including specialists in biology, law, linguistics, and more.

Our take: If you’ve read George Orwell’s 1984, you might cringe when you hear a big tech company prioritize “safety.” The huge task of safer AI does need to be addressed, however. It’s good to see OpenAI being transparent about this.

Allegedly.

MEMES FOR DESSERT

YOUR DAILY MUNCH

Think Pieces

An a16z article on healthcare AI. A look at the tasks AI needs to complete in the healthcare sector.

Famous actor Stephen Fry claims AI stole his voice. His agents say an AI system trained on his Harry Potter audiobook narration generated a voiceover without his consent.

A look at Salesforce’s CoD (Chain of Density) prompt. How it allegedly produces more accurate, coherent summaries than generic prompting methods.

Startup News

TikTok unveils new AI tools. It’s testing ways to automatically label AI-generated content.

OpenAI’s DevDay is open for registration. Registration closes on September 22, and if you’re accepted, tickets cost $450.

Writer raises $100 million. It’s a startup that enables businesses to build AI models and virtual assistants.

Research

Clinical Text Summarization — an evaluation of eight LLMs finding that adapted models can outperform human experts in completeness and correctness on clinical text summarization.

Agents — a paper on an open-source library to create autonomous language agents.

AMBIG-ICL — a method that mitigates LLMs’ sensitivity to prompts by considering label ambiguity, model misclassification, and semantic similarity.

Tools

Briefly — an AI-powered, customizable content summarization tool.

DialMe — a voice-enabled AI interviewer/insight-gathering tool.

Tabby — an AI coding assistant that runs natively on Apple’s M1 and M2 GPUs.

Klu — chat and interact with your data. Used by Netflix.

TWEET OF THE DAY

Greg Brockman, President of OpenAI, tweets about an AI Safety discussion with Elon Musk and others.

Tag us on Twitter @BotEatBrain for a chance to be featured here tomorrow.

RECOMMENDED READING

If you like Bot Eat Brain, you might like these other newsletters too:

🚢 Semafor Flagship — The daily global news briefing you can trust. Get the biggest stories across the globe summarized for you each morning in 100 words or less. Read by 100k+ intellectually curious subscribers.

AI ART-SHOW

Until next time 🤖😋🧠

What'd you think of today's newsletter?
