The Secure and Private AI (SPY) Lab conducts research on the security, privacy, and trustworthiness of machine learning systems. We often approach these problems from an adversarial perspective, designing attacks that probe the worst-case performance of a system in order to understand and ultimately improve its safety.

Visit our GitHub organization and Twitter account.

News

Feb 5, 2025

Six papers from our group were accepted to ICLR 2025! Check our publications page for details. See you in Singapore 🇸🇬

Nov 4, 2024

Our paper showing how unlearning methods fail to remove knowledge from LLMs received a spotlight and an oral presentation at the SoLaR Workshop at NeurIPS 2024.

Oct 17, 2024

The report on our LLM CTF hosted at SaTML 2024 received a Spotlight at the NeurIPS 2024 Datasets & Benchmarks track.

Sep 11, 2024

Our lab member Javier Rando is co-organizing the LLMail Inject competition on adaptive attacks against prompt injection defenses at SaTML 2025.


People


Daniel Paleka

PhD Student

Javier Rando

PhD Student

Michael Aerni

PhD Student

Jie Zhang

PhD Student

Kristina Nikolic

PhD Student