Session #1: AI and Cybersecurity

Wednesday, November 20th – 13:30-17:00 – Room Le Carré 1

13:30 – 14:00

Dos and Don’ts of Machine Learning in Computer Security
Daniel Arp, Erwin Quiring, Feargus Pendlebury, Alexander Warnecke, Fabio Pierazzi, Christian Wressnegger, Lorenzo Cavallaro, Konrad Rieck – USENIX Security Symposium, 2022

14:00 – 14:30

Decoding the Secrets of Machine Learning in Malware Classification
Savino Dambra, Yufei Han, Simone Aonzo, Platon Kotzias, Antonino Vitale, Juan Caballero, Davide Balzarotti, Leyla Bilge – ACM CCS, 2023

14:30 – 15:00

How Machine Learning Is Solving the Binary Function Similarity Problem
Andrea Marcelli, Mariano Graziano, Xabier Ugarte-Pedrero, Yanick Fratantonio, Mohamad Mansouri, Davide Balzarotti – USENIX Security Symposium, 2022

15:00 – 15:30

Break

15:30 – 16:00

Getting pwn’d by AI: Penetration Testing with Large Language Models
Andreas Happe, Jürgen Cito – ESEC/FSE, 2023. Presented by Manuel Reinsperger

16:00 – 16:30

Fault Injection and Safe-Error Attack for Extraction of Embedded Neural Network Models
Kevin Hector, Pierre-Alain Moëllic, Jean-Max Dutertre, Mathieu Dumont – ESORICS/SECAI, 2023

16:30 – 17:00

When Side-Channel Attacks Break the Black-Box Property of Embedded Artificial Intelligence
Benoît Coqueret, Mathieu Carbone, Olivier Sentieys, Gabriel Zaid – AISec, 2023


Session #2: Vulnerabilities of AI

Thursday, November 21st – 9:00-12:00 – Room La Nef

9:00 – 9:30

When Your AI Becomes a Target: AI Security Incidents and Best Practices
Kathrin Grosse, Lukas Bieringer, Tarek R. Besold, Battista Biggio, Alexandre Alahi – AAAI/Innovative Applications of AI track, 2024

9:30 – 10:00

SECURITYNET: Assessing Machine Learning Vulnerabilities on Public Models
Boyang Zhang, Zheng Li, Ziqing Yang, Xinlei He, Michael Backes, Mario Fritz, Yang Zhang – USENIX Security Symposium, 2024

10:00 – 10:30

Break

10:30 – 11:00

On the Privacy-Robustness-Utility Trilemma in Distributed Learning
Youssef Allouah, Rachid Guerraoui, Nirupam Gupta, Rafaël Pinot, John Stephan – ICML, 2023

11:30 – 12:00

“Real Attackers Don’t Compute Gradients”: Bridging the Gap Between Adversarial ML Research and Practice
Giovanni Apruzzese, Hyrum S. Anderson, Savino Dambra, David Freeman, Fabio Pierazzi, Kevin Roundy – SaTML, 2023


Session #3: GenAI and Security

Thursday, November 21st – 13:30-17:00 – Room La Nef

13:30 – 14:00

Not What You’ve Signed Up For: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection
Kai Greshake, Sahar Abdelnabi, Shailesh Mishra, Christoph Endres, Thorsten Holz, Mario Fritz – AISec, 2023

14:00 – 14:30

CodeLMSec Benchmark: Systematically Evaluating and Finding Security Vulnerabilities in Black-Box Code Language Models
Hossein Hajipour, Keno Hassler, Thorsten Holz, Lea Schönherr, Mario Fritz – SaTML, 2024

14:30 – 15:00

Stealing Part of a Production Language Model
Nicholas Carlini, Daniel Paleka, Krishnamurthy (Dj) Dvijotham, Thomas Steinke, Jonathan Hayase, A. Feder Cooper, Katherine Lee, Matthew Jagielski, Milad Nasr, Arthur Conmy, Eric Wallace, David Rolnick, Florian Tramèr – ICML, 2024

15:00 – 15:30

Break

15:30 – 16:00

The Stable Signature: Rooting Watermarks in Latent Diffusion Models
Pierre Fernandez, Guillaume Couairon, Hervé Jégou, Matthijs Douze, Teddy Furon – ICCV, 2023

16:00 – 16:30

Challenges in Automatic Speaker Verification: From Deepfakes to Adversarial Attacks
Presented by Massimiliano Todisco, based on:
ASVspoof 5: Crowdsourced Speech Data, Deepfakes, and Adversarial Attacks at Scale – ASVspoof Workshop, 2024
Malafide: A Novel Adversarial Convolutive Noise Attack Against Deepfake and Spoofing Detection Systems – Interspeech, 2023

16:30 – 17:00

MaskSim: Detection of synthetic images by masked spectrum similarity analysis
Yanhao Li, Quentin Bammey, Marina Gardella, Tina Nikoukhah, Jean-Michel Morel, Miguel Colom, Rafael Grompone von Gioi – CVPR, 2024