Optional background reading:
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, and Colin Raffel. “Extracting Training Data from Large Language Models”. In USENIX Security Symposium, Vancouver, BC, Canada, August 2021. [USENIX]
Main reading for Tuesday, February 13th:
Milad Nasr, Nicholas Carlini, Jonathan Hayase, Matthew Jagielski, A. Feder Cooper, Daphne Ippolito, Christopher A. Choquette-Choo, Eric Wallace, Florian Tramèr, and Katherine Lee. “Scalable Extraction of Training Data from (Production) Language Models”. Preprint, November 2023. [arXiv]
Main reading for Thursday, February 15th:
Boyang Zhang, Xinlei He, Yun Shen, Tianhao Wang, and Yang Zhang. “A Plot is Worth a Thousand Words: Model Information Stealing Attacks via Scientific Plots”. In USENIX Security Symposium, Anaheim, CA, August 2023. [USENIX]
Candidate main reading:
Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramèr, Borja Balle, Daphne Ippolito, and Eric Wallace. “Extracting Training Data from Diffusion Models”. In USENIX Security Symposium, Anaheim, CA, August 2023. [USENIX]