Katherine Lee
Katherine Lee is a machine learning researcher, a senior research scientist at Google DeepMind, and head of the GenLaw Center, working on security, privacy, and memorization in generative AI models.
References
2023
- https://katelee168.github.io/
- QUOTE: I’m currently a senior research scientist at Google DeepMind and run the GenLaw Center. I study security and privacy in generative AI models and the legal implications those have. Specifically, I evaluate data extraction (memorization) in generative AI models and attacks (mis-aligning) for generative AI models.
Broadly, I’m interested in building machine learning systems we can trust. This means figuring out when models are untrustworthy and creating or discovering knobs to change their behavior.
2023
- (Nasr et al., 2023) ⇒ Milad Nasr, Nicholas Carlini, Jonathan Hayase, Matthew Jagielski, A. Feder Cooper, Daphne Ippolito, Christopher A. Choquette-Choo, Eric Wallace, Florian Tramèr, and Katherine Lee. (2023). “Scalable Extraction of Training Data from (Production) Language Models.” doi:10.48550/arXiv.2311.17035
2022
- (Chowdhery et al., 2022) ⇒ Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. (2022). “PaLM: Scaling Language Modeling with Pathways.” arXiv preprint arXiv:2204.02311.
2020
- (Raffel et al., 2020) ⇒ Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. (2020). “Exploring the Limits of Transfer Learning with a Unified Text-to-text Transformer.” The Journal of Machine Learning Research, 21(1).
2019
- (Raffel et al., 2019) ⇒ Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. (2019). “Exploring the Limits of Transfer Learning with a Unified Text-to-text Transformer.” arXiv preprint arXiv:1910.10683