I'm a postdoctoral researcher at NYU working with Kyunghyun Cho. I completed my PhD at the University of Toronto, while visiting MIT, advised by Marzyeh Ghassemi.
My general research interest is in developing methods to learn efficient and robust representations of discrete-valued sequence data (specifically natural language). I'm also interested in understanding and fixing the pathologies of these models once they are trained. My research broadly covers representation learning, interpretability, and out-of-domain robustness/generalization.
Previously, I spent summers interning at Prescient Design with Kyunghyun Cho, at Meta with Naman Goyal, and at Google with Narendran Thangarajan. I also spent a year at Facebook AI Research advised by Michael Auli. I completed my undergraduate degree in Computer Science, with minors in Mathematics and Music, at UCSD in 2018, advised by Julian McAuley and Zachary Lipton.
news
- August 1, 2024 Starting a new position as a postdoctoral researcher at NYU with Kyunghyun Cho!
- July 24, 2024 I'll be presenting a poster on our ICML work!
- June 4, 2024 I'm on the job market! I'm looking both for industry scientist roles as well as post-doc opportunities. Reach out to me if our research interests align!
- May 1, 2024 Our work on Measuring Stochastic Data Complexity with Boltzmann Influence Functions was accepted to ICML 2024!
- March 12, 2024 New preprint on Improving Black-box Robustness with In-Context Rewriting is out on arXiv!
- December 2, 2022 Hosted our workshop on Robustness in Sequence Modeling at NeurIPS 2022!
- November 30, 2022 Presented my work on If Influence Functions are the Answer, Then What is the Question? at NeurIPS 2022!