I'm a fifth-year PhD student at the University of Toronto, currently visiting MIT, advised by Marzyeh Ghassemi. My general research interest is in developing methods to learn efficient and robust representations of discrete-valued sequence data (specifically natural language). I'm also interested in understanding and fixing the pathologies of trained models. My research broadly covers topics in representation learning, interpretability, and out-of-domain robustness/generalization.
Previously, I spent summers interning at Prescient Design with Kyunghyun Cho, Meta with Naman Goyal, and Google with Narendran Thangarajan. I also spent a year working at Facebook AI Research, advised by Michael Auli. I completed my undergraduate degree in Computer Science with minors in Mathematics and Music at UCSD in 2018, advised by Julian McAuley and Zachary Lipton.
news
- March 12, 2024 New preprint, Improving Black-box Robustness with In-Context Rewriting, is out on arXiv!
- December 6, 2023 I'm on the job market and will be attending NeurIPS 2023! Reach out to me if you're interested in chatting about research or job opportunities.
- December 2, 2022 Hosted our workshop on Robustness in Sequence Modeling at NeurIPS 2022!
- November 30, 2022 Presented my work on If Influence Functions are the Answer, Then What is the Question? at NeurIPS 2022!