Lunch at 12:30pm, talk at 1pm, in 148 Fitzpatrick

Title: Democratizing Large Language Models via Personalized Parameter-Efficient Fine-tuning

Abstract: As large language models (LLMs) continue to evolve, tailoring these models to individual user preferences has emerged as a critical endeavor. This presentation will survey recent advances in LLM personalization and introduce our contribution, One PEFT Per User (OPPU), a novel approach that captures and stores user-specific behavior patterns and preferences within personal parameter-efficient fine-tuning (PEFT) modules. OPPU enhances LLMs’ capability to capture the nuances of user behavior, especially in complex contexts, and empowers users with model ownership, effectively mitigating customization constraints. Experiments on the LaMP benchmark across seven varied tasks showcase OPPU’s superiority over conventional prompt-based personalization methods. We will also examine the mechanisms behind OPPU’s success, including its adaptability to shifts in user behavior, its effectiveness across a spectrum of user engagement levels, its robustness in handling diverse user history data, and its versatility across different PEFT strategies.
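
For intuition, here is a minimal sketch of the "one PEFT module per user" idea using Hugging Face's peft library with LoRA. The base model choice, hyperparameters, and function names below are illustrative assumptions, not details from the talk or paper:

    # Sketch: one LoRA adapter per user on a shared, frozen base model.
    # Assumptions (not from the talk): gpt2 as base model, "adapters/<user>"
    # as the storage layout, and an elided standard fine-tuning loop.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, PeftModel, get_peft_model

    BASE = "gpt2"  # assumption: any causal LM works here
    tokenizer = AutoTokenizer.from_pretrained(BASE)

    def train_user_adapter(user_id: str, user_history: list[str]) -> str:
        """Fine-tune a lightweight LoRA adapter on one user's history;
        only the adapter weights are saved, the base model stays frozen."""
        model = AutoModelForCausalLM.from_pretrained(BASE)
        config = LoraConfig(r=8, lora_alpha=16,
                            target_modules=["c_attn"],  # GPT-2 attention proj
                            task_type="CAUSAL_LM")
        model = get_peft_model(model, config)  # only LoRA params are trainable
        # ... standard fine-tuning loop over `user_history` goes here ...
        out_dir = f"adapters/{user_id}"
        model.save_pretrained(out_dir)  # writes just the small adapter
        return out_dir

    def load_personalized_model(user_id: str) -> PeftModel:
        """Plug a user's personal adapter into the shared base at inference."""
        base = AutoModelForCausalLM.from_pretrained(BASE)
        return PeftModel.from_pretrained(base, f"adapters/{user_id}")

Because each adapter is small and separate from the shared base model, a user's module can in principle be stored, ported, or deleted independently, which is one way to read the abstract's point about model ownership.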

Bio: Zhaoxuan Tan is a first-year CSE PhD student at Notre Dame, advised by Prof. Meng Jiang. His research interests lie at the intersection of NLP and data mining, with a focus on user modeling; he is currently working on personalizing large language models. He has published multiple papers at top AI conferences, including NeurIPS, EMNLP, ACL, WWW, AAAI, and SIGIR.