Lunch at 12:30pm, (virtual) talk at 1pm, in 148 Fitzpatrick

Title: Understanding and Explaining Styles in NLP

Abstract: People use various styles, word choices, and communication strategies to express themselves more effectively. Large pretrained language models such as BERT are well known to achieve high performance in predicting linguistic styles (e.g., politeness, sentiment, emotions). But do these models learn stylistic cues the way humans perceive them? To answer this question, we developed Hummingbird, a new dataset of human perceptions of stylistic lexica built on top of benchmark datasets of linguistic styles. I will then discuss how these explanations can be used to develop a model with lexical explanation ability: StyLEx. StyLEx learns from human-annotated explanations of stylistic features, jointly learning to perform the task and to predict these features as model explanations. Our experiments show that StyLEx provides human-like stylistic lexical explanations without sacrificing sentence-level style prediction performance on both in-domain and out-of-domain datasets. When evaluated against human annotations, StyLEx's explanations show significant improvements in explanation metrics (sufficiency, plausibility), and human judges find them more understandable than those of a widely used saliency-based explanation baseline.

Bio: Shirley A. Hayati is a PhD student in computer science at the University of Minnesota, Twin Cities. She received her MS in language technologies from Carnegie Mellon University and her BS in computer science from Universitas Indonesia. Her current research centers on human-centered natural language processing, especially stylistic studies, explainable NLP, and computational social science. She has served as a reviewer for multiple ACL conferences and actively promotes diversity in tech.