Lunch at 12:30pm, (virtual) talk at 1pm, in 148 Fitzpatrick

Title: Towards reliable neural text generation

Abstract: Recent advances in large-scale neural language models have transformed the field of text generation, including applications like dialogue and document summarization. Despite its human-like fluency, the generated text tends to contain incorrect, inconsistent, or hallucinated information, which hinders the deployment of text generation models in real applications. I will review observations of such errors in current generation tasks, explain the challenges in evaluating and mitigating factual errors, and describe our recent attempts at addressing these problems. I will conclude with a discussion of future challenges and directions.

Bio: He He is an assistant professor at the Center for Data Science and the Department of Computer Science at New York University. Before joining NYU, she spent a year at Amazon Web Services and was a postdoc at Stanford University. She received her PhD from the University of Maryland, College Park. She is broadly interested in machine learning and natural language processing. Her current research interests include text generation, dialogue systems, and robust language understanding.