Andy Yang
Lunch at 12:30pm, talk at 1pm, in 148 Fitzpatrick
Title: On the Expressivity of Transformer Encoders
Abstract: Transformers have gained prominence in natural language processing (NLP), both in direct applications like machine translation and in pretrained models like BERT and GPT. Recently, empirical work has revealed significant limitations, reliance on heuristics, and perplexing behavior in transformer models. Formal investigation into these models' theoretical properties can therefore provide valuable insight into what they can and cannot do. In this talk, we explore recent developments toward understanding the formal expressivity of transformers.
Bio: Andy J Yang is a first-year PhD student in the NLP lab at Notre Dame, advised by David Chiang. He is interested in linguistics, model theory, machine learning, and their intersections. He hopes theoretical insights will enable researchers and engineers to reliably create helpful language-processing systems.