Noah Ziems
Lunch at 12:30pm, talk at 1pm, in 148 Fitzpatrick
Title: Open-Book Graph Fetching for Question Answering
Abstract: Pretrained language models (PLMs) have been shown to be surprisingly effective at a variety of downstream tasks, including question answering (QA). However, PLMs often must rely on information memorized from the training data to answer questions, leading to poor performance on tasks that require niche knowledge or commonsense reasoning. While knowledge graphs (KGs) have been shown to improve the performance of PLMs in QA, fetching a subgraph relevant to the question remains an open problem. Traditional approaches use node relevance scoring that relies on the memorized relations of a PLM in a closed-book setting, leading to noisy subgraphs. In this work, we show that fetching subgraphs in an open-domain setting with documents from a large corpus actually has no effect on downstream performance.
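To make the idea of subgraph fetching concrete, here is a minimal, purely illustrative sketch of retrieving a question-relevant subgraph from a KG by scoring nodes and keeping the top-k. The function name, the triple format, and the term-overlap scoring are assumptions for illustration; they are not the method from the talk, which instead scores relevance using a PLM.

```python
# Hypothetical sketch of node relevance scoring for subgraph fetching.
# The KG is a list of (head, relation, tail) triples; the scoring
# function (term overlap with the question) is illustrative only.

def fetch_subgraph(question_terms, kg_edges, top_k=2):
    """Score each KG node by overlap with the question, keep the
    top-k nodes, and return the edges among the kept nodes."""
    nodes = {n for edge in kg_edges for n in (edge[0], edge[2])}
    # Relevance score: count how many question terms appear in the node name.
    scores = {n: sum(t in n.lower() for t in question_terms) for n in nodes}
    kept = set(sorted(nodes, key=lambda n: -scores[n])[:top_k])
    return [e for e in kg_edges if e[0] in kept and e[2] in kept]

edges = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Berlin", "capital_of", "Germany"),
]
subgraph = fetch_subgraph(["paris", "france"], edges, top_k=2)
# Keeps only edges whose endpoints are both among the top-scored nodes.
```

A noisy scorer, as the abstract notes for closed-book PLM-based scoring, would keep irrelevant nodes and return a noisy subgraph; the quality of the score function is the crux of the problem.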
Bio: Noah Ziems is a first-year PhD student in the Computer Science and Engineering department, where he is advised by Dr. Meng Jiang. Noah's main research focus is NLP and question answering.