Hackerman Hall B17 @ 3400 N. Charles Street, Baltimore, MD 21218
In recent years, the field of Natural Language Processing has seen a profusion of tasks, datasets, and systems that facilitate reasoning about real-world situations through language (e.g., RTE, MNLI, COMET). Such systems might, for example, be trained to consider a situation where "somebody dropped a glass on the floor," and conclude it is likely that "the glass shattered" as a result. In this talk, I will discuss three pieces of work that revisit assumptions made by or about these systems. In the first work, I develop a Defeasible Inference task, which enables a system to recognize when a prior assumption it has made may no longer be true in light of new evidence it receives. The second work I will discuss revisits partial-input baselines, which have highlighted issues of spurious correlations in natural language reasoning datasets and led to unfavorable assumptions about models' reasoning abilities. In particular, I will discuss experiments showing that models may still learn to reason in the presence of spurious dataset artifacts. Finally, I will touch on work analyzing harmful assumptions made by reasoning models in the form of social stereotypes, particularly in the case of free-form generative reasoning models.
Rachel Rudinger is an Assistant Professor in the Department of Computer Science at the University of Maryland, College Park. She holds joint appointments in the Department of Linguistics and the Institute for Advanced Computer Studies (UMIACS). In 2019, Rachel completed her Ph.D. in Computer Science at Johns Hopkins University in the Center for Language and Speech Processing. From 2019 to 2020, she was a Young Investigator at the Allen Institute for AI in Seattle and a visiting researcher at the University of Washington. Her research interests include computational semantics, common-sense reasoning, and issues of social bias and fairness in NLP.