Rachel Rudinger (University of Maryland, College Park) “Not So Fast!: Revisiting Assumptions in (and about) Natural Language Reasoning”
3400 N. Charles Street, Baltimore, MD 21218
Abstract
In recent years, the field of Natural Language Processing has seen a profusion of tasks, datasets, and systems that facilitate reasoning about real-world situations through language (e.g., RTE, MNLI, COMET). Such systems might, for example, be trained to consider a situation where “somebody dropped a glass on the floor,” and conclude it is likely that “the glass shattered” as a result. In this talk, I will discuss three pieces of work that revisit assumptions made by or about these systems. In the first work, I develop a Defeasible Inference task, which enables a system to recognize when a prior assumption it has made may no longer be true in light of new evidence it receives. The second work I will discuss revisits partial-input baselines, which have highlighted issues of spurious correlations in natural language reasoning datasets and led to unfavorable assumptions about models’ reasoning abilities. In particular, I will discuss experiments that show models may still learn to reason in the presence of spurious dataset artifacts. Finally, I will touch on work analyzing harmful assumptions made by reasoning models in the form of social stereotypes, particularly in the case of free-form generative reasoning models.
Biography
Rachel Rudinger is an Assistant Professor in the Department of Computer Science at the University of Maryland, College Park. She holds joint appointments in the Department of Linguistics and the Institute for Advanced Computer Studies (UMIACS). In 2019, she completed her Ph.D. in Computer Science at Johns Hopkins University in the Center for Language and Speech Processing. From 2019 to 2020, she was a Young Investigator at the Allen Institute for AI in Seattle and a visiting researcher at the University of Washington. Her research interests include computational semantics, common-sense reasoning, and issues of social bias and fairness in NLP.