Introduction
Large-scale pre-trained language models have recently led to improvements across a range of natural language understanding (NLU) tasks (Devlin et al., 2019; Radford et al., 2019; Yang et al., 2019), but there is some scepticism about whether benchmark leaderboards represent the full picture (Kaushik and Lipton, 2018; Jumelet and Hupkes, 2018; Poliak et al., 2018). An open question is whether these models generalize beyond the samples seen during training.
In this paper, we examine how pre-trained language models generalize on the Winograd Schema Challenge (WSC).
The man couldn't lift his son because he was so heavy.
The man couldn't lift his son because he was so weak.
(a)

The men couldn't lift their sons because they were so heavy.
The men couldn't lift their sons because they were so weak.
(b)

Figure 1: An example pair from the Winograd Schema Challenge (a) and its perturbation (b). The pronoun resolves to one of the two referents, depending on the choice of the discriminatory segment. The perturbation in (b) pluralizes the referents and the antecedents.

Named after Terry Winograd, the WSC, in its current form, was proposed by Levesque et al. (2012) as an alternative to the Turing Test. The task takes the form of a binary reading comprehension test: a statement containing two referents and a pronoun (or a possessive adjective) is given, and the correct antecedent of the pronoun must be chosen. Examples are constructed carefully to have a preferred reading based on semantic plausibility rather than co-occurrence statistics. WSC examples come in pairs that are distinguished only by a discriminatory segment that flips the correct referent, as shown in Figure 1a. Levesque et al. define a set of qualifying criteria for instances, along with pitfalls to be avoided when constructing examples (see §3.2). Together, these ensure that an instance functions as a test of what they refer to as 'thinking' (or common-sense reasoning).
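As an illustration, the structure of the binary choice can be sketched as follows. This is a minimal sketch: the data structure, field names, and pronoun-marking convention are hypothetical and do not correspond to the official dataset format.

```python
from dataclasses import dataclass

@dataclass
class WSCInstance:
    # Hypothetical schema for illustration only.
    text: str          # sentence with the pronoun marked as "[he]"
    pronoun: str       # the ambiguous pronoun
    candidates: tuple  # the two possible referents
    answer: int        # index of the correct referent

def substitute(instance: WSCInstance, candidate: str) -> str:
    """Replace the marked pronoun with a candidate referent."""
    return instance.text.replace(f"[{instance.pronoun}]", candidate)

# The pair from Figure 1a: the discriminatory segment
# ("heavy" vs. "weak") flips the correct referent.
pair = [
    WSCInstance("The man couldn't lift his son because [he] was so heavy.",
                "he", ("the man", "his son"), 1),
    WSCInstance("The man couldn't lift his son because [he] was so weak.",
                "he", ("the man", "his son"), 0),
]

# Print both readings of each instance, marking the preferred one.
for inst in pair:
    for i, cand in enumerate(inst.candidates):
        marker = "*" if i == inst.answer else " "
        print(marker, substitute(inst, cand))
```

A system solving the task must select the semantically plausible reading; substituting each candidate for the pronoun, as above, is one common way to frame that choice for a language model.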
Recent work has reported significant improvements on the WSC (Kocijan et al., 2019; Sakaguchi et al., 2019). As with many other NLU tasks, this improvement is primarily due to large-scale language model pre-training, followed by fine-tuning for the target task. We believe that further examination is warranted to determine whether these impressive results reflect a fundamental advance in reasoning ability, or whether our models have learned to simulate this ability in ways that do not generalize. In other words, do models learn accidental correlations in our datasets, or do they extract patterns that generalize in robust ways beyond the dataset samples?
In this paper, we conduct experiments to investigate this question. We define a set of lexical and syntactic variations and perturbations for the WSC examples and use the altered examples (Figure 1b) to test models that have recently reported improved results. These variations and perturbations are designed to highlight the robustness of human linguistic and reasoning abilities and to test whether models are similarly robust under these conditions.
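The number perturbation of Figure 1b can be sketched as a simple rewriting step. This is an illustrative sketch only: a real implementation would rely on morphological analysis and agreement handling, whereas the substitution table below is hand-written for the example pair and is not part of the paper's method.

```python
# Hypothetical substitution table for the Figure 1 example:
# pluralize the referents and the words that agree with them.
SINGULAR_TO_PLURAL = {
    "The man": "The men",
    "his son": "their sons",
    "he was": "they were",
}

def pluralize(sentence: str, table: dict = SINGULAR_TO_PLURAL) -> str:
    """Apply the number perturbation via string substitution."""
    for singular, plural in table.items():
        sentence = sentence.replace(singular, plural)
    return sentence

original = "The man couldn't lift his son because he was so heavy."
print(pluralize(original))
# -> The men couldn't lift their sons because they were so heavy.
```

Crucially, such perturbations preserve the reasoning required to resolve the pronoun, so a genuinely robust model should be unaffected by them.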
Contributions We introduce a new Winograd Schema dataset for evaluating generalization across seven controlled linguistic perturbations.1 We use this dataset to compare human and language model sensitivity to those perturbations, finding marked differences in model performance. We present a detailed analysis of the behaviour of the language models and how they are affected by the perturbations. Finally, we investigate the effect of fine-tuning with large task-specific datasets, and present an error analysis for all models.