Add Batch 4b2b806f-092e-47d5-84e5-3bfcdf971655
This view is limited to 50 files because it contains too many changes. See raw diff
- kraterreliabilitythecorrectunitofreliabilityforaggregatedhumanannotations/2aab56ea-e114-4e86-8725-63e18e3a3c98_content_list.json +3 -0
- kraterreliabilitythecorrectunitofreliabilityforaggregatedhumanannotations/2aab56ea-e114-4e86-8725-63e18e3a3c98_model.json +3 -0
- kraterreliabilitythecorrectunitofreliabilityforaggregatedhumanannotations/2aab56ea-e114-4e86-8725-63e18e3a3c98_origin.pdf +3 -0
- kraterreliabilitythecorrectunitofreliabilityforaggregatedhumanannotations/full.md +223 -0
- kraterreliabilitythecorrectunitofreliabilityforaggregatedhumanannotations/images.zip +3 -0
- kraterreliabilitythecorrectunitofreliabilityforaggregatedhumanannotations/layout.json +3 -0
- probingtherobustnessoftrainedmetricsforconversationaldialoguesystems/a4eb384e-767c-423f-b433-d91125f3cc90_content_list.json +3 -0
- probingtherobustnessoftrainedmetricsforconversationaldialoguesystems/a4eb384e-767c-423f-b433-d91125f3cc90_model.json +3 -0
- probingtherobustnessoftrainedmetricsforconversationaldialoguesystems/a4eb384e-767c-423f-b433-d91125f3cc90_origin.pdf +3 -0
- probingtherobustnessoftrainedmetricsforconversationaldialoguesystems/full.md +235 -0
- probingtherobustnessoftrainedmetricsforconversationaldialoguesystems/images.zip +3 -0
- probingtherobustnessoftrainedmetricsforconversationaldialoguesystems/layout.json +3 -0
- problemswithcosineasameasureofembeddingsimilarityforhighfrequencywords/5fce31e1-b4b9-4001-8f90-0a621fb58644_content_list.json +3 -0
- problemswithcosineasameasureofembeddingsimilarityforhighfrequencywords/5fce31e1-b4b9-4001-8f90-0a621fb58644_model.json +3 -0
- problemswithcosineasameasureofembeddingsimilarityforhighfrequencywords/5fce31e1-b4b9-4001-8f90-0a621fb58644_origin.pdf +3 -0
- problemswithcosineasameasureofembeddingsimilarityforhighfrequencywords/full.md +329 -0
- problemswithcosineasameasureofembeddingsimilarityforhighfrequencywords/images.zip +3 -0
- problemswithcosineasameasureofembeddingsimilarityforhighfrequencywords/layout.json +3 -0
- proceedingsofthe60thannualmeetingoftheassociationforcomputationallinguisticsvolume2shortpapers/be4f02cd-1c58-461a-9912-17a1e0763350_content_list.json +3 -0
- proceedingsofthe60thannualmeetingoftheassociationforcomputationallinguisticsvolume2shortpapers/be4f02cd-1c58-461a-9912-17a1e0763350_model.json +3 -0
- proceedingsofthe60thannualmeetingoftheassociationforcomputationallinguisticsvolume2shortpapers/be4f02cd-1c58-461a-9912-17a1e0763350_origin.pdf +3 -0
- proceedingsofthe60thannualmeetingoftheassociationforcomputationallinguisticsvolume2shortpapers/full.md +0 -0
- proceedingsofthe60thannualmeetingoftheassociationforcomputationallinguisticsvolume2shortpapers/images.zip +3 -0
- proceedingsofthe60thannualmeetingoftheassociationforcomputationallinguisticsvolume2shortpapers/layout.json +3 -0
- regiondependenttemperaturescalingforcertaintycalibrationandapplicationtoclassimbalancedtokenclassification/38403f09-5f85-486e-b90e-36bbde7811d1_content_list.json +3 -0
- regiondependenttemperaturescalingforcertaintycalibrationandapplicationtoclassimbalancedtokenclassification/38403f09-5f85-486e-b90e-36bbde7811d1_model.json +3 -0
- regiondependenttemperaturescalingforcertaintycalibrationandapplicationtoclassimbalancedtokenclassification/38403f09-5f85-486e-b90e-36bbde7811d1_origin.pdf +3 -0
- regiondependenttemperaturescalingforcertaintycalibrationandapplicationtoclassimbalancedtokenclassification/full.md +199 -0
- regiondependenttemperaturescalingforcertaintycalibrationandapplicationtoclassimbalancedtokenclassification/images.zip +3 -0
- regiondependenttemperaturescalingforcertaintycalibrationandapplicationtoclassimbalancedtokenclassification/layout.json +3 -0
- rethinkingandrefiningthedistinctmetric/3ccf4849-8e48-4b2b-a928-ebaeddc30422_content_list.json +3 -0
- rethinkingandrefiningthedistinctmetric/3ccf4849-8e48-4b2b-a928-ebaeddc30422_model.json +3 -0
- rethinkingandrefiningthedistinctmetric/3ccf4849-8e48-4b2b-a928-ebaeddc30422_origin.pdf +3 -0
- rethinkingandrefiningthedistinctmetric/full.md +382 -0
- rethinkingandrefiningthedistinctmetric/images.zip +3 -0
- rethinkingandrefiningthedistinctmetric/layout.json +3 -0
- revisitingthecompositionalgeneralizationabilitiesofneuralsequencemodels/0d8aa261-6251-4c23-b072-dd0a32ea9383_content_list.json +3 -0
- revisitingthecompositionalgeneralizationabilitiesofneuralsequencemodels/0d8aa261-6251-4c23-b072-dd0a32ea9383_model.json +3 -0
- revisitingthecompositionalgeneralizationabilitiesofneuralsequencemodels/0d8aa261-6251-4c23-b072-dd0a32ea9383_origin.pdf +3 -0
- revisitingthecompositionalgeneralizationabilitiesofneuralsequencemodels/full.md +267 -0
- revisitingthecompositionalgeneralizationabilitiesofneuralsequencemodels/images.zip +3 -0
- revisitingthecompositionalgeneralizationabilitiesofneuralsequencemodels/layout.json +3 -0
- rewardingsemanticsimilarityunderoptimizedalignmentsforamrtotextgeneration/fce9ff67-cd70-49d0-bdaf-89916b6ab5bc_content_list.json +3 -0
- rewardingsemanticsimilarityunderoptimizedalignmentsforamrtotextgeneration/fce9ff67-cd70-49d0-bdaf-89916b6ab5bc_model.json +3 -0
- rewardingsemanticsimilarityunderoptimizedalignmentsforamrtotextgeneration/fce9ff67-cd70-49d0-bdaf-89916b6ab5bc_origin.pdf +3 -0
- rewardingsemanticsimilarityunderoptimizedalignmentsforamrtotextgeneration/full.md +196 -0
- rewardingsemanticsimilarityunderoptimizedalignmentsforamrtotextgeneration/images.zip +3 -0
- rewardingsemanticsimilarityunderoptimizedalignmentsforamrtotextgeneration/layout.json +3 -0
- s4tuningasimplecrosslingualsubnetworktuningmethod/41cebcf5-67b8-4cc5-a098-0cb080a173d1_content_list.json +3 -0
- s4tuningasimplecrosslingualsubnetworktuningmethod/41cebcf5-67b8-4cc5-a098-0cb080a173d1_model.json +3 -0
kraterreliabilitythecorrectunitofreliabilityforaggregatedhumanannotations/2aab56ea-e114-4e86-8725-63e18e3a3c98_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:76beb2a56c47192b3c90dceb10ec0a9d3a100375febc7ef9026309690639bbd4
size 48186
kraterreliabilitythecorrectunitofreliabilityforaggregatedhumanannotations/2aab56ea-e114-4e86-8725-63e18e3a3c98_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:24d4ebdf49ee316c460b2660afe3f1e7acbbe691dcb80f0474e01aa7380e5131
size 59164
kraterreliabilitythecorrectunitofreliabilityforaggregatedhumanannotations/2aab56ea-e114-4e86-8725-63e18e3a3c98_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:246dfd73de489232661bd4f223c316199e2a3f2769b28441caaf3ff77adcea1f
size 296573
kraterreliabilitythecorrectunitofreliabilityforaggregatedhumanannotations/full.md
ADDED
@@ -0,0 +1,223 @@
# k-Rater Reliability: The Correct Unit of Reliability for Aggregated Human Annotations

Ka Wong
Google Research
danicky@gmail.com

Praveen Paritosh
Google Research
pkp@google.com
# Abstract

Since the inception of crowdsourcing, aggregation has been a common strategy for dealing with unreliable data. Aggregate ratings are more reliable than individual ones. However, many natural language processing (NLP) applications that rely on aggregate ratings only report the reliability of individual ratings, which is the incorrect unit of analysis. In these instances, the data reliability is under-reported; our proposed $k$-rater reliability (kRR) should be used as the correct data reliability for aggregated datasets. kRR is a multi-rater generalization of inter-rater reliability (IRR). We conducted two replications of the WordSim-353 benchmark, and present empirical, analytical, and bootstrap-based methods for computing kRR on WordSim-353. These methods produce very similar results. We hope this discussion will nudge researchers to report kRR in addition to IRR.
# 1 Introduction

Crowdsourcing has become a mainstay for data collection in NLP (Geva et al., 2019; Sabou et al., 2014). It can produce data in a scalable and cost-effective manner. However, these benefits come at a cost: quality. The reliability of crowd workers is always of central concern. One common strategy to increase data reliability is to collect multiple, independent judgements and to use the aggregated judgements instead. Indeed, early papers such as Snow et al. (2008) show that average ratings correlate more strongly with expert judgements. This makes sense, as average ratings are known to have a higher reliability than individual ones (Ebel, 1951).

A number of strategies have been proposed to address data quality issues, e.g. rater modeling, label correction, and label pruning (Kumar and Lease, 2011), but aggregation remains very popular (Prabhakaran et al., 2021). Sheshadri and Lease (2013) present nine crowdsourced datasets across a wide range of NLP tasks to compare different aggregation methods. See Difallah and Checco (2021) for a recent review of aggregation techniques. In short, aggregation has become the default method for acquiring reliable data from the crowd.
Interestingly, after we adopted aggregation as a community, we forgot to update our reliability measures correspondingly. The field continues to report data reliability in terms of IRR, even when aggregate ratings are used. By focusing on IRR, we are unable to capture the increase in reliability due to aggregation, so the actual data reliability remains unknown. This has important consequences. Reliability is often used as a safeguard for reproducibility. Therefore, conclusions about the reproducibility of a dataset drawn from the reliability of individual ratings may differ from those drawn from the reliability of aggregate ratings.

Reporting the correct, higher reliability may even have the side effect of lessening the stigma attached to low-IRR datasets. This could create a path forward towards reliable data on subjective tasks, where a high IRR is difficult to obtain, such as emotions (Wong et al., 2021) and toxicity (Wulczyn et al., 2017). With a reproducibility crisis looming in the background (Baker, 2016; Hutson, 2018), more frequent and accurate reporting of reliability is our primary safeguard (Paritosh, 2012).

We denote the reliability of aggregate ratings as $k$-rater reliability (kRR), in order to differentiate it from inter-rater reliability. In this paper we present a few methods for computing kRR. First, we demonstrate a general, empirical approach that is based on replications. To that end, we conducted two replications of WordSim-353 (Finkelstein et al., 2001), a widely used word similarity dataset. We then discuss two other alternatives that do not require replications. One is a re-sampling-based bootstrap approach (Efron and Tibshirani, 1994), suitable for experiments with high rating redundancy. The other is an existing analytical approach based on intraclass correlation (ICC), suitable for continuous data where the aggregation is the mean. We conclude with recommendations for reporting the reliability of crowdsourced annotations, and novel research questions to expand the usefulness of kRR.
# 2 Related Work

Various authors have stressed the importance of measuring reliability for the correct unit of analysis. Ebel (1951) asks, "Is it better to estimate the reliability of individual ratings or the reliability of average ratings? If decisions are based upon average ratings, it of course follows that the reliability with which one should be concerned is the reliability of those averages." Shrout and Fleiss (1979) and Hallgren (2012) reiterate similar points.

These studies primarily focus on the reliability of the mean, which is just one of many aggregation methods. There is a reason: not only is the mean a popular choice, it is also the only known choice where the reliability of the aggregate ratings can be computed analytically from the reliability of individual ratings. This is done in the ICC framework. ICC is typically used to measure the reliability of single ratings, but it also has a variant for mean ratings. Shrout and Fleiss (1979) list several types of ICC coefficient, one of which is for mean ratings. They call it $\mathrm{ICC}(k)$, where $k$ is the number of ratings per item. In this generalized notation, $\mathrm{ICC}(1)$ is just the reliability of individual ratings, i.e. the IRR. Note that McGraw and Wong (1996) use a slightly different notation, $\mathrm{ICC}(1,k)$, to explicitly denote that it is for a one-way random effects model, where the raters are treated as interchangeable. That is a common assumption in most crowdsourcing experiments done on commercial platforms such as Amazon Mechanical Turk.

$\mathrm{ICC}(k)$ is an established way of measuring the reliability of mean ratings, hence it is readily usable by researchers. However, it has some drawbacks. Being part of the ICC family, $\mathrm{ICC}(k)$ is only applicable to continuous data. In addition, $\mathrm{ICC}(k)$ measures the reliability of mean ratings, so it cannot accommodate other aggregation functions. In other words, for other popular data types, such as majority votes of binary data, there is no known coefficient for measuring the reliability of aggregate ratings. Other than $\mathrm{ICC}(k)$, the authors are not aware of any multi-rater generalization for other coefficients such as Cohen's (1960) kappa or Krippendorff's alpha (Krippendorff, 2011). We therefore take $\mathrm{ICC}(k)$ as an inspiration and abstract away from it to define a class of reliability measures that describe the reliability of aggregate ratings for any data type. We denote it kRR.
# 3 Contributions

- We emphasise that the reliability of aggregate ratings is higher than that of individual ratings.
- We give a general definition of kRR, extending the definition of IRR, and discuss three methods for computing it.
- We conduct two replications of the WordSim-353 benchmark to validate these methods.
# 4 $k$-Rater Reliability

We define kRR as the chance-adjusted agreement between replications of aggregate ratings. This definition is very similar to IRR; in fact, the two only differ in interpretation. kRR is identical to IRR except that each individual rating in the IRR calculation is replaced by a $k$-rater aggregate rating. After all, the mathematics of IRR are agnostic to how those labels are produced.

Just like IRR, a minimum of two replications is required to calculate kRR. Given two vectors of aggregate ratings, one can calculate the reliability between them using any IRR coefficient that fits the purpose. kRR is designed to be analogous to IRR so that we can build upon the rich IRR literature and the various coefficient choices for different experimental conditions and assumptions. For example, in a binary task, if all the items are rated by two fixed but distinct groups of raters (e.g. raters from different locales), Cohen's (1960) kappa is a suitable choice. Whereas if the rater groups are homogeneous and the rating scale is ordinal (e.g. Likert), then Krippendorff's alpha (Krippendorff, 2011) can be used. Just like IRR, kRR is a general concept and is agnostic to the choice of coefficient.

This definition of kRR can be directly operationalized by creating replications. We call this the empirical approach to calculating kRR, and demonstrate it in the next section on the WordSim-353 benchmark. The empirical approach is the most direct and most general, with the drawback that a minimum of two replications is required. We later present two narrower alternatives in Section 5 that do not require replications. The empirical results will be used as a golden reference to validate them.
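To make the definition concrete, here is a minimal sketch of the empirical approach on simulated data. The rating model, the noise level, and the simplified from-scratch interval-metric Krippendorff's alpha (without small-sample corrections) are our illustrative assumptions, not the paper's code:

```python
import numpy as np

def krippendorff_alpha_interval(x, y):
    """Simplified Krippendorff's alpha for two paired rating vectors on an
    interval scale: alpha = 1 - D_o / D_e, using squared difference as the
    metric and omitting small-sample corrections.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    d_o = np.mean((x - y) ** 2)          # observed within-item disagreement
    pooled = np.concatenate([x, y])      # expected disagreement over all value pairs
    n = len(pooled)
    d_e = np.sum((pooled[:, None] - pooled[None, :]) ** 2) / (n * (n - 1))
    return 1.0 - d_o / d_e

rng = np.random.default_rng(0)
n_items, k = 353, 13
true_scores = rng.uniform(1, 10, n_items)            # latent item similarity
# two replications, k noisy raters each
rep1 = true_scores[:, None] + rng.normal(0, 2, (n_items, k))
rep2 = true_scores[:, None] + rng.normal(0, 2, (n_items, k))

irr = krippendorff_alpha_interval(rep1[:, 0], rep2[:, 0])                # k = 1
krr = krippendorff_alpha_interval(rep1.mean(axis=1), rep2.mean(axis=1))  # k = 13
print(f"IRR (k=1): {irr:.3f}, kRR (k=13): {krr:.3f}")
```

With this simulated noise, the 13-rater means agree far more strongly than single ratings, mirroring the gap between IRR and kRR reported for WordSim below.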
# 4.1 Replicating the WordSim Dataset

WordSim-353 (Finkelstein et al., 2001) is a widely used benchmark for measuring a system's ability to compute the similarity between two words, and has been cited over 1,500 times. The dataset contains 353 word pairs. Each word pair was rated by the same 13 workers on a scale from 1 to 10, indicating how similar the two meanings are. The 13 ratings on each word pair are then aggregated into a mean score. It is important to note that all research using this dataset as a benchmark utilizes only the mean of the ratings. So the unit of analysis is the aggregate of the 13 ratings, not individual ratings.

Nearly twenty years have elapsed since the creation of the WordSim dataset, and it is impossible to recreate the original experimental conditions due to rater population changes. Therefore, we created two replications in order to approximate the kRR of the original dataset. Two is the minimum replication factor required for the empirical approach, though more replications would yield a more accurate measure of kRR.

We used the original annotation guidelines on Amazon Mechanical Turk. Raters were paid on average USD 9.50 per hour. In each replication, we collected 13 judgements on each of the same 353 word pairs. There was one detail we did not follow: in the original experiment, the authors employed 13 unique raters, each of whom rated all 353 word pairs. In our replications, we followed more modern conventions and limited the contributions of each individual rater for better generalizability. This detail aside, these are our best attempts to replicate the original experiment. The data is publicly available at https://github.com/google-research-datasets/wordsim-replications.
# 4.2 Empirical kRR Results

We take $k$ columns of ratings at random from each of the two replications, compute the $k$-rater mean scores for each replication, and measure the reliability between them using Krippendorff's alpha, the most widely used and general reliability index. We do this for $k = 1, 2, \ldots, 13$. The resulting kRR values are shown in Fig. 1. At $k = 1$, the IRR is 0.574, slightly lower than the 0.6 originally reported in Finkelstein et al. (2001). At $k = 13$, the $k$-rater reliability is 0.940, considerably higher than the IRR. In addition, Fig. 1 shows the marginal returns of increasing the number of ratings on the replicated datasets.

Figure 1: $k$-rater reliability for replications of the WordSim benchmark, calculated using three different methods: 1) empirical, based on replications; 2) ICC($k$), analytical; and 3) SB predictions. Note that ICC(1) is not available, as we only have a single column of ratings at $k = 1$. All SB predictions are based on only 2 ratings per item.
# 5 Other Approaches to Computing kRR

The empirical approach is general, as it can accommodate any choice of rating scale, aggregation function, and reliability coefficient. However, it has a major drawback. As we saw in Section 4.1, it can be difficult to perform a perfect replication after the fact. This presents a challenge for computing kRR on existing datasets. Below we present two alternatives that, under some conditions, work on existing datasets without requiring any additional data collection. One is a re-sampling-based bootstrap approach (Efron and Tibshirani, 1994); the other is $\mathrm{ICC}(k)$.
# 5.1 Bootstrap

Bootstrap (Efron and Tibshirani, 1994) is a re-sampling technique commonly used for quantifying uncertainty in statistical parameter estimation. One can bootstrap an NLP annotation dataset by re-sampling the ratings within each annotation item with replacement, at the same sample size. If one treats each bootstrap sample as a replication, then one can apply the technique discussed in Section 4 to obtain a bootstrapped kRR. Bootstrap is an approximate technique and works better with larger sample sizes, typically 20 observations and above for a single distribution. The 13-rating redundancy in the WordSim replications is arguably small for a typical bootstrap exercise, but the large number of items makes up for it.

Before we apply bootstrap to the original WordSim dataset, we first verify its soundness by comparing it against the empirical results in Section 4.2. When applied to one of the two recent replications, the bootstrapped kRR is 0.943, comparable to the 0.940 reported in Section 4.2. We then apply bootstrap to the original WordSim dataset and find a bootstrapped kRR of 0.953 (Table 1). The analytical method introduced below produces a very similar value, 0.950.
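A minimal sketch of this resampling procedure, assuming a simulated rating matrix and using Pearson correlation as a simple stand-in for the reliability coefficient (the paper's empirical results use Krippendorff's alpha):

```python
import numpy as np

def bootstrap_krr(ratings, n_boot=100, seed=0):
    """Bootstrapped kRR for an (n_items x k) rating matrix.

    Each bootstrap 'replication' re-samples the k ratings within every item
    with replacement, then aggregates them into item means. The reliability
    between two such replications, averaged over n_boot rounds, approximates
    kRR. Pearson correlation stands in for the reliability coefficient here.
    """
    rng = np.random.default_rng(seed)
    n, k = ratings.shape
    vals = []
    for _ in range(n_boot):
        # two independent bootstrap replications of the aggregate ratings
        m1 = np.take_along_axis(ratings, rng.integers(0, k, (n, k)), axis=1).mean(axis=1)
        m2 = np.take_along_axis(ratings, rng.integers(0, k, (n, k)), axis=1).mean(axis=1)
        vals.append(np.corrcoef(m1, m2)[0, 1])
    return float(np.mean(vals))

# toy stand-in for a 353-item, 13-rating dataset
rng = np.random.default_rng(1)
true_scores = rng.uniform(1, 10, 353)
ratings = true_scores[:, None] + rng.normal(0, 2, (353, 13))
print(f"bootstrapped kRR: {bootstrap_krr(ratings):.3f}")
```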
# 5.2 Intraclass Correlation

Intraclass correlation is a popular reliability coefficient for continuous data in the behavioral and medical sciences. ICC gives researchers granular control over assumptions about the raters. For example, each annotation item can be rated by the same set of raters, or by different sets of raters (interchangeability). In the former case, the raters can be treated as either fixed or randomly drawn from a population. Shrout and Fleiss (1979) and McGraw and Wong (1996) give very extensive treatments of the different ICC types for different rater assumptions.

In this paper, we focus on the most basic definition, one that treats raters as interchangeable. The ICC for $k$-rater averages is denoted as $\mathrm{ICC}(k)$ using McGraw and Wong's notation; the reliability of individual ratings is thus given by $\mathrm{ICC}(1)$. $\mathrm{ICC}(k)$ can be computed by summing squares of differences on the data matrix; see Appendix A for a derivation and an illustration. Software implementations of ICC are also widely available, e.g. in R and Python.

We first verify $\mathrm{ICC}(k)$'s accuracy by comparing it against the empirical results in Section 4.2. To do that, we calculate $\mathrm{ICC}(k)$ for one of the two recent WordSim replications for $k = 1, 2, \ldots, 13$ and overlay the results (solid blue) on the empirical curve in Fig. 1. We can see that $\mathrm{ICC}(k)$ matches the empirical results quite well.

After verifying the technique, we compute $\mathrm{ICC}(k)$ on the original WordSim dataset. We report both ICC(1) and ICC(13) in Table 1 to show the increase in reliability. They are 0.590 and 0.950, respectively.

| Unit of analysis | Method | Reliability |
| --- | --- | --- |
| single rating | ICC(1) | 0.590 |
| 13-rating mean | ICC(13) | 0.950 |
| 13-rating mean | bootstrap | 0.953 |

Table 1: Reliability of the original WordSim benchmark. The first two rows are the analytical estimates ICC(1) and ICC(13), both computed using all 13 available ratings. The third row is a re-sampling-based estimate from 100 bootstrap samples.
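The paper's Appendix A is not included in this excerpt, so the following sketch uses the textbook one-way random-effects sums of squares (Shrout and Fleiss, 1979) rather than the paper's own derivation; the data matrix is simulated:

```python
import numpy as np

def icc_oneway(x):
    """ICC(1) and ICC(k) under a one-way random effects model.

    x is an (n_items x k) matrix of ratings with interchangeable raters.
    MSB is the between-item mean square; MSW the within-item mean square.
    """
    x = np.asarray(x, float)
    n, k = x.shape
    row_means = x.mean(axis=1)
    grand = x.mean()
    msb = k * np.sum((row_means - grand) ** 2) / (n - 1)
    msw = np.sum((x - row_means[:, None]) ** 2) / (n * (k - 1))
    icc1 = (msb - msw) / (msb + (k - 1) * msw)   # single-rating reliability
    icck = (msb - msw) / msb                     # k-rater-mean reliability
    return icc1, icck

# illustrative data: 353 items, 13 noisy raters each
rng = np.random.default_rng(2)
true_scores = rng.uniform(1, 10, 353)
ratings = true_scores[:, None] + rng.normal(0, 2, (353, 13))
icc1, icc13 = icc_oneway(ratings)
print(f"ICC(1) = {icc1:.3f}, ICC(13) = {icc13:.3f}")
```

Note that ICC($k$) exceeds ICC(1) whenever items differ more between than within, which is exactly the aggregation gain shown in Table 1.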
# 5.3 Spearman-Brown Formula

Given an experiment with $k$-rating redundancy, $\mathrm{ICC}(k)$ quantifies the reliability of the $k$-rater average. If this reliability is too low, the researcher may want to increase the value of $k$. In this case, it would be helpful to know how additional ratings would impact reliability. This is analogous to calculating the required sample size for a given margin of error in a poll. For this purpose, the Spearman-Brown prophecy formula (Spearman, 1910; Brown, 1910) can be a useful tool. It predicts $\mathrm{ICC}(k)$ for any value of $k$ based on the $\mathrm{ICC}(1)$ of the current experiment:

$$
\mathrm{ICC}(k) = \frac{k \cdot \mathrm{ICC}(1)}{1 + (k - 1) \cdot \mathrm{ICC}(1)}. \tag{1}
$$
Warrens (2017) and de Vet et al. (2017) recently proved that SB and $\mathrm{ICC}(k)$ are equivalent in expectation, even though they look nothing alike and were derived in very different contexts. These findings confirm past observations that SB predicts empirical results accurately (Remmers et al., 1927). A clear limitation of SB is that it only works with ICC. However, Fleiss and Cohen (1973) show that ICC is equivalent to weighted kappa with quadratic weights, so SB likely has wider applicability.

To verify the formula, we apply SB to one of the two recent WordSim replications and overlay the results (dotted red) on the empirical curve obtained earlier. When computing SB, we provide it with only 2 ratings, in order to assess its predictive accuracy. That is, we first compute ICC(1) with 2 randomly drawn ratings from each word pair, then plug this ICC(1) value into Eq. 1 for $k = 1, 2, \ldots, 13$. The SB curve is overlaid on the empirical curve in Fig. 1. We see that SB tracks the empirical results very well, even at high $k$. This is remarkable, as the empirical approach requires 26 ratings per item for $k = 13$, whereas SB requires merely 2 for any value of $k$.
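Eq. 1 is straightforward to apply directly. The sketch below also inverts it to plan the number of raters needed for a target reliability; the planning helper is our addition derived from Eq. 1, not from the paper:

```python
import math

def spearman_brown(icc1, k):
    """Predict ICC(k), the reliability of a k-rater mean, from ICC(1) (Eq. 1)."""
    return k * icc1 / (1 + (k - 1) * icc1)

def raters_needed(icc1, target):
    """Invert Eq. 1: smallest k whose predicted ICC(k) reaches the target."""
    return math.ceil(target * (1 - icc1) / (icc1 * (1 - target)))

# IRR of 0.574 was measured on the WordSim replications at k = 1
print(spearman_brown(0.574, 13))   # ~0.946, close to the empirical 0.940
print(raters_needed(0.574, 0.9))   # raters needed for a predicted 0.9
```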
# 6 Conclusions and Discussion

We pointed out that where aggregate ratings are used, as is the case in many crowdsourced datasets, the reliability of aggregate ratings is the correct accounting of data reliability. We introduced $k$-rater reliability (kRR) as a multi-rater extension of IRR, and emphasised that the reliability of aggregate ratings is higher than that of individual ratings. We present analytical and bootstrap-based methods for computing kRR on the original WordSim dataset; both produce similar estimates of 13-rater reliability, ranging from 0.940 to 0.953. We conduct two replications of the entire WordSim-353 benchmark to validate these methods, and make our replication data publicly available on GitHub.

While aggregation makes it possible to have reliable benchmarks on subjective topics, some readers may feel uneasy about increasing reliability by gathering additional ratings, as opposed to traditional means such as improving rater guidelines. We suggest mitigating this concern by reporting both IRR and kRR. kRR is not meant to replace IRR, but rather to complement it: IRR speaks to the reliability of the labeling process, whereas kRR quantifies the reliability of the aggregated data we consume. We urge researchers to report both where possible. Indeed, Hallgren (2012) states, "In cases where single measures ICCs are low but average-measures ICCs are high, the researcher may report both ICCs to demonstrate this discrepancy."

This research also raises interesting questions for future research:
1. How do we derive multi-rater generalizations for coefficients other than ICC? Many NLP annotations are binary or multi-class, so such a generalization for majority voting would be particularly useful to the field.
2. Should we apply the Landis and Koch (1977) style of reliability cutoffs to kRR, or should kRR go by a different set of standards?

We urge researchers to report both the IRR and kRR of aggregated human annotations, and to pursue further inquiry into the above fundamental questions about reliability.
# Acknowledgement

We thank Lora Aroyo and Chris Welty for sharing their WordSim replication datasets. We thank Michael Quinn and Jeremy Miles for their insightful discussions and comments. We also thank all the crowd workers for providing us with valuable annotation data.
| 133 |
+
# References
|
| 134 |
+
|
| 135 |
+
Monya Baker. 2016. Reproducibility crisis. Nature, 533(26):353-66.
|
| 136 |
+
William Brown. 1910. Some experimental results in the correlation of mental abilities 1. British Journal of Psychology, 1904-1920, 3(3):296-322.
|
| 137 |
+
Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and psychological measurement, 20(1):37-46.
|
| 138 |
+
Henrica C.W. de Vet, Lidwine B. Mokkink, David G. Mosmuller, and Caroline B. Terwee. 2017. Spearman-brown prophecy formula and cronbach's alpha: different faces of reliability and opportunities for new applications. Journal of Clinical Epidemiology, 85:45-49.
|
| 139 |
+
Djellel Difallah and Alessandro Checco. 2021. Aggregation techniques in crowdsourcing: Multiple choice questions and beyond. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pages 4842-4844.
|
| 140 |
+
Robert L Ebel. 1951. Estimation of the reliability of ratings. Psychometrika, 16(4):407-424.
|
| 141 |
+
Bradley Efron and Robert J Tibshirani. 1994. An introduction to the bootstrap. CRC press.
|
| 142 |
+
Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: The concept revisited. In Proceedings of the 10th international conference on World Wide Web, pages 406-414.
|
| 143 |
+
Joseph L Fleiss and Jacob Cohen. 1973. The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educational and psychological measurement, 33(3):613-619.
|
| 144 |
+
Mor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are we modeling the task or the annotator? an investigation of annotator bias in natural language understanding datasets. arXiv preprint arXiv:1908.07898.
|
| 145 |
+
|
| 146 |
+
Kevin A Hallgren. 2012. Computing inter-rater reliability for observational data: An overview and tutorial. *Tutorials in quantitative methods for psychology*, 8(1):23-34.
|
| 147 |
+
Matthew Hutson. 2018. Artificial intelligence faces reproducibility crisis.
|
| 148 |
+
Klaus Krippendorff. 2011. Computing krippendorff's alpha-reliability.
|
| 149 |
+
Abhimanu Kumar and Matthew Lease. 2011. Modeling annotator accuracies for supervised learning. In Proceedings of the Workshop on Crowdsourcing for Search and Data Mining (CSDM) at the Fourth ACM International Conference on Web Search and Data Mining (WSDM), pages 19-22.
|
| 150 |
+
J. Richard Landis and Gary G. Koch. 1977. The measurement of observer agreement for categorical data. Biometrics, 33(1):159-174.
|
| 151 |
+
David Liljequist, Britt Elfving, and Kirsti Skavberg Roaldsen. 2019. Intraclass correlation-a discussion and demonstration of basic features. PloS one, 14(7):e0219854.
|
| 152 |
+
Kenneth O McGraw and Seok P Wong. 1996. Forming inferences about some intraclass correlation coefficients. Psychological methods, 1(1):30.
|
| 153 |
+
Praveen Paritosh. 2012. Human computation must be reproducible. In WWW 2012, Lyon.
|
| 154 |
+
Vinodkumar Prabhakaran, Aida Mostafazadeh Davani, and Mark Diaz. 2021. On releasing annotator-level labels and information in datasets. In Proceedings of The Joint 15th Linguistic Annotation Workshop (LAW) and 3rd Designing Meaning Representations (DMR) Workshop, pages 133-138, Punta Cana, Dominican Republic. Association for Computational Linguistics.
|
| 155 |
+
HH Remmers, NW Shock, and EL Kelly. 1927. An empirical study of the validity of the Spearman-Brown formula as applied to the Purdue rating scale. Journal of Educational Psychology, 18(3):187.
|
| 156 |
+
Marta Sabou, Kalina Bontcheva, Leon Derczynski, and Arno Scharl. 2014. Corpus annotation through crowdsourcing: Towards best practice guidelines. In LREC, pages 859-866. CiteSeer.
|
| 157 |
+
Aashish Sheshadri and Matthew Lease. 2013. Square: A benchmark for research on computing crowd consensus. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, volume 1.
|
| 158 |
+
Patrick E Shrout and Joseph L Fleiss. 1979. Intraclass correlations: uses in assessing rater reliability. Psychological Bulletin, 86(2):420.
|
| 159 |
+
Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Ng. 2008. Cheap and fast - but is it good? Evaluating non-expert annotations for natural language tasks. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 254-263, Honolulu, Hawaii. Association for Computational Linguistics.
|
| 160 |
+
|
| 161 |
+
<table><tr><td rowspan="2">Item</td><td colspan="4">Rating</td></tr><tr><td>1</td><td>2</td><td>...j</td><td>...k</td></tr><tr><td>1</td><td>x11</td><td>x12</td><td>...x1j</td><td>...x1k</td></tr><tr><td>2</td><td>x21</td><td>x22</td><td>...x2j</td><td>...x2k</td></tr><tr><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td></tr><tr><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td></tr><tr><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td></tr><tr><td>i</td><td>xi1</td><td>xi2</td><td>...xij</td><td>...xik</td></tr><tr><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td></tr><tr><td>.</td><td>.</td><td>.</td><td>.</td><td>.</td></tr><tr><td>n</td><td>xn1</td><td>xn2</td><td>...xnj</td><td>...xnk</td></tr></table>
|
| 162 |
+
|
| 163 |
+
Figure 2: A convenient data matrix and notational system for the data used in calculating intra-class correlation coefficients.
|
| 164 |
+
|
| 165 |
+
|
| 166 |
+
|
| 167 |
+
Charles Spearman. 1910. Correlation calculated from faulty data. British Journal of Psychology, 1904-1920, 3(3):271-295.
|
| 168 |
+
|
| 169 |
+
Matthijs J Warrens. 2017. Transforming intraclass correlation coefficients with the Spearman-Brown formula. Journal of Clinical Epidemiology, 85:14-16.
|
| 170 |
+
|
| 171 |
+
Ka Wong, Praveen Paritosh, and Lora Aroyo. 2021. Cross-replication reliability - an empirical approach to interpreting inter-rater reliability. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7053-7065, Online. Association for Computational Linguistics.
|
| 172 |
+
|
| 173 |
+
Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017. Ex machina: Personal attacks seen at scale. In Proceedings of the 26th international conference on world wide web, pages 1391-1399.
|
| 174 |
+
|
| 175 |
+
# A Appendix on $\mathbf{ICC}(k)$
|
| 176 |
+
|
| 177 |
+
ICC is a family of coefficients with slightly different formulations that accommodate different experimental designs. One of them, $\mathrm{ICC}(k)$, quantifies the reliability of average ratings based on $k$ raters, where the raters are treated as interchangeable. We illustrate its closed-form calculation here, mainly re-expressing results from previous work on ICC calculation, such as Liljequist et al. (2019) and McGraw and Wong (1996).
|
| 178 |
+
|
| 179 |
+
$\mathrm{ICC}(k)$ is predicated on the one-way random effects model as the data generation process. The model takes the form
|
| 180 |
+
|
| 181 |
+
$$
|
| 182 |
+
x_{ij} = \mu + \phi_{i} + \epsilon_{ij},
|
| 183 |
+
$$
|
| 184 |
+
|
| 185 |
+
where $x_{ij}$ is the rating on item $i$ from rater $j$, $\mu$ is the grand mean, $\phi_i$ is the effect of item $i$, and $\epsilon_{ij}$ is a random perturbation term. Assume a data matrix with $n$ rows (items) and $k$ columns (raters) and no missing data, as shown in Fig. 2. Let
|
| 186 |
+
|
| 187 |
+
$$
|
| 188 |
+
\bar{x}_{\cdot\cdot} = \frac{1}{nk} \sum_{i=1}^{n} \sum_{j=1}^{k} x_{ij}
|
| 189 |
+
$$
|
| 190 |
+
|
| 191 |
+
be the sample grand mean, and
|
| 192 |
+
|
| 193 |
+
$$
|
| 194 |
+
\bar{x}_{i\cdot} = \frac{1}{k} \sum_{j=1}^{k} x_{ij}
|
| 195 |
+
$$
|
| 196 |
+
|
| 197 |
+
be the $i^{\mathrm{th}}$ sample item mean. Let
|
| 198 |
+
|
| 199 |
+
$$
|
| 200 |
+
SSW = \sum_{i=1}^{n} \sum_{j=1}^{k} \left( x_{ij} - \bar{x}_{i\cdot} \right)^{2}
|
| 201 |
+
$$
|
| 202 |
+
|
| 203 |
+
$$
|
| 204 |
+
SSB = k \sum_{i=1}^{n} \left( \bar{x}_{i\cdot} - \bar{x}_{\cdot\cdot} \right)^{2}
|
| 205 |
+
$$
|
| 206 |
+
|
| 207 |
+
be respectively the sum of squares due to differences within items and the sum of squares due to differences between items. Then the estimator for the variance of $\epsilon$ , $\sigma_{\epsilon}^{2}$ , and the estimator for the variance of $\phi$ , $\sigma_{\phi}^{2}$ , are respectively
|
| 208 |
+
|
| 209 |
+
$$
|
| 210 |
+
\hat{\sigma}_{\epsilon}^{2} = \frac{SSW}{n(k-1)}
|
| 211 |
+
$$
|
| 212 |
+
|
| 213 |
+
$$
|
| 214 |
+
\hat{\sigma}_{\phi}^{2} = \frac{SSB}{k(n-1)} - \frac{\hat{\sigma}_{\epsilon}^{2}}{k}.
|
| 215 |
+
$$
|
| 216 |
+
|
| 217 |
+
Then $\operatorname{ICC}(k)$ can be computed as
|
| 218 |
+
|
| 219 |
+
$$
|
| 220 |
+
\frac{\hat{\sigma}_{\phi}^{2}}{\hat{\sigma}_{\phi}^{2} + \hat{\sigma}_{\epsilon}^{2} / k}.
|
| 221 |
+
$$
|
| 222 |
+
|
| 223 |
+
If we apply the above formula to individual ratings, i.e., with $k = 1$, the resulting reliability is the familiar inter-rater reliability. For any $k > 1$, it is an instance of the $k$-rater reliability proposed in this paper.
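To make the closed-form calculation above concrete, here is a small illustrative sketch (our own, not code from the paper) that computes $\mathrm{ICC}(k)$ for a complete $n \times k$ rating matrix; the function name and the test matrices are invented for illustration.

```python
def icc_k(x):
    """Closed-form ICC(k) for a complete n x k rating matrix x under the
    one-way random effects model, following the appendix's formulas."""
    n, k = len(x), len(x[0])
    item_means = [sum(row) / k for row in x]                  # x_bar_{i.}
    grand_mean = sum(item_means) / n                          # x_bar_{..}
    # Sums of squares within and between items (SSW, SSB).
    ssw = sum((x[i][j] - item_means[i]) ** 2
              for i in range(n) for j in range(k))
    ssb = k * sum((m - grand_mean) ** 2 for m in item_means)
    var_eps = ssw / (n * (k - 1))                 # error variance estimate
    var_phi = ssb / (k * (n - 1)) - var_eps / k   # item variance estimate
    return var_phi / (var_phi + var_eps / k)
```

For perfectly agreeing raters, e.g. `icc_k([[1, 1], [2, 2], [3, 3]])`, the within-item sum of squares vanishes and the coefficient equals 1; algebraically the returned expression is the familiar $(MSB - MSW)/MSB$ form.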
|
kraterreliabilitythecorrectunitofreliabilityforaggregatedhumanannotations/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:540fb58d53791f69bcb465c385415c9cbc2b02fcb4c70f197693d1b9c107cea2
|
| 3 |
+
size 85772
|
kraterreliabilitythecorrectunitofreliabilityforaggregatedhumanannotations/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:9e3066d66c73e96fe9cfd51b3c9b099a1aca58d5cddd4483188aa62b11f6e72f
|
| 3 |
+
size 243744
|
probingtherobustnessoftrainedmetricsforconversationaldialoguesystems/a4eb384e-767c-423f-b433-d91125f3cc90_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:7746cf8915187d9acd38c77b528a64f4be34c4e7e61ce46136a9f2c30413a0b8
|
| 3 |
+
size 74205
|
probingtherobustnessoftrainedmetricsforconversationaldialoguesystems/a4eb384e-767c-423f-b433-d91125f3cc90_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:0229f74731b1122e66ec82a20f664e0d93bb408b8123d32a0d15240385ee6df4
|
| 3 |
+
size 84682
|
probingtherobustnessoftrainedmetricsforconversationaldialoguesystems/a4eb384e-767c-423f-b433-d91125f3cc90_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b8e677f6bcfa7baa8c0fc5280b29006e464d16e0069959e6f10f6f8eabbdf174
|
| 3 |
+
size 436930
|
probingtherobustnessoftrainedmetricsforconversationaldialoguesystems/full.md
ADDED
|
@@ -0,0 +1,235 @@
| 1 |
+
# Probing the Robustness of Trained Metrics for Conversational Dialogue Systems
|
| 2 |
+
|
| 3 |
+
Jan Deriu, Don Tuggener, Pius von Däniken, Mark Cieliebak
|
| 4 |
+
Zurich University of Applied Sciences (ZHAW), Winterthur, Switzerland
|
| 5 |
+
deri@zhaw.ch
|
| 6 |
+
|
| 7 |
+
# Abstract
|
| 8 |
+
|
| 9 |
+
This paper introduces an adversarial method to stress-test trained metrics for evaluating conversational dialogue systems. The method leverages Reinforcement Learning to find response strategies that elicit optimal scores from the trained metrics. We apply our method to test recently proposed trained metrics. We find that all of them are susceptible to giving high scores to responses generated by relatively simple and obviously flawed strategies that our method converges on. For instance, simply copying parts of the conversation context to form a response yields scores competitive with, or even better than, responses written by humans.
|
| 10 |
+
|
| 11 |
+
# 1 Introduction
|
| 12 |
+
|
| 13 |
+
One major issue in developing conversational dialogue systems is the significant effort required for evaluation. This hinders rapid progress in the field, because frequent evaluations are either impossible or very expensive. The goal is therefore to create automated evaluation methods to increase efficiency. Unfortunately, methods such as BLEU (Papineni et al., 2002) have been shown not to be applicable to conversational dialogue systems (Liu et al., 2016). Following this observation, a trend towards trained evaluation methods for dialogue systems emerged in recent years (Lowe et al., 2017; Deriu and Cieliebak, 2019; Mehri and Eskenazi, 2020; Deriu et al., 2020). These models are trained to take a pair of context and candidate response as input, and to output a numerical score that rates the candidate for the given context. Such systems achieve high correlations with human judgments, which is very promising. Unfortunately, they have been shown to suffer from instabilities: Sai et al. (2019) showed that small perturbations to the candidate response already confuse a trained metric. This work goes one step further: we propose a method that automatically finds strategies that elicit very high scores from a trained metric while being of
|
| 14 |
+
|
| 15 |
+
obviously low quality. Our method can be applied to automatically test the robustness of trained metrics against adversarial strategies that exploit their weaknesses.
|
| 16 |
+
|
| 17 |
+

|
| 18 |
+
Figure 1: Overview of the process. It takes a context and a response generated by a dialogue policy and computes a score using the trained metric. The score is then used as a reward to update the policy. In this example, the policy converges to a fixed response, which achieves an almost perfect score although it is clearly a low-quality response. The policy always returns this response, regardless of the context, and the trained metric always scores it perfectly.
|
| 19 |
+
|
| 20 |
+
Our method uses a trained metric as a reward in a Reinforcement Learning setting, where we fine-tune a dialogue system to maximize the reward. Under this training, the dialogue system converges towards degenerate strategies that receive high rewards from the trained metric. In our experiments, the policy converges to one of three degenerate strategy types: the Parrot, the Fixed Response, and the Pattern. For each dataset and metric, an adversarial response belonging to one of these three types is found. The responses generated by these strategies achieve high scores on the metric; in most cases, the scores are even higher than those achieved by human-written responses. Figure 1 shows the pipeline. The dialogue policy receives a reward signal from the trained metric.
|
| 21 |
+
|
| 22 |
+
Over time, the policy converges to a fixed response, which objectively does not match the context but gets a near-perfect score on the trained metric. We release the code<sup>1</sup>.
|
| 23 |
+
|
| 24 |
+
# 2 Related Work
|
| 25 |
+
|
| 26 |
+
Trained Metrics. In recent years, the field of trained metrics gained traction after word-overlap methods were shown to be unreliable (Liu et al., 2016). The first of these metrics is ADEM (Lowe et al., 2017), which takes as input a context, a reference, and the candidate response, and returns a score. The main issue with ADEM is its reliance on references and annotated data (i.e., human ratings of responses), which are costly to obtain and need to be redone for each domain. RUBER (Tao et al., 2018) extended ADEM by removing the reliance on annotated data for training; however, it still relies on a reference during inference. AutoJudge (Deriu and Cieliebak, 2019) removed the reliance on references, which allows evaluating the multi-turn behavior of a dialogue system, but it still leverages annotated data for training. USR (Mehri and Eskenazi, 2020) is a trained metric that relies neither on annotated data nor on any reference: it is trained in a completely unsupervised manner while still correlating highly with human judgment (0.4 Spearman correlation). Similarly, MAUDE (Sinha et al., 2020) is an unreferenced metric built to handle the online evaluation of dialogue systems.
|
| 27 |
+
|
| 28 |
+
Robustness of Trained Metrics. There is not yet much research on the robustness of trained metrics. Sai et al. (2019) evaluated the robustness of ADEM by corrupting the context in different ways. They show that just removing punctuation changes ADEM's scores, which in $64\%$ of cases exceed the scores given to the same response with punctuation intact. Other corruption mechanisms yielded similar results. Yeh et al. (2021) compared a large variety of automated metrics for dialogue system evaluation, e.g., via turn- and dialogue-level correlation with human judgments, and studied the impact of dialogue length. They find that no single metric is robust against all alterations but see potential in ensembling different metrics. Novikova et al. (2017) investigate automated metrics in the task-oriented NLG domain and find that the metrics do
|
| 29 |
+
|
| 30 |
+
Algorithm 1: Advantage Actor-Critic algorithm, where $\pi_{\theta}$ denotes the policy, $c$ the context, $r$ the response generated by the policy, and $s$ the score assigned by the automated metric, i.e., the reward.
|
| 31 |
+
|
| 32 |
+
1 while training do
|
| 33 |
+
2 sample $c$ from pool of contexts;
|
| 34 |
+
3 $r = \pi_{\theta}(c)$ generate response;
|
| 35 |
+
4 $s = R(c,r)$ compute reward;
|
| 36 |
+
5 fit the action-value function $Q_{\sigma}$, i.e., minimize $\mathcal{L}(\sigma) = \frac{1}{2}\sum_{i}\| R(c,r) + Q_{\sigma}(c',r') - Q_{\sigma}(c,r)\|^{2}$;
|
| 37 |
+
6 compute the advantage
|
| 38 |
+
7 $A(r,c) = R(r,c) - Q_{\sigma}(c,r) + Q_{\sigma}(c',r')$; $\theta = \theta + \alpha \nabla J_{RL}(\theta)$ (fit policy);
|
| 39 |
+
8 end
|
| 40 |
+
|
| 41 |
+
not sufficiently reflect human ratings.
|
| 42 |
+
|
| 43 |
+
# 3 Method
|
| 44 |
+
|
| 45 |
+
Our method applies a trained metric as a reward signal $R(c, r)$ to update a dialogue system $\pi(c)$ in a reinforcement learning setting, where $c$ denotes the context and $r$ the response. The dialogue system generates a response for a context, which is then scored by the automated metric, and is updated using the score as the reward. This process is repeated for different contexts. We use the Actor-Critic framework to optimize the policy (Sutton et al., 1999); see Algorithm 1 for an overview. The policy gradient is defined as $\nabla J_{RL}(\theta) = \nabla_{\theta} \log \pi_{\theta}(r|c) \cdot A(r, c)$, where $\pi_{\theta}(r|c)$ is the probability of the generated response given the context, and $A(r, c)$ is the advantage function.
|
| 46 |
+
|
| 47 |
+
The learned policy depends on the reward function, i.e., the automated metric. If the reward function is susceptible to adversarial attacks, the policy will likely generate an objectively suboptimal solution, which is rated highly by the automated metric. Conversely, we expect the policy to improve the dialogue systems' responses if the automated metric is robust against adversarial examples.
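The dynamic described above can be reproduced in a toy setting. The sketch below is our own illustration, not the authors' code: a softmax policy over three canned responses (all names and metric scores are invented) is updated with a policy gradient whose reward is a context-blind "trained metric" that favors a degenerate fixed response.

```python
import math
import random

random.seed(0)

RESPONSES = ["on-topic reply", "generic filler", "degenerate fixed response"]

def metric(context, response):
    # Flawed reward: blind to the context, loves the degenerate response.
    return {"on-topic reply": 0.6,
            "generic filler": 0.4,
            "degenerate fixed response": 0.95}[response]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

logits = [0.0, 0.0, 0.0]   # policy parameters
baseline = 0.0             # running-average critic
lr = 0.5

for _ in range(5000):
    probs = softmax(logits)
    a = random.choices(range(3), weights=probs)[0]   # sample a response
    r = metric("any context", RESPONSES[a])
    advantage = r - baseline
    baseline += 0.05 * (r - baseline)
    for i in range(3):   # gradient of log pi(a) * advantage
        grad = (1.0 if i == a else 0.0) - probs[i]
        logits[i] += lr * advantage * grad

final_probs = softmax(logits)
best = RESPONSES[max(range(3), key=lambda i: final_probs[i])]
```

Running this, the policy typically collapses onto the degenerate response, mirroring the convergence behavior we observe with real trained metrics.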
|
| 48 |
+
|
| 49 |
+
# 4 Experimental Setup
|
| 50 |
+
|
| 51 |
+
# 4.1 Datasets
|
| 52 |
+
|
| 53 |
+
We perform the evaluation on three widely used datasets in the dialogue modelling domain: Dailydialog (Li et al., 2017), Empathetic Dialogues (Rashkin et al., 2019), and PersonaChat (Zhang et al., 2018).
|
| 54 |
+
|
| 55 |
+
<table><tr><td>Metric</td><td>Strategy</td><td>Response</td></tr><tr><td colspan="3">PersonaChat</td></tr><tr><td>ATT</td><td>Fixed</td><td rowspan="2">yea!! 1!! 2!! 3!! * * * fucking fucking * * [ [ fucking * fucking * that sounds like a lot of fun. what do you like to do in your spare time?</td></tr><tr><td>BLM</td><td>Fixed</td></tr><tr><td>MAUDE</td><td>Fixed</td><td>What kind of work do you have? What do you like to do in your free time?</td></tr><tr><td>USR FULL</td><td>Parrot</td><td>-</td></tr><tr><td>USR MLM</td><td>Fixed</td><td>i am a stay at home mom and i am trying to figure out what i want to do with my life</td></tr><tr><td>USR RET</td><td>Fixed</td><td>I love to be a musician. I love music. What kind of music do you listen to as a music lover</td></tr><tr><td colspan="3">D Dailydialog</td></tr><tr><td>ATT</td><td>Fixed</td><td rowspan="2">! freaking out! one of these days! * * one * * freaking * * out! * even * * damn * * even damn that would be great! what do you do for a living, if you don't mind me asking?</td></tr><tr><td>BLM</td><td>Fixed</td></tr><tr><td>MAUDE</td><td>Fixed</td><td>I hope it works out for you. What kind of car did you get?</td></tr><tr><td>USR FULL</td><td>Pattern</td><td>i'm not sure if i'd like to [copy context tokens]. i'll let you know if i do.</td></tr><tr><td>USR MLM</td><td>Fixed</td><td>i am not sure if i am going to be able to go out of my way to get to know each other or not.</td></tr><tr><td>USR RET</td><td>Parrot</td><td>-</td></tr><tr><td colspan="3">Empathetic Dialogues</td></tr><tr><td>ATT</td><td>Fixed</td><td>I know right? I felt SO SO ASHAmed of myself. I felt so embarassed.</td></tr><tr><td>BLM</td><td>Fixed</td><td>I'm so sorry to hear that. What happened, if you don't mind me asking?</td></tr><tr><td>MAUDE</td><td>Fixed</td><td>I wish I could go back in time and be a kid again. I miss those days.</td></tr><tr><td>USR FULL</td><td>Pattern</td><td>i don't think it's [ random context noun]. 
i'm sorry to hear that. what do you mean by that?</td></tr><tr><td>USR MLM</td><td>Fixed</td><td>I don't know what I'm going to do if it doesn't work out. I'm not sure what to do.</td></tr><tr><td>USR RET</td><td>Parrot</td><td>-</td></tr></table>
|
| 56 |
+
|
| 57 |
+
Table 1: The strategies achieved for each metric and domain.
|
| 58 |
+
|
| 59 |
+
# 4.2 Metrics
|
| 60 |
+
|
| 61 |
+
We use various state-of-the-art automated metrics developed for evaluating conversational dialogue systems without a reference, i.e., so-called unreferenced metrics. These metrics use only the context and the response to determine the score. They can be represented as a function $s = R(c,r)$ that rates the response $r$ for a given context $c$.
|
| 62 |
+
|
| 63 |
+
We selected state-of-the-art trained metrics that achieve good correlations with human judgments to evaluate our approach, namely USR (Mehri and Eskenazi, 2020), ATT (Gao et al., 2021), and MAUDE (Sinha et al., 2020). Additionally, we added the Blender language model score (BlenderLM) (Roller et al., 2020). For the ATT$^{2}$, MAUDE$^{3}$, and BlenderLM$^{4}$ metrics, we use the out-of-the-box models provided by the respective authors. For the USR metric, we perform custom training on each dataset. Furthermore, we report the USR-retrieval (USR Ret), USR-masked-language-model (USR MLM), and USR-regression (USR Full) scores. Note that USR Full is a combination of the USR Ret and USR MLM metrics. More details can be found in Appendix A.
|
| 64 |
+
|
| 65 |
+
# 4.3 Strategies
|
| 66 |
+
|
| 67 |
+
For our approach, we use Blender (Roller et al., 2020) as our policy, since it is currently a state-of-the-art conversational dialogue system$^{5}$. We perform the reinforcement learning on the validation set of each domain, to avoid fine-tuning the dialogue system on already-seen data, and use the test set to evaluate the reward over the number of episodes. We run reinforcement learning for 15 epochs of 500 updates each; preliminary experiments showed that this suffices for a dialogue system to converge to a degenerate strategy. We track the average reward achieved on the test set after each epoch. Each experiment is repeated 10 times, since we expect the policy to converge to slightly different strategies across runs. We select the repetition that achieved the highest score (i.e., reward) and use it to determine the strategy. We also experimented with automated strategy detection, see Appendix B.
|
| 68 |
+
|
| 69 |
+
# 5 Results
|
| 70 |
+
|
| 71 |
+
The policies typically converge towards one of the following three degenerate strategies.
|
| 72 |
+
|
| 73 |
+
Parrot. Here, the policy simply copies parts of the context into the response. Sometimes, it applies slight changes. For instance, it changes the pronouns from "you" to "I".
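The following hypothetical sketch shows what such a Parrot amounts to (our illustration; the trained policies in the paper learn this behavior rather than hard-coding it, and the pronoun table is a deliberate simplification):

```python
# Minimal "Parrot" strategy: echo the last context utterance,
# flipping first/second person pronouns.
SWAP = {"you": "i", "i": "you", "your": "my", "my": "your",
        "yours": "mine", "mine": "yours"}

def parrot(context):
    """Return the last utterance in the context with pronouns flipped."""
    tokens = context[-1].lower().split()
    return " ".join(SWAP.get(tok, tok) for tok in tokens)
```

For example, `parrot(["hello !", "do you like your job ?"])` returns `"do i like my job ?"`, a response that is trivially on-topic for a metric comparing it against the context.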
|
| 74 |
+
|
| 75 |
+
Fixed Response. Here, the policy converges on a fixed response which it returns regardless of the
|
| 76 |
+
|
| 77 |
+
Dailydialog
|
| 78 |
+
|
| 79 |
+
<table><tr><td></td><td>USR RET</td><td>USR MLM</td><td>USR FULL</td><td>ATT</td><td>MAUDE</td><td>BLM</td></tr><tr><td>BL</td><td>0.440</td><td>0.426</td><td>4.951</td><td>0.0002</td><td>0.664</td><td>0.096</td></tr><tr><td>HU</td><td>0.928</td><td>0.409</td><td>7.904</td><td>0.0006</td><td>0.898</td><td>0.183</td></tr><tr><td>COPY</td><td>0.998</td><td>0.811</td><td>9.429</td><td>0.0002</td><td>0.921</td><td>0.233</td></tr><tr><td>FIXED</td><td>-</td><td>0.505</td><td>-</td><td>0.435</td><td>0.985</td><td>0.239</td></tr><tr><td>PARROT</td><td>0.998</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>PATTERN</td><td>-</td><td>-</td><td>7.091</td><td>-</td><td>-</td><td>-</td></tr></table>
|
| 80 |
+
|
| 81 |
+
Empathetic Dialogues
|
| 82 |
+
|
| 83 |
+
<table><tr><td></td><td>USR RET</td><td>USR MLM</td><td>USR FULL</td><td>ATT</td><td>MAUDE</td><td>BLM</td></tr><tr><td>BL</td><td>0.935</td><td>0.298</td><td>7.645</td><td>0.001</td><td>0.820</td><td>0.087</td></tr><tr><td>HU</td><td>0.891</td><td>0.384</td><td>7.611</td><td>0.120</td><td>0.942</td><td>0.264</td></tr><tr><td>COPY</td><td>0.996</td><td>0.885</td><td>9.617</td><td>0.054</td><td>0.935</td><td>0.358</td></tr><tr><td>FIXED</td><td>-</td><td>0.912</td><td>-</td><td>0.731</td><td>0.976</td><td>0.333</td></tr><tr><td>PARROT</td><td>0.994</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>PATTERN</td><td>-</td><td>-</td><td>7.240</td><td>-</td><td>-</td><td>-</td></tr></table>
|
| 84 |
+
|
| 85 |
+
PersonaChat
|
| 86 |
+
|
| 87 |
+
<table><tr><td></td><td>USR RET</td><td>USR MLM</td><td>USR FULL</td><td>ATT</td><td>MAUDE</td><td>BLM</td></tr><tr><td>BL</td><td>0.847</td><td>0.185</td><td>6.797</td><td>0.0006</td><td>0.844</td><td>0.070</td></tr><tr><td>HU</td><td>0.927</td><td>0.267</td><td>7.512</td><td>0.0024</td><td>0.951</td><td>0.153</td></tr><tr><td>COPY</td><td>0.925</td><td>0.794</td><td>8.933</td><td>0.0001</td><td>0.898</td><td>0.223</td></tr><tr><td>FIXED</td><td>0.977</td><td>0.852</td><td>-</td><td>0.813</td><td>0.933</td><td>0.250</td></tr><tr><td>PARROT</td><td>-</td><td>-</td><td>7.542</td><td>-</td><td>-</td><td>-</td></tr><tr><td>PATTERN</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr></table>
|
| 88 |
+
|
| 89 |
+
Table 2: Scores achieved by humans (HU), Blenderbot (BL) and the degenerate strategies with regard to the different metrics for each domain.
|
| 90 |
+
|
| 91 |
+
context.
|
| 92 |
+
|
| 93 |
+
Pattern. This is a mix between the Parrot and the Fixed Response. It creates a fixed template filled with parts of the context.
|
| 94 |
+
|
| 95 |
+
Table 1 shows the selected responses for each pair of domain and metric. For all metrics except ATT, the fixed response is composed of a grammatically correct sentence. Note that these responses are always returned by the fine-tuned dialogue system, regardless of the context.
|
| 96 |
+
|
| 97 |
+
# 5.1 Scores
|
| 98 |
+
|
| 99 |
+
Table 2 shows the main results. In almost all cases, the degenerate strategy outperforms both the vanilla Blenderbot and humans with respect to the automated metric. The most striking example is the ATT metric, where the fixed response achieves scores orders of magnitude higher than those achieved by humans. For both USR Ret and MAUDE, the scores achieved by the fixed response are almost perfect, i.e., close to the upper bound of 1.0. For USR MLM as well, the scores are significantly higher than those achieved by Blenderbot. Interestingly, USR Full seems more robust to the patterns that were found. However, even for USR Full, the Parrot strategy beats humans by a significant margin in the PersonaChat domain.
|
| 100 |
+
|
| 101 |
+
Copy. We also report the scores achieved by simply copying the context, which is inspired by the Parrot strategy. The only metric immune to the Copy strategy is ATT; under all other metrics, Copy achieves very high scores, in some cases even better than the converged policy. For instance, in the Dailydialog domain it achieves 0.811 points under the USR MLM metric, which is 0.3 points higher than the converged policy and twice as high as the human score.
|
| 102 |
+
|
| 103 |
+
# 6 Conclusion
|
| 104 |
+
|
| 105 |
+
Trained metrics for the automatic evaluation of conversational dialogue systems are an attractive remedy for costly and time-consuming manual evaluation. While high correlation with human judgments seems to validate the metrics' ability to mimic human judging behavior, our analysis shows that they are susceptible to rather simple adversarial strategies that humans would easily identify. In fact, all metrics that we tested failed to recognize degenerate responses. Our approach is easily adaptable to any newly developed trained metric that takes a pair of context and response as input. There are no known remedies for this problem; thus, the next open challenge is to find methods that improve the robustness of trained metrics.
|
| 106 |
+
|
| 107 |
+
# References
|
| 108 |
+
|
| 109 |
+
Jan Deriu and Mark Cieliebak. 2019. Towards a Metric for Automated Conversational Dialogue System Evaluation and Improvement. In Proceedings of the 12th International Conference on Natural Language Generation, pages 432-437, Tokyo, Japan. Association for Computational Linguistics.
|
| 110 |
+
Jan Deriu, Alvaro Rodrigo, Arantxa Otegi, Guillermo Echegoyen, Sophie Rosset, Eneko Agirre, and Mark Cieliebak. 2020. Survey on Evaluation Methods for Dialogue Systems. Artificial Intelligence Review, pages 1-56.
|
| 111 |
+
Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, Shrimai Prabhumoye, Alan W. Black, Alexander Rudnicky, Jason Williams, Joelle Pineau, Mikhail Burtsev, and Jason Weston. 2020. The second conversational intelligence challenge (convai2). In *The NeurIPS '18 Competition*, pages 187-208, Cham. Springer International Publishing.
|
| 112 |
+
Xiang Gao, Yizhe Zhang, Michel Galley, and Bill Dolan. 2021. An adversarially-learned tuning test for dialog generation models. arXiv preprint arXiv:2104.08231.
|
| 113 |
+
Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 986-995, Taipei, Taiwan. Asian Federation of Natural Language Processing.
|
| 114 |
+
Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122-2132, Austin, Texas. Association for Computational Linguistics.
|
| 115 |
+
Ryan Lowe, Michael Noseworthy, Iulian Vlad Serban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an Automatic Turing Test: Learning to Evaluate Dialogue Responses. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1116-1126, Vancouver, Canada. Association for Computational Linguistics.
|
| 116 |
+
Shikib Mehri and Maxine Eskenazi. 2020. USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 681-707, Online. Association for Computational Linguistics.
|
| 117 |
+
A. H. Miller, W. Feng, A. Fisch, J. Lu, D. Batra, A. Bordes, D. Parikh, and J. Weston. 2017. Parlai: A
|
| 118 |
+
|
| 119 |
+
dialog research software platform. arXiv preprint arXiv:1705.06476.
|
| 120 |
+
Jekaterina Novikova, Ondrej Dusek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2241-2252, Copenhagen, Denmark. Association for Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2018. Language Models are Unsupervised Multitask Learners.
Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards Empathetic Open-domain Conversation Models: A New Benchmark and Dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5370-5381, Florence, Italy. Association for Computational Linguistics.
Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M Smith, et al. 2020. Recipes for building an open-domain chatbot. arXiv preprint arXiv:2004.13637.
Ananya B Sai, Mithun Das Gupta, Mitesh M Khapra, and Mukundhan Srinivasan. 2019. Re-Evaluating ADEM: A Deeper Look at Scoring Dialogue Responses. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, volume 33 of AAAI'19, pages 6220-6227, Honolulu, Hawaii, USA.
Koustuv Sinha, Prasanna Parthasarathi, Jasmine Wang, Ryan Lowe, William L Hamilton, and Joelle Pineau. 2020. Learning an unreferenced metric for online dialogue evaluation. arXiv preprint arXiv:2005.00583.
Richard S. Sutton, David McAllester, Satinder Singh, and Yishay Mansour. 1999. Policy gradient methods for reinforcement learning with function approximation. In Proceedings of the 12th International Conference on Neural Information Processing Systems, NIPS'99, page 1057-1063, Cambridge, MA, USA. MIT Press.
Chongyang Tao, Lili Mou, Dongyan Zhao, and Rui Yan. 2018. RUBER: An Unsupervised Method for Automatic Evaluation of Open-Domain Dialog Systems. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, AAAI'18, New Orleans, Louisiana, USA.
Yi-Ting Yeh, Maxine Eskenazi, and Shikib Mehri. 2021. A comprehensive assessment of dialog evaluation metrics. arXiv preprint arXiv:2106.03706.
Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing Dialogue Agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204-2213, Melbourne, Australia. Association for Computational Linguistics.
# A Correlation between Human Judgments and Trained Metrics
In this section, we evaluate the metrics with regard to their correlation with human judgments, to show that these metrics have reasonable performance. For this, we sample 100 contexts for each domain. For each domain, we use a set of bots to create a response for each context, and we add the human response to the pool of responses for each context. Then, we let crowdworkers annotate the responses. Finally, we correlate the scores of each metric on the same set of contexts and responses with the human annotations.
# A.1 Domains and Bots
We perform the evaluation on the three datasets from the main paper.
Dailydialog. We prepared 5 bots using ParlAI (Miller et al., 2017). We fine-tune a GPT-2 (GPT) model (Radford et al., 2018), a BERT-Rank (BR) model, a sequence-to-sequence model (S2) with attention, and a weakly trained sequence-to-sequence model (DR). We also use the Blender model (Roller et al., 2020), although it was not specifically tuned on Dailydialog.
Empathetic Dialogues. We prepared the same pool of models as in Dailydialog.
PersonaChat. We mostly reuse the openly available systems of the ConvAI2 challenge (Dinan et al., 2020), namely Lost in Conversation⁶ (LC), Huggingface (HF)⁷, and KVMemNN (KV). We also add the Blender model, which is also trained on this domain, a custom-trained BERT-Rank model (BR), and a sequence-to-sequence model (S2). Together with the DR model, the pool consists of 7 different dialogue systems.
# A.2 Annotation Process
Since we perform the evaluation in a static-context setting, we also add the human response (i.e., the gold response) to the pool of systems. For evaluation, we use 600 samples each for Dailydialog and Empathetic Dialogues, and 800 samples for the PersonaChat domain. Each sample is composed of a context (sampled from the test set) and a generated response. We annotated the overall quality of each sample on a Likert scale from 0 (bad) to
<table><tr><td></td><td>DD</td><td>ED</td><td>PC</td></tr><tr><td>USR RET</td><td>0.561</td><td>0.524</td><td>0.605</td></tr><tr><td>USR MLM</td><td>0.138</td><td>0.452</td><td>0.303</td></tr><tr><td>USR REG</td><td>0.559</td><td>0.573</td><td>0.585</td></tr><tr><td>ATT</td><td>0.154</td><td>0.385</td><td>-0.099</td></tr><tr><td>MAUDE</td><td>0.211</td><td>0.086</td><td>0.357</td></tr><tr><td>BLENDERLM</td><td>0.201</td><td>0.287</td><td>0.266</td></tr></table>
Table 3: Correlations of the automated metrics to human judgments. For all runs $p < 0.05$ .
2 (good) using Mechanical Turk<sup>8</sup>. Each sample is annotated by three different humans. As the final score, we use the average score of the three annotations. For each metric, we apply the metric to all samples, and then compute the Spearman correlation between the human scores and the scores predicted by the metric.
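The aggregation and correlation procedure described above can be sketched as follows. The ratings and metric scores are invented for illustration, and the Spearman correlation is computed directly as the Pearson correlation of the ranks (ignoring ties, for brevity) rather than via a statistics library.

```python
# Sketch of the annotation aggregation and Spearman correlation described
# above. All ratings and metric scores below are invented for illustration.

def to_ranks(values):
    """Assign 1-based ranks (no tie handling, for brevity)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for rank, idx in enumerate(order, start=1):
        ranks[idx] = float(rank)
    return ranks

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = to_ranks(x), to_ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# Three Likert ratings in [0, 2] per sample, averaged into one human score.
annotations = [[2, 2, 1], [0, 1, 0], [1, 2, 1], [0, 1, 1]]
human_scores = [sum(r) / len(r) for r in annotations]
metric_scores = [0.81, 0.12, 0.64, 0.20]  # hypothetical metric outputs

rho = spearman(human_scores, metric_scores)
```

In this toy example the metric ranks the four samples exactly as the averaged human scores do, so `rho` comes out as 1.0.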
# A.3 Correlation to Human Judgments
Table 3 shows the correlations of the human judgments with each of the metrics for each domain. For all domains, the $USR$ metric performs best, achieving strikingly high correlations with human judgments. MAUDE also achieves good correlation scores on the PersonaChat domain, and ATT performs well on the Empathetic Dialogues domain. BlenderLM performs moderately across all domains.
# A.4 Original USR
Note that the USR Ret scores are significantly higher than in the original paper (Mehri and Eskenazi, 2020), because we use multiple turns to represent the context, whereas the original implementation uses only the previous turn. On our annotated data, the original implementation of USR Ret achieves a Spearman correlation of 48.67. If we train our implementation of USR Ret using only one turn to represent the context, we achieve a Spearman correlation of 40.34, which is comparable to the original. We did not observe a discrepancy for the USR MLM model, where the original model achieves the same correlation as ours.
# B Strategy Selection
We observed in our experiments that the dialogue system almost always converges to one of three degenerate strategies. In order to automate their detection in the experiments, we used a set of heuristics for their identification.
# B.1 Heuristics
Since the strategies are very simple, we propose heuristics to detect the policy automatically. This avoids the need for manual inspection of a potentially large number of log files. For this, we introduce the following measures.
- Response Frequency. The percentage of times that the same response is generated for all samples in the test set.
- Lexical Variety. The ratio between the number of distinct tokens and the total number of tokens over all responses in the test set.
- BLEU score. The BLEU score between the context and the response. This is computed for each context-response pair and then averaged over all samples in the test set.
- Jaccard score. The Jaccard overlap between the context and response tokens. Analogous to the BLEU score, the Jaccard overlap is computed for each context-response pair and then averaged over all samples in the test set.
These measures can be used to detect the various strategies the policy converges to. For instance, a high Response Frequency indicates that the policy converges to a fixed response. A high BLEU score and Jaccard score indicate that the policy converges to the parrot strategy. A low Response Frequency, a low Lexical Variety and a moderate Jaccard score indicate that the policy converges to a pattern. A pattern is composed of a fixed template where parts are filled with tokens from the context.
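As an illustration, three of the measures can be computed as follows for whitespace-tokenized text; a standard library (e.g., NLTK's `sentence_bleu`) could supply the BLEU measure. All example data here is invented.

```python
# Hypothetical sketch of three of the detection measures on
# whitespace-tokenized text. The example responses/contexts are invented.
from collections import Counter

def response_frequency(responses):
    """Fraction of samples that produce the single most common response."""
    most_common_count = Counter(responses).most_common(1)[0][1]
    return most_common_count / len(responses)

def lexical_variety(responses):
    """Number of distinct tokens divided by total tokens over all responses."""
    tokens = [tok for resp in responses for tok in resp.split()]
    return len(set(tokens)) / len(tokens)

def jaccard(context, response):
    """Token-set overlap between a context and its response."""
    a, b = set(context.split()), set(response.split())
    return len(a & b) / len(a | b)

responses = ["i am not sure", "i am not sure", "hello there"]
contexts = ["are you sure", "how are you", "hi you"]

freq = response_frequency(responses)  # 2 of 3 responses are identical
variety = lexical_variety(responses)  # 6 distinct tokens out of 10
avg_jaccard = sum(jaccard(c, r) for c, r in zip(contexts, responses)) / 3
```

A fixed-response policy drives `freq` toward 1 and `variety` toward 0, while a parrot policy drives the averaged Jaccard (and BLEU) overlap up.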
# B.2 Application of the Heuristics
For each run, we use these measures to determine which strategy the policy has converged to. The final strategy is extracted by selecting the best epoch across all 10 runs for each domain. If the Response Frequency is larger than 0.7, we extract the most common sentence and use this as our fixed response. If the BLEU score is larger than 0.2, we assign the parrot strategy. If the Response Frequency is smaller than 0.1, the Lexical Variety is smaller than 0.15, and the Jaccard score is larger than 0.05, this indicates that a pattern has emerged. In this case, we manually extract the pattern.
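These decision rules can be written out directly; the threshold values are the ones stated above, and the inputs would come from the measures of Section B.1.

```python
# The strategy-selection rules described above, with the thresholds stated
# in the text. "Not Conclusive" falls back to manual inspection.
def infer_strategy(resp_freq, lex_var, bleu, jaccard):
    if resp_freq > 0.7:
        return "Fixed Response"
    if bleu > 0.2:
        return "Parrot"
    if resp_freq < 0.1 and lex_var < 0.15 and jaccard > 0.05:
        return "Pattern"
    return "Not Conclusive"

# Example rows from Table 4 (PersonaChat):
assert infer_strategy(0.8, 0.0, 0.0, 0.07) == "Fixed Response"  # USR Ret
assert infer_strategy(0.0, 0.09, 0.42, 0.48) == "Parrot"        # USR Full
assert infer_strategy(0.14, 0.0, 0.0, 0.0) == "Not Conclusive"  # ATT
```

The Dailydialog USR Full row (Response Frequency 0, Lexical Variety 0.11, BLEU 0.08, Jaccard 0.15) triggers the third rule and is classified as a pattern, matching Table 4.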
# B.3 Overview
Table 4 shows the measures used to perform the automated strategy selection. The automated strategy
<table><tr><td>domain</td><td>metric</td><td>Avg Reward</td><td>Resp Freq</td><td>Lex Var</td><td>BLEU</td><td>Jaccard</td><td>Strategy Inferred</td><td>Strategy Manual</td><td>Strategy Final</td></tr><tr><td>Persona Chat</td><td>ATT</td><td>0.77</td><td>0.14</td><td>0</td><td>0</td><td>0</td><td>Not Conclusive</td><td>Fixed Response</td><td>Fixed Response</td></tr><tr><td>Persona Chat</td><td>BLM</td><td>0.41</td><td>0.01</td><td>0.11</td><td>0.03</td><td>0.06</td><td>Not Conclusive</td><td>Fixed Response</td><td>Fixed Response</td></tr><tr><td>Persona Chat</td><td>MAUDE</td><td>0.98</td><td>0.7</td><td>0.01</td><td>0</td><td>0.07</td><td>Fixed Response</td><td></td><td>Fixed Response</td></tr><tr><td>Persona Chat</td><td>USR Full</td><td>7.7</td><td>0</td><td>0.09</td><td>0.42</td><td>0.48</td><td>Parrot</td><td></td><td>Parrot</td></tr><tr><td>Persona Chat</td><td>USR MLM</td><td>0.84</td><td>0.94</td><td>0.01</td><td>0.01</td><td>0.1</td><td>Fixed Response</td><td></td><td>Fixed Response</td></tr><tr><td>Persona Chat</td><td>USR Ret</td><td>1</td><td>0.8</td><td>0</td><td>0</td><td>0.07</td><td>Fixed Response</td><td></td><td>Fixed Response</td></tr><tr><td>Dailydialog</td><td>ATT</td><td>0.42</td><td>0.55</td><td>0.01</td><td>0</td><td>0.01</td><td>Not Conclusive</td><td>Fixed Response</td><td>Fixed Response</td></tr><tr><td>Dailydialog</td><td>BLM</td><td>0.26</td><td>0.32</td><td>0.01</td><td>0</td><td>0.05</td><td>Not Conclusive</td><td>Fixed Response</td><td>Fixed Response</td></tr><tr><td>Dailydialog</td><td>MAUDE</td><td>0.99</td><td>0.99</td><td>0</td><td>0</td><td>0.06</td><td>Fixed Response</td><td></td><td>Fixed Response</td></tr><tr><td>Dailydialog</td><td>USR Full</td><td>7.65</td><td>0</td><td>0.11</td><td>0.08</td><td>0.15</td><td>Pattern</td><td></td><td>Pattern</td></tr><tr><td>Dailydialog</td><td>USR MLM</td><td>0.52</td><td>1</td><td>0</td><td>0</td><td>0.04</td><td>Fixed Response</td><td></td><td>Fixed Response</td></tr><tr><td>Dailydialog</td><td>USR Ret</td><td>0.99</td><td>0</td><td>0.19</td><td>0.21</td><td>0.31</td><td>Parrot</td><td></td><td>Parrot</td></tr><tr><td>Empathetic Dialogues</td><td>ATT</td><td>0.78</td><td>0.98</td><td>0</td><td>0</td><td>0.04</td><td>Fixed Response</td><td></td><td>Fixed Response</td></tr><tr><td>Empathetic Dialogues</td><td>BLM</td><td>0.33</td><td>0.47</td><td>0.03</td><td>0</td><td>0.05</td><td>Not Conclusive</td><td>Fixed Response</td><td>Fixed Response</td></tr><tr><td>Empathetic Dialogues</td><td>MAUDE</td><td>0.98</td><td>0.96</td><td>0</td><td>0</td><td>0.06</td><td>Fixed Response</td><td></td><td>Fixed Response</td></tr><tr><td>Empathetic Dialogues</td><td>USR Full</td><td>8.67</td><td>0.01</td><td>0.07</td><td>0.04</td><td>0.1</td><td>Pattern</td><td></td><td>Pattern</td></tr><tr><td>Empathetic Dialogues</td><td>USR MLM</td><td>0.77</td><td>0.98</td><td>0</td><td>0</td><td>0.06</td><td>Fixed Response</td><td></td><td>Fixed Response</td></tr><tr><td>Empathetic Dialogues</td><td>USR Ret</td><td>1</td><td>0</td><td>0.17</td><td>0.33</td><td>0.44</td><td>Parrot</td><td></td><td>Parrot</td></tr></table>
Table 4: Scores achieved on the test set during the evaluation.
selection worked in $72\%$ of cases. There are two main cases in which it was not conclusive. The first concerns the ATT metric, where for both the Dailydialog and PersonaChat domains no clear fixed response arose. However, after manual inspection, we noted that for PersonaChat the policy generated the same tokens in varying frequencies and orders. For Dailydialog, the most frequent response arose in $55\%$ of cases, so we used this fixed response. The second case concerns the BLM metric. For all domains, we selected the most frequent response, although it appeared in less than $70\%$ of cases.
# C Full Results
Table 5 shows all scores achieved by the dialogue systems on the respective metrics. Furthermore, we also add the average score of the Amazon Mechanical Turk judges, which ranges from 0 to 2.
# D Technical Explanation
One potential reason why our approach is able to find a degenerate strategy lies in the exploration problem in reinforcement learning. Blender's language model can be interpreted as a policy which performs a sequence of actions, i.e., sampling a sequence of tokens. Thus, the language model loss during standard Blender training can be interpreted as an indicator of how certain the policy is of its actions. A high language model loss indicates that the policy assigns low probability scores to its actions. Conversely, a low language model loss indicates that the policy is certain of its actions. This could be further investigated by measuring the entropy of the language model. Indeed, in all our experiments, we notice that the language model loss collapses toward a very small value, which indicates that the language model collapses to a single simple strategy. Figure 2 shows the language model loss over the
number of steps. The loss quickly collapses from an average of 4 points to around 0.5 points. At the same time the average reward (orange) rises from 0.78 to 0.92. Similarly, the response frequency rises from 0 to 0.94. In the middle, the loss rises again, which indicates the search for a new strategy. This coincides with a lower response frequency.
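The suggested entropy measurement could be sketched as follows. The per-token distributions are invented, but they illustrate how a collapsing policy shows up as near-zero average entropy, mirroring the collapsed language model loss.

```python
# Hypothetical sketch of the entropy measurement suggested above: a policy
# that is "sure of its actions" puts most probability mass on one token,
# so its average per-token entropy approaches zero. Distributions invented.
import math

def token_entropy(dist):
    """Shannon entropy (in nats) of one next-token distribution."""
    return -sum(p * math.log(p) for p in dist if p > 0)

def mean_policy_entropy(step_dists):
    """Average entropy over the token-level decisions of one response."""
    return sum(token_entropy(d) for d in step_dists) / len(step_dists)

# Early in training: probability mass is spread out over the vocabulary.
early = [[0.25, 0.25, 0.25, 0.25], [0.5, 0.3, 0.2]]
# After collapse: nearly deterministic sampling.
late = [[0.99, 0.01, 0.0], [0.98, 0.02]]
```

Here `mean_policy_entropy(early)` is above 1 nat while `mean_policy_entropy(late)` is below 0.1 nat, consistent with the loss collapse shown in Figure 2.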

Figure 2: The language model loss (blue), the Average Reward (orange), and the Response Frequency (red) over time.
# E Examples
In Tables 6, 7, and 8, we show examples of the outputs from the fine-tuned Blender model. For each metric, we show the output to which Blender converged when using the metric as a reward. Furthermore, we show the score which the respective metric assigns to the generated response. Note that the Parrot strategies simply copy the text from the context. For the Empathetic Dialogues dataset, the degenerate strategy prepends "I'm not sure" to the context. For PersonaChat, the degenerate strategy prepends "i've always wanted to". The Copy strategy (see Table 2 in the main paper) ignores these prefixes and simply copies the context.
Dailydialog
<table><tr><td></td><td>AMT</td><td>USR RET</td><td>USR MLM</td><td>USR FULL</td><td>ATT</td><td>MAUDE</td><td>BLM</td></tr><tr><td>BR</td><td>1.836</td><td>0.928</td><td>0.409</td><td>7.904</td><td>0.0006</td><td>0.898</td><td>0.177</td></tr><tr><td>BL</td><td>1.386</td><td>0.440</td><td>0.426</td><td>4.951</td><td>0.0002</td><td>0.664</td><td>0.096</td></tr><tr><td>HF</td><td>1.656</td><td>0.925</td><td>0.080</td><td>6.989</td><td>0.0026</td><td>0.866</td><td>0.371</td></tr><tr><td>HU</td><td>1.782</td><td>0.928</td><td>0.409</td><td>7.904</td><td>0.0006</td><td>0.898</td><td>0.183</td></tr><tr><td>S2</td><td>1.024</td><td>0.512</td><td>0.300</td><td>5.050</td><td>0.0003</td><td>0.895</td><td>0.183</td></tr><tr><td>DR</td><td>0.729</td><td>0.308</td><td>0.338</td><td>3.900</td><td>0.0001</td><td>0.891</td><td>0.204</td></tr><tr><td>PARROT</td><td>-</td><td>0.998</td><td>0.811</td><td>9.429</td><td>0.0002</td><td>0.921</td><td>0.233</td></tr><tr><td>FIXED</td><td>-</td><td>-</td><td>0.505</td><td>-</td><td>0.435</td><td>0.985</td><td>0.239</td></tr><tr><td>PATTERN</td><td>-</td><td>-</td><td>-</td><td>7.091</td><td>-</td><td>-</td><td>-</td></tr></table>
Empathetic Dialogues
<table><tr><td></td><td>AMT</td><td>USR RET</td><td>USR MLM</td><td>USR FULL</td><td>ATT</td><td>MAUDE</td><td>BLM</td></tr><tr><td>BR</td><td>1.808</td><td>0.891</td><td>0.384</td><td>7.611</td><td>0.120</td><td>0.942</td><td>0.260</td></tr><tr><td>BL</td><td>1.640</td><td>0.935</td><td>0.298</td><td>7.645</td><td>0.001</td><td>0.820</td><td>0.087</td></tr><tr><td>HF</td><td>1.610</td><td>0.887</td><td>0.644</td><td>8.292</td><td>0.044</td><td>0.948</td><td>0.462</td></tr><tr><td>HU</td><td>1.816</td><td>0.891</td><td>0.384</td><td>7.611</td><td>0.120</td><td>0.942</td><td>0.264</td></tr><tr><td>S2</td><td>0.702</td><td>0.493</td><td>0.145</td><td>4.510</td><td>0.010</td><td>0.932</td><td>0.159</td></tr><tr><td>DR</td><td>0.822</td><td>0.354</td><td>0.182</td><td>3.759</td><td>0.001</td><td>0.936</td><td>0.199</td></tr><tr><td>PARROT</td><td>-</td><td>0.996</td><td>0.8848</td><td>9.617</td><td>0.054</td><td>0.935</td><td>0.358</td></tr><tr><td>FIXED</td><td>-</td><td>-</td><td>0.912</td><td>-</td><td>0.731</td><td>0.976</td><td>0.333</td></tr><tr><td>PATTERN</td><td>-</td><td>-</td><td>-</td><td>7.240</td><td>-</td><td>-</td><td>-</td></tr></table>
PersonaChat
<table><tr><td></td><td>AMT</td><td>USR RET</td><td>USR MLM</td><td>USR FULL</td><td>ATT</td><td>MAUDE</td><td>BLM</td></tr><tr><td>BR</td><td>1.350</td><td>0.725</td><td>0.211</td><td>6.120</td><td>0.0020</td><td>0.946</td><td>0.138</td></tr><tr><td>BL</td><td>1.507</td><td>0.847</td><td>0.185</td><td>6.797</td><td>0.0006</td><td>0.844</td><td>0.070</td></tr><tr><td>HF</td><td>1.480</td><td>0.794</td><td>0.272</td><td>6.707</td><td>0.0023</td><td>0.925</td><td>0.152</td></tr><tr><td>HU</td><td>1.623</td><td>0.927</td><td>0.267</td><td>7.512</td><td>0.0024</td><td>0.951</td><td>0.153</td></tr><tr><td>KV</td><td>1.147</td><td>0.538</td><td>0.217</td><td>4.982</td><td>0.0023</td><td>0.852</td><td>0.122</td></tr><tr><td>LC</td><td>1.572</td><td>0.879</td><td>0.103</td><td>6.769</td><td>0.0011</td><td>0.918</td><td>0.195</td></tr><tr><td>S2</td><td>0.681</td><td>0.390</td><td>0.121</td><td>3.814</td><td>0.0013</td><td>0.845</td><td>0.111</td></tr><tr><td>DR</td><td>0.906</td><td>0.482</td><td>0.268</td><td>4.779</td><td>0.0002</td><td>0.537</td><td>0.220</td></tr><tr><td>PARROT</td><td>-</td><td>0.925</td><td>0.794</td><td>8.933</td><td>0.0001</td><td>0.898</td><td>0.223</td></tr><tr><td>FIXED</td><td>-</td><td>0.977</td><td>0.852</td><td>-</td><td>0.813</td><td>0.933</td><td>0.250</td></tr><tr><td>PATTERN</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr></table>
Table 5: Scores achieved by the various dialogue policies with regard to the different metrics. For each metric, we show the average scores achieved by each dialogue system and humans (HU) for each domain. We then show the average score achieved by applying the strategy to which the policy converged. The AMT column denotes Amazon Mechanical Turk ratings.
<table><tr><td>Turn</td><td>Utterance</td><td>Score</td><td>Strategy</td></tr><tr><td colspan="4">usr_ret</td></tr><tr><td>1</td><td>Did this just come on suddenly?</td><td></td><td></td></tr><tr><td>2</td><td>I felt OK until an hour or so ago, and then I just doubled over.</td><td></td><td></td></tr><tr><td>3</td><td>Have you exercised strenuously or played sports recently?</td><td></td><td></td></tr><tr><td>RL-Response</td><td>i feel ok until an hour or so ago, and then i just doubled over. have you exercised strenuously or played sports recently</td><td>0.9986</td><td>Parrot</td></tr><tr><td colspan="4">usr_mlm</td></tr><tr><td>1</td><td>Yes . Who is speaking ?</td><td></td><td></td></tr><tr><td>2</td><td>This is Kelly speaking. I am calling from London. How are you , Mr . Chang ?</td><td></td><td></td></tr><tr><td>3</td><td>Oh , Kelly , I couldn't recognize your voice. Well , I'm fine .</td><td></td><td></td></tr><tr><td>RL-Response</td><td>i am not sure if i am going to be able to go out of my way to get to know each other or not.</td><td>0.4820</td><td>Fixed</td></tr><tr><td colspan="4">usr_full</td></tr><tr><td>1</td><td>You really shouldn ' t write it down . You should memorise it . You can change it to a more convenient number if you like .</td><td></td><td></td></tr><tr><td>2</td><td>Ok , I ' Ve entered my PIN . There are several options on the screen . I guess I just choose which service I want , don ' t I ?</td><td></td><td></td></tr><tr><td>3</td><td>Yes . Choose the service you need to use . If you choose to withdraw money , the machine will ask you how much you want .</td><td></td><td></td></tr><tr><td>RL-Response</td><td>i' m not sure if i'd like to change my pin to a more convenient number if i like. i'll let you</td><td>7.466</td><td>Pattern</td></tr><tr><td colspan="4">att</td></tr><tr><td>1</td><td>Hello , hot water overflowed from the bathtub. 
Would you help me ?</td><td></td><td></td></tr><tr><td>2</td><td>Would you turn off the tap ?</td><td></td><td></td></tr><tr><td>RL-Response</td><td>hahaha!!! thats so funny!! im laughing so hard!! its so true!! ive been laughing so</td><td>0.1176</td><td>Fixed</td></tr><tr><td colspan="4">maude</td></tr><tr><td>1</td><td>Good morning , Mr Zhang , I wonder whether you have got an answer from your superior .</td><td></td><td></td></tr><tr><td>2</td><td>Yes , we had a meting about it yesterday afternoon .</td><td></td><td></td></tr><tr><td>RL-Response</td><td>I hope it works out for you. What kind of car did you get?</td><td>0.9901</td><td>Fixed</td></tr><tr><td colspan="4">blender</td></tr><tr><td>1</td><td>There is a Matisse exhibition at the Museum of Modern Art. Larry and I will go .</td><td></td><td></td></tr><tr><td>2</td><td>Did you buy tickets already ?</td><td></td><td></td></tr><tr><td>3</td><td>No . Do we need tickets ?</td><td></td><td></td></tr><tr><td>RL-Response</td><td>what do you like to do in your spare time? do you have any hobbies?</td><td>0.2078</td><td>Fixed</td></tr></table>
Table 6: Randomly sampled responses for the Dailydialog dataset. We randomly chose a context and let the fine-tuned Blenderbot create a response, which at this point is degenerate. We then score the generated response with the metric that was used as the reward.
<table><tr><td>Turn</td><td>Utterance</td><td>Score</td><td>Strategy</td></tr><tr><td colspan="4">usr_ret</td></tr><tr><td>1</td><td colspan="3">My boyfriend's brothers got into a fist fight one night and he jumped in the middle.</td></tr><tr><td>2</td><td colspan="3">That sucks. Did anyone get hurt?</td></tr><tr><td>3</td><td colspan="3">Some things got broke and there were some bruises, but it was really scary to watch that.</td></tr><tr><td>RL-Response</td><td>i'm sure your boyfriend's brothers got into a fist fight one night and he jumped in the middle. did anyone get hurt?</td><td>0.9982</td><td>Parrot</td></tr><tr><td colspan="4">usr_mlm</td></tr><tr><td>1</td><td colspan="3">Financially, things will turn around for my family soon!</td></tr><tr><td>2</td><td colspan="3">I hope and wish so. In this tough economic climate many are finding it difficult to cope with financial responsibilities</td></tr><tr><td>3</td><td colspan="3">Yes, and we had a layoff of five months. I believe in a brighter day, though, so I know it will improve.</td></tr><tr><td>RL-Response</td><td>I don't know what I'm going to do if it doesn't work out. I'm not sure what to do.</td><td>0.7689</td><td>Fixed</td></tr><tr><td colspan="4">usr_full</td></tr><tr><td>1</td><td colspan="3">I had a major project and felt very ready to present it to everyone at the office</td></tr><tr><td>2</td><td colspan="3">How'd it go</td></tr><tr><td>3</td><td colspan="3">It went well, I think I gained confidence by being so prepared</td></tr><tr><td>RL-Response</td><td>i don't think i gained confidence by being so prepared. i'm sorry to hear that. what do you mean by that?</td><td>9.068</td><td>Pattern</td></tr><tr><td colspan="4">att</td></tr><tr><td>1</td><td colspan="3">I once got in a really bad accident on my way home from work and it really upset me</td></tr><tr><td>2</td><td colspan="3">Wow! Were you hurt? 
Was anyone else hurt?</td></tr><tr><td>3</td><td colspan="3">luckily the only problem I had was my neck was sore for a few weeks but nobody else was hurt</td></tr><tr><td>RL-Response</td><td>I know right? I felt SO SO ASHAmed of myself. I felt so embarassed.</td><td>0.8138</td><td>Fixed</td></tr><tr><td colspan="4">maude</td></tr><tr><td>1</td><td colspan="3">like who on earth writes var int value = 1, this is so ridiculous i am so annoyed at my brother, after everything i taught him, he doesn't even know the basics</td></tr><tr><td>RL-Response</td><td>I wish I could go back in time and be a kid again. I miss those days.</td><td>0.9946</td><td>Fixed</td></tr><tr><td colspan="4">blender</td></tr><tr><td>1</td><td colspan="3">My friend left my front door open when she was house-sitting for me. My cat snuck out and was killed.</td></tr><tr><td>RL-Response</td><td>i feel like i know what i'm going to do, but i don't know what to do with myself.</td><td>0.1299</td><td>Fixed</td></tr></table>
Table 7: Randomly sampled responses for the Empathetic Dialogues dataset. We randomly chose a context and let the fine-tuned Blenderbot create a response, which at this point is degenerate. We then score the generated response with the metric that was used as the reward.
<table><tr><td>Turn</td><td>Utterance</td><td>Score</td><td>Strategy</td></tr><tr><td colspan="4">usr_ret</td></tr><tr><td>1</td><td colspan="3">nemo, i love him already. its a tea mix, green and black.</td></tr><tr><td>2</td><td colspan="3">the tea sounds cool. is there honey in it?</td></tr><tr><td>3</td><td colspan="3">i add honey to it, yes. i drink it while running museum tours.</td></tr><tr><td>RL-Response</td><td>I love to be a musician. I love music. What kind of music do you listen to as a music lover</td><td>0.9997</td><td>Fixed</td></tr><tr><td colspan="4">usr_mlm</td></tr><tr><td>1</td><td colspan="3">hi i am a musician with big dream</td></tr><tr><td>RL-Response</td><td>i am a stay at home mom and i am trying to figure out what i want to do with my life</td><td>0.8854</td><td>Fixed</td></tr><tr><td colspan="4">usr_full</td></tr><tr><td>1</td><td colspan="3">it is tough and smelly. 25 people and 3 toilets.</td></tr><tr><td>2</td><td colspan="3">that would be awful. you really should move.</td></tr><tr><td>3</td><td colspan="3">i cant yet. need to save money, spend it too quickly.</td></tr><tr><td>RL-Response</td><td>i've always wanted to move 25 people and 3 toilets that would be awful you really should move i cant yet need to save money</td><td>8.498</td><td>Parrot</td></tr><tr><td colspan="4">att</td></tr><tr><td>1</td><td colspan="3">me too! i aspire to become a physical therapist and run a 5k!</td></tr><tr><td>2</td><td colspan="3">awesome. i just started a new job as a life coach</td></tr><tr><td>3</td><td colspan="3">that is great! i get straight as so maybe you could coach me in social skills.</td></tr><tr><td>RL-Response</td><td>yea!!! 1!! 2!! 3!! 
* * * [ * * fucking * fucking * [ fucking * *</td><td>0.7417</td><td>Fixed</td></tr><tr><td colspan="4">maude</td></tr><tr><td>1</td><td colspan="3">how are you doing today</td></tr><tr><td>2</td><td colspan="3">hello, how are you today?</td></tr><tr><td>3</td><td colspan="3">i am doing really well</td></tr><tr><td>RL-Response</td><td>What are you up to today? What do you like to do in your free time?</td><td>0.9923</td><td>Fixed</td></tr><tr><td colspan="4">blender</td></tr><tr><td>1</td><td colspan="3">haha agreed! or a lot. either way. i try to eat well though</td></tr><tr><td>2</td><td colspan="3">me too. should i trade my mini van in for a sports car.</td></tr><tr><td>3</td><td colspan="3">yes. who does not love a sports car.</td></tr><tr><td>RL-Response</td><td>what kind of mini van do you have? i have a corvette and a camaro</td><td>0.1970</td><td>Fixed</td></tr></table>
Table 8: Randomly sampled responses for the PersonaChat dataset. We randomly chose a context and let the fine-tuned Blenderbot create a response, which at this point is degenerate. We then score the generated response with the metric that was used as the reward.
probingtherobustnessoftrainedmetricsforconversationaldialoguesystems/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9f4228325f315b96c287757782e5bac003684afc369bbc95cd3fd6cea06ee9fb
size 980468
probingtherobustnessoftrainedmetricsforconversationaldialoguesystems/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0acfcd27234471874812464ef3ec9c38341a98c2d0a513bd00d19cb881f9dfa3
size 267012
problemswithcosineasameasureofembeddingsimilarityforhighfrequencywords/5fce31e1-b4b9-4001-8f90-0a621fb58644_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c2086d5df4977f1910652ee353e9c1292ffd582c66b142971eb4b6b0b3b1a084
size 125591
problemswithcosineasameasureofembeddingsimilarityforhighfrequencywords/5fce31e1-b4b9-4001-8f90-0a621fb58644_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:69b0c562d9e8c6569975d7616ea40006b2b74ae160b2de038413283bfc3532f8
size 141510
problemswithcosineasameasureofembeddingsimilarityforhighfrequencywords/5fce31e1-b4b9-4001-8f90-0a621fb58644_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:99004d7d87fa331899edb5d4a5a51881b2f3ee2ff78fbf31e4688a6cbbbc4c27
size 1615058
problemswithcosineasameasureofembeddingsimilarityforhighfrequencywords/full.md
ADDED
@@ -0,0 +1,329 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
# Problems with Cosine as a Measure of Embedding Similarity for High Frequency Words

Kaitlyn Zhou<sup>1</sup>, Kawin Ethayarajh<sup>1</sup>, Dallas Card<sup>2</sup>, and Dan Jurafsky<sup>1</sup>

<sup>1</sup>Stanford University, {katezhou, kawin, jurafsky}@stanford.edu

<sup>2</sup>University of Michigan, dalc@umich.edu

# Abstract

Cosine similarity of contextual embeddings is used in many NLP tasks (e.g., QA, IR, MT) and metrics (e.g., BERTScore). Here, we uncover systematic ways in which word similarities estimated by cosine over BERT embeddings are understated and trace this effect to training data frequency. We find that relative to human judgements, cosine similarity underestimates the similarity of frequent words with other instances of the same word or other words across contexts, even after controlling for polysemy and other factors. We conjecture that this underestimation of similarity for high frequency words is due to differences in the representational geometry of high and low frequency words and provide a formal argument for the two-dimensional case.

# 1 Introduction

Measuring semantic similarity plays a critical role in numerous NLP tasks like QA, IR, and MT. Many such metrics are based on the cosine similarity between the contextual embeddings of two words (e.g., BERTScore, MoverScore, BERTR, SemDist; Kim et al., 2021; Zhao et al., 2019; Mathur et al., 2019; Zhang et al., 2020). Here, we demonstrate that cosine similarity, when used with BERT embeddings, is highly sensitive to training data frequency.

The impact of frequency on accuracy and reliability has mostly been studied on static word embeddings like word2vec. Low frequency words have low reliability in neighbor judgements (Hellrich and Hahn, 2016), and yield smaller inner products (Mimno and Thompson, 2017) with higher variance (Ethayarajh et al., 2019a). Frequency also correlates with stability (overlap in nearest neighbors) (Wendlandt et al., 2018), and plays a role in word analogies and bias (Bolukbasi et al., 2016; Caliskan et al., 2017; Zhao et al., 2018; Ethayarajh et al., 2019b). Similar effects have been found in contextual embeddings, particularly for low-frequency senses, which seem to cause difficulties in WSD performance for BERT and RoBERTa (Postma et al., 2016; Blevins and Zettlemoyer, 2020; Gessler and Schneider, 2021). Other works have examined how word frequency impacts the similarity of sentence embeddings (Li et al., 2020; Jiang et al., 2022).

While previous work has thus mainly focused on reliability or stability of low frequency words or senses, our work asks: how does frequency impact the semantic similarity of high frequency words?

We find that the cosine of BERT embeddings underestimates the similarity of high frequency words (to other tokens of the same word or to different words) as compared to human judgements. In a series of regression studies, we find that this underestimation persists even after controlling for confounders like polysemy, part-of-speech, and lemma. We conjecture that word frequency induces such distortions via differences in the representational geometry. We introduce new methods for characterizing geometric properties of a word's representation in contextual embedding space, and offer a formal argument for why differences in representational geometry affect cosine similarity measurement in the two-dimensional case.<sup>1</sup>

# 2 Effect of Frequency on Cosine Similarity

To understand the effect of word frequency on cosine between BERT embeddings (Devlin et al., 2019), we first approximate the training data frequency of each word in the BERT pre-training corpus from a combination of the March 1, 2020 Wikimedia Download and counts from BookCorpus (Zhu et al., 2015; Hartmann and dos Santos, 2018). We then consider two datasets that include pairs of words in context with associated human similarity judgements: Word-in-Context (WiC), with expert-judged pairs of sentences in which a target lemma is used in either the same or different WordNet, Wiktionary, or VerbNet senses, and the Stanford Contextualized Word Similarity dataset (SCWS), with non-expert-judged pairs of sentences annotated with human ratings of the similarity of two target terms. Using datasets with human similarity scores allows us to account for human-perceived similarities when measuring the impact of frequency on cosine (Pilehvar and Camacho-Collados, 2019; Huang et al., 2012).

# 2.1 Study 1: WiC

Method and Dataset The authors of WiC used coarse sense divisions as proxies for words having the same or different meaning and created 5,428<sup>3</sup> pairs of words in context, labeled as having the same or different meaning:

- same meaning: "I try to avoid the company of gamblers" and "We avoided the ball"
- different meaning: "You must carry your camping gear" and "Sound carries well over water".

To obtain BERT-based similarity measurements, we use BERT-base-cased to embed each example, average the representations of the target word over the last four hidden layers, and compute cosine similarity for the pair of representations.
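The layer-averaging and cosine computation described above can be sketched as follows. The per-layer vectors here are random stand-ins for what a real BERT forward pass would return (one vector per hidden layer at the target word's position), so only the arithmetic of the measurement is illustrated:

```python
import numpy as np

def word_similarity(layers_a, layers_b, n_layers=4):
    # Average the target word's vector over the last n_layers hidden layers,
    # then take the cosine similarity of the two averaged representations.
    a = layers_a[-n_layers:].mean(axis=0)
    b = layers_b[-n_layers:].mean(axis=0)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in per-layer vectors: shape (num_hidden_layers, hidden_size),
# matching what BERT-base would produce (12 layers, 768 dimensions).
rng = np.random.default_rng(0)
layers_a = rng.normal(size=(12, 768))
layers_b = layers_a + rng.normal(scale=0.1, size=(12, 768))  # a near-duplicate context

print(word_similarity(layers_a, layers_b))  # close to 1.0 for near-identical contexts
```

With a real model, `layers_a` would instead be the hidden states of a forward pass over the sentence, sliced at the target word's token position (sub-word tokens would additionally need to be pooled).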
Relation between frequency and similarity in WiC We want to use ordinary least squares regression to measure the effect of word frequency on the cosine similarity of BERT embeddings. First, we split the WiC dataset into examples that were labeled as having the "same" or "different" meanings. This allows us to control for perceived similarity of the two words in context: any frequency effects found within these subsets cannot be explained by variation in human judgements. Next, we control for a number of other confounding factors by including them as variables in our OLS regression. For each target lemma we considered:

- frequency: $\log_2$ of the number of occurrences in BERT's training data
- polysemy: $\log_2$ of the number of senses in WordNet
- is_noun: binary indicator for nouns vs. verbs
- same_wordform: binary indicator of having the same wordform in both contexts (e.g., act/act vs. carry/carries), case insensitive

Figure 1: Ordinary Least Squares regression of cosine similarity against frequency, for examples with the same meaning (blue) and different meaning (orange). Both regressions show a significant negative association between cosine similarity and frequency.

An OLS regression predicting cosine similarity from a single independent factor of $\log_2(\mathrm{freq})$ shows a significant negative association between cosine and frequency among "same meaning" examples ($R^2$: 0.13, coefficient's $p < 0.001$) and "different meaning" examples ($R^2$: 0.14, coefficient's $p < 0.001$) (see Figure 1). The same negative frequency effect is found across various model specifications (Table 1 in Appendix), which also show significantly greater cosine similarity for examples with the same wordform, a significant negative association with number of senses, and no difference between nouns and verbs. In summary, using cosine to measure the semantic similarity of words via their BERT embeddings gives systematically smaller similarities the higher the frequency of the word.
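The single-factor regression can be sketched with plain least squares. The data below are synthetic, with a negative frequency effect built in, so the recovered slope and $R^2$ are illustrative rather than the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
log2_freq = rng.uniform(5, 25, size=n)  # hypothetical log2 training-data counts
# Synthetic cosine similarities with a built-in negative frequency effect.
cosine = 0.95 - 0.01 * log2_freq + rng.normal(scale=0.03, size=n)

# OLS: cosine ~ intercept + log2(freq)
X = np.column_stack([np.ones(n), log2_freq])
coef, *_ = np.linalg.lstsq(X, cosine, rcond=None)
resid = cosine - X @ coef
r2 = 1 - resid.var() / cosine.var()
print(coef[1], r2)  # slope close to the planted -0.01
```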
Results: Comparing to human similarity To compare cosine similarities to WiC's binary human judgements (same/different meaning), we followed the WiC authors by thresholding cosine values, tuning the threshold on the training set (resulting threshold: 0.8). As found in the original WiC paper, cosine similarity is somewhat predictive of the expert judgements (0.66 dev accuracy, comparable to 0.65 test accuracy from the WiC authors).
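The threshold-tuning step can be sketched as a grid search over cutoffs on a labeled training set. This mirrors the procedure, not the actual WiC data; the pairs below are hypothetical and perfectly separable:

```python
import numpy as np

def tune_threshold(cos_sims, labels):
    # Pick the cosine cutoff that best separates same-meaning pairs
    # (label 1) from different-meaning pairs (label 0) on training data.
    best_t, best_acc = 0.0, 0.0
    for t in np.arange(0.0, 1.0, 0.01):
        acc = np.mean((cos_sims >= t) == labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Hypothetical training pairs: same-meaning pairs tend to have higher cosine;
# any cutoff between 0.6 and 0.85 separates these perfectly.
cos_sims = np.array([0.95, 0.9, 0.85, 0.6, 0.55, 0.4])
labels = np.array([1, 1, 1, 0, 0, 0])
t, acc = tune_threshold(cos_sims, labels)
print(t, acc)
```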
Examining the errors as a function of frequency reveals that cosine similarity is a less reliable predictor of human similarity judgements for common terms. Figure 2 shows the average proportion of examples predicted to be the same meaning as a function of frequency, grouped into ten bins, each with the same number of examples. In the highest frequency bin, humans judged $54\%$ of the examples as having the same meaning compared to only $25\%$ as judged by cosine similarity. This suggests that in the WiC dataset, relative to humans, the model underestimates the sense similarity for high frequency words.

Figure 2: Percentage of examples labeled as having the "same meaning". In high frequency words, cosine similarity-based predictions (blue/left) on average underestimate the similarity of words as compared to human judgements (green/right).

# 2.2 Study 2: SCWS

Our first study shows that after controlling for sense, cosine will tend to be lower for higher frequency terms. However, the WiC dataset only has binary labels of human judgements, and only indicates similarity between occurrences of the same word. We want to measure whether these frequency effects persist across different words and control for more fine-grained human similarity judgements.

Method and Dataset SCWS contains crowd judgements of the similarity of two words in context (scale of 1 to 10). We split the dataset based on whether the target words are the same or different (break/break vs. dance/sing); this both allows us to confirm our results from WiC and also determine whether frequency-based effects exist in similarity measurements across words. We use the same embedding method as described for WiC, and again use regression to predict cosine similarities from the following features:

- frequency: average of $\log_2(\text{freq})$ of both words
- polysemy: average of $\log_2(\text{sense})$ of both words
- average rating: average rating of semantic similarity as judged by humans on a scale of 1 to 10 (highest)

Results If we only use frequency, we find that it mildly explains the variance in cosine similarity both within ($R^2$: 0.12, coefficient's $p < 0.001$) and across words ($R^2$: 0.06, coefficient's $p < 0.001$). Adding in human average rating as a feature, frequency is still a significant feature with a negative coefficient. High frequency terms thus tend to have lower cosine similarity scores, even after accounting for human judgements. When using all features, the linear regression models explain $34\%$ of the total variance in cosine similarity, with frequency still having a significant negative effect (Table 2 in Appendix). Finally, we verify that for a model with only human ratings, the error (true minus predicted cosine) is negatively correlated with frequency in held-out data (Pearson's $r = -0.18$; $p < 0.01$), indicating an underestimation of cosine for high frequency words (see Figure 5 in Appendix).
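The residual check in the last step can be sketched as follows: fit cosine from human ratings alone, then correlate the errors with frequency. The data are synthetic, constructed so that cosine is depressed for frequent words, so the correlation's magnitude is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
log2_freq = rng.uniform(5, 25, size=n)    # hypothetical log2 frequencies
rating = rng.uniform(1, 10, size=n)       # hypothetical human ratings
# Cosine tracks the human rating but is depressed for frequent words.
cosine = 0.3 + 0.05 * rating - 0.005 * log2_freq + rng.normal(scale=0.02, size=n)

# Fit cosine ~ rating only, then correlate the residuals with frequency.
X = np.column_stack([np.ones(n), rating])
coef, *_ = np.linalg.lstsq(X, cosine, rcond=None)
error = cosine - X @ coef                 # true - predicted cosine
r = np.corrcoef(error, log2_freq)[0, 1]
print(r)  # strongly negative: the rating-only model overpredicts for frequent words
```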
This finding suggests that using frequency as a feature might help to better match human judgements of similarity. We test this hypothesis by training regression models to predict human ratings, and find that frequency does have a significant positive effect (Table 3 in Appendix), but the overall improvement over using cosine alone is relatively small ($R^2 = 44.6\%$ vs. $R^2 = 44.3\%$ with and without frequency). We conclude that the problem of underestimation in cosine similarity cannot be resolved simply by using a linear correction for frequency.

# 3 Minimum Bounding Hyperspheres

In order to understand why frequency influences cosine similarity, we analyze the geometry of the contextual embeddings. Unlike static vectors, where each word type is represented by a single point, contextualized embeddings of a word type vary across contexts, and the degree of variation depends on the word's frequency in training data. We'll call the embeddings of a single word type sibling embeddings or a sibling cohort. To measure variation, we'll use the radius of the smallest hypersphere that contains a set of sibling embeddings (the minimum bounding hypersphere). We tested many ways to measure the space created by high-dimensional vectors; our results are robust to various other measures of variation, including taking the average, max, or variance of pairwise distance between sibling embeddings, the average norm of sibling embeddings, and taking the PCA of these vectors and calculating the convex hull of sibling embeddings in lower dimensions (see Table 29 in the Appendix). Here we relate frequency to spatial variation, providing both empirical evidence and theoretical intuition.

Figure 3: The radius of the minimal bounding ball of sibling embeddings of words is correlated with $\log(\text{word frequency})$ (Pearson's $r = 0.62$, $p < .001$).

For a sample of 39,621 words, we took 10 instances of each word's sibling embeddings (example sentences queried from Wikipedia), created contextualized word embeddings using Hugging Face's bert-base-cased model, and calculated the radius of the minimum bounding hypersphere encompassing them. As shown in Figure 3, there is a strong, significant positive correlation between frequency and the size of the bounding hypersphere (Pearson's $r = 0.62$, $p < .001$). Notably, since the radius was calculated in 768 dimensions, an increase in radius of $1\%$ results in a hypersphere volume nearly 2084 times larger.
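One way to approximate a minimum bounding hypersphere is Ritter's algorithm, a standard fast approximation (the paper does not specify its solver, so this choice is an assumption). The sibling embeddings below are random stand-ins:

```python
import numpy as np

def bounding_sphere(points):
    # Ritter's approximation: seed the sphere with two roughly-farthest
    # points, then grow it minimally to cover any point left outside.
    p = points[0]
    q = points[np.argmax(np.linalg.norm(points - p, axis=1))]
    s = points[np.argmax(np.linalg.norm(points - q, axis=1))]
    center = (q + s) / 2.0
    radius = np.linalg.norm(q - s) / 2.0
    for x in points:
        d = np.linalg.norm(x - center)
        if d > radius:
            radius = (radius + d) / 2.0
            center = center + (1.0 - radius / d) * (x - center)
    return center, radius

rng = np.random.default_rng(0)
siblings = rng.normal(size=(10, 768))  # stand-in sibling embeddings
center, radius = bounding_sphere(siblings)
# In 768 dimensions, a 1% larger radius means a roughly 2084x larger volume:
print(radius, 1.01 ** 768)
```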
Since frequency and polysemy are highly correlated, we want to measure whether frequency is a significant feature for explaining the variance of bounding hyperspheres. Using the unique words of the WiC dataset, we run a series of regressions to predict the radius of bounding hyperspheres. On their own, frequency and polysemy explain $48\%$ and $45\%$ of the radii's variance, respectively. Together, frequency and polysemy explain $58\%$ of the radii's variance and both features are significant, demonstrating that frequency is a significant feature in predicting the radii of bounding hyperspheres (Tables 25, 26, 27 in Appendix).

Among the unique words of the WiC dataset, the radius of the target word correlates with training data frequency (Pearson's $r = 0.69$, $p < 0.001$). Across the WiC dataset, the radii explain $17\%$ of the variance in cosine similarity (Table 28 in Appendix).<sup>11</sup>

# 3.1 Theoretical Intuition

Here, we offer some theoretical intuition in 2D for why using cosine similarity to estimate semantic similarity can lead to underestimation (relative to human judgements). Let $\vec{w} \in \mathbb{R}^2$ denote the target word vector, against which we're measuring cosine similarity. Say there were a bounding ball $B_x$ with center $\vec{x}_c$ to which $\vec{w}$ is tangent. If we normalize every point in the bounding ball, it will form an arc on the unit circle. The length of this arc is $2\theta = 2\arcsin \frac{r}{\|x_c\|_2}$:

- Let $\theta$ denote the angle made by $\vec{x}_c$ and the tangent vector $\vec{w}$.
- $\sin \theta = \frac{r}{\|x_c\|_2}$, so on the unit circle the arc subtended by $\theta$ has length $\theta = \arcsin \frac{r}{\|x_c\|_2}$ (for the normalized points).
- Multiply by 2 to get the arc length between both (normalized) tangent vectors.

Since the arc length is monotonically increasing in $r$, if the bounding ball were larger, while still being tangent to $\vec{w}$, the arc length would be too.
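This monotonicity, and the shrinking fraction of the arc that can clear a fixed similarity threshold, can be checked numerically; the threshold $t = 0.95$ below is hypothetical:

```python
import numpy as np

def arc_length(r, xc_norm=1.0):
    # Angular width of the arc formed on the unit circle by normalizing a
    # ball of radius r whose center sits at distance xc_norm from the origin.
    return 2 * np.arcsin(r / xc_norm)

t = 0.95               # hypothetical cosine threshold for "similar"
budget = np.arccos(t)  # angular span around w-hat with cos >= t
arcs, fracs = [], []
for r in [0.1, 0.3, 0.5]:
    arc = arc_length(r)
    arcs.append(arc)
    # w-hat sits at one end of the arc (tangency), so at most an angle of
    # `budget` of the arc can lie within the similarity threshold.
    fracs.append(min(arc, budget) / arc)
print(arcs, fracs)  # arcs grow with r; the within-threshold fraction shrinks
```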
The cosine similarity between a point in the bounding ball and $\vec{w}$ is equal to the dot product between the projection of the former onto the unit circle (i.e., somewhere on the arc) and the normalized $\vec{w}$. This means that only a certain span of the arc maps to sibling embeddings $\vec{x}_i$ such that $\cos(\vec{x}_i, \vec{w}) \geq t$, where $t$ is the threshold required to be judged as similar by humans (see Footnote 3 and Figure 4). If $B_x$ were larger while still being tangent to $\vec{w}$, the arc length would increase but the span of the arc containing sibling embeddings sufficiently similar to $\vec{w}$ would not. This means a greater proportion of the sibling embeddings will fail to meet this threshold, assuming that the distribution of sibling embeddings in $B_x$ does not change. Because, in practice, more frequent words have larger bounding balls, depending on how the bounding ball of a word $x$ grows relative to some $\vec{w}$, the similarity of $x$ and $w$ can be underestimated. This helps explain the findings in Figure 2, but it does not explain why more frequent words have lower similarity with themselves across different contexts, since that requires knowledge of the embedding distribution in the bounding ball. The latter is likely due to more frequent words having less anisotropic representations (Ethayarajh, 2019).

For reference, the $n$-ball volume formula behind the volume comparison in Section 3 is:

$$
V_{n}(R) = \frac{\pi^{n/2}}{\Gamma(\frac{n}{2} + 1)} R^{n}
$$

Figure 4: An illustration of how using cosine similarity can underestimate word similarity. The cosine similarity between a contextualized representation (orange) and $\vec{w}$ is the dot product of the former's projection onto the red arc of the unit circle (with length $2\theta$) and $\hat{w}$. Only points in the blue region are close enough to $\hat{w}$ to be deemed similar by humans. As the bounding ball grows (e.g., with higher frequency words), if it remains tangent to $\vec{w}$, the fraction of points in the blue region will shrink, leading to underestimation.
# 4 Discussion and Conclusion

Relative to human judgements, cosine similarity underestimates the semantic similarity of frequent words in a variety of settings (expert versus non-expert judged, and within word sense and across words). This finding has large implications for downstream tasks, given that single-point similarity metrics are used in a variety of methods and experiments (Reimers and Gurevych, 2019; Reif et al., 2019; Zhang et al., 2020; Zhao et al., 2019; Mathur et al., 2019; Kim et al., 2021). Word frequency in pre-training data also affects the representational geometry of contextualized embeddings, with low frequency words being more concentrated geometrically. One extension of this work might examine how variables such as sentiment and similarity/dissimilarity between sentence contexts could impact both human-judged and embedding-based similarity metrics.

Because training data frequency is something that researchers can control, understanding these distortions is critical to training large language models. Frequency-based interventions might even be able to correct for these systematic underestimations of similarity (e.g., by modifying training data), which could be important where certain words or subjects may be inaccurately represented. For example, Zhou et al. (2022) illustrate how training data frequencies can lead to discrepancies in the representation of countries, and, since frequency is highly correlated with a country's GDP, can perpetuate historic power and wealth inequalities. Future work could also examine how and whether frequency effects could be mitigated by post-processing techniques that improve the correlation between human and semantic similarities (Timkey and van Schijndel, 2021).

The semantic similarity distortions caused by the over- and under-representation of topics are another reason why documentation for datasets is critical for increasing transparency and accountability in machine learning models (Gebru et al., 2021; Mitchell et al., 2019; Bender and Friedman, 2018; Ethayarajh and Jurafsky, 2020; Ma et al., 2021). As language models increase in size and training data becomes more challenging to replicate, we recommend that word frequencies and distortions be revealed to users, bringing awareness to the potential inequalities in datasets and the models that are trained on them. In the future, we hope to see research that more critically examines the downstream implications of these findings and various mitigation techniques for such distortions.

# Acknowledgements

We sincerely thank Isabel Papadimitriou and our anonymous reviewers for their support, insights, and helpful feedback. This research has been supported in part by a Hoffman-Yee Research Grant from the Stanford Institute for Human-Centered AI, award IIS-2128145 from the NSF, Stanford Data Science, a Stanford Graduate Fellowship, a Facebook Fellowship, and Canada's NSERC.

# References
Emily M. Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587-604.

Terra Blevins and Luke Zettlemoyer. 2020. Moving down the long tail of word sense disambiguation with gloss informed bi-encoders. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1006-1017.

Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS'16, pages 4356-4364, Red Hook, NY, USA. Curran Associates Inc.

Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota.

Kawin Ethayarajh. 2019. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 55-65, Hong Kong, China.

Kawin Ethayarajh, David Duvenaud, and Graeme Hirst. 2019a. Towards understanding linear word analogies. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3253-3262, Florence, Italy.

Kawin Ethayarajh, David Duvenaud, and Graeme Hirst. 2019b. Understanding undesirable word embedding associations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1696-1705, Florence, Italy.

Kawin Ethayarajh and Dan Jurafsky. 2020. Utility is in the eye of the user: A critique of NLP leaderboards. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4846-4853.

Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. 2021. Datasheets for datasets. Commun. ACM, 64(12):86-92.

Luke Gessler and Nathan Schneider. 2021. BERT has uncommon sense: Similarity ranking for word sense BERTology. In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP.

Nathan Hartmann and Leandro Borges dos Santos. 2018. NILC at CWI 2018: Exploring feature engineering and feature learning. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 335-340, New Orleans, Louisiana.

Johannes Hellrich and Udo Hahn. 2016. Bad Company—Neighborhoods in neural embedding spaces considered harmful. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2785-2796, Osaka, Japan.

Eric Huang, Richard Socher, Christopher Manning, and Andrew Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 873-882, Jeju Island, Korea.

Ting Jiang, Shaohan Huang, Zihan Zhang, Deqing Wang, Fuzhen Zhuang, Furu Wei, Haizhen Huang, Liangjie Zhang, and Qi Zhang. 2022. PromptBERT: Improving BERT sentence embeddings with prompts. arXiv preprint arXiv:2201.04337.

Suyoun Kim, Duc Le, Weiyi Zheng, Tarun Singh, Abhinav Arora, Xiaoyu Zhai, Christian Fuegen, Ozlem Kalinli, and Michael L. Seltzer. 2021. Evaluating user perception of speech recognition system quality with semantic distance metric.

Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. 2020. On the sentence embeddings from pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9119-9130.

Zhiyi Ma, Kawin Ethayarajh, Tristan Thrush, Somya Jain, Ledell Wu, Robin Jia, Christopher Potts, Adina Williams, and Douwe Kiela. 2021. Dynaboard: An evaluation-as-a-service platform for holistic next-generation benchmarking. Advances in Neural Information Processing Systems, 34.

Nitika Mathur, Timothy Baldwin, and Trevor Cohn. 2019. Putting evaluation in context: Contextual embeddings improve machine translation evaluation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.

David Mimno and Laure Thompson. 2017. The strange geometry of skip-gram with negative sampling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2873-2878, Copenhagen, Denmark.

Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* '19, pages 220-229, New York, NY, USA. Association for Computing Machinery.

Mohammad Taher Pilehvar and Jose Camacho-Collados. 2019. WiC: the word-in-context dataset for evaluating context-sensitive meaning representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1267-1273, Minneapolis, Minnesota.

Marten Postma, Ruben Izquierdo Bevia, and Piek Vossen. 2016. More is not always better: balancing sense distributions for all-words word sense disambiguation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3496-3506, Osaka, Japan. The COLING 2016 Organizing Committee.

Emily Reif, Ann Yuan, Martin Wattenberg, Fernanda B Viegas, Andy Coenen, Adam Pearce, and Been Kim. 2019. Visualizing and measuring the geometry of BERT. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China.

William Timkey and Marten van Schijndel. 2021. All bark and no bite: Rogue dimensions in transformer language models obscure representational quality. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4527-4546, Online and Punta Cana, Dominican Republic.

Laura Wendlandt, Jonathan K. Kummerfeld, and Rada Mihalcea. 2018. Factors influencing the surprising instability of word embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2092-2102, New Orleans, Louisiana.

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In International Conference on Learning Representations.

Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15-20, New Orleans, Louisiana.

Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M. Meyer, and Steffen Eger. 2019. MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 563-578, Hong Kong, China.

Kaitlyn Zhou, Kawin Ethayarajh, and Dan Jurafsky. 2022. Richer countries and richer representations. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics.

Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 19-27.
# A Appendix
|
| 173 |
+
|
| 174 |
+
For readability, we've summarized the key results from the regressions in 1 and 2. Table 1 contains results from our WiC experiments where we measure frequency's impact on cosine similarity. We control for human judgements of similarity by splitting the dataset by human labels of "same" and "different" meaning words. The same trends hold for the whole dataset as well.

Table 2 contains results from the SCWS experiments, where we measure frequency's impact on cosine similarity within and across words. As in the WiC results, we see that frequency does impact cosine similarity, with higher-frequency words having lower similarities.

Table 3 contains results from the SCWS experiments where we measure frequency's impact on human ratings. We see that frequency alone does not explain human ratings, but when used in a model together with cosine similarity, frequency has a positive coefficient, indicating that it is correcting for the underestimation by cosine similarity.
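The regressions summarized in these tables are ordinary least squares fits with $\log_2$ frequency as a predictor. As a sketch of the setup (synthetic data, not the paper's; the coefficient magnitude of $-0.014$ is borrowed from Table 1, Model 1 purely for illustration):

```python
import numpy as np

def ols_fit(X, y):
    """Ordinary least squares via least-squares solve.

    X: (n, p) design matrix WITHOUT a constant column; one is added here.
    Returns (coefficients, r_squared); coefficients[0] is the intercept.
    """
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1.0 - resid.var() / y.var()
    return beta, r2

# Synthetic illustration: cosine similarity that decreases with log2 frequency,
# mirroring the negative log2(freq) coefficients reported in Tables 4-11.
rng = np.random.default_rng(0)
log2_freq = rng.uniform(5, 20, size=500)
cosine = 1.0 - 0.014 * log2_freq + rng.normal(0, 0.02, size=500)

beta, r2 = ols_fit(log2_freq[:, None], cosine)
# beta[1] recovers approximately -0.014: each doubling of frequency lowers
# the predicted cosine similarity by about 0.014.
```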

# B Regression results from WiC experiments

Tables 4, 5, 6, 7, 8, 9, 10, 11.

# C Regression results from SCWS experiments

Tables 12, 13, 14, 15, 16, 17, 18, 19.

# D Regression results from SCWS experiments, explaining the difference between cosine similarity and human judgements

Tables 20, 21, 22, 23, 24.

Cosine similarity is partially predictive of human similarity judgements. The full model shows a significant positive effect of frequency (Table 24), indicating that, for a given level of cosine similarity, more frequent terms will be judged by humans to be more similar, again demonstrating that cosine under-estimates semantic similarity for frequent terms.

The effect is relatively small, however: for a word that is twice as frequent, the increase in human rating will be 0.0989 (see Table 23). Removing frequency from the model reduces $R^2$ from $40.8\%$ to $40.4\%$. Polysemy shows the opposite effect; words with more senses are likely to be rated
as less similar. On its own, however, frequency has no relationship with human judgements (Table 20); it becomes significant only alongside cosine, indicating that in the full model frequency is correcting for the semantic distortion of cosine.
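Because frequency enters these models as $\log_2(\text{freq})$, a coefficient is read as the change in the predicted rating per doubling of frequency. A minimal numeric check, using the Model 5 coefficient of 0.076 from Table 3 as the example:

```python
import math

# avg log2(freq) coefficient from the full model (Table 3, Model 5)
COEF = 0.076

def rating_shift(freq_ratio, coef=COEF):
    """Change in predicted human rating when frequency is multiplied by freq_ratio."""
    return coef * math.log2(freq_ratio)

# Doubling frequency adds exactly 1 to log2(freq), so the predicted rating
# rises by one coefficient unit (0.076); quadrupling adds two units (0.152).
double_shift = rating_shift(2)
quadruple_shift = rating_shift(4)
```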

# E Regression results from minimum bounding hyperspheres

Using frequency and polysemy to explain the variability in bounding-ball radii: Tables 25, 26, 27. Using the radius of the bounding ball to explain the variability of cosine similarity: Table 28.
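The exact minimum bounding hypersphere is expensive to compute in high dimensions; one common approximation is Ritter's algorithm. The sketch below is our own illustrative choice (the paper does not specify its solver), giving a radius within roughly 20% of the optimum:

```python
import numpy as np

def ritter_bounding_sphere(points):
    """Approximate minimum bounding sphere (Ritter's algorithm).

    Returns (center, radius). The radius may overestimate the true minimum
    slightly, which is typically fine for comparing spread across words.
    """
    pts = np.asarray(points, dtype=float)
    # Seed: pick a point, find the farthest point q from it, then the
    # farthest point r from q; the segment q-r seeds the sphere.
    p = pts[0]
    q = pts[np.argmax(np.linalg.norm(pts - p, axis=1))]
    r = pts[np.argmax(np.linalg.norm(pts - q, axis=1))]
    center = (q + r) / 2
    radius = np.linalg.norm(q - r) / 2
    # Grow the sphere to swallow any point still outside it.
    for x in pts:
        d = np.linalg.norm(x - center)
        if d > radius:
            radius = (radius + d) / 2            # new, larger radius
            center += (d - radius) / d * (x - center)  # shift toward x
    return center, radius
```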

# F Other ways of measuring the space of sibling embeddings

Using a smaller sample of words (10,000 out of the initial $\sim 39,000$ words), we calculate the space occupied by these sibling embeddings using a variety of other metrics. For each metric, we find a strong correlation between (log) frequency and the metric in question (see Table 29).
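One natural metric of this kind is the mean distance of a word's sibling embeddings to their centroid. A sketch on synthetic data (hypothetical embeddings, not the paper's), including the Pearson correlation against log frequency:

```python
import numpy as np

def mean_dist_to_centroid(embs):
    """A simple measure of the space occupied by a word's sibling embeddings."""
    embs = np.asarray(embs, dtype=float)
    centroid = embs.mean(axis=0)
    return np.linalg.norm(embs - centroid, axis=1).mean()

def pearson_r(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

# Synthetic check: simulate words whose sibling embeddings spread out
# with log frequency, then recover the positive correlation.
rng = np.random.default_rng(1)
log_freq = rng.uniform(5, 20, size=200)
spread = [mean_dist_to_centroid(rng.normal(0, 0.01 * f, size=(30, 8)))
          for f in log_freq]
r = pearson_r(log_freq, spread)  # strongly positive by construction
```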

# G Residual of Predicted Cosine

For the SCWS dataset, we use 1,000 samples as the train set and the rest as the development set. We train a linear regression model to predict cosine similarity using only human ratings. Taking the difference between cosine similarity and the predicted similarity, we plot this error against frequency. We see a negative correlation between this error and frequency ($r = -0.18$, $p < 0.001$), indicating an underestimation of cosine similarity among high-frequency words. Results are shown in Figure 5.
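The residual analysis can be sketched as follows (synthetic data; for brevity this sketch fits and evaluates on the same sample rather than using the 1,000-sample train split):

```python
import numpy as np

def residual_vs_frequency(cosine, rating, log_freq):
    """Fit cosine ~ rating, then correlate the residual
    (cosine - predicted cosine) with log frequency."""
    slope, intercept = np.polyfit(rating, cosine, deg=1)
    resid = cosine - (slope * rating + intercept)
    # Pearson correlation between the residual and log frequency
    return np.corrcoef(resid, log_freq)[0, 1]

# Synthetic illustration of the reported pattern (r = -0.18): cosine is
# depressed for high-frequency words beyond what human ratings explain.
rng = np.random.default_rng(2)
rating = rng.uniform(1, 10, size=1000)
log_freq = rng.uniform(5, 20, size=1000)
cosine = 0.5 + 0.02 * rating - 0.005 * log_freq + rng.normal(0, 0.05, size=1000)
r = residual_vs_frequency(cosine, rating, log_freq)  # negative by construction
```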
<table><tr><td colspan="9">OLS predicting cosine similarity</td></tr><tr><td>WiC</td><td colspan="4">Different Sense Meaning</td><td colspan="4">Same Sense Meaning</td></tr><tr><td></td><td>Model 1</td><td>Model 2</td><td>Model 3</td><td>Model 4</td><td>Model 1</td><td>Model 2</td><td>Model 3</td><td>Model 4</td></tr><tr><td>log2(freq)</td><td>-0.014</td><td>-0.012</td><td>-0.013</td><td>-0.013</td><td>-0.011</td><td>-0.009</td><td>-0.009</td><td>-0.010</td></tr><tr><td>log2(sense)</td><td>-</td><td>-0.012</td><td>-0.008</td><td>-0.009</td><td>-</td><td>-0.006</td><td>-0.004</td><td>-0.002</td></tr><tr><td>same_wordform</td><td>-</td><td>-</td><td>0.045</td><td>0.047</td><td>-</td><td>-</td><td>0.059</td><td>0.056</td></tr><tr><td>is_noun</td><td>-</td><td>-</td><td>-</td><td>-0.006</td><td>-</td><td>-</td><td>-</td><td>0.008</td></tr><tr><td>R²</td><td>0.127</td><td>0.144</td><td>0.203</td><td>0.204</td><td>0.136</td><td>0.142</td><td>0.241</td><td>0.242</td></tr><tr><td>Table Number</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td><td>9</td><td>10</td><td>11</td></tr></table>

Table 1: Coefficients for each of the variables when used in an OLS regression. Bolded numbers are significant. The WiC dataset is split across examples that were rated to have the same or different meaning by experts. Other confounders (polysemy, part-of-speech, word form) were accounted for as features. In Model 1, for a word that is twice as frequent, the decrease in cosine similarity will be 0.011.
<table><tr><td>SCWS</td><td colspan="4">Within Word Examples</td><td colspan="4">Across Words Examples</td></tr><tr><td></td><td>Model 1</td><td>Model 2</td><td>Model 3</td><td>Model 4</td><td>Model 1</td><td>Model 2</td><td>Model 3</td><td>Model 4</td></tr><tr><td>log2(freq)</td><td>-0.020</td><td>-</td><td>-0.018</td><td>-0.016</td><td>-0.011</td><td>-</td><td>-0.008</td><td>-0.008</td></tr><tr><td>average rating</td><td>-</td><td>0.022</td><td>0.021</td><td>0.02</td><td>-</td><td>0.02</td><td>0.02</td><td>0.02</td></tr><tr><td>log2(sense)</td><td>-</td><td>-</td><td>-</td><td>-0.019</td><td>-</td><td>-</td><td>-</td><td>-0.001</td></tr><tr><td>R²</td><td>0.120</td><td>0.225</td><td>0.320</td><td>0.343</td><td>0.059</td><td>0.305</td><td>0.336</td><td>0.337</td></tr><tr><td>Table Number</td><td>12</td><td>13</td><td>14</td><td>15</td><td>16</td><td>17</td><td>18</td><td>19</td></tr></table>

Table 2: Coefficients for each of the variables when used in an OLS regression. Bolded numbers are significant. The SCWS dataset is split across examples that use the same (within word) or different (across word) target words. Other confounders (polysemy and average rating) were accounted for as features. In Model 1, for a word that is twice as frequent, the decrease in cosine similarity will be 0.02.


Figure 5: Error between cosine similarity and the cosine similarity predicted from human ratings. A negative correlation exists ($r = -0.18$, $p < 0.001$), indicating an underestimation of cosine similarity among high-frequency words.
<table><tr><td colspan="6">OLS Predicting Average Human Rating (Scale of 1 - 10)</td></tr><tr><td>Feature</td><td>Model 1</td><td>Model 2</td><td>Model 3</td><td>Model 4</td><td>Model 5</td></tr><tr><td>avg log2(freq)</td><td>-0.057</td><td>-</td><td>0.099</td><td>-</td><td>0.076</td></tr><tr><td>avg log2(sense)</td><td>-</td><td>-</td><td>-0.0440</td><td>-0.134</td><td>-0.189</td></tr><tr><td>cosine</td><td>-</td><td>16.345</td><td>16.665</td><td>13.513</td><td>13.809</td></tr><tr><td>same_word</td><td>-</td><td>-</td><td>-</td><td>1.7228</td><td>1.687</td></tr><tr><td>R²</td><td>0.002</td><td>0.404</td><td>0.408</td><td>0.443</td><td>0.446</td></tr><tr><td>Table Number</td><td>20</td><td>21</td><td>22</td><td>23</td><td>24</td></tr></table>

Table 3: Coefficients for each of the variables when used in an OLS regression. Bolded numbers are significant. Other confounders (polysemy, same word) were accounted for as features. In Model 5, for a word that is twice as frequent, the increase in human rating will be 0.076. Notice that frequency only becomes significant as a feature when used together with cosine, indicating that it is correcting for an underestimation.
<table><tr><td colspan="2">Dep. Variable:</td><td colspan="2">Cosine Similarity</td><td colspan="2">R-squared:</td><td>0.127</td></tr><tr><td colspan="2">Model:</td><td colspan="2">OLS</td><td colspan="2">Adj. R-squared:</td><td>0.127</td></tr><tr><td colspan="2">Method:</td><td colspan="2">Least Squares</td><td colspan="2">F-statistic:</td><td>395.1</td></tr><tr><td colspan="2">Date:</td><td colspan="2">Thu, 14 Oct 2021</td><td colspan="2">Prob (F-statistic):</td><td>3.55e-82</td></tr><tr><td colspan="2">Time:</td><td colspan="2">22:12:38</td><td colspan="2">Log-Likelihood:</td><td>2947.0</td></tr><tr><td colspan="2">No. Observations:</td><td colspan="2">2713</td><td colspan="2">AIC:</td><td>-5890.</td></tr><tr><td colspan="2">Df Residuals:</td><td colspan="2">2711</td><td colspan="2">BIC:</td><td>-5878.</td></tr><tr><td colspan="2">Df Model:</td><td colspan="2">1</td><td></td><td></td><td></td></tr><tr><td></td><td>coef</td><td>std err</td><td>t</td><td>P> |t|</td><td>[0.025]</td><td>0.975]</td></tr><tr><td>constant</td><td>0.9976</td><td>0.013</td><td>77.728</td><td>0.000</td><td>0.972</td><td>1.023</td></tr><tr><td>log2(freq)</td><td>-0.0141</td><td>0.001</td><td>-19.876</td><td>0.000</td><td>-0.015</td><td>-0.013</td></tr><tr><td colspan="2">Omnibus:</td><td>1.261</td><td colspan="2">Durbin-Watson:</td><td>1.952</td><td></td></tr><tr><td colspan="2">Prob(Omnibus):</td><td>0.532</td><td colspan="2">Jarque-Bera (JB):</td><td>1.189</td><td></td></tr><tr><td colspan="2">Skew:</td><td>0.044</td><td colspan="2">Prob(JB):</td><td>0.552</td><td></td></tr><tr><td colspan="2">Kurtosis:</td><td>3.053</td><td colspan="2">Cond. No.</td><td>149.</td><td></td></tr></table>

Table 4: OLS regression results predicting cosine similarity among "different meaning" senses.
<table><tr><td>Dep. Variable:</td><td colspan="3">Cosine Similarity</td><td colspan="2">R-squared:</td><td>0.144</td></tr><tr><td>Model:</td><td colspan="3">OLS</td><td colspan="2">Adj. R-squared:</td><td>0.144</td></tr><tr><td>Method:</td><td colspan="3">Least Squares</td><td colspan="2">F-statistic:</td><td>228.2</td></tr><tr><td>Date:</td><td colspan="3">Thu, 14 Oct 2021</td><td colspan="2">Prob (F-statistic):</td><td>2.48e-92</td></tr><tr><td>Time:</td><td colspan="3">22:12:38</td><td colspan="2">Log-Likelihood:</td><td>2973.7</td></tr><tr><td>No. Observations:</td><td colspan="3">2713</td><td colspan="2">AIC:</td><td>-5941.</td></tr><tr><td>Df Residuals:</td><td colspan="3">2710</td><td colspan="2">BIC:</td><td>-5924.</td></tr><tr><td>Df Model:</td><td colspan="3">2</td><td></td><td></td><td></td></tr><tr><td></td><td>coef</td><td>std err</td><td>t</td><td>P> |t|</td><td>[0.025]</td><td>0.975]</td></tr><tr><td>constant</td><td>0.9997</td><td>0.013</td><td>78.627</td><td>0.000</td><td>0.975</td><td>1.025</td></tr><tr><td>log2(freq)</td><td>-0.0115</td><td>0.001</td><td>-14.624</td><td>0.000</td><td>-0.013</td><td>-0.010</td></tr><tr><td>log2(senses)</td><td>-0.0118</td><td>0.002</td><td>-7.330</td><td>0.000</td><td>-0.015</td><td>-0.009</td></tr><tr><td>Omnibus:</td><td colspan="2">8.024</td><td colspan="2">Durbin-Watson:</td><td colspan="2">1.954</td></tr><tr><td>Prob(Omnibus):</td><td colspan="2">0.018</td><td colspan="2">Jarque-Bera (JB):</td><td colspan="2">9.222</td></tr><tr><td>Skew:</td><td colspan="2">0.060</td><td colspan="2">Prob(JB):</td><td colspan="2">0.00994</td></tr><tr><td>Kurtosis:</td><td colspan="2">3.259</td><td colspan="2">Cond. No.</td><td colspan="2">153.</td></tr></table>
Table 5: OLS regression results predicting cosine similarity among "different meaning" senses.
<table><tr><td>Dep. Variable:</td><td colspan="2">Cosine Similarity</td><td colspan="2">R-squared:</td><td colspan="2">0.203</td></tr><tr><td>Model:</td><td colspan="2">OLS</td><td colspan="2">Adj. R-squared:</td><td colspan="2">0.202</td></tr><tr><td>Method:</td><td colspan="2">Least Squares</td><td colspan="2">F-statistic:</td><td colspan="2">230.2</td></tr><tr><td>Date:</td><td colspan="2">Thu, 14 Oct 2021</td><td colspan="2">Prob (F-statistic):</td><td colspan="2">5.14e-133</td></tr><tr><td>Time:</td><td colspan="2">22:12:38</td><td colspan="2">Log-Likelihood:</td><td colspan="2">3070.5</td></tr><tr><td>No. Observations:</td><td colspan="2">2713</td><td colspan="2">AIC:</td><td colspan="2">-6133.</td></tr><tr><td>Df Residuals:</td><td colspan="2">2709</td><td colspan="2">BIC:</td><td colspan="2">-6109.</td></tr><tr><td>Df Model:</td><td colspan="2">3</td><td></td><td></td><td></td><td></td></tr><tr><td></td><td>coef</td><td>std err</td><td>t</td><td>P> |t|</td><td>[0.025]</td><td>0.975]</td></tr><tr><td>constant</td><td>0.9367</td><td>0.013</td><td>71.757</td><td>0.000</td><td>0.911</td><td>0.962</td></tr><tr><td>log2(freq)</td><td>-0.0130</td><td>0.001</td><td>-16.984</td><td>0.000</td><td>-0.015</td><td>-0.012</td></tr><tr><td>log2(senses)</td><td>-0.0076</td><td>0.002</td><td>-4.833</td><td>0.000</td><td>-0.011</td><td>-0.005</td></tr><tr><td>same_wordform</td><td>0.0447</td><td>0.003</td><td>14.158</td><td>0.000</td><td>0.039</td><td>0.051</td></tr><tr><td>Omnibus:</td><td colspan="2">13.328</td><td colspan="2">Durbin-Watson:</td><td colspan="2">1.917</td></tr><tr><td>Prob(Omnibus):</td><td colspan="2">0.001</td><td colspan="2">Jarque-Bera (JB):</td><td colspan="2">14.587</td></tr><tr><td>Skew:</td><td colspan="2">-0.123</td><td colspan="2">Prob(JB):</td><td colspan="2">0.000680</td></tr><tr><td>Kurtosis:</td><td colspan="2">3.261</td><td colspan="2">Cond. No.</td><td colspan="2">163.</td></tr></table>
Table 6: OLS regression results predicting cosine similarity among "different meaning" senses.
<table><tr><td>Dep. Variable:</td><td colspan="2">Cosine Similarity</td><td colspan="2">R-squared:</td><td colspan="2">0.204</td></tr><tr><td>Model:</td><td colspan="2">OLS</td><td colspan="2">Adj. R-squared:</td><td colspan="2">0.203</td></tr><tr><td>Method:</td><td colspan="2">Least Squares</td><td colspan="2">F-statistic:</td><td colspan="2">173.4</td></tr><tr><td>Date:</td><td colspan="2">Thu, 14 Oct 2021</td><td colspan="2">Prob (F-statistic):</td><td colspan="2">2.26e-132</td></tr><tr><td>Time:</td><td colspan="2">22:12:38</td><td colspan="2">Log-Likelihood:</td><td colspan="2">3071.8</td></tr><tr><td>No. Observations:</td><td colspan="2">2713</td><td colspan="2">AIC:</td><td colspan="2">-6134.</td></tr><tr><td>Df Residuals:</td><td colspan="2">2708</td><td colspan="2">BIC:</td><td colspan="2">-6104.</td></tr><tr><td>Df Model:</td><td colspan="2">4</td><td></td><td></td><td></td><td></td></tr><tr><td></td><td>coef</td><td>std err</td><td>t</td><td>P> |t|</td><td>[0.025]</td><td>0.975]</td></tr><tr><td>constant</td><td>0.9355</td><td>0.013</td><td>71.569</td><td>0.000</td><td>0.910</td><td>0.961</td></tr><tr><td>log2(freq)</td><td>-0.0126</td><td>0.001</td><td>-15.858</td><td>0.000</td><td>-0.014</td><td>-0.011</td></tr><tr><td>log2(senses)</td><td>-0.0090</td><td>0.002</td><td>-5.030</td><td>0.000</td><td>-0.013</td><td>-0.005</td></tr><tr><td>same_wordform</td><td>0.0467</td><td>0.003</td><td>13.760</td><td>0.000</td><td>0.040</td><td>0.053</td></tr><tr><td>is_noun</td><td>-0.0061</td><td>0.004</td><td>-1.629</td><td>0.103</td><td>-0.013</td><td>0.001</td></tr><tr><td>Omnibus:</td><td colspan="2">14.009</td><td colspan="2">Durbin-Watson:</td><td colspan="2">1.915</td></tr><tr><td>Prob(Omnibus):</td><td colspan="2">0.001</td><td colspan="2">Jarque-Bera (JB):</td><td colspan="2">15.019</td></tr><tr><td>Skew:</td><td colspan="2">-0.135</td><td colspan="2">Prob(JB):</td><td colspan="2">0.000548</td></tr><tr><td>Kurtosis:</td><td colspan="2">3.244</td><td 
colspan="2">Cond. No.</td><td colspan="2">164.</td></tr></table>
Table 7: OLS regression results predicting cosine similarity among "different meaning" senses.
<table><tr><td colspan="2">Dep. Variable:</td><td colspan="2">Cosine Similarity</td><td colspan="2">R-squared:</td><td>0.136</td></tr><tr><td colspan="2">Model:</td><td colspan="2">OLS</td><td colspan="2">Adj. R-squared:</td><td>0.136</td></tr><tr><td colspan="2">Method:</td><td colspan="2">Least Squares</td><td colspan="2">F-statistic:</td><td>427.3</td></tr><tr><td colspan="2">Date:</td><td colspan="2">Thu, 14 Oct 2021</td><td colspan="2">Prob (F-statistic):</td><td>2.94e-88</td></tr><tr><td colspan="2">Time:</td><td colspan="2">22:12:38</td><td colspan="2">Log-Likelihood:</td><td>2926.4</td></tr><tr><td colspan="2">No. Observations:</td><td colspan="2">2710</td><td colspan="2">AIC:</td><td>-5849.</td></tr><tr><td colspan="2">Df Residuals:</td><td colspan="2">2708</td><td colspan="2">BIC:</td><td>-5837.</td></tr><tr><td colspan="2">Df Model:</td><td colspan="2">1</td><td></td><td></td><td></td></tr><tr><td></td><td>coef</td><td>std err</td><td>t</td><td>P> |t|</td><td>[0.025]</td><td>0.975]</td></tr><tr><td>constant</td><td>1.0077</td><td>0.009</td><td>109.007</td><td>0.000</td><td>0.990</td><td>1.026</td></tr><tr><td>log2(freq)</td><td>-0.0109</td><td>0.001</td><td>-20.670</td><td>0.000</td><td>-0.012</td><td>-0.010</td></tr><tr><td colspan="2">Omnibus:</td><td>45.476</td><td colspan="2">Durbin-Watson:</td><td colspan="2">1.977</td></tr><tr><td colspan="2">Prob(Omnibus):</td><td>0.000</td><td colspan="2">Jarque-Bera (JB):</td><td colspan="2">45.736</td></tr><tr><td colspan="2">Skew:</td><td>-0.298</td><td colspan="2">Prob(JB):</td><td colspan="2">1.17e-10</td></tr><tr><td colspan="2">Kurtosis:</td><td>2.778</td><td colspan="2">Cond. No.</td><td colspan="2">103.</td></tr></table>
Table 8: OLS regression results predicting cosine similarity among "same meaning" senses.
<table><tr><td>Dep. Variable:</td><td colspan="3">Cosine Similarity</td><td colspan="2">R-squared:</td><td>0.142</td></tr><tr><td>Model:</td><td colspan="3">OLS</td><td colspan="2">Adj. R-squared:</td><td>0.141</td></tr><tr><td>Method:</td><td colspan="3">Least Squares</td><td colspan="2">F-statistic:</td><td>224.2</td></tr><tr><td>Date:</td><td colspan="3">Thu, 14 Oct 2021</td><td colspan="2">Prob (F-statistic):</td><td>8.17e-91</td></tr><tr><td>Time:</td><td colspan="3">22:12:38</td><td colspan="2">Log-Likelihood:</td><td>2935.6</td></tr><tr><td>No. Observations:</td><td colspan="3">2710</td><td colspan="2">AIC:</td><td>-5865.</td></tr><tr><td>Df Residuals:</td><td colspan="3">2707</td><td colspan="2">BIC:</td><td>-5847.</td></tr><tr><td>Df Model:</td><td colspan="3">2</td><td></td><td></td><td></td></tr><tr><td></td><td>coef</td><td>std err</td><td>t</td><td>P> |t|</td><td>[0.025]</td><td>0.975]</td></tr><tr><td>constant</td><td>0.9974</td><td>0.010</td><td>104.755</td><td>0.000</td><td>0.979</td><td>1.016</td></tr><tr><td>log2(freq)</td><td>-0.0090</td><td>0.001</td><td>-13.270</td><td>0.000</td><td>-0.010</td><td>-0.008</td></tr><tr><td>log2(senses)</td><td>-0.0063</td><td>0.001</td><td>-4.283</td><td>0.000</td><td>-0.009</td><td>-0.003</td></tr><tr><td>Omnibus:</td><td colspan="2">38.934</td><td colspan="2">Durbin-Watson:</td><td colspan="2">1.973</td></tr><tr><td>Prob(Omnibus):</td><td colspan="2">0.000</td><td colspan="2">Jarque-Bera (JB):</td><td colspan="2">39.612</td></tr><tr><td>Skew:</td><td colspan="2">-0.283</td><td colspan="2">Prob(JB):</td><td colspan="2">2.50e-09</td></tr><tr><td>Kurtosis:</td><td colspan="2">2.823</td><td colspan="2">Cond. No.</td><td colspan="2">109.</td></tr></table>
Table 9: OLS regression results predicting cosine similarity among "same meaning" senses.
<table><tr><td>Dep. Variable:</td><td colspan="2">Cosine Similarity</td><td colspan="2">R-squared:</td><td colspan="2">0.241</td></tr><tr><td>Model:</td><td colspan="2">OLS</td><td colspan="2">Adj. R-squared:</td><td colspan="2">0.240</td></tr><tr><td>Method:</td><td colspan="2">Least Squares</td><td colspan="2">F-statistic:</td><td colspan="2">285.7</td></tr><tr><td>Date:</td><td colspan="2">Thu, 14 Oct 2021</td><td colspan="2">Prob (F-statistic):</td><td colspan="2">4.36e-161</td></tr><tr><td>Time:</td><td colspan="2">22:12:38</td><td colspan="2">Log-Likelihood:</td><td colspan="2">3100.7</td></tr><tr><td>No. Observations:</td><td colspan="2">2710</td><td colspan="2">AIC:</td><td colspan="2">-6193.</td></tr><tr><td>Df Residuals:</td><td colspan="2">2706</td><td colspan="2">BIC:</td><td colspan="2">-6170.</td></tr><tr><td>Df Model:</td><td colspan="2">3</td><td></td><td></td><td></td><td></td></tr><tr><td></td><td>coef</td><td>std err</td><td>t</td><td>P> |t|</td><td>[0.025]</td><td>0.975]</td></tr><tr><td>constant</td><td>0.8928</td><td>0.011</td><td>84.562</td><td>0.000</td><td>0.872</td><td>0.914</td></tr><tr><td>log2(freq)</td><td>-0.0092</td><td>0.001</td><td>-14.435</td><td>0.000</td><td>-0.010</td><td>-0.008</td></tr><tr><td>log2(senses)</td><td>-0.0035</td><td>0.001</td><td>-2.513</td><td>0.012</td><td>-0.006</td><td>-0.001</td></tr><tr><td>same_wordform</td><td>0.0588</td><td>0.003</td><td>18.728</td><td>0.000</td><td>0.053</td><td>0.065</td></tr><tr><td>Omnibus:</td><td colspan="2">80.675</td><td colspan="2">Durbin-Watson:</td><td colspan="2">1.981</td></tr><tr><td>Prob(Omnibus):</td><td colspan="2">0.000</td><td colspan="2">Jarque-Bera (JB):</td><td colspan="2">87.234</td></tr><tr><td>Skew:</td><td colspan="2">-0.434</td><td colspan="2">Prob(JB):</td><td colspan="2">1.14e-19</td></tr><tr><td>Kurtosis:</td><td colspan="2">3.139</td><td colspan="2">Cond. No.</td><td colspan="2">130.</td></tr></table>
Table 10: OLS regression results predicting cosine similarity among "same meaning" senses.
<table><tr><td>Dep. Variable:</td><td colspan="2">Cosine Similarity</td><td colspan="2">R-squared:</td><td colspan="2">0.242</td></tr><tr><td>Model:</td><td colspan="2">OLS</td><td colspan="2">Adj. R-squared:</td><td colspan="2">0.241</td></tr><tr><td>Method:</td><td colspan="2">Least Squares</td><td colspan="2">F-statistic:</td><td colspan="2">215.8</td></tr><tr><td>Date:</td><td colspan="2">Thu, 14 Oct 2021</td><td colspan="2">Prob (F-statistic):</td><td colspan="2">6.75e-161</td></tr><tr><td>Time:</td><td colspan="2">22:12:38</td><td colspan="2">Log-Likelihood:</td><td colspan="2">3103.2</td></tr><tr><td>No. Observations:</td><td colspan="2">2710</td><td colspan="2">AIC:</td><td colspan="2">-6196.</td></tr><tr><td>Df Residuals:</td><td colspan="2">2705</td><td colspan="2">BIC:</td><td colspan="2">-6167.</td></tr><tr><td>Df Model:</td><td colspan="2">4</td><td></td><td></td><td></td><td></td></tr><tr><td></td><td>coef</td><td>std err</td><td>t</td><td>P> |t|</td><td>[0.025]</td><td>0.975]</td></tr><tr><td>constant</td><td>0.8952</td><td>0.011</td><td>84.424</td><td>0.000</td><td>0.874</td><td>0.916</td></tr><tr><td>log2(freq)</td><td>-0.0096</td><td>0.001</td><td>-14.547</td><td>0.000</td><td>-0.011</td><td>-0.008</td></tr><tr><td>log2(senses)</td><td>-0.0022</td><td>0.002</td><td>-1.457</td><td>0.145</td><td>-0.005</td><td>0.001</td></tr><tr><td>same_wordform</td><td>0.0560</td><td>0.003</td><td>16.512</td><td>0.000</td><td>0.049</td><td>0.063</td></tr><tr><td>is_noun</td><td>0.0078</td><td>0.003</td><td>2.228</td><td>0.026</td><td>0.001</td><td>0.015</td></tr><tr><td>Omnibus:</td><td colspan="2">76.318</td><td colspan="2">Durbin-Watson:</td><td colspan="2">1.983</td></tr><tr><td>Prob(Omnibus):</td><td colspan="2">0.000</td><td colspan="2">Jarque-Bera (JB):</td><td colspan="2">82.141</td></tr><tr><td>Skew:</td><td colspan="2">-0.421</td><td colspan="2">Prob(JB):</td><td colspan="2">1.46e-18</td></tr><tr><td>Kurtosis:</td><td colspan="2">3.139</td><td 
colspan="2">Cond. No.</td><td colspan="2">132.</td></tr></table>
Table 11: OLS regression results predicting cosine similarity among "same meaning" senses.
<table><tr><td colspan="2">Dep. Variable:</td><td colspan="2">Cosine Similarity</td><td colspan="2">R-squared:</td><td>0.120</td></tr><tr><td colspan="2">Model:</td><td colspan="2">OLS</td><td colspan="2">Adj. R-squared:</td><td>0.115</td></tr><tr><td colspan="2">Method:</td><td colspan="2">Least Squares</td><td colspan="2">F-statistic:</td><td>28.77</td></tr><tr><td colspan="2">Date:</td><td colspan="2">Sat, 12 Mar 2022</td><td colspan="2">Prob (F-statistic):</td><td>2.12e-07</td></tr><tr><td colspan="2">Time:</td><td colspan="2">12:16:53</td><td colspan="2">Log-Likelihood:</td><td>203.87</td></tr><tr><td colspan="2">No. Observations:</td><td colspan="2">214</td><td colspan="2">AIC:</td><td>-403.7</td></tr><tr><td colspan="2">Df Residuals:</td><td colspan="2">212</td><td colspan="2">BIC:</td><td>-397.0</td></tr><tr><td colspan="2">Df Model:</td><td colspan="2">1</td><td></td><td></td><td></td></tr><tr><td colspan="2">Covariance Type:</td><td colspan="2">nonrobust</td><td></td><td></td><td></td></tr><tr><td></td><td>coef</td><td>std err</td><td>t</td><td>P> |t|</td><td>[0.025]</td><td>0.975]</td></tr><tr><td>constant</td><td>1.0762</td><td>0.063</td><td>17.127</td><td>0.000</td><td>0.952</td><td>1.200</td></tr><tr><td>avg_freq</td><td>-0.0196</td><td>0.004</td><td>-5.364</td><td>0.000</td><td>-0.027</td><td>-0.012</td></tr><tr><td colspan="2">Omnibus:</td><td>7.823</td><td colspan="2">Durbin-Watson:</td><td colspan="2">2.040</td></tr><tr><td colspan="2">Prob(Omnibus):</td><td>0.020</td><td colspan="2">Jarque-Bera (JB):</td><td colspan="2">9.129</td></tr><tr><td colspan="2">Skew:</td><td>-0.307</td><td colspan="2">Prob(JB):</td><td colspan="2">0.0104</td></tr><tr><td colspan="2">Kurtosis:</td><td>3.804</td><td colspan="2">Cond. No.</td><td colspan="2">169.</td></tr></table>

Table 12: OLS regression results predicting cosine similarity among "same" target words.
<table><tr><td>Dep. Variable:</td><td colspan="2">Cosine Similarity</td><td colspan="2">R-squared:</td><td>0.225</td></tr><tr><td>Model:</td><td colspan="2">OLS</td><td colspan="2">Adj. R-squared:</td><td>0.221</td></tr><tr><td>Method:</td><td colspan="2">Least Squares</td><td colspan="2">F-statistic:</td><td>61.58</td></tr><tr><td>Date:</td><td colspan="2">Sat, 12 Mar 2022</td><td colspan="2">Prob (F-statistic):</td><td>2.07e-13</td></tr><tr><td>Time:</td><td colspan="2">12:20:20</td><td colspan="2">Log-Likelihood:</td><td>217.54</td></tr><tr><td>No. Observations:</td><td colspan="2">214</td><td colspan="2">AIC:</td><td>-431.1</td></tr><tr><td>Df Residuals:</td><td colspan="2">212</td><td colspan="2">BIC:</td><td>-424.3</td></tr><tr><td>Df Model:</td><td colspan="2">1</td><td></td><td></td><td></td></tr><tr><td>Covariance Type:</td><td colspan="2">nonrobust</td><td></td><td></td><td></td></tr><tr><td></td><td>coef</td><td>std err</td><td>t</td><td>P> |t|</td><td>[0.025 0.975]</td></tr><tr><td>constant</td><td>0.5856</td><td>0.021</td><td>28.308</td><td>0.000</td><td>0.545 0.626</td></tr><tr><td>averagerating</td><td>0.0223</td><td>0.003</td><td>7.847</td><td>0.000</td><td>0.017 0.028</td></tr><tr><td>Omnibus:</td><td colspan="2">31.336</td><td colspan="2">Durbin-Watson:</td><td>2.183</td></tr><tr><td>Prob(Omnibus):</td><td colspan="2">0.000</td><td colspan="2">Jarque-Bera (JB):</td><td>64.374</td></tr><tr><td>Skew:</td><td colspan="2">-0.711</td><td colspan="2">Prob(JB):</td><td>1.05e-14</td></tr><tr><td>Kurtosis:</td><td colspan="2">5.279</td><td colspan="2">Cond. No.</td><td>25.5</td></tr></table>

Table 13: OLS regression results predicting cosine similarity among "same" target words.
<table><tr><td>Dep. Variable:</td><td colspan="3">Cosine Similarity</td><td colspan="2">R-squared:</td><td>0.320</td></tr><tr><td>Model:</td><td colspan="3">OLS</td><td colspan="2">Adj. R-squared:</td><td>0.314</td></tr><tr><td>Method:</td><td colspan="3">Least Squares</td><td colspan="2">F-statistic:</td><td>49.70</td></tr><tr><td>Date:</td><td colspan="3">Sat, 12 Mar 2022</td><td colspan="2">Prob (F-statistic):</td><td>2.06e-18</td></tr><tr><td>Time:</td><td colspan="3">12:20:20</td><td colspan="2">Log-Likelihood:</td><td>231.56</td></tr><tr><td>No. Observations:</td><td colspan="3">214</td><td colspan="2">AIC:</td><td>-457.1</td></tr><tr><td>Df Residuals:</td><td colspan="3">211</td><td colspan="2">BIC:</td><td>-447.0</td></tr><tr><td>Df Model:</td><td colspan="3">2</td><td></td><td></td><td></td></tr><tr><td>Covariance Type:</td><td colspan="3">nonrobust</td><td></td><td></td><td></td></tr><tr><td></td><td>coef</td><td>std err</td><td>t</td><td>P> |t|</td><td>[0.025</td><td>0.975]</td></tr><tr><td>constant</td><td>0.8939</td><td>0.060</td><td>14.907</td><td>0.000</td><td>0.776</td><td>1.012</td></tr><tr><td>avg_freq</td><td>-0.0176</td><td>0.003</td><td>-5.434</td><td>0.000</td><td>-0.024</td><td>-0.011</td></tr><tr><td>averagerating</td><td>0.0211</td><td>0.003</td><td>7.893</td><td>0.000</td><td>0.016</td><td>0.026</td></tr><tr><td>Omnibus:</td><td colspan="2">18.260</td><td colspan="2">Durbin-Watson:</td><td colspan="2">2.246</td></tr><tr><td>Prob(Omnibus):</td><td colspan="2">0.000</td><td colspan="2">Jarque-Bera (JB):</td><td colspan="2">27.332</td></tr><tr><td>Skew:</td><td colspan="2">-0.524</td><td colspan="2">Prob(JB):</td><td colspan="2">1.16e-06</td></tr><tr><td>Kurtosis:</td><td colspan="2">4.402</td><td colspan="2">Cond. No.</td><td colspan="2">197.</td></tr></table>

Table 14: OLS regression results predicting cosine similarity among "same" target words.
<table><tr><td>Dep. Variable:</td><td colspan="2">Cosine Similarity</td><td colspan="2">R-squared:</td><td>0.343</td></tr><tr><td>Model:</td><td colspan="2">OLS</td><td colspan="2">Adj. R-squared:</td><td>0.334</td></tr><tr><td>Method:</td><td colspan="2">Least Squares</td><td colspan="2">F-statistic:</td><td>36.58</td></tr><tr><td>Date:</td><td colspan="2">Sat, 12 Mar 2022</td><td colspan="2">Prob (F-statistic):</td><td>4.63e-19</td></tr><tr><td>Time:</td><td colspan="2">12:20:20</td><td colspan="2">Log-Likelihood:</td><td>235.24</td></tr><tr><td>No. Observations:</td><td colspan="2">214</td><td colspan="2">AIC:</td><td>-462.5</td></tr><tr><td>Df Residuals:</td><td colspan="2">210</td><td colspan="2">BIC:</td><td>-449.0</td></tr><tr><td>Df Model:</td><td colspan="2">3</td><td></td><td></td><td></td></tr><tr><td>Covariance Type:</td><td colspan="2">nonrobust</td><td></td><td></td><td></td></tr><tr><td></td><td>coef</td><td>std err</td><td>t</td><td>P> |t|</td><td>[0.025, 0.975]</td></tr><tr><td>constant</td><td>0.9469</td><td>0.062</td><td>15.214</td><td>0.000</td><td>0.824</td></tr><tr><td>avg_freq</td><td>-0.0161</td><td>0.003</td><td>-4.983</td><td>0.000</td><td>-0.022</td></tr><tr><td>averagerating</td><td>0.0198</td><td>0.003</td><td>7.417</td><td>0.000</td><td>0.015</td></tr><tr><td>avg_sense</td><td>-0.0192</td><td>0.007</td><td>-2.711</td><td>0.007</td><td>-0.033</td></tr><tr><td>Omnibus:</td><td colspan="2">13.882</td><td colspan="2">Durbin-Watson:</td><td>2.255</td></tr><tr><td>Prob(Omnibus):</td><td colspan="2">0.001</td><td colspan="2">Jarque-Bera (JB):</td><td>18.177</td></tr><tr><td>Skew:</td><td colspan="2">-0.458</td><td colspan="2">Prob(JB):</td><td>0.000113</td></tr><tr><td>Kurtosis:</td><td colspan="2">4.095</td><td colspan="2">Cond. No.</td><td>212.</td></tr></table>

Table 15: OLS regression results predicting cosine similarity among "same" target words.
<table><tr><td colspan="2">Dep. Variable:</td><td colspan="2">Cosine Similarity</td><td colspan="2">R-squared:</td><td>0.059</td></tr><tr><td colspan="2">Model:</td><td colspan="2">OLS</td><td colspan="2">Adj. R-squared:</td><td>0.058</td></tr><tr><td colspan="2">Method:</td><td colspan="2">Least Squares</td><td colspan="2">F-statistic:</td><td>87.37</td></tr><tr><td colspan="2">Date:</td><td colspan="2">Sat, 12 Mar 2022</td><td colspan="2">Prob (F-statistic):</td><td>3.41e-20</td></tr><tr><td colspan="2">Time:</td><td colspan="2">12:20:20</td><td colspan="2">Log-Likelihood:</td><td>1557.3</td></tr><tr><td colspan="2">No. Observations:</td><td colspan="2">1406</td><td colspan="2">AIC:</td><td>-3111.</td></tr><tr><td colspan="2">Df Residuals:</td><td colspan="2">1404</td><td colspan="2">BIC:</td><td>-3100.</td></tr><tr><td colspan="2">Df Model:</td><td colspan="2">1</td><td></td><td></td><td></td></tr><tr><td colspan="2">Covariance Type:</td><td colspan="2">nonrobust</td><td></td><td></td><td></td></tr><tr><td></td><td>coef</td><td>std err</td><td>t</td><td>P> |t|</td><td>[0.025]</td><td>0.975]</td></tr><tr><td>constant</td><td>0.7858</td><td>0.019</td><td>42.044</td><td>0.000</td><td>0.749</td><td>0.822</td></tr><tr><td>avg_freq</td><td>-0.0106</td><td>0.001</td><td>-9.347</td><td>0.000</td><td>-0.013</td><td>-0.008</td></tr><tr><td colspan="2">Omnibus:</td><td>12.804</td><td colspan="2">Durbin-Watson:</td><td colspan="2">1.683</td></tr><tr><td colspan="2">Prob(Omnibus):</td><td>0.002</td><td colspan="2">Jarque-Bera (JB):</td><td colspan="2">16.004</td></tr><tr><td colspan="2">Skew:</td><td>-0.130</td><td colspan="2">Prob(JB):</td><td colspan="2">0.000335</td></tr><tr><td colspan="2">Kurtosis:</td><td>3.453</td><td colspan="2">Cond. No.</td><td colspan="2">145.</td></tr></table>

Table 16: OLS regression results predicting cosine similarity among "different" target words

<table><tr><td>Dep. Variable:</td><td colspan="2">Cosine Similarity</td><td colspan="2">R-squared:</td><td colspan="2">0.305</td></tr><tr><td>Model:</td><td colspan="2">OLS</td><td colspan="2">Adj. R-squared:</td><td colspan="2">0.304</td></tr><tr><td>Method:</td><td colspan="2">Least Squares</td><td colspan="2">F-statistic:</td><td colspan="2">614.9</td></tr><tr><td>Date:</td><td colspan="2">Sat, 12 Mar 2022</td><td colspan="2">Prob (F-statistic):</td><td colspan="2">7.11e-113</td></tr><tr><td>Time:</td><td colspan="2">12:20:20</td><td colspan="2">Log-Likelihood:</td><td colspan="2">1770.2</td></tr><tr><td>No. Observations:</td><td colspan="2">1406</td><td colspan="2">AIC:</td><td colspan="2">-3536.</td></tr><tr><td>Df Residuals:</td><td colspan="2">1404</td><td colspan="2">BIC:</td><td colspan="2">-3526.</td></tr><tr><td>Df Model:</td><td colspan="2">1</td><td></td><td></td><td></td><td></td></tr><tr><td>Covariance Type:</td><td colspan="2">nonrobust</td><td></td><td></td><td></td><td></td></tr><tr><td></td><td>coef</td><td>std err</td><td>t</td><td>P> |t|</td><td>[0.025]</td><td>0.975]</td></tr><tr><td>constant</td><td>0.5366</td><td>0.004</td><td>150.800</td><td>0.000</td><td>0.530</td><td>0.544</td></tr><tr><td>averagerating</td><td>0.0208</td><td>0.001</td><td>24.796</td><td>0.000</td><td>0.019</td><td>0.022</td></tr><tr><td>Omnibus:</td><td colspan="2">32.918</td><td colspan="2">Durbin-Watson:</td><td colspan="2">1.861</td></tr><tr><td>Prob(Omnibus):</td><td colspan="2">0.000</td><td colspan="2">Jarque-Bera (JB):</td><td colspan="2">39.508</td></tr><tr><td>Skew:</td><td colspan="2">-0.302</td><td colspan="2">Prob(JB):</td><td colspan="2">2.64e-09</td></tr><tr><td>Kurtosis:</td><td colspan="2">3.556</td><td colspan="2">Cond. No.</td><td colspan="2">8.58</td></tr></table>

Table 17: OLS regression results predicting cosine similarity among "different" target words

<table><tr><td>Dep. Variable:</td><td colspan="2">Cosine Similarity</td><td colspan="2">R-squared:</td><td colspan="2">0.336</td></tr><tr><td>Model:</td><td colspan="2">OLS</td><td colspan="2">Adj. R-squared:</td><td colspan="2">0.335</td></tr><tr><td>Method:</td><td colspan="2">Least Squares</td><td colspan="2">F-statistic:</td><td colspan="2">355.7</td></tr><tr><td>Date:</td><td colspan="2">Sat, 12 Mar 2022</td><td colspan="2">Prob (F-statistic):</td><td colspan="2">1.12e-125</td></tr><tr><td>Time:</td><td colspan="2">12:20:20</td><td colspan="2">Log-Likelihood:</td><td colspan="2">1803.2</td></tr><tr><td>No. Observations:</td><td colspan="2">1406</td><td colspan="2">AIC:</td><td colspan="2">-3600.</td></tr><tr><td>Df Residuals:</td><td colspan="2">1403</td><td colspan="2">BIC:</td><td colspan="2">-3585.</td></tr><tr><td>Df Model:</td><td colspan="2">2</td><td></td><td></td><td></td><td></td></tr><tr><td>Covariance Type:</td><td colspan="2">nonrobust</td><td></td><td></td><td></td><td></td></tr><tr><td></td><td>coef</td><td>std err</td><td>t</td><td>P> |t|</td><td>[0.025</td><td>0.975]</td></tr><tr><td>constant</td><td>0.6684</td><td>0.016</td><td>40.691</td><td>0.000</td><td>0.636</td><td>0.701</td></tr><tr><td>avg_freq</td><td>-0.0079</td><td>0.001</td><td>-8.210</td><td>0.000</td><td>-0.010</td><td>-0.006</td></tr><tr><td>averagerating</td><td>0.0200</td><td>0.001</td><td>24.238</td><td>0.000</td><td>0.018</td><td>0.022</td></tr><tr><td>Omnibus:</td><td colspan="2">35.771</td><td colspan="2">Durbin-Watson:</td><td colspan="2">1.832</td></tr><tr><td>Prob(Omnibus):</td><td colspan="2">0.000</td><td colspan="2">Jarque-Bera (JB):</td><td colspan="2">44.869</td></tr><tr><td>Skew:</td><td colspan="2">-0.305</td><td colspan="2">Prob(JB):</td><td colspan="2">1.81e-10</td></tr><tr><td>Kurtosis:</td><td colspan="2">3.628</td><td colspan="2">Cond. No.</td><td colspan="2">156.</td></tr></table>

Table 18: OLS regression results predicting cosine similarity among "different" target words

<table><tr><td>Dep. Variable:</td><td colspan="2">Cosine Similarity</td><td colspan="2">R-squared:</td><td colspan="2">0.337</td></tr><tr><td>Model:</td><td colspan="2">OLS</td><td colspan="2">Adj. R-squared:</td><td colspan="2">0.335</td></tr><tr><td>Method:</td><td colspan="2">Least Squares</td><td colspan="2">F-statistic:</td><td colspan="2">237.1</td></tr><tr><td>Date:</td><td colspan="2">Sat, 12 Mar 2022</td><td colspan="2">Prob (F-statistic):</td><td colspan="2">2.09e-124</td></tr><tr><td>Time:</td><td colspan="2">12:20:20</td><td colspan="2">Log-Likelihood:</td><td colspan="2">1803.4</td></tr><tr><td>No. Observations:</td><td colspan="2">1406</td><td colspan="2">AIC:</td><td colspan="2">-3599.</td></tr><tr><td>Df Residuals:</td><td colspan="2">1402</td><td colspan="2">BIC:</td><td colspan="2">-3578.</td></tr><tr><td>Df Model:</td><td colspan="2">3</td><td></td><td></td><td></td><td></td></tr><tr><td>Covariance Type:</td><td colspan="2">nonrobust</td><td></td><td></td><td></td><td></td></tr><tr><td></td><td>coef</td><td>std err</td><td>t</td><td>P> |t|</td><td>[0.025]</td><td>0.975]</td></tr><tr><td>constant</td><td>0.6670</td><td>0.017</td><td>40.027</td><td>0.000</td><td>0.634</td><td>0.700</td></tr><tr><td>avg_freq</td><td>-0.0076</td><td>0.001</td><td>-7.044</td><td>0.000</td><td>-0.010</td><td>-0.005</td></tr><tr><td>averagerating</td><td>0.0199</td><td>0.001</td><td>23.983</td><td>0.000</td><td>0.018</td><td>0.022</td></tr><tr><td>avg_sense</td><td>-0.0010</td><td>0.002</td><td>-0.516</td><td>0.606</td><td>-0.005</td><td>0.003</td></tr><tr><td>Omnibus:</td><td colspan="2">36.276</td><td colspan="2">Durbin-Watson:</td><td colspan="2">1.832</td></tr><tr><td>Prob(Omnibus):</td><td colspan="2">0.000</td><td colspan="2">Jarque-Bera (JB):</td><td colspan="2">45.556</td></tr><tr><td>Skew:</td><td colspan="2">-0.308</td><td colspan="2">Prob(JB):</td><td colspan="2">1.28e-10</td></tr><tr><td>Kurtosis:</td><td colspan="2">3.632</td><td colspan="2">Cond. 
No.</td><td colspan="2">160.</td></tr></table>

Table 19: OLS regression results predicting cosine similarity among "different" target words

<table><tr><td colspan="2">Dep. Variable:</td><td colspan="2">Human Rating</td><td colspan="2">R-squared:</td><td>0.002</td></tr><tr><td colspan="2">Model:</td><td colspan="2">OLS</td><td colspan="2">Adj. R-squared:</td><td>0.001</td></tr><tr><td colspan="2">Method:</td><td colspan="2">Least Squares</td><td colspan="2">F-statistic:</td><td>3.074</td></tr><tr><td colspan="2">Date:</td><td colspan="2">Sat, 12 Mar 2022</td><td colspan="2">Prob (F-statistic):</td><td>0.0797</td></tr><tr><td colspan="2">Time:</td><td colspan="2">13:15:45</td><td colspan="2">Log-Likelihood:</td><td>-3750.9</td></tr><tr><td colspan="2">No. Observations:</td><td colspan="2">1620</td><td colspan="2">AIC:</td><td>7506.</td></tr><tr><td colspan="2">Df Residuals:</td><td colspan="2">1618</td><td colspan="2">BIC:</td><td>7517.</td></tr><tr><td colspan="2">Df Model:</td><td colspan="2">1</td><td></td><td></td><td></td></tr><tr><td colspan="2">Covariance Type:</td><td colspan="2">nonrobust</td><td></td><td></td><td></td></tr><tr><td></td><td>coef</td><td>std err</td><td>t</td><td>P> |t|</td><td>[0.025]</td><td>0.975]</td></tr><tr><td>constant</td><td>5.0152</td><td>0.538</td><td>9.330</td><td>0.000</td><td>3.961</td><td>6.070</td></tr><tr><td>avg_freq</td><td>-0.0568</td><td>0.032</td><td>-1.753</td><td>0.080</td><td>-0.120</td><td>0.007</td></tr><tr><td colspan="2">Omnibus:</td><td>229.333</td><td colspan="2">Durbin-Watson:</td><td colspan="2">1.972</td></tr><tr><td colspan="2">Prob(Omnibus):</td><td>0.000</td><td colspan="2">Jarque-Bera (JB):</td><td colspan="2">91.858</td></tr><tr><td colspan="2">Skew:</td><td>0.385</td><td colspan="2">Prob(JB):</td><td colspan="2">1.13e-20</td></tr><tr><td colspan="2">Kurtosis:</td><td>2.124</td><td colspan="2">Cond. No.</td><td colspan="2">147.</td></tr></table>

Table 20: OLS regression results predicting average human ratings.

<table><tr><td>Dep. Variable:</td><td colspan="2">Human Rating</td><td colspan="2">R-squared:</td><td colspan="2">0.404</td></tr><tr><td>Model:</td><td colspan="2">OLS</td><td colspan="2">Adj. R-squared:</td><td colspan="2">0.403</td></tr><tr><td>Method:</td><td colspan="2">Least Squares</td><td colspan="2">F-statistic:</td><td colspan="2">1096.</td></tr><tr><td>Date:</td><td colspan="2">Sat, 12 Mar 2022</td><td colspan="2">Prob (F-statistic):</td><td colspan="2">6.45e-184</td></tr><tr><td>Time:</td><td colspan="2">13:15:45</td><td colspan="2">Log-Likelihood:</td><td colspan="2">-3333.6</td></tr><tr><td>No. Observations:</td><td colspan="2">1620</td><td colspan="2">AIC:</td><td colspan="2">6671.</td></tr><tr><td>Df Residuals:</td><td colspan="2">1618</td><td colspan="2">BIC:</td><td colspan="2">6682.</td></tr><tr><td>Df Model:</td><td colspan="2">1</td><td></td><td></td><td></td><td></td></tr><tr><td>Covariance Type:</td><td colspan="2">nonrobust</td><td></td><td></td><td></td><td></td></tr><tr><td></td><td>coef</td><td>std err</td><td>t</td><td>P> |t|</td><td>[0.025]</td><td>0.975]</td></tr><tr><td>constant</td><td>-6.2058</td><td>0.314</td><td>-19.748</td><td>0.000</td><td>-6.822</td><td>-5.589</td></tr><tr><td>cosine_similarity</td><td>16.3453</td><td>0.494</td><td>33.101</td><td>0.000</td><td>15.377</td><td>17.314</td></tr><tr><td>Omnibus:</td><td colspan="2">25.721</td><td colspan="2">Durbin-Watson:</td><td colspan="2">1.974</td></tr><tr><td>Prob(Omnibus):</td><td colspan="2">0.000</td><td colspan="2">Jarque-Bera (JB):</td><td colspan="2">24.246</td></tr><tr><td>Skew:</td><td colspan="2">0.260</td><td colspan="2">Prob(JB):</td><td colspan="2">5.43e-06</td></tr><tr><td>Kurtosis:</td><td colspan="2">2.703</td><td colspan="2">Cond. No.</td><td colspan="2">14.7</td></tr></table>

Table 21: OLS regression results predicting average human ratings.

<table><tr><td>Dep. Variable:</td><td colspan="2">Human Rating</td><td colspan="2">R-squared:</td><td colspan="2">0.408</td></tr><tr><td>Model:</td><td colspan="2">OLS</td><td colspan="2">Adj. R-squared:</td><td colspan="2">0.407</td></tr><tr><td>Method:</td><td colspan="2">Least Squares</td><td colspan="2">F-statistic:</td><td colspan="2">371.8</td></tr><tr><td>Date:</td><td colspan="2">Sat, 12 Mar 2022</td><td colspan="2">Prob (F-statistic):</td><td colspan="2">1.31e-183</td></tr><tr><td>Time:</td><td colspan="2">13:15:45</td><td colspan="2">Log-Likelihood:</td><td colspan="2">-3327.3</td></tr><tr><td>No. Observations:</td><td colspan="2">1620</td><td colspan="2">AIC:</td><td colspan="2">6663.</td></tr><tr><td>Df Residuals:</td><td colspan="2">1616</td><td colspan="2">BIC:</td><td colspan="2">6684.</td></tr><tr><td>Df Model:</td><td colspan="2">3</td><td></td><td></td><td></td><td></td></tr><tr><td>Covariance Type:</td><td colspan="2">nonrobust</td><td></td><td></td><td></td><td></td></tr><tr><td></td><td>coef</td><td>std err</td><td>t</td><td>P> |t|</td><td>[0.025]</td><td>0.975]</td></tr><tr><td>constant</td><td>-7.9168</td><td>0.575</td><td>-13.778</td><td>0.000</td><td>-9.044</td><td>-6.790</td></tr><tr><td>avg_freq</td><td>0.0989</td><td>0.028</td><td>3.473</td><td>0.001</td><td>0.043</td><td>0.155</td></tr><tr><td>avg_sense</td><td>-0.0440</td><td>0.048</td><td>-0.911</td><td>0.362</td><td>-0.139</td><td>0.051</td></tr><tr><td>cosine_similarity</td><td>16.6654</td><td>0.500</td><td>33.304</td><td>0.000</td><td>15.684</td><td>17.647</td></tr><tr><td>Omnibus:</td><td colspan="2">25.797</td><td colspan="2">Durbin-Watson:</td><td colspan="2">1.972</td></tr><tr><td>Prob(Omnibus):</td><td colspan="2">0.000</td><td colspan="2">Jarque-Bera (JB):</td><td colspan="2">22.821</td></tr><tr><td>Skew:</td><td colspan="2">0.235</td><td colspan="2">Prob(JB):</td><td colspan="2">1.11e-05</td></tr><tr><td>Kurtosis:</td><td colspan="2">2.657</td><td colspan="2">Cond. 
No.</td><td colspan="2">252.</td></tr></table>

Table 22: OLS regression results predicting average human ratings.

<table><tr><td>Dep. Variable:</td><td colspan="2">Human Rating</td><td colspan="2">R-squared:</td><td colspan="2">0.443</td></tr><tr><td>Model:</td><td colspan="2">OLS</td><td colspan="2">Adj. R-squared:</td><td colspan="2">0.442</td></tr><tr><td>Method:</td><td colspan="2">Least Squares</td><td colspan="2">F-statistic:</td><td colspan="2">428.7</td></tr><tr><td>Date:</td><td colspan="2">Sat, 12 Mar 2022</td><td colspan="2">Prob (F-statistic):</td><td colspan="2">7.28e-205</td></tr><tr><td>Time:</td><td colspan="2">13:15:45</td><td colspan="2">Log-Likelihood:</td><td colspan="2">-3278.2</td></tr><tr><td>No. Observations:</td><td colspan="2">1620</td><td colspan="2">AIC:</td><td colspan="2">6564.</td></tr><tr><td>Df Residuals:</td><td colspan="2">1616</td><td colspan="2">BIC:</td><td colspan="2">6586.</td></tr><tr><td>Df Model:</td><td colspan="2">3</td><td></td><td></td><td></td><td></td></tr><tr><td>Covariance Type:</td><td colspan="2">nonrobust</td><td></td><td></td><td></td><td></td></tr><tr><td></td><td>coef</td><td>std err</td><td>t</td><td>P> |t|</td><td>[0.025]</td><td>0.975]</td></tr><tr><td>constant</td><td>-4.2809</td><td>0.379</td><td>-11.310</td><td>0.000</td><td>-5.023</td><td>-3.539</td></tr><tr><td>avg_sense</td><td>-0.1339</td><td>0.044</td><td>-3.012</td><td>0.003</td><td>-0.221</td><td>-0.047</td></tr><tr><td>cosine_similarity</td><td>13.5126</td><td>0.547</td><td>24.707</td><td>0.000</td><td>12.440</td><td>14.585</td></tr><tr><td>same_word</td><td>1.7228</td><td>0.161</td><td>10.668</td><td>0.000</td><td>1.406</td><td>2.040</td></tr><tr><td>Omnibus:</td><td colspan="2">24.052</td><td colspan="2">Durbin-Watson:</td><td colspan="2">2.007</td></tr><tr><td>Prob(Omnibus):</td><td colspan="2">0.000</td><td colspan="2">Jarque-Bera (JB):</td><td colspan="2">20.099</td></tr><tr><td>Skew:</td><td colspan="2">0.203</td><td colspan="2">Prob(JB):</td><td colspan="2">4.32e-05</td></tr><tr><td>Kurtosis:</td><td colspan="2">2.635</td><td colspan="2">Cond. 
No.</td><td colspan="2">46.2</td></tr></table>

Table 23: OLS regression results predicting average human ratings.

<table><tr><td>Dep. Variable:</td><td colspan="2">Human Rating</td><td colspan="2">R-squared:</td><td colspan="2">0.446</td></tr><tr><td>Model:</td><td colspan="2">OLS</td><td colspan="2">Adj. R-squared:</td><td colspan="2">0.444</td></tr><tr><td>Method:</td><td colspan="2">Least Squares</td><td colspan="2">F-statistic:</td><td colspan="2">324.7</td></tr><tr><td>Date:</td><td colspan="2">Sat, 12 Mar 2022</td><td colspan="2">Prob (F-statistic):</td><td colspan="2">3.91e-205</td></tr><tr><td>Time:</td><td colspan="2">13:15:45</td><td colspan="2">Log-Likelihood:</td><td colspan="2">-3274.5</td></tr><tr><td>No. Observations:</td><td colspan="2">1620</td><td colspan="2">AIC:</td><td colspan="2">6559.</td></tr><tr><td>Df Residuals:</td><td colspan="2">1615</td><td colspan="2">BIC:</td><td colspan="2">6586.</td></tr><tr><td>Df Model:</td><td colspan="2">4</td><td></td><td></td><td></td><td></td></tr><tr><td>Covariance Type:</td><td colspan="2">nonrobust</td><td></td><td></td><td></td><td></td></tr><tr><td></td><td>coef</td><td>std err</td><td>t</td><td>P> |t|</td><td>[0.025]</td><td>0.975]</td></tr><tr><td>constant</td><td>-5.5590</td><td>0.600</td><td>-9.258</td><td>0.000</td><td>-6.737</td><td>-4.381</td></tr><tr><td>avg_freq</td><td>0.0757</td><td>0.028</td><td>2.738</td><td>0.006</td><td>0.021</td><td>0.130</td></tr><tr><td>avg_sense</td><td>-0.1892</td><td>0.049</td><td>-3.881</td><td>0.000</td><td>-0.285</td><td>-0.094</td></tr><tr><td>cosine_similarity</td><td>13.8092</td><td>0.556</td><td>24.816</td><td>0.000</td><td>12.718</td><td>14.901</td></tr><tr><td>same_word</td><td>1.6872</td><td>0.162</td><td>10.435</td><td>0.000</td><td>1.370</td><td>2.004</td></tr><tr><td>Omnibus:</td><td colspan="2">24.612</td><td colspan="2">Durbin-Watson:</td><td colspan="2">2.005</td></tr><tr><td>Prob(Omnibus):</td><td colspan="2">0.000</td><td colspan="2">Jarque-Bera (JB):</td><td colspan="2">19.555</td></tr><tr><td>Skew:</td><td colspan="2">0.187</td><td 
colspan="2">Prob(JB):</td><td colspan="2">5.67e-05</td></tr><tr><td>Kurtosis:</td><td colspan="2">2.612</td><td colspan="2">Cond. No.</td><td colspan="2">285.</td></tr></table>

Table 24: OLS regression results predicting average human ratings.

<table><tr><td>Dep. Variable:</td><td>Radius of Bounding Ball</td><td>R-squared:</td><td>0.477</td></tr><tr><td>Model:</td><td>OLS</td><td>Adj. R-squared:</td><td>0.477</td></tr><tr><td>Method:</td><td>Least Squares</td><td>F-statistic:</td><td>1141.</td></tr><tr><td>Date:</td><td>Sat, 12 Mar 2022</td><td>Prob (F-statistic):</td><td>2.96e-178</td></tr><tr><td>Time:</td><td>15:46:57</td><td>Log-Likelihood:</td><td>-2045.0</td></tr><tr><td>No. Observations:</td><td>1253</td><td>AIC:</td><td>4094.</td></tr><tr><td>Df Residuals:</td><td>1251</td><td>BIC:</td><td>4104.</td></tr><tr><td>Df Model:</td><td>1</td><td></td><td></td></tr><tr><td>Covariance Type:</td><td>nonrobust</td><td></td><td></td></tr></table>
<table><tr><td></td><td>coef</td><td>std err</td><td>t</td><td>P> |t|</td><td colspan="2">[0.025 0.975]</td></tr><tr><td>constant</td><td>5.5878</td><td>0.187</td><td>29.926</td><td>0.000</td><td>5.221</td><td>5.954</td></tr><tr><td>log2(freq)</td><td>0.3927</td><td>0.012</td><td>33.774</td><td>0.000</td><td>0.370</td><td>0.416</td></tr><tr><td>Omnibus:</td><td></td><td>15.637</td><td>Durbin-Watson:</td><td></td><td>2.053</td><td></td></tr><tr><td>Prob(Omnibus):</td><td></td><td>0.000</td><td>Jarque-Bera (JB):</td><td></td><td>15.928</td><td></td></tr><tr><td>Skew:</td><td></td><td>-0.275</td><td>Prob(JB):</td><td></td><td>0.000348</td><td></td></tr><tr><td>Kurtosis:</td><td></td><td>3.052</td><td>Cond. No.</td><td></td><td>86.0</td><td></td></tr></table>

Table 25: OLS regression results predicting radius of bounding ball using frequency

<table><tr><td>Dep. Variable:</td><td colspan="3">Radius of Bounding Ball</td><td colspan="2">R-squared:</td><td>0.448</td></tr><tr><td>Model:</td><td colspan="3">OLS</td><td colspan="2">Adj. R-squared:</td><td>0.448</td></tr><tr><td>Method:</td><td colspan="3">Least Squares</td><td colspan="2">F-statistic:</td><td>1015.</td></tr><tr><td>Date:</td><td colspan="3">Sat, 12 Mar 2022</td><td colspan="2">Prob (F-statistic):</td><td>1.25e-163</td></tr><tr><td>Time:</td><td colspan="3">15:46:57</td><td colspan="2">Log-Likelihood:</td><td>-2078.7</td></tr><tr><td>No. Observations:</td><td colspan="3">1253</td><td colspan="2">AIC:</td><td>4161.</td></tr><tr><td>Df Residuals:</td><td colspan="3">1251</td><td colspan="2">BIC:</td><td>4172.</td></tr><tr><td>Df Model:</td><td colspan="3">1</td><td></td><td></td><td></td></tr><tr><td>Covariance Type:</td><td colspan="3">nonrobust</td><td></td><td></td><td></td></tr><tr><td></td><td>coef</td><td>std err</td><td>t</td><td>P> |t|</td><td>[0.025]</td><td>0.975]</td></tr><tr><td>constant</td><td>9.0630</td><td>0.093</td><td>97.878</td><td>0.000</td><td>8.881</td><td>9.245</td></tr><tr><td>log2(senses)</td><td>0.9765</td><td>0.031</td><td>31.866</td><td>0.000</td><td>0.916</td><td>1.037</td></tr><tr><td>Omnibus:</td><td colspan="2">12.796</td><td colspan="2">Durbin-Watson:</td><td colspan="2">2.101</td></tr><tr><td>Prob(Omnibus):</td><td colspan="2">0.002</td><td colspan="2">Jarque-Bera (JB):</td><td colspan="2">13.940</td></tr><tr><td>Skew:</td><td colspan="2">-0.193</td><td colspan="2">Prob(JB):</td><td colspan="2">0.000940</td></tr><tr><td>Kurtosis:</td><td colspan="2">3.344</td><td colspan="2">Cond. No.</td><td colspan="2">8.52</td></tr></table>

Table 26: OLS regression results predicting radius of bounding ball using senses

<table><tr><td>Dep. Variable:</td><td colspan="3">Radius of Bounding Ball</td><td colspan="2">R-squared:</td><td>0.583</td></tr><tr><td>Model:</td><td colspan="3">OLS</td><td colspan="2">Adj. R-squared:</td><td>0.582</td></tr><tr><td>Method:</td><td colspan="3">Least Squares</td><td colspan="2">F-statistic:</td><td>872.2</td></tr><tr><td>Date:</td><td colspan="3">Sat, 12 Mar 2022</td><td colspan="2">Prob (F-statistic):</td><td>7.47e-238</td></tr><tr><td>Time:</td><td colspan="3">15:46:57</td><td colspan="2">Log-Likelihood:</td><td>-1903.7</td></tr><tr><td>No. Observations:</td><td colspan="3">1253</td><td colspan="2">AIC:</td><td>3813.</td></tr><tr><td>Df Residuals:</td><td colspan="3">1250</td><td colspan="2">BIC:</td><td>3829.</td></tr><tr><td>Df Model:</td><td colspan="3">2</td><td></td><td></td><td></td></tr><tr><td>Covariance Type:</td><td colspan="3">nonrobust</td><td></td><td></td><td></td></tr><tr><td></td><td>coef</td><td>std err</td><td>t</td><td>P> |t|</td><td>[0.025]</td><td>0.975]</td></tr><tr><td>constant</td><td>6.0781</td><td>0.169</td><td>35.937</td><td>0.000</td><td>5.746</td><td>6.410</td></tr><tr><td>log2(freq)</td><td>0.2581</td><td>0.013</td><td>20.071</td><td>0.000</td><td>0.233</td><td>0.283</td></tr><tr><td>log2(senses)</td><td>0.5867</td><td>0.033</td><td>17.784</td><td>0.000</td><td>0.522</td><td>0.651</td></tr><tr><td>Omnibus:</td><td colspan="2">21.564</td><td colspan="2">Durbin-Watson:</td><td colspan="2">2.097</td></tr><tr><td>Prob(Omnibus):</td><td colspan="2">0.000</td><td colspan="2">Jarque-Bera (JB):</td><td colspan="2">23.741</td></tr><tr><td>Skew:</td><td colspan="2">-0.272</td><td colspan="2">Prob(JB):</td><td colspan="2">6.99e-06</td></tr><tr><td>Kurtosis:</td><td colspan="2">3.398</td><td colspan="2">Cond. No.</td><td colspan="2">88.6</td></tr></table>
Table 27: OLS regression results predicting radius of bounding ball using frequency and senses
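These tables are standard OLS summaries (statsmodels-style output). A minimal numpy sketch of the same kind of two-predictor fit, run on synthetic data; the coefficient values, predictor ranges, and noise level below are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1253  # same sample size as the summary above; the data itself is synthetic

# Hypothetical stand-ins for log2(freq) and log2(senses)
log_freq = rng.uniform(5.0, 25.0, n)
log_senses = rng.uniform(0.0, 5.0, n)
# Generate radii from assumed "true" coefficients plus Gaussian noise
radius = 6.08 + 0.258 * log_freq + 0.587 * log_senses + rng.normal(0.0, 1.0, n)

# OLS via least squares on a design matrix with a constant column,
# matching the "constant" row in the summaries
X = np.column_stack([np.ones(n), log_freq, log_senses])
beta, *_ = np.linalg.lstsq(X, radius, rcond=None)

resid = radius - X @ beta
r_squared = 1.0 - resid.var() / radius.var()
```

With statsmodels, the same fit via `sm.OLS(radius, X).fit().summary()` would render a layout like the tables above, including standard errors, t-values, and the information criteria.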
<table><tr><td>Dep. Variable:</td><td colspan="2">Cosine Similarity</td><td>R-squared:</td><td colspan="2">0.169</td></tr><tr><td>Model:</td><td colspan="2">OLS</td><td>Adj. R-squared:</td><td colspan="2">0.169</td></tr><tr><td>Method:</td><td colspan="2">Least Squares</td><td>F-statistic:</td><td colspan="2">1103.</td></tr><tr><td>Date:</td><td colspan="2">Sat, 12 Mar 2022</td><td>Prob (F-statistic):</td><td colspan="2">2.51e-220</td></tr><tr><td>Time:</td><td colspan="2">15:54:04</td><td>Log-Likelihood:</td><td colspan="2">5534.8</td></tr><tr><td>No. Observations:</td><td colspan="2">5412</td><td>AIC:</td><td colspan="2">-1.107e+04</td></tr><tr><td>Df Residuals:</td><td colspan="2">5410</td><td>BIC:</td><td colspan="2">-1.105e+04</td></tr><tr><td>Df Model:</td><td colspan="2">1</td><td></td><td></td><td></td></tr><tr><td>Covariance Type:</td><td colspan="2">nonrobust</td><td></td><td></td><td></td></tr><tr><td></td><td>coef</td><td>std err</td><td>t</td><td>P> |t|</td><td>[0.025, 0.975]</td></tr><tr><td>Constant</td><td>1.1096</td><td>0.010</td><td>111.569</td><td>0.000</td><td>1.090</td></tr><tr><td>Radius of Bounding Ball</td><td>-0.0255</td><td>0.001</td><td>-33.215</td><td>0.000</td><td>-0.027</td></tr><tr><td>Omnibus:</td><td colspan="2">1.512</td><td>Durbin-Watson:</td><td colspan="2">1.721</td></tr><tr><td>Prob(Omnibus):</td><td colspan="2">0.470</td><td>Jarque-Bera (JB):</td><td colspan="2">1.543</td></tr><tr><td>Skew:</td><td colspan="2">-0.027</td><td>Prob(JB):</td><td colspan="2">0.462</td></tr><tr><td>Kurtosis:</td><td colspan="2">2.938</td><td>Cond. No.</td><td colspan="2">109.</td></tr></table>

Table 28: OLS regression results predicting cosine similarity using radius of the bounding ball.

<table><tr><td></td><td>Pearson's R</td><td>p</td></tr><tr><td>Average Pairwise Euclidean Distance</td><td>0.601</td><td>< 0.001</td></tr><tr><td>Max Pairwise Euclidean Distance</td><td>0.584</td><td>< 0.001</td></tr><tr><td>Variance of Pairwise Euclidean Distance</td><td>0.292</td><td>< 0.001</td></tr><tr><td>Average Norm of Embeddings</td><td>0.678</td><td>< 0.001</td></tr><tr><td>Area of convex hull*</td><td>0.603</td><td>< 0.001</td></tr></table>
Table 29: Pearson's correlations for numerous other ways of measuring the space occupied by a sibling cohort of ten instances. *To measure the area of a convex hull, we used PCA to project the embeddings into 2D space and calculated the area. Measuring the convex hull in 768-dimensional space would have required far more data (at least 769 samples).
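The convex-hull measurement described in the footnote can be sketched in plain numpy (synthetic stand-in embeddings; PCA is done via SVD here, and the hull/area routine is a generic monotone-chain plus shoelace implementation, not the paper's code):

```python
import numpy as np

def pca_2d(X):
    """Project rows of X onto their top two principal components (plain SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

def convex_hull_area(pts):
    """Area of the 2D convex hull: Andrew's monotone chain, then the
    shoelace formula on the hull vertices."""
    def turn(o, a, b):  # z-component of the cross product OA x OB
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    pts = pts[np.lexsort((pts[:, 1], pts[:, 0]))]  # sort by x, then y
    def half_hull(points):
        chain = []
        for p in points:
            while len(chain) >= 2 and turn(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain[:-1]
    hull = np.array(half_hull(pts) + half_hull(pts[::-1]))
    x, y = hull[:, 0], hull[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# Ten stand-in "sibling" embeddings in 768 dimensions (synthetic)
rng = np.random.default_rng(0)
area = convex_hull_area(pca_2d(rng.normal(size=(10, 768))))
```

With scipy available, `scipy.spatial.ConvexHull` on the 2D projection would give the same area via its `volume` attribute.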

problemswithcosineasameasureofembeddingsimilarityforhighfrequencywords/images.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:101d9fc5ffbba5edc039398d25252ced7ca48c11a99897d1d2c1db566573b2a2
size 2615621

problemswithcosineasameasureofembeddingsimilarityforhighfrequencywords/layout.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a713705d797135c28521c8dd7500708e3e51019d743a1d80458e41086da53eff
size 448699

proceedingsofthe60thannualmeetingoftheassociationforcomputationallinguisticsvolume2shortpapers/be4f02cd-1c58-461a-9912-17a1e0763350_content_list.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cedb0c10e8bd4a8beb4be5394318e037b03d54292c83c04befc6632bf0e5af9d
size 194364

proceedingsofthe60thannualmeetingoftheassociationforcomputationallinguisticsvolume2shortpapers/be4f02cd-1c58-461a-9912-17a1e0763350_model.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dc553765d0e84ba487876a51831463be99bbd94158afa740c61026a544d6a7e2
size 239658

proceedingsofthe60thannualmeetingoftheassociationforcomputationallinguisticsvolume2shortpapers/be4f02cd-1c58-461a-9912-17a1e0763350_origin.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:74b492c4ea8b055c841b87d1f08eab5d09c1778c3468a4653faabf673406b949
size 1437215

proceedingsofthe60thannualmeetingoftheassociationforcomputationallinguisticsvolume2shortpapers/full.md ADDED
The diff for this file is too large to render. See raw diff.

proceedingsofthe60thannualmeetingoftheassociationforcomputationallinguisticsvolume2shortpapers/images.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:73ae7236418e99348f9ed4f10c9ad7b2de342ce5105cf334fc5892f538830923
size 53183

proceedingsofthe60thannualmeetingoftheassociationforcomputationallinguisticsvolume2shortpapers/layout.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e314cb3d5fea92c79a3a004da9812f0be8c4787d71c7416a82a555867a6ee70b
size 808336

regiondependenttemperaturescalingforcertaintycalibrationandapplicationtoclassimbalancedtokenclassification/38403f09-5f85-486e-b90e-36bbde7811d1_content_list.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:285c7fac2d3952d9ff5facafcd503e1c2a981e1b4e98b9205b24a24f43b3e524
size 48591

regiondependenttemperaturescalingforcertaintycalibrationandapplicationtoclassimbalancedtokenclassification/38403f09-5f85-486e-b90e-36bbde7811d1_model.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:399a32162cc80b6ae913b94e1f4300c2c9f0a35d93f79de28dc2ac31e83b130f
size 58633

regiondependenttemperaturescalingforcertaintycalibrationandapplicationtoclassimbalancedtokenclassification/38403f09-5f85-486e-b90e-36bbde7811d1_origin.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:264df91cfc0c3c3283f7b9e795701667a715aa9d58a1b4b92cac818f761b4df9
size 336038

regiondependenttemperaturescalingforcertaintycalibrationandapplicationtoclassimbalancedtokenclassification/full.md ADDED
@@ -0,0 +1,199 @@

# Region-dependent temperature scaling for certainty calibration and application to class-imbalanced token classification

Hillary Dawkins

University of Guelph, Canada
Vector Institute, Toronto, Canada

hdawkins@uoguelph.ca

Isar Nejadgholi

National Research Council Canada

Ottawa, Canada

isar.nejadgholi@nrc-cnrc.gc.ca

# Abstract

Certainty calibration is an important goal on the path to interpretability and trustworthy AI. Particularly in the context of human-in-the-loop systems, high-quality low to mid-range certainty estimates are essential. In the presence of a dominant high-certainty class, for instance the non-entity class in NER problems, existing calibration error measures are completely insensitive to potentially large errors in this certainty region of interest. We introduce a region-balanced calibration error metric that weights all certainty regions equally. When low and mid certainty estimates are taken into account, calibration error is typically larger than previously reported. We introduce a simple extension of temperature scaling, requiring no additional computation, that can reduce both traditional and region-balanced notions of calibration error over existing baselines.
# 1 Introduction
Calibrating the certainty estimates of neural networks is of the utmost importance for interpretability of results and building trust in AI systems. Ideally, if a model outputs some prediction with an associated probability, we would like to interpret that quantity as the probability of a correct prediction (i.e. as a meaningful certainty estimate) (Zadrozny and Elkan, 2001; Niculescu-Mizil and Caruana, 2005). However, contemporary models are consistently over-confident in their output probabilities (Guo et al., 2017).
Guo et al. (2017) demonstrate that overconfident models can arise by overfitting to the Negative Log-Likelihood (NLL) loss, without overfitting to the classification accuracy. Many calibration methods involve modulating the output logits according to a prescribed functional form. The parameters of the modulation function are learned on the associated validation set by minimizing the NLL loss (thereby correcting the overfit). Guo et al. (2017), as well as many
|
| 27 |
+
|
| 28 |
+
subsequent studies (e.g. Müller et al., 2019; Gupta et al., 2021), showcase the surprising effectiveness of temperature scaling, a single-parameter modulation function.
The calibration error is reported as a single quantity computed on the associated test set. Typically, the error is a sum of observed errors across the certainty landscape, visualized using a reliability diagram (DeGroot and Fienberg, 1983; Niculescu-Mizil and Caruana, 2005). However, not all regions contribute equally, especially for class-imbalanced datasets. Consider an output with a predicted certainty of $99.9\%$ vs. an expected actual certainty of $99.8\%$. In terms of human interpretability and intervention, this difference is negligible. Now consider $79\%$ predicted certainty vs. $71\%$ expected certainty. Clearly, the second case is the one we should care more about correcting. However, as we discuss in the following section, the presence of a dominant high-certainty class can cause the first discrepancy to contribute more to the reported calibration error than the second. High-quality mid-certainty estimates are most impactful for human-in-the-loop applications, yet current error measures are not sensitive to this region.
Here we take NER (Grishman and Sundheim, 1996; Yadav and Bethard, 2018; Li et al., 2020) as a case study for class-imbalanced token classification. Naturally, the "outside" or non-entity class dominates the dataset. In the following section, we introduce a region-balanced calibration error. We then introduce region-dependent temperature scaling, a calibration method that further reduces error over traditional temperature scaling, across various NER scenarios, without additional computation.
# 2 Region-balanced expected calibration error

Figure 1: Reliability diagrams contrasting two cases with equal ECE values. (a) Consistently good certainty estimates across all regions (ECE $= 0.016$, RBECE $= 0.016$). (b) Low-quality certainty estimates in the mid-certainty region (ECE $= 0.016$, RBECE $= 0.115$). Both cases have the same support distribution (yellow), where $90\%$ of all samples have an estimated certainty above 0.95. In each bin, the confidence (blue) is defined as the mean certainty of samples in the bin (i.e. the predicted certainty). The accuracy (red) is the proportion of samples with a correct prediction (i.e. the actual certainty). The calibration error per bin is the difference between predicted and actual certainty. In case (a), calibration error is consistently low across all certainty regions. In case (b), calibration error is high across the mid-certainty regions; however, because of the dominant support in the highest certainty bin, this error goes undetected by the ECE measure.

The most popular calibration error metric is the expected calibration error (ECE) (Naeini et al., 2015). A test set is partitioned into certainty bins, each containing samples with a certainty score $h$ within the bin boundaries. The uncalibrated certainty $h$ for a given sample is simply the output probability associated with the predicted class for that sample. Within each bin, we compare the actual and predicted certainty:
$$
\mathrm{ECE} = \sum_{i} \frac{n_i}{N} \left| \operatorname{acc}(B_i) - \operatorname{conf}(B_i) \right| \tag{1}
$$

where $\operatorname{conf}(B_i)$ is the predicted confidence score (the mean $h$ of samples in bin $B_i$), and $\operatorname{acc}(B_i)$ is the actual accuracy (the proportion of correct predictions in bin $B_i$). Each bin error is weighted by the bin support, where $n_i$ is the number of samples in $B_i$ and $N$ is the total number of samples. If a very high proportion of all samples have a high certainty estimate, only the final bin error makes a non-negligible contribution to the overall ECE. Refer to Figure 1 for an illustrated example.
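As a concrete sketch, eq. 1 can be computed as follows. The function and array names (`certs`, `correct`) are hypothetical, and the 20 equal-width bins match the experimental setup described later; this is not the paper's released code.

```python
# Sketch of ECE (eq. 1) with equal-width bins.
# certs:   per-sample predicted certainties h (max softmax probability)
# correct: per-sample 0/1 flags for whether the prediction was right
def ece(certs, correct, n_bins=20):
    N = len(certs)
    total = 0.0
    for i in range(n_bins):
        lo, hi = i / n_bins, (i + 1) / n_bins
        # the final bin is closed on the right so h = 1.0 is included
        in_bin = [j for j, h in enumerate(certs)
                  if lo <= h < hi or (i == n_bins - 1 and h == 1.0)]
        if not in_bin:
            continue
        conf = sum(certs[j] for j in in_bin) / len(in_bin)   # predicted certainty
        acc = sum(correct[j] for j in in_bin) / len(in_bin)  # actual certainty
        total += len(in_bin) / N * abs(acc - conf)           # support-weighted
    return total
```

Note how the support weight $n_i/N$ lets a dominant high-certainty bin mask large errors elsewhere: with 90% of samples nearly perfectly calibrated in the top bin, a 7.5-point gap in a mid bin contributes only 0.0075 to the total.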
One extension of ECE is to find bin partitions adaptively (Nixon et al., 2020), such that each bin contains an equal number of samples and each bin contributes equally to the overall error. The result is that many more bins exist in the high-certainty region, each of which is narrower in width. Essentially, adaptive-ECE reports the exact same error quantity as ECE in theory, but estimates the quantity using a finer-toothed comb. Neither metric is informative on lower or mid-certainty regions if support is dominated by a high-certainty class.
Maximum expected calibration error (MECE) (Naeini et al., 2015) partially tells the story of low-certainty regions by reporting the maximum bin error. However, MECE is overly sensitive to outlier bins. For example, if a single sample happens to fall in the $0-5\%$ certainty bin, and it has the correct predicted class, we have $\mathrm{MECE} > .95$, which is clearly an unusable characterization of the calibration error as a whole.
Here we consider Region-balanced ECE (RBECE) as a way to characterize calibration error weighted evenly across certainty regions. Simply,
$$
\mathrm{RBECE} = \frac{1}{|\Theta|} \sum_{B_i \in \Theta} \left| \operatorname{acc}(B_i) - \operatorname{conf}(B_i) \right|. \tag{2}
$$
The error in each bin $B_{i}$ contributes to the total equally, subject to a threshold support requirement $n_i > \theta$ (to ensure $\operatorname{acc}(B_i)$ is well-defined). The set of bins that meet this requirement is denoted by $\Theta$.
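A minimal sketch of eq. 2, under the same hypothetical input conventions as the ECE sketch above; `theta` is the per-bin support threshold ($\theta = 40$ in the experiments, lowered here only for small examples).

```python
# Sketch of RBECE (eq. 2): every sufficiently supported bin contributes
# equally, regardless of how many samples it holds.
def rbece(certs, correct, n_bins=20, theta=40):
    errors = []
    for i in range(n_bins):
        lo, hi = i / n_bins, (i + 1) / n_bins
        in_bin = [j for j, h in enumerate(certs)
                  if lo <= h < hi or (i == n_bins - 1 and h == 1.0)]
        if len(in_bin) <= theta:          # bin must satisfy n_i > theta to enter Θ
            continue
        conf = sum(certs[j] for j in in_bin) / len(in_bin)
        acc = sum(correct[j] for j in in_bin) / len(in_bin)
        errors.append(abs(acc - conf))    # unweighted bin error
    return sum(errors) / len(errors) if errors else 0.0
```

On the masking example from the ECE sketch, the mid-certainty bin's 7.5-point gap now carries the same weight as the dominant top bin, so RBECE is several times larger than ECE.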
Alternative threshold requirements, such as the variance in $\operatorname{conf}(B_i)$ vs. bin size, could be explored in the future. Another possible extension is custom bin-weighting according to a certainty region of interest for a given application (e.g. for human-in-the-loop systems with an intervention criterion).
# 3 Region-dependent temperature scaling

The idea underlying all calibration methods is generally to modulate overconfident predictions. In traditional temperature scaling (TS), a higher temperature means stronger modulation. Temperature is taken to be a constant, meaning all samples are treated with the same modulation strength.
The idea underlying region-dependent temperature scaling (RD-TS) is simply that the most confident predictions likely need greater modulation than less confident predictions, and therefore temperature should depend on the uncalibrated certainty. If we consider the hypothetical limit of a $0\%$ confidence score, it is intuitive that this does not need any modulation. To investigate this idea empirically, we apply TS to subsets of the OntoNotes dataset, partitioned according to uncalibrated confidence scores. For each confidence region, the ideal temperature is shown in Figure 2. As expected, temperature increases as a function of confidence. A linear fit sufficiently describes the dependence. Within uncertainty, the intercept is equal to the expected value of 1 ($T(h = 0) = 1$, corresponding to no modulation).
To apply RD-TS, uncalibrated logits $\vec{a}$ are scaled as $\vec{q} = \vec{a} / T(h)$ to obtain calibrated logits $\vec{q}$. Temperature is now a function of confidence, $T(h) = mh + 1$, where $h = \max(\text{softmax}(\vec{a}))$ is the probability estimate for the predicted class on each sample. The slope $m$ is the single parameter controlling modulation strength.
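Applied to a single sample, the scaling above can be sketched as follows (pure Python, hypothetical function names; assumes the slope $m$ is already known):

```python
import math

def softmax(a):
    # numerically stable softmax over a list of logits
    z = [math.exp(x - max(a)) for x in a]
    s = sum(z)
    return [x / s for x in z]

def rdts(logits, m):
    h = max(softmax(logits))       # uncalibrated certainty of the predicted class
    T = m * h + 1.0                # region-dependent temperature T(h) = m*h + 1
    return softmax([a / T for a in logits])
```

Since $T(h) \geq 1$ for $m \geq 0$ and is a single per-sample scalar, dividing all logits by it preserves their ranking: the argmax (and thus model accuracy) is unchanged, while more confident samples receive stronger modulation.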
To estimate $m$, one could repeat temperature scaling on multiple data subsets, collect data points, and fit the slope as in Figure 2. However, this method increases computational overhead. Instead, let us estimate $m$ from the original TS constant $T_{0}$ and some knowledge of the validation dataset which was used to compute $T_{0}$. Each sample in the validation set has an ideal temperature, here taken to be of the form $T_{i} = mh_{i} + 1$. Assuming each sample contributed to the found $T_{0}$ equally, $T_{0} = \frac{1}{N}\sum_{i}^{N}(mh_{i} + 1)$. Given access to the validation set, this sum can be computed exactly to find $m$. However, we can further approximate the sum by loosely assuming that the data has a high proportion of samples (say $\approx 90\%$) with very high certainty estimates (say $\approx .99$ on average). Then the sum is dominated by the leading term, $T_{0} \approx .9(.99m + 1)$. This quick sketch is sufficient to achieve good error reduction over the baseline TS method. The numerical exactness is not too important; rather, the general signature of a high proportion of high-certainty samples is sufficient. We take this further approximation to gain the advantage that nothing specifically needs to be known about the calibration dataset. That is, if a large pre-trained model has been calibrated on a large or private dataset, and the corresponding temperature $T_{0}$ is known, RD-TS can be applied to your model outputs without access to the calibration data or further computation.

Figure 2: The OntoNotes 5.0 validation set is split into 14 bins according to uncalibrated confidence scores $h$. For each subset, regular temperature scaling is applied to find the ideal $T_{0}$ as a function of average confidence. Blue: linear regression fit of the empirical data ($m = .402 \pm .108$, $b = .943 \pm .073$ with a $95\%$ confidence interval). Red: region-dependent temperature scaling parameter $T(h)$ as determined by our protocol (see steps 1-3). Both methods produce equivalent results within the uncertainty.
In summary, the RD-TS method is performed as follows:

1. Perform regular temperature scaling to obtain $T_{0}$, or obtain a previously published $T_{0}$ for your model.
2. Find the linear dependence parameter $m = (T_0 - .9) / .89$.
3. Apply calibration to logits $\vec{a}$ as $\vec{q} = \vec{a} / T(h)$, with $T(h) = mh + 1$.
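The three steps above reduce to a one-line conversion from a published temperature to the slope. A sketch, using $T_0 = 1.28$ from the "Classic" row of Table 1 as the illustrative input:

```python
def slope_from_t0(T0):
    # Step 2: m = (T0 - .9) / .89, inverting the sketch T0 ≈ .9(.99 m + 1)
    return (T0 - 0.9) / 0.89

T0 = 1.28                      # Step 1: previously found via regular TS
m = slope_from_t0(T0)          # ≈ .427 for this T0
T_high = m * 0.99 + 1.0        # Step 3: temperature for a very confident sample
T_mid = m * 0.50 + 1.0         # weaker modulation for a mid-certainty sample
```

Samples near $h = .99$ are thus scaled by a temperature slightly above the original $T_0$, while mid-certainty samples are scaled far more gently, which is exactly the region-dependence RD-TS is after.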
RD-TS is a simple extension of temperature scaling which requires no additional training. Like temperature scaling, RD-TS cannot change the predicted class or model accuracy (unlike some of its other generalizations, such as vector and matrix scaling).
<table><tr><td>Scenario</td><td>Uncal.</td><td>TS</td><td>VS</td><td>MS</td><td>WTS</td><td>RD-TS</td></tr><tr><td>Classic</td><td>.09328</td><td>.02543 (T0=1.28)</td><td>.07040</td><td>.06940</td><td>.05236</td><td>.02151 (m = .426)</td></tr><tr><td>Rare &amp; emerging</td><td>.09878</td><td>.05777 (T0=1.39)</td><td>.07490</td><td>.04932</td><td>.11559</td><td>.03549 (m = .550)</td></tr><tr><td>Fine-grained</td><td>.05333</td><td>.02179 (T0=1.12)</td><td>.03440</td><td>.04628</td><td>.03278</td><td>.01263 (m = .243)</td></tr><tr><td>Specialized</td><td>.07088</td><td>.04147 (T0=1.29)</td><td>.03844</td><td>.03590</td><td>.03820</td><td>.02781 (m = .439)</td></tr><tr><td>Sparse training</td><td>.09683</td><td>.07820 (T0=1.10)</td><td>.11653</td><td>.09528</td><td>.06279</td><td>.04110 (m = .229)</td></tr><tr><td>Differing sources</td><td>.05730</td><td>.05960 (T0=1.09)</td><td>.10824</td><td>.08470</td><td>.05551</td><td>.04019 (m = .214)</td></tr></table>

Table 1: Region-balanced expected calibration error (RBECE); refer to eq. 2.
<table><tr><td>Scenario</td><td>Uncal.</td><td>TS</td><td>VS</td><td>MS</td><td>WTS</td><td>RD-TS</td></tr><tr><td>Classic</td><td>.02001</td><td>.00862 (T0=1.28)</td><td>.01359</td><td>.01083</td><td>.00962</td><td>.00155 (m = .426)</td></tr><tr><td>Rare &amp; emerging</td><td>.04278</td><td>.02323 (T0=1.39)</td><td>.02585</td><td>.01580</td><td>.04712</td><td>.00949 (m = .550)</td></tr><tr><td>Fine-grained</td><td>.02287</td><td>.00783 (T0=1.12)</td><td>.01587</td><td>.01786</td><td>.01462</td><td>.00839 (m = .243)</td></tr><tr><td>Specialized</td><td>.01555</td><td>.00617 (T0=1.29)</td><td>.00608</td><td>.00573</td><td>.00631</td><td>.00651 (m = .439)</td></tr><tr><td>Sparse training</td><td>.03267</td><td>.02190 (T0=1.10)</td><td>.03113</td><td>.02599</td><td>.01645</td><td>.01798 (m = .229)</td></tr><tr><td>Differing sources</td><td>.00950</td><td>.00723 (T0=1.09)</td><td>.01211</td><td>.01344</td><td>.01020</td><td>.00383 (m = .214)</td></tr></table>

Table 2: Expected calibration error (ECE); refer to eq. 1.
<table><tr><td>Dataset</td><td>h|(P=.9)</td><td>P|(h=.99)</td></tr><tr><td>OntoNotes</td><td>.998</td><td>.964</td></tr><tr><td>W-NUT 17</td><td>.997</td><td>.953</td></tr><tr><td>Few-nerd</td><td>.972</td><td>.801</td></tr><tr><td>BC2GM</td><td>.997</td><td>.968</td></tr><tr><td>OntoNotes (tc)</td><td>.999</td><td>.978</td></tr></table>

Table 3: The mean certainty $h$ of the top .9 most certain samples, $h|(P=.9)$, and the proportion of samples we need to take such that the mean certainty is .99, $P|(h=.99)$. All datasets refer to the corresponding validation set, which is used for calibration. As shown, all datasets have the general signature of a high proportion of high-certainty samples, yet the exact numerical values can deviate from our sketch.
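Under our reading of the caption, the two quantities can be computed from a list of per-sample certainties as follows; the helper name and the exact tie-handling are assumptions, not the paper's code.

```python
def dataset_signature(certs):
    # Sort certainties from most to least confident.
    s = sorted(certs, reverse=True)
    n = len(s)
    # h|(P=.9): mean certainty of the top 90% most certain samples.
    top = s[: max(1, round(0.9 * n))]
    h_given_p = sum(top) / len(top)
    # P|(h=.99): largest proportion of top samples whose mean certainty
    # is still at least .99 (the running mean decreases as samples are added).
    p_given_h = 0.0
    running = 0.0
    for k, h in enumerate(s, start=1):
        running += h
        if running / k >= 0.99:
            p_given_h = k / n
    return h_given_p, p_given_h
```

For OntoNotes this would yield values near (.998, .964), i.e. close to the (.99, .9) figures assumed in the sketch derivation of $m$.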
# 4 Experimental results

# 4.1 Baseline methods

As RD-TS is a simple extension of regular temperature scaling, we focus comparison on similar post-training parametric calibration methods:

Temperature scaling (TS): Uncalibrated logits $\vec{a}$ are scaled by a single constant $T_{0}$ (as $\vec{q} = \vec{a} / T_{0}$) before softmax is applied to obtain calibrated probability estimates over all classes (Guo et al., 2017).

Vector (generalized Platt) scaling (VS): A generalization of TS such that logits are scaled by $2k$ learned parameters, $\vec{q} = \vec{v} \circ \vec{a} + \vec{b}$, where $k$ is the number of classes (Platt, 1999; Niculescu-Mizil and Caruana, 2005; Guo et al., 2017).

Matrix scaling (MS): A further generalized linear transformation such that logits are scaled by $k^2 + k$ learned parameters, $\vec{q} = M\vec{a} + \vec{b}$ (Guo et al., 2017).

Weighted temperature scaling (WTS): TS using a class-weighted NLL loss during convergence (Obadinma et al., 2021).
# 4.2 Datasets

We take the NER task as a case study. Datasets represent several important scenarios in token classification settings more broadly:
Classic: The OntoNotes 5.0 NER dataset (Weischedel et al., 2013) represents a baseline "classic" scenario involving plentiful training and calibration data from robust sources.

Rare and emerging named entities: The W-NUT NER dataset$^{1}$ (Derczynski et al., 2017) is gathered from noisy social media data which contains difficult entities (e.g. "kktny") due to informal and evolving language.

Fine-grained and few-shot: Few-nerd$^{2}$ (Ding et al., 2021) is a challenging few-shot NER dataset with 66 fine-grained entity types (e.g. "art-film").

Specialized language: The BioCreative II Gene Mention Recognition (BC2GM) dataset$^{3}$ (Smith et al., 2008) is composed of scientific text where the named entities are gene mentions.

Sparse training data: OntoNotes telephone call data is used for training, while the full OntoNotes dataset is used for calibration and evaluation. The telephone call subset is a sparse representation since it is very heavily skewed toward the non-entity "outside" class, and entity mentions are concentrated on "person" and "location", compared to the full OntoNotes dataset (which generally contains much richer entity mentions from news sources).

Differing language sources: OntoNotes broadcast news data is used for training, and telephone call data is used for calibration and evaluation. Broadcast news language is professional and grammatically correct. Telephone call language is casual, fragmented, and at times incoherent.
# 4.3 Implementation notes

All NER models use DistilBERT$^{4}$ (Sanh et al., 2019) as the base pre-trained model, fine-tuned for NER using the train dataset for each scenario as described above. Further details and performance on the NER task are provided in Appendix A.

Calibration is performed using the uncalibrated logits of the associated validation set as model inputs. Calibration parameters are learned by minimizing the NLL (or weighted NLL) loss for 50 epochs (using SGD with learning rate 0.01 and momentum 0.9). Calibration error is computed on the associated test set. To compute both ECE (eq. 1) and RBECE (eq. 2), the number of bins is set to 20. To compute RBECE, the threshold for support per bin is set to $\theta = 40$. The code needed to reproduce these results is made publicly available$^{5}$. All datasets are publicly available with preset train/validation/test data splits.
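The calibration fit itself minimizes the NLL of validation logits with respect to the temperature. The paper uses SGD; as a minimal dependency-free stand-in that optimizes the same one-dimensional objective, a grid search over $T$ suffices to illustrate it (names and the search range are ours):

```python
import math

def nll(logits_batch, labels, T):
    # Mean negative log-likelihood of the true labels under
    # softmax(logits / T); lower is better.
    total = 0.0
    for a, y in zip(logits_batch, labels):
        z = [x / T for x in a]
        mx = max(z)
        logsum = mx + math.log(sum(math.exp(x - mx) for x in z))
        total += logsum - z[y]        # equals -log softmax(z)[y]
    return total / len(labels)

def fit_temperature(logits_batch, labels):
    # Exhaustive search over T in [1.0, 3.0] with step .01 (illustrative).
    ts = [1.0 + 0.01 * i for i in range(201)]
    return min(ts, key=lambda T: nll(logits_batch, labels, T))
```

On overconfident logits (the model assigns high probability even when sometimes wrong), the fitted temperature comes out above 1, softening the output distribution toward the empirical accuracy.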
# 4.4 Results

Experimental results are summarized in Tables 1 and 2. When low and mid-certainty regions are taken into account by the RBECE, calibration error is larger than previously thought (as reported by ECE). In all scenarios, RD-TS produces the smallest RBECE (in many cases quite substantially). Additionally, RD-TS improves the traditional ECE in the majority of scenarios. The results show that RD-TS is an effective extension of TS across a range of temperature $(T_0)$ values.
Recall that in Section 3 we sketched a way to estimate the modulation parameter $m$; this approximation follows from assuming that a high proportion of all samples in the calibration set (say $\approx .9$) have a high certainty estimate (say $\approx .99$ on average). We claim that the numerical exactness of these values is not too important (and therefore RD-TS outperforms TS across a range of datasets). This claim is supported empirically (Table 3).
# 5 Discussion and Conclusion

Good quality mid-range certainty estimates are essential for productive human-model interactions. Despite this, existing calibration error measures can be insensitive to all but the highest certainty regions. We propose a region-balanced error metric to probe this unreported information. When low and mid-certainty regions are taken into account, greater calibration errors are revealed.
Further, we explore the idea of a certainty-dependent temperature. While previous generalizations of TS, such as vector and matrix scaling, allow certainty dependence by increasing the number of learned parameters, these methods are generally outperformed by TS (Guo et al., 2017). Rather than allowing a complicated certainty dependence, we enforce a simple linear dependence (motivated by intuition and an empirical example) without introducing any learnable parameters. Unlike vector and matrix scaling, RD-TS cannot change the relative ranking of logits, and therefore model accuracy is retained (in single-label settings). One line of future work could be to apply RD-TS on top of weighted temperature scaling, a method known to decrease variance in calibration error among classes (Obadinma et al., 2021). Another line of work would be to investigate whether improved certainty estimates can increase model accuracy (in multi-label settings where predictions are applied by meeting a certainty threshold), especially in out-of-domain problems.
Finally, it is important to note that our discussion of a region-balanced error measure, as well as our sketch derivation of the RD-TS method, are generally applicable to any problem with a dominant proportion of high-certainty predictions. This situation arises in any token classification problem with a dominant "easy" class, as is the case in NER; however, it can equally occur in class-balanced settings. Therefore, region-dependent temperature scaling can find utility beyond NER, token classification, or class-imbalanced situations.
# Ethical Considerations

We proposed a novel method to calibrate class-imbalanced token classifiers, and demonstrated the method for NER models. This calibration method is a step toward responsible use of AI by offering a measure of reliability, but also has risks that should be considered from an ethical point of view. Calibrated scores are a measure of transparency, and users can interpret a well-calibrated model better. However, all transparency methods expose AI systems to malicious attacks by providing more information about the internal workings of the system. This risk should be taken into account in sensitive tasks, e.g. when an NER model is used to extract personally identifiable information for privacy reasons. Also, users should be warned that a low calibration error does not guarantee robustness in out-of-domain settings. Therefore, in the case of safety-critical tasks such as medical applications of NER, a low calibration error should be interpreted with caution.
Further, low calibration errors should not be used to justify inherently unethical tasks or those out of the scope of the capabilities of NLP technologies. Every task should be evaluated in terms of feasibility and ethical use regardless of reliability and transparency of trained models. It is also important to keep in mind that a well-calibrated model can become miscalibrated as the data changes, and continuous calibration is needed to deal with the ever-changing nature of language.
# References

Morris H. DeGroot and Stephen E. Fienberg. 1983. The comparison and evaluation of forecasters. Journal of the Royal Statistical Society. Series D (The Statistician), 32(1/2):12-22.

Leon Derczynski, Eric Nichols, Marieke van Erp, and Nut Limsopatham. 2017. Results of the WNUT2017 shared task on novel and emerging entity recognition. In Proceedings of the 3rd Workshop on Noisy User-generated Text, pages 140-147, Copenhagen, Denmark. Association for Computational Linguistics.

Ning Ding, Guangwei Xu, Yulin Chen, Xiaobin Wang, Xu Han, Pengjun Xie, Haitao Zheng, and Zhiyuan Liu. 2021. Few-NERD: A few-shot named entity recognition dataset. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3198-3213, Online. Association for Computational Linguistics.

Ralph Grishman and Beth Sundheim. 1996. Message Understanding Conference-6: A brief history. In COLING 1996 Volume 1: The 16th International Conference on Computational Linguistics.

Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, pages 1321-1330. JMLR.org.

Kartik Gupta, Amir Rahimi, Thalaiyasingam Ajanthan, Thomas Mensink, Cristian Sminchisescu, and Richard Hartley. 2021. Calibration of neural networks using splines. In International Conference on Learning Representations.

J. Li, A. Sun, J. Han, and C. Li. 2020. A survey on deep learning for named entity recognition. IEEE Transactions on Knowledge & Data Engineering, (01):1-1.

Rafael Müller, Simon Kornblith, and Geoffrey E. Hinton. 2019. When does label smoothing help? In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.

Mahdi Pakdaman Naeini, Gregory F. Cooper, and Milos Hauskrecht. 2015. Obtaining well calibrated probabilities using Bayesian binning. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI'15, pages 2901-2907. AAAI Press.

Alexandru Niculescu-Mizil and Rich Caruana. 2005. Predicting good probabilities with supervised learning. In Proceedings of the 22nd International Conference on Machine Learning, ICML '05, pages 625-632, New York, NY, USA. Association for Computing Machinery.

Jeremy Nixon, Mike Dusenberry, Ghassen Jerfel, Timothy Nguyen, Jeremiah Liu, Linchuan Zhang, and Dustin Tran. 2020. Measuring calibration in deep learning.

Stephen Obadinma, Hongyu Guo, and Xiaodan Zhu. 2021. Class-wise calibration: A case study on COVID-19 hate speech. Proceedings of the Canadian Conference on Artificial Intelligence. https://caiac.pubpub.org/pub/vd3v9vby.

John C. Platt. 1999. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In Advances in Large Margin Classifiers, pages 61-74. MIT Press.

Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. ArXiv, abs/1910.01108.

Larry Smith, Lorraine K. Tanabe, Rie Johnson née Ando, Cheng-Ju Kuo, I-Fang Chung, Chun-Nan Hsu, Yu-Shi Lin, Roman Klinger, Christoph M. Friedrich, Kuzman Ganchev, Manabu Torii, Hongfang Liu, Barry Haddow, Craig A. Struble, Richard J. Povinelli, Andreas Vlachos, William A. Baumgartner, Lawrence Hunter, Bob Carpenter, Richard Tzong-Han Tsai, Hong-Jie Dai, Feng Liu, Yifei Chen, Chengjie Sun, Sophia Katrenko, Pieter Adriaans, Christian Blaschke, Rafael Torres, Mariana Neves, Preslav Nakov, Anna Divoli, Manuel Maña-López, Jacinto Mata, and W. John Wilbur. 2008. Overview of BioCreative II gene mention recognition. Genome Biol., 9(2):1-19.

Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, and Ann Houston. 2013. OntoNotes Release 5.0.

Vikas Yadav and Steven Bethard. 2018. A survey on recent advances in named entity recognition from deep learning models. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2145-2158, Santa Fe, New Mexico, USA. Association for Computational Linguistics.

Bianca Zadrozny and Charles Elkan. 2001. Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML '01, pages 609-616, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
# A NER performance

NER models were obtained by fine-tuning DistilBERT, using the default configuration, for 3 epochs (with learning rate 2e-5 and weight decay 0.01). The performance of all NER models is provided in Table A.1 for reference.

<table><tr><td>Dataset</td><td>P</td><td>R</td><td>F</td><td>A</td></tr><tr><td>OntoNotes</td><td>.778</td><td>.621</td><td>.691</td><td>.976</td></tr><tr><td>W-NUT 17</td><td>.543</td><td>.234</td><td>.327</td><td>.938</td></tr><tr><td>Few-nerd</td><td>.639</td><td>.679</td><td>.659</td><td>.906</td></tr><tr><td>BC2GM</td><td>.802</td><td>.844</td><td>.822</td><td>.965</td></tr><tr><td>OntoNotes (bc)</td><td>.711</td><td>.753</td><td>.732</td><td>.973</td></tr></table>

Table A.1: For all datasets that were used to train an NER model, we report the precision (P), recall (R), $F$-score (F) and accuracy (A) of the model on the corresponding test set.
regiondependenttemperaturescalingforcertaintycalibrationandapplicationtoclassimbalancedtokenclassification/images.zip
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:6f45fd788aae9bfe72f7157549872b5739df0f19c32e0935a246bc5b5799d93f
size 263918

regiondependenttemperaturescalingforcertaintycalibrationandapplicationtoclassimbalancedtokenclassification/layout.json
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:ece39a3b8f0b3f1d298f956a55aa1f42819992191f4d3149b74cf72c980a0d3b
size 251283
rethinkingandrefiningthedistinctmetric/3ccf4849-8e48-4b2b-a928-ebaeddc30422_content_list.json
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:af15a6a96ee967f2d9577320b1323ba1dbe0f17f9f5e8d5d99662733dc9c7640
size 73974

rethinkingandrefiningthedistinctmetric/3ccf4849-8e48-4b2b-a928-ebaeddc30422_model.json
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:831dcd3c924c1f75adb8fb57486d0abd9821a9521bfd6f220156572258414018
size 93618

rethinkingandrefiningthedistinctmetric/3ccf4849-8e48-4b2b-a928-ebaeddc30422_origin.pdf
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:a117f1737f3a53baa47b358f56b9b2e8b442612343f7b2a62444eac9829fd28b
size 553647

rethinkingandrefiningthedistinctmetric/full.md
ADDED
| 1 |
+
# Rethinking and Refining the Distinct Metric

Siyang Liu $^{1,2*}$ , Sahand Sabour $^{1*}$ , Yinhe Zheng $^{1,3}$ , Pei Ke $^{1}$ , Xiaoyan Zhu $^{1}$ , Minlie Huang $^{1\dagger}$

$^{1}$The CoAI group, DCST, Institute for Artificial Intelligence, State Key Lab of Intelligent Technology and Systems, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China.

$^{2}$Kuaishou, Beijing, China. $^{3}$Lingxin AI, Beijing, China.

liusyang641@gmail.com, Sahandfer@gmail.com, zhengyinhel@163.com

kepei1106@outlook.com, {zxy-dcs,aihuang}@tsinghua.edu.cn

# Abstract

The Distinct-$n$ score (Li et al., 2016) is a widely used automatic metric for evaluating diversity in language generation tasks. However, we observed that the original approach for calculating distinct scores has evident biases that tend to assign higher penalties to longer sequences. We refine the calculation of distinct scores by scaling the number of distinct tokens based on their expectations. We provide both empirical and theoretical evidence to show that our method effectively removes the biases existing in the original distinct score. Our experiments show that our proposed metric, Expectation-Adjusted Distinct (EAD), correlates better with human judgment in evaluating response diversity. To foster future research, we provide an example implementation at https://github.com/lsy641/Expectation-Adjusted-Distinct.
# 1 Introduction

The diversity of generated texts is an important evaluation aspect for dialogue generation models, since most dialogue models tend to produce general and trivial responses (e.g., "I don't know" or "Me too") (Li et al., 2016; Zhao et al., 2017). Several metrics have been proposed to evaluate text diversity, and the Distinct score (Li et al., 2016) is the most widely applied due to its intuitive nature and convenient calculation. It has become a de facto standard to report the Distinct score when comparing the performance of different models in terms of response diversity (Liu et al., 2016; Fan et al., 2018; Sabour et al., 2022; Wu et al., 2021c; Zhou et al., 2021; Wu et al., 2021b; Zhang et al., 2020; Zheng et al., 2020; Wang et al., 2020; Liu et al., 2021). Most previous works follow the initial approach of Li et al. (2016) to calculate the Distinct score, i.e., dividing the number of unique tokens (n-grams) by that of all tokens (n-grams). However, although reported to be effective, we surprisingly find that this naive approach tends to introduce a higher penalty for longer texts, leading to inaccurate evaluation of text diversity.

Figure 1: Distinct (original) and Expectation-Adjusted Distinct (new) scores against different sample lengths. In the figure, "natural" means that text sets are sampled from a real corpus, while "designated" means that the sets are sampled from a designated distribution. See details in Section 2.
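This counting procedure can be sketched in a few lines (a minimal illustration under our own naming, not the authors' reference implementation):

```python
def distinct_n(responses, n=1):
    """Original Distinct-n (Li et al., 2016): unique n-grams / total n-grams."""
    total = 0
    unique = set()
    for tokens in responses:
        # Collect the n-grams of each tokenized response.
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(ngrams)
        unique.update(ngrams)
    return len(unique) / total if total else 0.0

# A set with no repeated tokens scores exactly 1.
print(distinct_n([["i", "do", "not", "know"], ["me", "too"]]))  # → 1.0
```

The score falls as soon as any token repeats, which is exactly where the length bias enters: every repeated token grows the denominator without growing the numerator.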
We argue that the scaling factor of Distinct requires a comprehensive discussion for two reasons. First, prior research outside computational linguistics has demonstrated the shortcomings of Distinct's scaling approach (Malvern et al., 2004). Early applications of this metric exist in psycholinguistics, where researchers used it to assess the language diversity of children with communication disorders (Chotlos, 1944). Their research showed that as a child speaks more words, the score suffers an adverse decline: each extra word the child utters adds to the total number of words, yet it only increases the number of distinct words if that word has not been used before (Malvern et al., 2004; Chotlos, 1944). Second, we also observe an uncommon decline of this metric on both a natural corpus and a designated distribution sampler as the total number of words increases. As illustrated in Figure 1, the original Distinct cannot produce a stable value and decreases sharply with increasing utterance length under both the natural and designated distributions. However, since a qualified metric needs to support quantitative comparison among different methods, its value should stay invariant once the distribution of the appearing words is determined. This result is consistent with the findings of psychologists, indicating that such a scaling method does impose an unfair penalty.

Our contributions are summarized as follows:

1. We investigate the performance of the original Distinct and demonstrate that this metric is not sufficiently fair due to its scaling method. We also highlight the risks of using this metric for evaluating response diversity.
2. We propose Expectation-Adjusted Distinct (EAD), an improved version of Distinct based on the idea that the scaling factor should instead be the expectation of the number of distinct tokens.
3. Human evaluation shows that our metric correlates better with human judgments. We further discuss the drawbacks of this metric and suggest its feasible applications in practice.
# 2 Preliminary Discussion about Original Distinct

To demonstrate the shortcoming of the original Distinct, we illustrate the normalised Distinct scores on two types of texts at different lengths (Figure 1). The first type of text is sampled from an artificially designated distribution, while the other is sampled from a natural language corpus. In detail, we adopt $\mathbb{P}(X = k) = \int_0^v\frac{\lambda^ke^{-\lambda}}{vk!} d\lambda$ as our designated distribution, where $v$ is the vocabulary size. In our experiments, we use BERT's vocabulary size $(v = 30522)$ (Devlin et al., 2019). In addition, we leverage OpenSubtitles<sup>1</sup> as our natural language corpus. For each length, we sample 2000 sentences as a set and calculate the scores of each set.

As shown in Figure 1, we observe that the original Distinct scores decrease sharply with increasing utterance length under both distributions. Given the same distribution of words (original-designated), lengthier texts receive lower scores than shorter texts. We highlight this problem because it is extremely simple for models to control the length of generated texts with decoding tricks, e.g., adjusting the penalty coefficient (Vijayakumar et al., 2016). In such cases, it might seem that a model has outperformed other models on this metric; however, as shown by our experiments, it is not reasonable to conclude that such a model generates more diverse responses. The same observation can be made for the natural language corpus (original-natural). As language distribution is more complex than what we are able to formulate, we depict the performance of the original Distinct on several popular datasets in the Appendix. These cases indicate that the original Distinct is not a suitable metric for evaluating diversity.
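This length penalty is easy to reproduce with a small simulation (our sketch; the vocabulary size and sample counts are arbitrary, not the paper's exact setup). Even when tokens are drawn uniformly at random, i.e., the text is as diverse as it can possibly be, the original score drops as sentences grow longer:

```python
import random

random.seed(0)
V = 1000  # a small vocabulary keeps the simulation fast

def sample_set(length, n_sents=200):
    # Uniform random tokens: the most diverse text a "model" could produce.
    return [[random.randrange(V) for _ in range(length)] for _ in range(n_sents)]

def original_distinct(sents):
    tokens = [t for s in sents for t in s]
    return len(set(tokens)) / len(tokens)

short, longer = sample_set(5), sample_set(50)
# Same (uniform) token distribution, yet the longer set scores far lower.
assert original_distinct(longer) < original_distinct(short)
```

With 1000 tokens drawn from a 1000-word vocabulary the expected score is roughly $1 - e^{-1} \approx 0.63$, while with 10000 tokens it collapses toward $0.1$, mirroring the decline in Figure 1.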
# 3 Improving Original Distinct

# 3.1 Formula Derivation

The original Distinct score (Li et al., 2016) is measured as $\text{Distinct} = N/C$, where $N$ is the number of distinct tokens and $C$ is the total number of tokens. To improve the original scaling method, we propose that the scaling factor should be the expectation of the number of distinct words in the set of generated responses. Hence, the calculation becomes

$$
\mathrm{EAD} = \frac{N}{\mathbb{E}[\hat{N}]}. \tag{1}
$$

Supposing a set of generated responses $R$ with size $S$ to be evaluated, we let $l_{k,i}$ be the $i^{\text{th}}$ token of the $k^{\text{th}}$ response in $R$ and $t_k$ be the length of the $k^{\text{th}}$ response. The expectation $\mathbb{E}[\hat{N}]$ of the number of distinct words $\hat{N}$ appearing in $R$ is

$$
\begin{aligned}
\mathbb{E}[\hat{N}] &= \mathbb{E}\left[\sum_{j}^{V} \bigvee_{i,k}^{i=t_k,\,k=S} \mathbb{1}_{l_{k,i}=u_j}\right] \\
&= \sum_{j}^{V} \mathbb{P}\left(\left\{\bigvee_{i,k}^{i=t_k,\,k=S} \mathbb{1}_{l_{k,i}=u_j}\right\} = 1\right) \\
&= \sum_{j}^{V} \left(1 - \prod_{k}^{S} \mathbb{P}(l_{k,t_k} \neq u_j, \dots, l_{k,1} \neq u_j)\right), \tag{2}
\end{aligned}
$$

where $V$ is the vocabulary size and $\{u_1,\dots,u_V\}$ is the set of all tokens in the vocabulary.

As shown in Equation 2, the calculation requires us to know $\mathbb{P}(l_{k,t_k} \neq u_j, \dots, l_{k,1} \neq u_j)$. Though current models can easily estimate the probability of a word appearing in a sequence, it is hard to calculate the probability that a word never appears at any position of the sequence. Thus, there is no efficient way to calculate $\mathbb{P}(l_{k,t_k}\neq u_j,\dots,l_{k,1}\neq u_j)$. In addition, different language distributions have different $\mathbb{P}$, which leads to different expectations and makes the metric less general. Thus, we measure the upper bound of response diversity (i.e., a set of generated responses in which each token appears with equal probability) to calculate this expectation. We hypothesize that the scaling effect of the upper bound is approximately proportional to that of other sets of generated responses; therefore, it can replace the original scaling factor.

As mentioned above, we hypothesize

$$
\mathbb{E}[\hat{N}] \asymp \mathbb{E}[\hat{N}_{\text{upper}}],
$$

where $\mathbb{E}[\hat{N}_{\text{upper}}]$ can be calculated as

$$
\begin{aligned}
\mathbb{E}[\hat{N}_{\text{upper}}] &= \sum_{j}^{V} \left(1 - \prod_{k}^{S} \prod_{i}^{t_k} \mathbb{P}(l_{k,i} \neq u_j)\right) \\
&= V\left[1 - \left(\frac{V-1}{V}\right)^{C}\right]. \tag{3}
\end{aligned}
$$

Thus, the EAD score is calculated as:

$$
\mathrm{EAD} = \frac{N}{V\left[1 - \left(\frac{V-1}{V}\right)^{C}\right]}. \tag{4}
$$

We discuss more details on the formula's properties and the vocabulary size in the Appendix.
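Equation 4 translates directly into code. Below is a minimal sketch of ours (the authors' reference implementation is linked in the abstract); `vocab_size` defaults to BERT's vocabulary size as used in Section 2, and the `n` parameter for n-grams is our own generalization:

```python
def expectation_adjusted_distinct(responses, vocab_size=30522, n=1):
    """EAD (Eq. 4): distinct n-grams scaled by their expected count
    under the uniform upper-bound distribution over V vocabulary items."""
    total = 0
    unique = set()
    for tokens in responses:
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(ngrams)
        unique.update(ngrams)
    if total == 0:
        return 0.0
    # E[N_upper] = V * (1 - ((V - 1) / V) ** C), with C the total n-gram count.
    expected = vocab_size * (1.0 - ((vocab_size - 1) / vocab_size) ** total)
    return len(unique) / expected
```

For $C \ll V$ the scaling factor is close to $C$, so a fully distinct set scores near 1 regardless of its length, which is exactly the stability the original metric lacks.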
# 3.2 Experimental Verification

# 3.2.1 Evaluation Approach

We collect responses from ten dialogue generation methods as reported by Wang et al. (2021) and compare EAD with the original uni-gram Distinct (Li et al., 2016). More details on these ten methods can be found in the Appendix.

We follow previous works (Tao et al., 2018; Sellam et al., 2020) to evaluate the correlation of each automatic metric with human judgments. Specifically, the Pearson, Spearman, and Kendall's Tau correlation coefficients are reported. Pearson's correlation estimates linear correlation, while Spearman's and Kendall's correlations estimate monotonic correlation, with Kendall's Tau usually being more robust to abnormal values. We used $\mathrm{SciPy}^2$ for correlation calculation and significance tests.
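To make the distinction between the two correlation families concrete, here is a stdlib-only sketch of what `scipy.stats.pearsonr` and `spearmanr` compute (statistics only; no p-values, and no tie handling in the rank step):

```python
import statistics

def pearson(x, y):
    # Linear correlation: covariance over the product of standard deviations.
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    # Monotonic correlation: Pearson computed on ranks (ties ignored here).
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0.0] * len(v)
        for rank, idx in enumerate(order, start=1):
            r[idx] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

# A monotonic but non-linear relation: Spearman is exactly 1, Pearson is not.
x, y = [1, 2, 3, 4], [1, 8, 27, 64]
print(pearson(x, y), spearman(x, y))
```

The example shows why both families are reported: a metric can track human rankings perfectly (Spearman/Kendall) while still deviating from a strictly linear relation (Pearson).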
# 3.2.2 Datasets

Our experiments use two open-domain dialog generation benchmark datasets: DailyDialog (Li et al., 2017), a high-quality dialog dataset collected from daily conversations, and OpenSubtitles<sup>3</sup>, which contains dialogs collected from movie subtitles (see Table 1 for details). We follow the data processing procedures reported by Wang et al. (2021).

<table><tr><td></td><td>Train</td><td>Val</td><td>Test</td></tr><tr><td>DailyDialog</td><td>65.8K</td><td>6.13K</td><td>5.80K</td></tr><tr><td>OpenSubtitles</td><td>1.14M</td><td>20.0K</td><td>10.0K</td></tr></table>

Table 1: Dataset statistics.
# 3.2.3 Preliminary Observations

Based on the obtained results (Table 2), Expectation-Adjusted Distinct has a clear edge over the original Distinct. First, the contrast between the diversity of responses generated by different methods is highlighted more effectively by EAD (e.g., though AdaLab obtains the highest diversity score using Distinct (3.96), its difference from the other methods is not as evident as with its EAD score (9.63)). Second, contrary to Distinct, EAD provides a more accurate evaluation of response diversity. For instance, the Distinct scores for CP and UL are both 2.35, while responses generated by UL are found to be more diverse than those of CP using EAD $(5.23 > 4.80)$. Given that the average length of responses generated by UL is larger than that of CP, Distinct's bias towards models that generate shorter sentences becomes evident. These observations are consistent across both datasets.
# 3.2.4 Correlation Results

We recruited crowdsourcing workers to evaluate the diversity of the selected methods<sup>4</sup>. For each method, we randomly sampled 100 subsets of 15 responses from its set of generated responses. The response sets of all methods, given the same query set, were packaged together as an evaluation set. We asked each crowdsourcing worker to assign a diversity score to every response group in the evaluation set, and each group was evaluated by at least 3 workers. To ensure the quality of our annotations, we calculated the score of each set as the average of the workers' scores and filtered out workers whose scores had an insufficient correlation with the average (Pearson correlation $< 0.65$). We acknowledge that building a scoring standard for annotating language diversity is challenging. Hence, we did not require our workers to give an absolute score for each set. Instead, we asked them to highlight the contrast between different sets by assigning values that linearly reflect the difference in response diversity between the sets. For instance, the two score sets $\{1,2,2\}$ and $\{2,5,5\}$ express the same evaluation since they show the same contrast. We then normalized the scores to the $[0, 10]$ range.

<table><tr><td rowspan="2">Method</td><td colspan="4">DailyDialog</td><td colspan="4">OpenSubtitles</td></tr><tr><td>Avg Length</td><td>Distinct</td><td>EAD</td><td>Human</td><td>Avg Length</td><td>Distinct</td><td>EAD</td><td>Human</td></tr><tr><td>FL(2017)</td><td>9.33</td><td>2.38</td><td>5.09</td><td>5.18</td><td>8.56</td><td>3.19</td><td>9.51</td><td>4.91</td></tr><tr><td>NL(2020)</td><td>9.99</td><td>1.66</td><td>3.70</td><td>4.54</td><td>8.40</td><td>3.24</td><td>9.52</td><td>5.02</td></tr><tr><td>CP(2017)</td><td>8.67</td><td>2.35</td><td>4.80</td><td>5.08</td><td>8.74</td><td>3.11</td><td>9.44</td><td>5.20</td></tr><tr><td>LS(2016)</td><td>8.50</td><td>1.48</td><td>2.98</td><td>5.28</td><td>9.04</td><td>2.77</td><td>8.64</td><td>5.04</td></tr><tr><td>D2GPo(2019)</td><td>9.15</td><td>1.26</td><td>2.65</td><td>4.92</td><td>8.77</td><td>2.07</td><td>6.32</td><td>4.89</td></tr><tr><td>CE(2020)</td><td>8.29</td><td>1.67</td><td>3.31</td><td>4.14</td><td>9.21</td><td>2.55</td><td>8.08</td><td>4.95</td></tr><tr><td>F²(2020)</td><td>8.71</td><td>1.40</td><td>2.87</td><td>4.88</td><td>8.60</td><td>2.89</td><td>8.67</td><td>4.52</td></tr><tr><td>UL(2019)</td><td>9.93</td><td>2.35</td><td>5.23</td><td>5.35</td><td>8.09</td><td>2.84</td><td>8.10</td><td>5.00</td></tr><tr><td>Face(2019)</td><td>10.62</td><td>1.63</td><td>3.79</td><td>5.26</td><td>9.11</td><td>3.31</td><td>10.41</td><td>5.31</td></tr><tr><td>AdaLab(2021)</td><td>11.30</td><td>3.96</td><td>9.63</td><td>5.92</td><td>8.12</td><td>4.78</td><td>13.68</td><td>5.32</td></tr><tr><td>Pearson</td><td>-</td><td>0.67‡</td><td>0.70‡</td><td>1.00</td><td>-</td><td>0.56†</td><td>0.60†</td><td>1.00</td></tr><tr><td>Spearman</td><td>-</td><td>0.42†</td><td>0.62†</td><td>1.00</td><td>-</td><td>0.62†</td><td>0.65‡</td><td>1.00</td></tr><tr><td>Kendall's Tau</td><td>-</td><td>0.27</td><td>0.47†</td><td>1.00</td><td>-</td><td>0.51‡</td><td>0.56‡</td><td>1.00</td></tr></table>

Table 2: Results of automatic and human evaluation of corpus-level diversity methods. Pearson/Spearman/Kendall's Tau indicates the corresponding correlation. Correlation scores marked with $\dagger$ (i.e., $p$-value $< 0.1$) and $\ddagger$ (i.e., $p$-value $< 0.05$) correlate significantly with human judgments.

Then, we calculated the correlation between each metric's scores and the crowdsourced values for all the methods. The results are provided in Table 2. The evaluation results indicate that our proposed EAD is more consistent with human judgments for measuring response diversity, as it shows the highest correlation with human evaluations across all correlation measures (Pearson/Spearman/Kendall's Tau) on both datasets.
# 4 EAD in Practice

As EAD is based on an idealized assumption that does not take the actual language distribution into account, we further discuss this problem and suggest a practical way of applying Expectation-Adjusted Distinct in real situations. Before applying EAD, it is necessary to explore the relationship between the score and text length (Figure 1) and check the performance of EAD on the training data. In our experience, if the training data comes from large-scale open-domain sources such as OpenSubtitles and Reddit, EAD maintains its value across different lengths; hence, it can be directly used for evaluating models trained on these datasets. However, our experiments on datasets such as Twitter showed a decline in EAD on lengthier texts. This is probably due to the input length limitations of these platforms (e.g., 280 characters on Twitter), which induce users to pack as much information as possible into a shorter length. In such situations, it is unfair to use EAD to evaluate methods that tend to generate lengthier texts.
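The check described above can be sketched as follows (our illustration; the bucket width, helper names, and the unigram EAD helper are our own choices): score each length bucket of the training data separately and inspect whether the EAD values stay roughly flat.

```python
from collections import defaultdict

def ead(sents, vocab_size=30522):
    # Unigram EAD (Eq. 4) over a set of tokenized sentences.
    tokens = [t for s in sents for t in s]
    c = len(tokens)
    if c == 0:
        return 0.0
    expected = vocab_size * (1.0 - ((vocab_size - 1) / vocab_size) ** c)
    return len(set(tokens)) / expected

def ead_by_length(corpus, vocab_size=30522, bucket=5):
    # Group sentences into length buckets and score each bucket separately;
    # a roughly flat curve suggests EAD is safe to apply to this data.
    buckets = defaultdict(list)
    for sent in corpus:
        buckets[len(sent) // bucket].append(sent)
    return {b * bucket: ead(sents, vocab_size)
            for b, sents in sorted(buckets.items())}
```

If the resulting curve declines noticeably at larger buckets, as the paper reports for Twitter-like data, length-controlled comparisons are a safer choice than raw EAD.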
# 5 Related Work

Li et al. (2016) proposed Distinct, calculated as the number of distinct tokens divided by the total number of tokens. This automatic metric is designed to evaluate the diversity of texts, and it has been widely used in various text generation tasks, such as dialogue generation (Wu et al., 2021a; Zheng et al., 2021a,b, 2019) and story generation (Guan et al., 2021). However, as we show in Figure 1, it is an unfair indicator, as it is affected by the sample length. This causes a bias against models that tend to generate longer sentences.

Other metrics exist for evaluating diversity, but none are as widely used as Distinct (Zhu et al., 2018; Xu et al., 2018). In particular, Self-BLEU, proposed by Zhu et al. (2018), is extremely time-consuming, as its computational complexity is $O(n^{2})$, where $n$ denotes the size of the test set.
# 6 Conclusion

In this paper, we present an improved variation of the Distinct metric, a widely used measure for evaluating response diversity in dialog systems. We provide a theoretical formulation and an empirical evaluation of our proposed metric, Expectation-Adjusted Distinct. The results demonstrate that Expectation-Adjusted Distinct has a higher correlation with human evaluation than the original metric. The proposed metric is not limited to dialogue generation models; it is also suitable for other text generation tasks where diversity matters.
# 7 Acknowledgements

This work was supported by the National Science Foundation for Distinguished Young Scholars (No. 62125604) and the NSFC projects (Key project No. 61936010 and regular project No. 61876096). This work was also supported by the Guoqiang Institute of Tsinghua University, under Grants No. 2019GQG1 and No. 2020GQG0005. We are grateful to Dr. Xiangxiang Xu at MIT for his help with the mathematical formulation.
# References

Hengyi Cai, Hongshen Chen, Yonghao Song, Cheng Zhang, Xiaofang Zhao, and Dawei Yin. 2020. Data manipulation: Towards effective instance learning for neural dialogue generation via learning to augment and reweight. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6334-6343, Online. Association for Computational Linguistics.

Yen-Chun Chen, Zhe Gan, Yu Cheng, Jingzhou Liu, and Jingjing Liu. 2020. Distilling knowledge learned in BERT for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7893-7905, Online. Association for Computational Linguistics.

Byung-Ju Choi, Jimin Hong, David Park, and Sang Wan Lee. 2020. F^2-softmax: Diversifying neural text generation via frequency factorized softmax. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9167-9182, Online. Association for Computational Linguistics.

John W. Chotlos. 1944. IV. A statistical and comparative analysis of individual written language samples. Psychological Monographs, 56(2):75-111.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2018. Wizard of Wikipedia: Knowledge-powered conversational agents. arXiv preprint arXiv:1811.01241.

Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889-898.

Karthik Gopalakrishnan, Behnam Hedayatnia, Qinglang Chen, Anna Gottardi, Sanjeev Kwatra, Anu Venkatesh, Raefer Gabriel, Dilek Hakkani-Tür, and Amazon Alexa AI. 2019. Topical-Chat: Towards knowledge-grounded open-domain conversations. In INTERSPEECH, pages 1891-1895.

Jian Guan, Zhexin Zhang, Zhuoer Feng, Zitao Liu, Wenbiao Ding, Xiaoxi Mao, Changjie Fan, and Minlie Huang. 2021. OpenMEVA: A benchmark for evaluating open-ended story generation metrics. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6394-6407.

Tianxing He and James Glass. 2020. Negative training for neural dialogue response generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2044-2058, Online. Association for Computational Linguistics.

Shaojie Jiang, Pengjie Ren, Christof Monz, and Maarten de Rijke. 2019. Improving neural response diversity with frequency-aware cross-entropy loss. In The World Wide Web Conference, pages 2879-2885.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR (Poster).

Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2016 - Proceedings of the Conference, pages 110-119.

Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. DailyDialog: A manually labelled multi-turn dialogue dataset. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 986-995, Taipei, Taiwan. Asian Federation of Natural Language Processing.

Zuchao Li, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Zhuosheng Zhang, and Hai Zhao. 2019. Data-dependent gaussian prior objective for language generation. In International Conference on Learning Representations.

Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. 2017. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, pages 2980-2988.

Chia-Wei Liu, Ryan Lowe, Iulian Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In EMNLP.

Siyang Liu, Chujie Zheng, Orianna Demasi, Sahand Sabour, Yu Li, Zhou Yu, Yong Jiang, and Minlie Huang. 2021. Towards emotional support dialog systems. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3469-3483, Online. Association for Computational Linguistics.

Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The Ubuntu Dialogue Corpus: A large dataset for research in unstructured multi-turn dialogue systems. arXiv preprint arXiv:1506.08909.

David Malvern, Brian Richards, Ngoni Chipere, and Pilar Durán. 2004. Lexical diversity and language development. Springer.

Gabriel Pereyra, George Tucker, Jan Chorowski, Łukasz Kaiser, and Geoffrey Hinton. 2017. Regularizing neural networks by penalizing confident output distributions. arXiv preprint arXiv:1701.06548.

Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2018. Towards empathetic open-domain conversation models: A new benchmark and dataset. arXiv preprint arXiv:1811.00207.

Alan Ritter, Colin Cherry, and William B. Dolan. 2010. Unsupervised modeling of twitter conversations. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 172-180.

Sahand Sabour, Chujie Zheng, and Minlie Huang. 2022. CEM: Commonsense-aware empathetic response generation. In 36th AAAI Conference on Artificial Intelligence, AAAI 2022.

Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881-7892, Online. Association for Computational Linguistics.

Iulian Vlad Serban, Ryan Lowe, Peter Henderson, Laurent Charlin, and Joelle Pineau. 2015. A survey of available corpora for building data-driven dialogue systems. arXiv preprint arXiv:1512.05742.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818-2826.

Chongyang Tao, Lili Mou, Dongyan Zhao, and Rui Yan. 2018. RUBER: An unsupervised method for automatic evaluation of open-domain dialog systems. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pages 722-729.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 6000-6010.

Ashwin K. Vijayakumar, Michael Cogswell, Ramprasath R. Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2016. Diverse beam search: Decoding diverse solutions from neural sequence models. arXiv preprint arXiv:1610.02424.

Yida Wang, Pei Ke, Yinhe Zheng, Kaili Huang, Yong Jiang, Xiaoyan Zhu, and Minlie Huang. 2020. A large-scale Chinese short-text conversation dataset. In Natural Language Processing and Chinese Computing - 9th CCF International Conference, volume 12430, pages 91-103.

Yida Wang, Yinhe Zheng, Yong Jiang, and Minlie Huang. 2021. Diversifying dialog generation via adaptive label smoothing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, pages 3507-3520.

Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Dinan, Kyunghyun Cho, and Jason Weston. 2019. Neural text generation with unlikelihood training. In International Conference on Learning Representations.

Chen Henry Wu, Yinhe Zheng, Xiaoxi Mao, and Minlie Huang. 2021a. Transferable persona-grounded dialogues via grounded minimal edits. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2368-2382.

Chen Henry Wu, Yinhe Zheng, Yida Wang, Zhenyu Yang, and Minlie Huang. 2021b. Semantic-enhanced explainable finetuning for open-domain dialogues. arXiv preprint arXiv:2106.03065.

Yuwei Wu, Xuezhe Ma, and Diyi Yang. 2021c. Personalized response generation via generative split memory network. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1956-1970.

Jingjing Xu, Hao Zhou, Chun Gan, Zaixiang Zheng, and Lei Li. 2021. Vocabulary learning via optimal transport for neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7361-7373, Online. Association for Computational Linguistics.

Zhen Xu, Nan Jiang, Bingquan Liu, Wenge Rong, Bowen Wu, Baoxun Wang, Zhuoran Wang, and Xiaolong Wang. 2018. LSDSCC: A large scale domain-specific conversational corpus for response generation with diversity oriented evaluation metrics. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2070-2080, Stroudsburg, PA, USA. Association for Computational Linguistics.

Rongsheng Zhang, Yinhe Zheng, Jianzhi Shao, Xiaoxi Mao, Yadong Xi, and Minlie Huang. 2020. Dialogue distillation: Open-domain dialogue augmentation using unpaired data. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3449-3460.

Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? arXiv preprint arXiv:1801.07243.

Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 654-664.

Yinhe Zheng, Guanyi Chen, Minlie Huang, Song Liu, and Xuan Zhu. 2019. Personalized dialogue generation with diversified traits. arXiv preprint arXiv:1901.09672.

Yinhe Zheng, Guanyi Chen, Xin Liu, and Ke Lin. 2021a. MMChat: Multi-modal chat dataset on social media. arXiv preprint arXiv:2108.07154.

Yinhe Zheng, Zikai Chen, Rongsheng Zhang, Shilei Huang, Xiaoxi Mao, and Minlie Huang. 2021b. Stylized dialogue response generation using stylized unpaired texts. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 14558-14567.

Yinhe Zheng, Rongsheng Zhang, Minlie Huang, and Xiaoxi Mao. 2020. A pre-training based personalized dialogue generation model with persona-sparse data. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 9693-9700.

Hao Zhou, Pei Ke, Zheng Zhang, Yuxian Gu, Yinhe Zheng, Chujie Zheng, Yida Wang, Chen Henry Wu, Hao Sun, Xiaocong Yang, et al. 2021. EVA: An open-domain Chinese dialogue system with large-scale generative pre-training. arXiv preprint arXiv:2108.01547.

Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A benchmarking platform for text generation models. In 41st International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2018, pages 1097-1100.
# A Comparison on More Datasets

To demonstrate the shortcomings of the original Distinct metric, we illustrate the original Distinct scores on eight datasets: Persona-chat (Zhang et al., 2018), Ubuntu Dialog Corpus (Lowe et al., 2015), DailyDialog, Topical-Chat (Gopalakrishnan et al., 2019), Empathetic Dialogues (Rashkin et al., 2018), Wizard of Wikipedia (Dinan et al., 2018), Reddit (Serban et al., 2015), and Twitter (Ritter et al., 2010) (Figure 1). It can be observed that as sample length increases, the original Distinct score tends to follow a linear decline, while the proposed metric remains consistent.

# B Property Discussion

Formula Property 1. As $C$ increases, the rate at which Expectation-Adjusted Distinct grows with $N$ converges to $\frac{1}{V}$ rather than vanishing, as shown by its derivative below:

$$
\frac{\mathrm{d}\,EAD}{\mathrm{d}N} = \frac{1}{V\left[1 - \left(\frac{V-1}{V}\right)^{C}\right]} \tag{5}
$$

$$
\lim_{C \rightarrow +\infty} \frac{\mathrm{d}\,EAD}{\mathrm{d}N} = \frac{1}{V} \tag{6}
$$
whereas in the original Distinct, we have

$$
\frac{\mathrm{d}\,\text{Distinct}}{\mathrm{d}N} = \frac{1}{C} \tag{7}
$$

We can see from the original metric that the bigger $C$ is, the slower the original Distinct increases. This is why the metric is unfair to models that tend to generate longer sentences.
Formula Property 2. Expectation-Adjusted Distinct converges to $\frac{N}{V} (\leq 1)$ as $C$ increases.

$$
\lim_{C \rightarrow +\infty} EAD = \lim_{C \rightarrow +\infty} \frac{N}{V\left[1 - \left(\frac{V-1}{V}\right)^{C}\right]} \tag{8}
$$

$$
= \frac{N}{V} \leq 1, \tag{9}
$$

where $\frac{N}{V\left[1 - \left(\frac{V-1}{V}\right)^{C}\right]} \in [0, +\infty)$. Theoretically, Expectation-Adjusted Distinct can take values larger than 1 (e.g., when $N = V$), but this is extremely rare in practice: since we use the upper bound when measuring the expectation, it is exceptionally hard for $N$ to equal or exceed $\mathbf{E}(N_{\text{upper}})$.

Figure 2: Original scores against different sample lengths. The dotted lines are the actual curves for each score, while the solid lines are slope-intercept fits of the curves. Each score is calculated based on 10 sets of 2000 randomly sampled responses of the same fixed length.
# C Details of Human Evaluation

Our human evaluation interface is shown in Figure 3.
# D How to Determine Vocabulary Size
As discussed in the properties of Expectation-Adjusted Distinct, the vocabulary size has little impact on its value once it is large (usually more than 30000), so an exact value is not necessary. To compare different methods, we recommend using a common vocabulary size, such as BERT's 30522 (Devlin et al., 2019). It is also reasonable to compute the vocabulary size of a dataset with the NLTK tokenizer when research focuses on a specific dataset. For non-English corpora, we recommend determining the vocabulary size following Xu et al. (2021).
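A minimal sketch of the dataset-specific option: counting a corpus's vocabulary with a word tokenizer. The regex tokenizer here is a self-contained stand-in for `nltk.tokenize.word_tokenize`, which the appendix actually suggests; the tiny corpus is illustrative only.

```python
import re

def tokenize(text):
    # Simple word tokenizer as a stand-in for nltk.word_tokenize;
    # in practice, use NLTK as this appendix suggests.
    return re.findall(r"[a-z']+|[.,!?;]", text.lower())

corpus = [
    "i don't know.",
    "we got a little problem.",
    "i'm here to see you.",
]

vocab = set()
for sentence in corpus:
    vocab.update(tokenize(sentence))

print(len(vocab))  # vocabulary size V used in the EAD formula
```

With a real dataset, `V` would be computed once over all responses and then reused for every method being compared.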
# E Details of Evaluated Methods
Wang et al. (2021) proposed a novel adaptive label smoothing method for diversified response generation. Their experiments were conducted on the DailyDialog and OpenSubtitles datasets, using 9 recent methods for diverse response generation as their baselines (similar to what we demonstrated in our paper). Wang et al. (2021) used a transformer-based sequence-to-sequence model (Vaswani et al., 2017) as the backbone of their model, and most of their hyper-parameters follow Cai et al. (2020). In addition, both the encoder and the decoder contain 6 transformer layers with 8 attention heads, and the hidden size is set to 512. BERT's WordPiece tokenizer (Devlin et al., 2019) and the Adam optimizer (Kingma and Ba, 2015) are used for training their models with random initialization and a learning rate of 1e-4.

# Evaluating Diversity of Ten Sentence Sets – Manual Evaluation

# Task Description

There are ten sentence sets from ten different generative models. You should analyze all the sets and evaluate the diversity of each sentence set by comparing it to the others.

# You should know:

i. Lexical diversity can be measured by the extent of using various different words in a sentence set. For example, set A ("a d e v s", "g e d h e") is more diverse than set B ("a b c d e", "e d c a b") because set A contains more unique (distinct) words.

ii. Notwithstanding i., please do not give your score by simply counting the number of distinct words in each set, because it is harder to add a distinct word to a longer sentence than to a shorter one. You should evaluate the diversity based on your common sense -- whether this sentence at its length is really diverse.

iii. You can give each set a score from 1 to 50, where 50 means the highest lexical diversity and 1 means the lowest. For example, suppose you evaluate the lexical diversity of 3 sets, A, B and C, and the result is A>B>C. You can give A the highest score (e.g. 40), give B an intermediate score (e.g. 35), and give C the lowest score (e.g. 20).

iv. The absolute score that you give each set is not important; however, the difference between scores should reflect the extent of the diversity difference between the sentence sets. For example, if you give A->5, B->9, C->10, that means the difference between A and B (5-9) is much larger than the difference between B and C (9-10). Hence, we can see that A is much less diverse than the others. The same conclusion could be drawn if you had scored these three sets as A->10, B->18, C->20.

# Notes

- Every case is reviewed by more than 5 people. If the ranking of the sets that you give differs greatly from the results of other workers, we will carefully review your performance again to decide whether your task should be accepted. Please ensure that you take it seriously.

Assignment: evaluate the diversity of each sentence set by comparing it to the others.
# Set 1:

1.there's no way to nail them.
2.i' il be back in a minute.
3.though,he replied,`i'm gon na be able to make a wish.
4.we 're going to go to the forest.
5.i don't care.
6.we got a little problem.
7.i'il be there.
8.i ' il ride him .
9.how could it be?
10.i'm not afraid.
11.i mean, i was trained to get him out of prison.
12.i'm gon na get you out of here.
13.i'm here to see you.
14.i don't know.
15.i got to get to the embassy.

On a scale of 1-50, what lexical diversity score do you think this set gets?
Figure 3: Interface of Human Evaluation
# Set 2:

1.the judges will be here by the next day.
2.i ' ll just go to the movies .
3.so, she'd be happy to be able to communicate with her.
4.we have to go.
5.i ' il give you $ 50 .
6.we got a problem .
7.we 'il be all right .
8.i 'il bet he will.
9. how could he have been involved with the computer?
10.i'm not sure.
11.but i was still alive.
12.i'm not finished.
13.i'm here to see you.
14.she was at the scene.
15.i 'il take care of it .

On a scale of 1-50, what lexical diversity score do you think this set gets?
# Set 3:

1.and they will show up to you, and you will be back in a few minutes.
2.i'm not sure.
3.the word is kateina, to have seen the kates.
4.we have to go to war.
5.i 'il take it .
6.we ' re in the same area .
7.i'm gon na have some fun.
8.i ' m sure he ' il have a horse .
9. what kind of files?
10.i ' m not a bad person .
11.i thank you, mr. bond.
12.i'm not sure i'm not gon na do it.
13.i'm here to see you.
14.they're not in charge of this investigation.
15.i'm going to kill you all.

On a scale of 1-50, what lexical diversity score do you think this set gets?
rethinkingandrefiningthedistinctmetric/images.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8447ee347de8c048a1526d2fbf9f5f27e3d44668812941e40be2658bbb6aa714
size 224728

rethinkingandrefiningthedistinctmetric/layout.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:27147a999d9e3e6476f73160539a4ea64fd6507db671ba21f6b716e57d111f5e
size 374273

revisitingthecompositionalgeneralizationabilitiesofneuralsequencemodels/0d8aa261-6251-4c23-b072-dd0a32ea9383_content_list.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b69db6a252bbfff2027b3bbbea4b4aab08216ee9617e5124669f903214d6f39c
size 70485

revisitingthecompositionalgeneralizationabilitiesofneuralsequencemodels/0d8aa261-6251-4c23-b072-dd0a32ea9383_model.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:af8cca8f627bfba29d5c58af35db006cfb273073b402450032f5c883137d1448
size 83949

revisitingthecompositionalgeneralizationabilitiesofneuralsequencemodels/0d8aa261-6251-4c23-b072-dd0a32ea9383_origin.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8c928a99255cc0a24b77381288e9a4930a3cb490dcf98197508179775b8b6d45
size 844029

revisitingthecompositionalgeneralizationabilitiesofneuralsequencemodels/full.md ADDED
@@ -0,0 +1,267 @@
# Revisiting the Compositional Generalization Abilities of Neural Sequence Models

# Abstract

Compositional generalization is a fundamental trait in humans, allowing us to effortlessly combine known phrases to form novel sentences. Recent works have claimed that standard seq-to-seq models severely lack the ability to compositionally generalize. In this paper, we focus on one-shot primitive generalization as introduced by the popular SCAN benchmark. We demonstrate that modifying the training distribution in simple and intuitive ways enables standard seq-to-seq models to achieve near-perfect generalization performance, thereby showing that their compositional generalization abilities were previously underestimated. We perform detailed empirical analysis of this phenomenon. Our results indicate that the generalization performance of models is highly sensitive to the characteristics of the training data, which should be carefully considered while designing such benchmarks in future.

# 1 Introduction

According to the principle of compositionality, the meaning of a complex expression (e.g., a sentence) is determined by the meaning of its individual constituents and how they are combined. Humans can effectively recombine known parts to form new sentences that they have never encountered before. Despite the unprecedented achievements of standard seq-to-seq networks such as LSTMs and Transformers in NLP tasks, previous work has suggested that they are severely limited in their ability to generalize compositionally (Lake and Baroni, 2018; Furrer et al., 2020).

Problem Statement. Our work relates to a central challenge posed by compositional generalization datasets such as SCAN (Lake and Baroni, 2018) and Colors (Lake et al., 2019), which we refer to as one-shot primitive generalization: The dataset consists of input-output sentence pairs (e.g. 'walk twice $\rightarrow$ WALK WALK'); input sentences are formed from primitive words ('walk') and function words ('twice') and are generated by a context-free grammar (CFG); output sentences are obtained by applying an interpretation function. Crucially, there is a systematic difference between the train and test splits<sup>1</sup>: While the former has a single example of an isolated primitive (e.g., the primitive definition 'jump $\rightarrow$ JUMP' in SCAN), the latter consists of compositional sentences with this isolated primitive (e.g. 'jump twice $\rightarrow$ JUMP JUMP'). See Fig. 1 (left) for an overview of the task.

Figure 1: Overview of the SCAN generalization task (left) and our approach (right) that enables standard neural sequence models to generalize compositionally.

A model with the right inductive bias should generalize on the test data after having seen compositional expressions with other primitives during training. The need for such inductive bias is justified via psychological experiments (Lake et al., 2019) indicating that humans do have the ability to generalize on such tasks. Previous works have suggested that seq-to-seq models lack the appropriate inductive bias necessary to generalize on this task since they achieve near-zero accuracies on both the SCAN and Colors benchmarks. This has led to the development of many specialized architectures (Li et al., 2019; Gordon et al., 2020; Chen et al., 2020; Akyurek and Andreas, 2021), learning procedures (Lake, 2019; Conklin et al., 2021) and data augmentation methods (Andreas, 2020; Guo et al., 2020) to solve the task.

Contributions. The primary claim of our paper is that, contrary to prior belief, neural sequence models such as Transformers and RNNs do have an inductive bias<sup>2</sup> to generalize compositionally, which can be enabled using the right supervision. (i) We show that by making simple and intuitive changes to the training data distribution, standard seq-to-seq models can achieve high generalization performance even with a training set less than $20\%$ the size of the original training set. In particular, if we incorporate examples with more novel primitives in the training set without necessarily increasing the size of the training set (see right part of Fig. 1), then the generalization performance of standard seq-to-seq models improves and reaches a near-perfect score after a certain point. Our results also exemplify the importance of the training distribution apart from architectural changes and demonstrate that providing the right supervision can significantly improve the generalization abilities of the models. (ii) We investigate the potential cause behind the improvement in generalization performance and observe that the embedding of the isolated primitive becomes more similar to those of the other primitives when the training set has a higher number of primitives and their use cases. (iii) To understand the phenomenon better, we characterize the effect of different training distributions and model capacities. Our results show that the parameters of the experimental setting play a crucial role while evaluating the generalization abilities of models.

# 2 Enabling Generalization by Providing the Right Supervision

Setup. We focus on the SCAN and Colors datasets.<sup>3</sup> Both these datasets have exactly one isolated primitive. We refer to all other primitives (i.e., those that are also composed with other words to form sentences in the training set) as example primitives. Both the SCAN and Colors training sets have exactly three example primitives. The training set of SCAN has 13.2k examples while the test set has 7.7k examples. Colors has just 14 training examples and 8 test examples. More details on implementation and datasets can be found in Appendix A & B. Our source code is available at https://github.com/arkilpatel/Compositional-Generalization-Seq2Seq.

Figure 2: Generalization performance $(\uparrow)$ on SCAN and Colors improves with a higher number of example primitives in the training set.

Adding More Primitives. We modify the training set such that the number of distinct example primitives present in the dataset is higher. To do so, we add new primitives to the language, which are simply random words (e.g., 'swim', 'clap', etc.) that have the same semantics and follow the same grammar rules as the existing primitives (see Fig. 1 (right) for an illustration). These new primitives act as example primitives in our training set. For SCAN, we control the size of the training set such that it is at most the size of the original dataset.<sup>4</sup> To generate the training set, we randomly sample examples from the new grammar and discard all compositional sentences containing the isolated primitive. For each example primitive and the isolated primitive, a primitive definition (such as 'walk $\rightarrow$ WALK') is also added to the training set. The test set is untouched and remains the same.
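The construction above can be sketched as follows. This is an illustrative toy generator, not the authors' code: the primitive words, output tokens, and templates are invented stand-ins for the SCAN grammar, and it enumerates compositional examples where the paper samples from the CFG.

```python
# Toy sketch of the augmentation: add extra primitives with SCAN-like
# semantics, build compositional training examples for them, and keep
# only a one-shot definition for the isolated primitive ('jump').
base_primitives = {"walk": "WALK", "run": "RUN", "look": "LOOK"}
isolated = ("jump", "JUMP")
extra = {w: w.upper() for w in ["swim", "clap", "wave", "nod"]}  # new random words

primitives = {**base_primitives, **extra}
templates = [
    ("{p} twice", "{P} {P}"),
    ("{p} thrice", "{P} {P} {P}"),
    ("{p} left", "LTURN {P}"),
]

train = [(p, P) for p, P in primitives.items()]  # primitive definitions
train.append(isolated)                            # one-shot definition only
for p, P in primitives.items():                   # compositional examples
    for t_in, t_out in templates:
        train.append((t_in.format(p=p), t_out.format(P=P)))

# Systematicity preserved: no compositional sentence contains 'jump'.
assert not any("jump" in x and x != "jump" for x, _ in train)
print(len(train))
```

The test set (compositional sentences with 'jump') is left untouched, so the one-shot primitive generalization split is preserved while the number of example primitives grows.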

Main Observation. Fig. 2 shows the generalization performance of Transformer and LSTM based seq-to-seq models. We observe that there is a clear trend of improvement in compositional generalization as we increase the number of example primitives and their use cases. It is surprising to see that on SCAN, Transformers perform on par with some recently proposed specialized architectures (Li et al., 2019; Gordon et al., 2020) and even better than certain architectures (Russin et al., 2019).

Figure 3: Measuring the distance of the embedding of the isolated primitive from the embeddings of the example primitives for learned Transformer and LSTM models as we increase the number of example primitives in SCAN.

Implication. Since the training set still contains only one non-compositional example with the isolated primitive<sup>5</sup> and the test set is untouched, the one-shot primitive generalization setting is preserved. Hence our results clearly show that standard neural sequence models have 'some' of the inductive bias required to generalize on such out-of-distribution tasks, even if it is not as strong as that of specialized architectures designed primarily to solve these tasks. Our results contradict previously suggested limitations of standard seq-to-seq models in terms of primitive generalization (Lake and Baroni, 2018; Furrer et al., 2020; Baroni, 2020). While it is important to develop architectures with better compositional generalization abilities, we wish to highlight that synthetic benchmarks such as SCAN require a model with very strong inductive biases and tend to underestimate the generalization abilities of baseline models.

While we have shown that these models can generalize from one-shot exposure to primitive definitions, our results also hold for the more general case where the one-shot exposure of the primitive is in a sentence (e.g. 'jump twice $\rightarrow$ JUMP JUMP'). More details regarding these experiments can be found in Appendix D.

Prior Work. Note that our work is unrelated to previous works that propose data augmentation approaches for compositional generalization tasks (Andreas, 2020; Guo et al., 2020; Akyurek et al., 2021). (1) The datasets created by some of these augmentation methods do not preserve the systematic differences between train and test sets, while our datasets do. (2) The objective of these works was to devise a method to improve compositional generalization performance, whereas the focus of our work is not to develop a general method; rather, we want to show that baseline seq-to-seq models are capable of generalizing compositionally even without breaking systematicity. (3) These methods add additional data, resulting in datasets of larger sizes, whereas we control for data size.

Figure 4: Visualizing the $t$-SNE reduced embeddings of the isolated primitive (▲), example primitives (■) and non-primitives (○) from a learned Transformer model as we increase the number of example primitives in SCAN. Panels: (a) no extra primitives, (b) 10 extra primitives, (c) 20 extra primitives, (d) 50 extra primitives.

# 2.1 Analyzing the Embedding of the Isolated Primitive

Our results raise the question: Why do Transformers and LSTMs generalize better when the training data has more example primitives? Compositional generalization in our setting requires a model to learn to apply the same rules to the isolated primitive as it does to the other example primitives. Thus, we analyze the change in the learned embedding of the isolated primitive (such as 'jump') with respect to the other primitives in different settings.

Figure 5: Measuring the generalization performance of the Transformer on different types of training set distributions of the SCAN dataset. Panels: (a) other distributions, (b) uniform distribution.

In particular, we compare the average distance to the other primitives before and after adding a certain number of primitives to the training data (this is the same setting that was explained earlier in this section). We find that as we increase the number of example primitives in the training set, the embedding of the isolated primitive gets closer to the example primitives (Fig. 3) in terms of Euclidean, Manhattan and Cosine distances. If the embedding of the isolated primitive is closer to the embeddings of the other primitives, then the model is more likely to operate over it in a similar fashion and apply the same rules as it does over the other primitives.
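The distance analysis above can be sketched in a few lines. This is an illustrative stand-in for the authors' analysis code; the toy three-dimensional vectors for 'jump', 'walk', and 'run' are invented, not learned embeddings.

```python
import math

# Average Euclidean, Manhattan and cosine distances between the isolated
# primitive's embedding and each example primitive's embedding.
def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def manhattan(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

def cosine_dist(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def avg_distance(isolated_emb, example_embs, metric):
    return sum(metric(isolated_emb, e) for e in example_embs) / len(example_embs)

# Toy vectors standing in for learned 'jump', 'walk', 'run' embeddings.
jump = [0.9, 0.1, 0.0]
examples = [[1.0, 0.0, 0.0], [0.8, 0.2, 0.1]]
for metric in (euclidean, manhattan, cosine_dist):
    print(metric.__name__, round(avg_distance(jump, examples, metric), 4))
```

In the paper's setting, a shrinking average distance under all three metrics as example primitives are added is the signal that the isolated primitive is being treated like the others.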

This phenomenon is also illustrated in the $t$-SNE plots (Fig. 4) of the learned embeddings, where the embedding of the isolated primitive appears closer to the embeddings of the example primitives when there are more example primitives in the dataset. Hence, a possible reason behind the improved generalization performance could be the difference in the learned embeddings. Additional results with the LSTM model and Colors dataset can be found in Appendix E.1.

# 3 Exploring the Impact of the Parameters of the Experimental Setup

# 3.1 Impact of Training Distributions

In this section, we analyze the influence of different training distributions on the generalization performance of the model. In the previous experiments, the data generating distribution was uniform over all possible samples. Here, we alter the training data distribution by varying the number of examples for each example primitive. The test set remains unchanged, and there is still only one non-compositional example of the isolated primitive (i.e., the primitive definition) in the training set. We experiment with linearly, quadratically and exponentially increasing probability distribution functions. For instance, in the quadratically increasing case, a training set with 10 example primitives will have one example primitive with 1 compositional example, the next one with 4 compositional examples, another one with 9 compositional examples, and so on.<sup>8</sup> Similarly, in the exponentially increasing case (which we also call 'skewed'), $10\%$ of the example primitives have 500 compositional examples each, $30\%$ have 10 compositional examples each, and the remaining have just one compositional example each in the training set. The general idea is that the example primitives do not all have equal representation in the training data. Upon training the models on different distributions, we observed that the models generalize well even with a smaller number of example primitives when their distribution is linearly or quadratically increasing (Fig. 5a). On the other hand, models struggle to generalize when the distribution is skewed. In that case, most primitives appear in only one or very few compositional sentences in the training data. The failure to generalize on such data implies that extra primitives must be added as part of multiple compositional sentences; just adding the primitive definition or a single example for each example primitive does not help the model leverage it.
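The per-primitive example counts described above can be written out directly. A small sketch, assumed from the description (the rounding of the 10%/30% splits is my choice, not necessarily the authors'):

```python
# Compositional example counts per primitive for k example primitives.
def linear_counts(k):
    return [i for i in range(1, k + 1)]

def quadratic_counts(k):
    return [i * i for i in range(1, k + 1)]

def skewed_counts(k):
    # 10% of primitives get 500 examples, 30% get 10, the rest get 1.
    n_heavy = max(1, k // 10)
    n_mid = max(1, (3 * k) // 10)
    return [500] * n_heavy + [10] * n_mid + [1] * (k - n_heavy - n_mid)

print(quadratic_counts(10))  # [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
print(skewed_counts(10))     # [500, 10, 10, 10, 1, 1, 1, 1, 1, 1]
```

Under the skewed distribution most primitives contribute a single compositional sentence, which matches the regime where the paper reports that models fail to generalize.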

We then try to characterize the relationship between the number of example primitives and the amount of data required for the model to generalize well on the test data, when the example primitives are uniformly distributed. We create different training sets by varying the total number of example primitives, #primitives; for each example primitive, we draw #examples samples uniformly from the CFG. Fig. 5b shows the generalization performance of Transformers for each of these training sets. The size of each training set is the product of the row and column values (#primitives $\times$ #examples). As expected, the upper-right triangle has higher scores, indicating that the sample requirement decreases as we add more primitives to the dataset. Surprisingly, the top-left cell indicates that Transformers can achieve high performance even with 2k training examples, which is less than $20\%$ of the original SCAN training set. Additional results with the LSTM model can be found in Appendix E.2.

Figure 6: Measuring the generalization performance of a Transformer of varying capacity across an increasing number of primitives in the SCAN training set.

# 3.1.1 Understanding Transferability

We wish to check whether the inductive bias that is enabled when a model is trained on a larger number of example primitives can be transferred to a scenario where the number of example primitives is limited. We create a pretraining set with 50 uniformly distributed example primitives, each of them having 200 examples. The finetuning set is the original SCAN training set and the test set is the original SCAN test set. The model is first trained from scratch on the pretraining set and then finetuned on the finetuning set.

We find that if we allow all the parameters of the Transformer model to be updated during the finetuning phase on the original SCAN training set, then the model generalizes very poorly. On the other hand, when we freeze the weights of the encoder and decoder after the pretraining phase, and only allow the embedding and output layers to be updated, then the model generalizes near-perfectly on the test set. Our hypothesis is that in the latter setting, the task becomes simpler for the model since it only has to align the embeddings of the newly seen primitives in the finetuning phase with the embeddings of the primitives seen during the pretraining phase. This experiment also indicates that the rules learned during pretraining can help a model to compositionally generalize on novel primitives.
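The selective-freezing strategy above can be sketched abstractly. This pure-Python toy (not the authors' code, and with made-up two-element "weights") just shows the mechanics: encoder/decoder parameter groups are excluded from updates, standing in for setting `requires_grad=False` in a real framework.

```python
# Parameter groups of a toy seq-to-seq model.
params = {
    "embedding": [0.5, -0.2],
    "encoder":   [1.0, 0.3],
    "decoder":   [0.7, 0.1],
    "output":    [-0.4, 0.9],
}
frozen = {"encoder", "decoder"}  # weights kept from the pretraining phase

def sgd_step(params, grads, lr=0.1):
    # Update only the trainable groups (embedding and output layers).
    for name, g in grads.items():
        if name in frozen:
            continue  # requires_grad=False in a real framework
        params[name] = [w - lr * gi for w, gi in zip(params[name], g)]
    return params

grads = {name: [1.0, 1.0] for name in params}
params = sgd_step(params, grads)
print(params["encoder"])                           # unchanged
print([round(w, 4) for w in params["embedding"]])  # updated
```

During finetuning, only the embedding and output layers move, so the model must map the new primitives onto the representation space fixed during pretraining.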

# 3.2 Impact of Model Capacity

We analyze the relationship between the model capacity and the number of example primitives in the training set. We vary the number of primitives as per the description in Section 2. We evaluate the generalization performance of the models while gradually increasing the number of parameters by increasing the size of the embeddings and intermediate representations. For each experiment, we exhaustively tune the rest of the hyperparameters (e.g., dropout, learning rate, batch size, etc.) to select the best model. Looking at Fig. 6, we observe a general trend in which the model starts to overfit and has poor generalization performance as we increase the model size. Note that all these model configurations are able to achieve near-perfect accuracies on the SCAN random split, which does not test for compositional generalization. This shows that carefully controlling the model size is important for achieving compositional generalization. On such small datasets, larger models might simply memorize the input-output mappings in the training set. Indeed, such memorization has been cited as a potential reason why models fail at compositional generalization (Conklin et al., 2021). We also find that as we increase the number of example primitives, the models are less susceptible to overfitting and achieve relatively better generalization performance. Additional results with the LSTM model and Colors dataset can be found in Appendix E.3.
|
| 106 |
+
|
| 107 |
+
# 4 Conclusion
|
| 108 |
+
|
| 109 |
+
While it is essential to make progress in building architectures with better compositional generalization abilities, we showed that the generalization performance of standard seq-to-seq models (often used as baselines) is underestimated. A broader implication of our experiments is that although systematicity must be preserved when designing such benchmarks, it is imperative to carefully explore different parameters associated with the experimental setup to draw robust conclusions about a model's generalization abilities.
|
| 110 |
+
|
| 111 |
+
# Acknowledgements
|
| 112 |
+
|
| 113 |
+
We thank the anonymous reviewers for their constructive comments. We would also like to thank Kabir Ahuja, Zihuiwen Ye and our colleagues at Microsoft Research India for their valuable feedback and helpful discussions.
|
| 114 |
+
|
| 115 |
+
# References
|
| 116 |
+
|
| 117 |
+
Ekin Akyürek, Afra Feyza Akyürek, and Jacob Andreas. 2021. Learning to recombine and resample data for compositional generalization. In International Conference on Learning Representations.
|
| 118 |
+
Ekin Akyürek and Jacob Andreas. 2021. Lexicon learning for few shot sequence modeling. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4934-4946, Online. Association for Computational Linguistics.
|
| 119 |
+
Jacob Andreas. 2020. Good-enough compositional data augmentation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7556-7566, Online. Association for Computational Linguistics.
|
| 120 |
+
Marco Baroni. 2020. Linguistic generalization and compositionality in modern artificial neural networks. Philosophical Transactions of the Royal Society B, 375(1791):20190307.
|
| 121 |
+
Paul Bloom. 2000. How Children Learn the Meanings of Words. MIT Press.
|
| 122 |
+
Xinyun Chen, Chen Liang, Adams Wei Yu, Dawn Song, and Denny Zhou. 2020. Compositional generalization via neural-symbolic stack machines. In Advances in Neural Information Processing Systems, volume 33, pages 1690-1701. Curran Associates, Inc.
|
| 123 |
+
Henry Conklin, Bailin Wang, Kenny Smith, and Ivan Titov. 2021. Meta-learning to compositionally generalize. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3322-3335, Online. Association for Computational Linguistics.
|
| 124 |
+
Róbert Csordás, Kazuki Irie, and Juergen Schmidhuber. 2021. The devil is in the detail: Simple tricks improve systematic generalization of transformers. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 619-634, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
|
| 125 |
+
Daniel Furrer, Marc van Zee, Nathan Scales, and Nathanael Schärli. 2020. Compositional generalization in semantic parsing: Pre-training vs. specialized architectures.
|
| 126 |
+
Jonathan Gordon, David Lopez-Paz, Marco Baroni, and Diane Bouchacourt. 2020. Permutation equivariant models for compositional generalization in language. In International Conference on Learning Representations.
|
| 127 |
+
Demi Guo, Yoon Kim, and Alexander Rush. 2020. Sequence-level mixed sample data augmentation. In
|
| 128 |
+
|
| 129 |
+
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5547-5552, Online. Association for Computational Linguistics.
|
| 130 |
+
Najoung Kim and Tal Linzen. 2020. COGS: A compositional generalization challenge based on semantic interpretation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9087-9105, Online. Association for Computational Linguistics.
|
| 131 |
+
Brenden Lake and Marco Baroni. 2018. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2873-2882. PMLR.
|
| 132 |
+
Brenden M Lake. 2019. Compositional generalization through meta sequence-to-sequence learning. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
|
| 133 |
+
Brenden M. Lake, Tal Linzen, and Marco Baroni. 2019. Human few-shot learning of compositional instructions.
|
| 134 |
+
Yuanpeng Li, Liang Zhao, Jianyu Wang, and Joel Hestness. 2019. Compositional generalization for primitive substitutions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4293-4302, Hong Kong, China. Association for Computational Linguistics.
|
| 135 |
+
Santiago Ontañón, Joshua Ainslie, Vaclav Cvicek, and Zachary Fisher. 2021. Making transformers solve compositional tasks.
|
| 136 |
+
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
|
| 137 |
+
Jake Russin, Jason Jo, Randall C. O'Reilly, and Yoshua Bengio. 2019. Compositional generalization in a deep seq2seq model by separating syntax and semantics.
|
| 138 |
+
Ning Shi, Boxin Wang, Wei Wang, Xiangyu Liu, Rong Zhang, Hui Xue, Xinbing Wang, and Zhouhan Lin. 2021. From scan to real data: Systematic generalization via meaningful learning.
|
| 139 |
+
|
| 140 |
+
# A Implementation Details
|
| 141 |
+
|
| 142 |
+
We use 8 NVIDIA Tesla P100 GPUs, each with 16 GB memory, to run our experiments. All models are implemented in PyTorch (Paszke et al., 2019). We do not use any pretrained models and all embeddings are learnt from scratch. Parameters are updated using the Adam optimizer. All results are averaged over 5 runs with different random seeds. The dataset-specific hyperparameters used for each model are shown in Table 1.
|
| 143 |
+
|
| 144 |
+
# B Primitive Generalization Datasets
|
| 145 |
+
|
| 146 |
+
In this paper, we show results on three datasets that evaluate primitive generalization.
|
| 147 |
+
|
| 148 |
+
SCAN (Lake and Baroni, 2018) is a supervised sequence-to-sequence semantic parsing task wherein the natural language input command has to be transformed to the corresponding set of actions. The complete dataset consists of all the commands (a total of 20,910) generated by a phrase-structure grammar and the corresponding sequence of actions, produced according to a semantic interpretation function. The benchmark consists of 4 splits: random, add jump, turn left and length. We work on the 'add jump' split, which was designed to test primitive generalization. In this split, the test set (size: 7,706) is made up of all the compositional sentences with the primitive 'jump' (which we refer to as the isolated primitive). The train set (size: 13,204<sup>9</sup>) has just one example of the isolated primitive (i.e. the primitive definition 'jump $\rightarrow$ JUMP') and other examples demonstrating the definitions and compositions of the three other primitives (which we refer to as the example primitives). Table 2 illustrates the task.
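The semantic interpretation function can be illustrated for a small subset of the SCAN grammar (primitives, 'left'/'right', 'twice'/'thrice', and the 'and'/'after' connectives). This is a sketch, not the full grammar of Lake and Baroni (2018), which also includes 'opposite' and 'around':

```python
PRIM = {"jump": "JUMP", "walk": "WALK", "run": "RUN", "look": "LOOK"}

def interpret(cmd):
    # 'x after y' executes y first; 'x and y' executes in order.
    if " after " in cmd:
        x, y = cmd.split(" after ", 1)
        return interpret(y) + interpret(x)
    if " and " in cmd:
        x, y = cmd.split(" and ", 1)
        return interpret(x) + interpret(y)
    words = cmd.split()
    if words[-1] in ("twice", "thrice"):
        reps = 2 if words[-1] == "twice" else 3
        return interpret(" ".join(words[:-1])) * reps
    if words[0] == "turn":
        return ["LTURN" if words[1] == "left" else "RTURN"]
    if len(words) == 2 and words[1] in ("left", "right"):
        turn = "LTURN" if words[1] == "left" else "RTURN"
        return [turn, PRIM[words[0]]]
    return [PRIM[words[0]]]

print(interpret("jump twice after look"))  # ['LOOK', 'JUMP', 'JUMP']
```

The outputs match the test examples shown in Table 2.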
|
| 149 |
+
|
| 150 |
+
Colors (Lake et al., 2019) is a sequence-to-sequence task that was designed to measure human inductive biases. Apart from the challenge of primitive generalization, this dataset poses an additional challenge of low-resource learning for neural sequence models. The train set has just 14 examples that are either primitive definitions of the four primitives or examples with compositions of the three example primitives and three operations (concatenation, repetition and wrapping). The test set has 8 examples<sup>10</sup> with compositions of the isolated
|
| 151 |
+
|
| 152 |
+
<table><tr><td colspan="2">TRAIN</td><td colspan="2">TEST</td></tr><tr><td>INPUT</td><td>OUTPUT</td><td>INPUT</td><td>OUTPUT</td></tr><tr><td>dax</td><td>r</td><td>zup fep</td><td>y y y</td></tr><tr><td>lug</td><td>b</td><td>zup kiki dax</td><td>r y</td></tr><tr><td>wif</td><td>g</td><td>wif kiki zup</td><td>y g</td></tr><tr><td>zup</td><td>y</td><td>zup blicket lug</td><td>y b y</td></tr><tr><td>lug fep</td><td>b b b</td><td>dax blicket zup</td><td>r y r</td></tr><tr><td>dax fep</td><td>r r r</td><td>wif kiki zup fep</td><td>y y y g</td></tr><tr><td>lug blicket wif</td><td>b g b</td><td>zup fep kiki lug</td><td>b y y y</td></tr><tr><td>wif blicket dax</td><td>g r g</td><td>lug kiki wif blicket zup</td><td>g y g b</td></tr><tr><td>lug kiki wif</td><td>g b</td><td></td><td></td></tr><tr><td>dax kiki lug</td><td>b r</td><td></td><td></td></tr><tr><td>lug fep kiki wif</td><td>g b b b</td><td></td><td></td></tr><tr><td>wif kiki dax blicket lug</td><td>r b r g</td><td></td><td></td></tr><tr><td>lug kiki wif fep</td><td>g g g b</td><td></td><td></td></tr><tr><td>wif blicket dax kiki lug</td><td>b g r g</td><td></td><td></td></tr></table>
|
| 153 |
+
|
| 154 |
+

|
| 155 |
+
Figure 7: The primitive generalization task in Colors<sup>11</sup>. Note that the test set does not contain the two length generalization examples.
|
| 156 |
+
Figure 8: Decrease in generalization performance on our COGS primitive generalization test set with a decrease in the percentage of example primitives and their use cases present in the train set.
|
| 157 |
+
|
| 158 |
+
primitive (‘zup’). Fig. 7 illustrates the task.
|
| 159 |
+
|
| 160 |
+
COGS (Kim and Linzen, 2020) is a semantic parsing task of mapping English natural language sentences to their corresponding logical forms. Apart from primitive generalization, COGS also evaluates other types of systematic generalization, such as generalizing to greater recursion depths or to novel syntactic structures. The size of the train set is 24,155 and that of the test set is 21,000.
|
| 161 |
+
|
| 162 |
+
# C Removing Primitives Hurts Generalization on COGS
|
| 163 |
+
|
| 164 |
+
Unlike SCAN and Colors, both of which have a single isolated primitive and only 3 example primitives, COGS has 3 isolated primitives (a verb, a common noun and a proper noun), which are supported by 80 verbs, 40 common nouns and 20 proper nouns as example primitives. We hypothesize that this high number of example primitives might be one of the reasons behind the high performance of Transformers on COGS (Csordás et al.,
|
| 165 |
+
|
| 166 |
+
<sup>11</sup>These examples evaluate length generalization. Since we focus only on primitive generalization, we do not evaluate on these.
|
| 167 |
+
|
| 168 |
+
<table><tr><td rowspan="2">Hyperparameters</td><td colspan="2">SCAN</td><td colspan="2">COLORS</td><td>COGS</td></tr><tr><td>Transformer</td><td>LSTM</td><td>Transformer</td><td>LSTM</td><td>Transformer</td></tr><tr><td>Embedding Size</td><td>[64, 128, 256]</td><td>[64, 128, 256]</td><td>[16, 32, 64]</td><td>[16, 32, 64]</td><td>[384, 512]</td></tr><tr><td>Hidden/FFN Size</td><td>[256, 512]</td><td>[64, 128]</td><td>[16, 32, 64]</td><td>[16, 32, 64]</td><td>[512, 1024]</td></tr><tr><td>Heads</td><td>[2, 4]</td><td>N/A</td><td>[4, 8]</td><td>N/A</td><td>[2, 4]</td></tr><tr><td>Number of Layers</td><td>[2, 3]</td><td>[1, 2]</td><td>[2, 3]</td><td>[1, 2]</td><td>[2, 3]</td></tr><tr><td>Learning Rate</td><td>[3e-4, 5e-4, 8e-4]</td><td>[5e-3, 8e-3, 1e-2]</td><td>[8e-4, 1e-3]</td><td>[5e-3, 8e-3, 1e-2]</td><td>[3e-4, 5e-4, 8e-4]</td></tr><tr><td>Batch Size</td><td>[128, 256]</td><td>[128, 256]</td><td>[1, 2]</td><td>[1, 2]</td><td>[128, 256]</td></tr><tr><td>Dropout</td><td>[0.1, 0.2]</td><td>[0.1, 0.2]</td><td>[0.1, 0.2]</td><td>[0.1, 0.2]</td><td>[0.1, 0.2]</td></tr><tr><td>Epochs</td><td>150</td><td>150</td><td>150</td><td>150</td><td>150</td></tr><tr><td>Avg Time/Epoch</td><td>30</td><td>40</td><td>2</td><td>3</td><td>60</td></tr></table>
|
| 169 |
+
|
| 170 |
+
Table 1: Different hyperparameters and the values considered for each of them in the models. The best hyperparameters for each model for all the datasets (with maximum number of primitives of all the settings studied in this paper) are highlighted in bold. Average Time/Epoch is measured in seconds.
|
| 171 |
+
|
| 172 |
+
<table><tr><td colspan="2">TRAIN:</td></tr><tr><td>jump</td><td>JUMP</td></tr><tr><td>run after run left</td><td>LTURN RUN RUN</td></tr><tr><td>run</td><td>RUN</td></tr><tr><td>look left twice</td><td>LTURN LOOK LTURN LOOK</td></tr><tr><td colspan="2">TEST:</td></tr><tr><td>jump twice after look</td><td>LOOK JUMP JUMP</td></tr><tr><td>turn left and jump</td><td>LTURN JUMP</td></tr><tr><td>jump right twice</td><td>RTURN JUMP RTURN JUMP</td></tr></table>
|
| 173 |
+
|
| 174 |
+
2021; Ontañón et al., 2021), as far as primitive generalization is concerned.
|
| 175 |
+
|
| 176 |
+
To validate our hypothesis, we systematically reduce the number of example primitives in COGS and evaluate the model. The test set of COGS focusing on primitive generalization consists of 5000 examples. If we directly start removing the primitives from the train set, we risk having out-of-vocabulary tokens in the test set. Hence we select a portion of the test set of size 1218 which excludes 129 example primitives. We hold this test set fixed and vary the percentage of the 129 example primitives inserted into the train set. For each example primitive, samples are drawn uniformly from the original COGS train set. Note that even though the number of example primitives and their use cases varies in the train set, we control the total train set size to always be 2500 for fair evaluation.
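This controlled subsampling can be sketched as follows; `examples_by_primitive` is a hypothetical mapping standing in for the COGS train set grouped by example primitive:

```python
import random

def build_train_set(examples_by_primitive, fraction, total_size, seed=0):
    """Keep a fraction of the example primitives, then sample a train set of
    fixed total size from their pooled examples (assumes the pool is large
    enough to cover total_size)."""
    rng = random.Random(seed)
    prims = sorted(examples_by_primitive)
    kept = prims[: max(1, int(fraction * len(prims)))]
    pool = [ex for p in kept for ex in examples_by_primitive[p]]
    return rng.sample(pool, total_size)

# Toy data: 10 hypothetical primitives with 500 sentences each.
data = {f"prim{i}": [f"prim{i}_ex{j}" for j in range(500)] for i in range(10)}
train = build_train_set(data, fraction=0.5, total_size=2500)
print(len(train))  # 2500, matching the fixed train-set size used above
```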
|
| 177 |
+
|
| 178 |
+
The results of our experiment can be seen in Fig. 8. We see a clear decrease in generalization performance as we decrease the number of example primitives and their use cases. This is consistent with the results shown in Section 2 and further validates the idea that providing more example primitives and their use cases helps neural
|
| 179 |
+
|
| 180 |
+
Table 2: An illustration of the primitive generalization task in SCAN.
|
| 181 |
+
|
| 182 |
+
<table><tr><td>COMPLEXITY</td><td>SENTENCE</td></tr><tr><td>1</td><td>jump twice</td></tr><tr><td>2</td><td>jump thrice and look</td></tr><tr><td>3</td><td>run twice after jump opposite left</td></tr><tr><td>4</td><td>jump around left and walk opposite left twice</td></tr></table>
|
| 183 |
+
|
| 184 |
+
Table 3: Sentences of varying complexities featuring the isolated primitive 'jump'.
|
| 185 |
+
|
| 186 |
+
sequence models generalize on the primitive generalization task. Our results suggest that the performance gap of neural sequence models between primitive generalization tasks in COGS and those in SCAN or Colors is at least partially caused by the difference in the number of example primitives and their use cases in these datasets.
|
| 187 |
+
|
| 188 |
+
# D Implicit Word Learning
|
| 189 |
+
|
| 190 |
+
Drawing an analogy to human vocabulary acquisition (Bloom, 2000), our primitive generalization setting corresponds to the case where a child is explicitly told the meaning of a word. But children can also learn word meanings from implicit usage. In our setting, this would translate to using a primitive in a more complex construction, say 'jump twice $\rightarrow$ JUMP JUMP' instead of the original 'jump $\rightarrow$ JUMP'. It would be interesting to evaluate how well seq-to-seq models learn the meanings of words from a single sentence and whether they learn to use those words compositionally with other words.
|
| 191 |
+
|
| 192 |
+
We consider the 'add jump' split in SCAN. Instead of providing the 'jump $\rightarrow$ JUMP' primitive definition in the train set, we provide one compositional sentence featuring 'jump'. We vary the complexity of this sentence as shown in Table 3. Similar to the case of providing only the primitive definition, we observe that models are unable to
|
| 193 |
+
|
| 194 |
+

|
| 195 |
+
(a) No extra primitives
|
| 196 |
+
|
| 197 |
+

|
| 198 |
+
(b) 10 extra primitives
|
| 199 |
+
|
| 200 |
+

|
| 201 |
+
(c) 20 extra primitives
|
| 202 |
+
|
| 203 |
+

|
| 204 |
+
Figure 9: Visualizing the $t$ -SNE reduced embeddings of isolated primitive (▲), example primitives (■) and non-primitives (●) from a learned LSTM model as we increase the number of example primitives in the Colors train set.
|
| 205 |
+
Figure 10: Measuring the similarity of the embedding of isolated primitive with the embeddings of example primitives for learned Transformer and LSTM models as we increase the number of example primitives in the Colors train set.
|
| 206 |
+
|
| 207 |
+
generalize and achieve near-zero accuracies.
|
| 208 |
+
|
| 209 |
+
We now wish to see whether the presence of more primitives and their sentences in the train set helps a model generalize in this scenario (as it did for primitive definitions, as shown in Section 2). We consider the setup of having 100 primitives and their sentences in the train set (Section 2), apart from the one compositional sentence with the word 'jump'. We find that models are able to achieve near-perfect generalization accuracies.
|
| 210 |
+
|
| 211 |
+
This shows that our idea holds more generally: Adding more primitives and their sentences helps a model effectively learn the meaning of a new primitive, whether specified explicitly via a primitive definition or implicitly in a sentence.
|
| 212 |
+
|
| 213 |
+
# E Details of Experimental Setups and Other Results
|
| 214 |
+
|
| 215 |
+
# E.1 Embedding of Isolated Primitive
|
| 216 |
+
|
| 217 |
+
We scale the embedding vectors to unit $L_2$ norm for calculating the Euclidean distance and to unit $L_1$ norm for calculating the Manhattan distance. For the Colors dataset as well, we compare the average distance to other primitives before and after adding
|
| 218 |
+
|
| 219 |
+

|
| 220 |
+
(a) Other Distributions
|
| 221 |
+
|
| 222 |
+

|
| 223 |
+
(b) Uniform Distribution
|
| 224 |
+
Figure 11: Measuring the generalization performance of LSTM on different types of train set distributions of the SCAN dataset.
|
| 225 |
+
|
| 226 |
+
primitives to the training data. We again find that as we increase the number of example primitives in the training set, the embedding of the isolated primitive ('zup') gets closer to the example primitives (refer to Fig. 10) in terms of Euclidean, Manhattan and Cosine Distances.
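The distance computations can be sketched in pure Python (in practice the vectors would be the model's learned embeddings):

```python
import math

def normalized(v, p):
    """Scale v to unit L_p norm."""
    norm = sum(abs(x) ** p for x in v) ** (1.0 / p)
    return [x / norm for x in v]

def euclidean(u, v):
    u, v = normalized(u, 2), normalized(v, 2)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def manhattan(u, v):
    u, v = normalized(u, 1), normalized(v, 1)
    return sum(abs(a - b) for a, b in zip(u, v))

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return 1.0 - dot / (math.sqrt(sum(a * a for a in u)) *
                        math.sqrt(sum(b * b for b in v)))

print(round(euclidean([3.0, 0.0], [0.0, 4.0]), 4))  # 1.4142
```

Normalizing first makes the Euclidean and Manhattan distances depend only on embedding direction, so they are comparable across primitives with different norms.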
|
| 227 |
+
|
| 228 |
+
We additionally show the t-SNE plots of the learned embeddings for the LSTM model on the Colors dataset (Fig. 9).
|
| 229 |
+
|
| 230 |
+
# E.2 Impact of Training Distributions
|
| 231 |
+
|
| 232 |
+
In Section 3.1, we showed results of the Transformer model on various train set distributions of the SCAN dataset. We also experimented with the LSTM model, the results of which can be found in Fig. 11. We see the same trend as we saw for Transformers.
|
| 233 |
+
|
| 234 |
+

|
| 235 |
+
Figure 12: Measuring the generalization performance of an LSTM of varying capacity across increasing number of primitives in the Colors train set.
|
| 236 |
+
|
| 237 |
+

|
| 238 |
+
Figure 13: Generalization performance on SCAN across different runs with random seeds.
|
| 239 |
+
|
| 240 |
+
# E.3 Impact of Model Capacity
|
| 241 |
+
|
| 242 |
+
In Section 3.2, we showed results for Transformers of varying sizes trained on datasets with different numbers of example primitives. We also experimented with the LSTM model; the results on the Colors dataset can be found in Fig. 12. We see the same trend as we saw for Transformers.
|
| 243 |
+
|
| 244 |
+
# E.4 Variance Across Different Runs
|
| 245 |
+
|
| 246 |
+
We plot the generalization accuracies of the Transformer and LSTM models on the SCAN and Colors datasets over 5 runs with different random seeds in Fig. 13-14. Both models display a high degree of variance in generalization performance on both datasets. Interestingly, the variance decreases as the number of primitives increases.
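Reporting then reduces to summary statistics over the seeded runs; the accuracies below are hypothetical placeholders for one configuration's 5 runs:

```python
import statistics

# Hypothetical per-seed generalization accuracies for one configuration.
accuracies = [0.62, 0.10, 0.95, 0.48, 0.77]
mean = statistics.mean(accuracies)
std = statistics.stdev(accuracies)  # sample standard deviation over the 5 runs
print(f"accuracy: {mean:.3f} +/- {std:.3f}")
```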
|
| 247 |
+
|
| 248 |
+
# E.5 Evaluation on Multiple Isolated Primitives
|
| 249 |
+
|
| 250 |
+
Our results are valid not just when there is a single isolated primitive, but even when there are multiple isolated primitives that are used compositionally at test time. While we believe that this holds trivially due to the symmetry of the setup, for completeness, we provide empirical evidence. We consider the setting on SCAN in which the train set has a total of
|
| 251 |
+
|
| 252 |
+

|
| 253 |
+
Figure 14: Generalization performance on Colors across different runs with random seeds.
|
| 254 |
+
|
| 255 |
+
100 example primitives uniformly distributed. To this train set, in addition to the primitive definition of 'jump' (i.e., 'jump $\rightarrow$ JUMP'), we add 9 other primitive definitions of newly introduced isolated primitives. Thus, while the size of the train set in this setting was 13185, the size of the new train set is 13194. We then extract templates from the original SCAN test set and exhaustively populate these templates with the 10 isolated primitives. Hence, while the size of the original test set was 7706, the size of the new test set is 77060.
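Template extraction and population can be sketched as simple string substitution; `new0` through `new8` stand in for the 9 newly introduced isolated primitives:

```python
def make_templates(test_inputs, isolated="jump"):
    """Mask the isolated primitive in each test command to get a template."""
    return [cmd.replace(isolated, "{prim}") for cmd in test_inputs]

def populate(templates, primitives):
    """Exhaustively fill every template with every isolated primitive."""
    return [t.format(prim=p) for t in templates for p in primitives]

templates = make_templates(["jump twice", "walk after jump"])
isolated_prims = ["jump"] + [f"new{i}" for i in range(9)]
new_test = populate(templates, isolated_prims)
print(len(new_test))  # 2 templates x 10 primitives = 20
```

The same 10x blow-up takes the 7,706 original test commands to the 77,060 reported above.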
|
| 256 |
+
|
| 257 |
+
We evaluated Transformers on this data. The best model achieved $94.5\%$ accuracy on the complete test set, thereby showing that our methodology and results are valid even when there are multiple isolated primitives in the dataset at the same time.
|
| 258 |
+
|
| 259 |
+
# F A Note on Other Data Augmentation Methods
|
| 260 |
+
|
| 261 |
+
Applying data augmentation methods such as GECA (Andreas, 2020) on SCAN leads to the addition of training examples in which the input sentences are compositions of the isolated primitive 'jump'. This breaks the systematicity of the setup. While such automatic data augmentation approaches are important resources for enabling compositional generalization, a model that performs well on this modified split cannot be said to generalize compositionally.
|
| 262 |
+
|
| 263 |
+
Shi et al. (2021) proposed a data augmentation method based on the theory of meaningful learning. Similar to our work, they also augment the train set by adding more primitives (e.g. 'jump_0', 'jump_1', ..., 'jump_n'). However, their setup differs fundamentally from ours: the new primitives that they add to the train set are all still mapped to the output token of an example primitive 'jump', which is 'JUMP' (i.e. 'jump_0 $\rightarrow$ JUMP', ..., 'jump_n $\rightarrow$ JUMP'). Their train set has examples showing compositions of 'jump' while their test set evaluates novel compositions of the newly added primitives. We argue that their setup cannot be considered one-shot primitive generalization since the model can see the output token 'JUMP' in composition with other words. We claim that this familiarity with the output token enables a model to generalize well on the test data even if the newly added primitives are only presented one-shot in the train set. Indeed, Lake and Baroni (2018) also suggested that models do well on the 'turn left' split of SCAN because the train set contains many examples in which the output token 'LTURN' is used compositionally.
|
| 266 |
+
|
| 267 |
+
To validate our claim, we propose a simple experiment. In the original SCAN 'add jump' split, we map 'jump $\rightarrow$ WALK' instead of 'jump $\rightarrow$ JUMP' for all examples (the primitive definition as well as compositional sentences) in both the train and test sets. In this setup, even though the input word 'jump' is seen only once at train time, its mapping 'WALK' is used compositionally in many examples. On evaluating a Transformer model on this split, we found that it achieves near-perfect accuracy. This shows that providing compositional examples with the output token of the isolated primitive not only breaks systematicity, but is also the reason behind the high performance of models in that setting.
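The control split can be sketched as a token remapping over input-output pairs:

```python
def remap_outputs(examples, old="JUMP", new="WALK"):
    """Replace the isolated primitive's output token in every example."""
    return [(inp, [new if tok == old else tok for tok in out])
            for inp, out in examples]

# Toy examples in the SCAN format (input command, output action sequence).
train = [("jump", ["JUMP"]), ("jump twice", ["JUMP", "JUMP"]),
         ("walk", ["WALK"])]
print(remap_outputs(train))
# [('jump', ['WALK']), ('jump twice', ['WALK', 'WALK']), ('walk', ['WALK'])]
```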
|
revisitingthecompositionalgeneralizationabilitiesofneuralsequencemodels/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:87f05bd70dfbf4643fbf6d1d73ce59d0e4d5e81ada32a1098513ee6b8edbc0c7
|
| 3 |
+
size 603129
|
revisitingthecompositionalgeneralizationabilitiesofneuralsequencemodels/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:aa1c02bc08c89ea1d164e084f91c5eccd8f47dc1e04840275c3b288db6d2a52c
|
| 3 |
+
size 304866
|
rewardingsemanticsimilarityunderoptimizedalignmentsforamrtotextgeneration/fce9ff67-cd70-49d0-bdaf-89916b6ab5bc_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b09d8bea6a45d89750e6db7f1303dbd079681a5b068657837ebb35f4a9525515
|
| 3 |
+
size 41091
|
rewardingsemanticsimilarityunderoptimizedalignmentsforamrtotextgeneration/fce9ff67-cd70-49d0-bdaf-89916b6ab5bc_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:b0cb0aa6cff55496f95c14ec4eb2f214bef57d3be8ee2bee5f89b9fa58c6ca48
|
| 3 |
+
size 51294
|
rewardingsemanticsimilarityunderoptimizedalignmentsforamrtotextgeneration/fce9ff67-cd70-49d0-bdaf-89916b6ab5bc_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:45a4b73fbf486ae1a39877a25dd430ca2588ece40fe97ffb454c46db6e072581
|
| 3 |
+
size 293332
|
rewardingsemanticsimilarityunderoptimizedalignmentsforamrtotextgeneration/full.md
ADDED
|
@@ -0,0 +1,196 @@
|
| 1 |
+
# Rewarding Semantic Similarity under Optimized Alignments for AMR-to-Text Generation
|
| 2 |
+
|
| 3 |
+
Lisa Jin and Daniel Gildea
|
| 4 |
+
Department of Computer Science
|
| 5 |
+
University of Rochester
|
| 6 |
+
Rochester, NY 14627
|
| 7 |
+
|
| 8 |
+
# Abstract
|
| 9 |
+
|
| 10 |
+
A common way to combat exposure bias is by applying scores from evaluation metrics as rewards in reinforcement learning (RL). Metrics leveraging contextualized embeddings appear more flexible than those that match n-grams and thus ideal as training rewards. Yet metrics such as BERTSCORE greedily align candidate and reference tokens, which can give system outputs excess credit relative to a reference. Past systems using such semantic similarity rewards further suffer from repetitive outputs and overfitting. To address these issues, we propose metrics that replace the greedy alignments in BERTSCORE with optimized ones. Our model optimizing discrete alignment metrics consistently outperforms cross-entropy and BLEU reward baselines on AMR-to-text generation. Additionally, we find that this model enjoys stable training relative to a non-RL setting.
|
| 11 |
+
|
| 12 |
+
# 1 Introduction
|
| 13 |
+
|
| 14 |
+
Automatic evaluation metrics often score natural language generation (NLG) system outputs based on how well they lexically align to human-annotated references. In tasks such as machine translation and summarization, these metrics may unfairly penalize outputs that express the correct semantics despite a lower n-gram overlap with reference strings. As a result, models overfitting to certain token-level patterns may dominate those generating more creatively (e.g., through synonyms or varied sentence structure).
|
| 15 |
+
|
| 16 |
+
NLG systems are typically trained to maximize likelihood of a single set of references. Conditioning models on gold prefixes shields them from their own predictions during training—an issue known as exposure bias. Adding reinforcement learning (RL) objectives (Ranzato et al., 2016; Edunov et al., 2018) can aid exploration by giving a model feedback on sequences sampled from its own distribution. However, it is common practice to use automatic evaluation scores like BLEU (Papineni
|
| 17 |
+
|
| 18 |
+
et al., 2002) and ROUGE (Lin and Hovy, 2002) as sequence-level rewards. This results in the same lack of semantic signal described earlier.
|
| 19 |
+
|
| 20 |
+
Instead of hinging evaluation on hard n-gram overlap, recent metrics (Zhang et al., 2019; Zhao et al., 2019) rely on vector similarity between contextualized subword embeddings to make more semantically faithful judgments. BERTSCORE, in particular $F_{\mathrm{BERT}}$ , computes a token-level F1 score based on greedy alignment of similar embeddings. With their strength in offline evaluation, it is natural to ask how these embeddings-based metrics can help provide more realistic training feedback.
|
| 21 |
+
|
| 22 |
+
Past approaches to train models with semantic similarity scores include both non-differentiable and differentiable objectives. Wieting et al. (2019) separately train paraphrastic sentence embeddings that provide semantic similarity rewards to a neural machine translation (NMT) system. Rewards were included in a mixed minimum risk and maximum likelihood training phase. Besides an embedding training overhead, the model needed a length penalty term to limit repetitive outputs. Li et al. (2019) adopt a similar fine-tuning approach using an RL objective with $F_{\mathrm{BERT}}$ for abstractive summarization. While their models were less repetitive, their news domain corpora may have been a natural match for BERT embeddings. Finally, Jauregi Unanue et al. (2021) also propose to optimize $F_{\mathrm{BERT}}$ but with fully differentiable training objectives in NMT. Yet their models overfit after only a few epochs and scored lower in BLEU at the cost of higher $F_{\mathrm{BERT}}$ . We hypothesize that metrics employing external pretrained vectors may suffer from domain mismatch with downstream data. This can hurt the accuracy of semantic similarity scores computed during training.
|
| 23 |
+
|
| 24 |
+
In this work, we focus on text generation from Abstract Meaning Representations (AMRs, Banarescu et al., 2013), sentence-level semantic graphs that are rooted, directed, and acyclic. This
|
| 25 |
+
|
| 26 |
+
task's models may especially benefit from an emphasis on semantic rather than lexical similarity. It also provides a challenging setting to evaluate overfitting given the relatively small corpus size.
|
| 27 |
+
|
| 28 |
+
In our analysis of $F_{\mathrm{BERT}}$ rewards, we note that $F_{\mathrm{BERT}}$ could worsen repetition and incomplete outputs in NLG systems. Due to its greedy token alignment, $F_{\mathrm{BERT}}$ precision may assign extra credit to a reference token 'retrieved' multiple times. In response, we contribute the following.
- We introduce metrics that apply discrete and continuous alignments to BERTSCORE, mitigating the pitfalls of greedy alignment.
- For text generation from AMR, we are the first to train on RL objectives with embeddings-based evaluation metrics.
- As RL rewards, we compute BERTSCORE-based metrics on a model's own token representations rather than BERT embeddings. This is more memory-efficient and does not overfit relative to pure cross-entropy training.
# 2 Greedy Token Alignment
The main insight behind BERTSCORE and related metrics is to align hypothesis and reference tokens using their pairwise vector similarity scores. These alignments are later used to weight the contribution of token-level similarity scores towards a final sequence-level score. Concretely, given $(\hat{\mathbf{y}}_1,\dots ,\hat{\mathbf{y}}_m)$ and $(\mathbf{y}_1,\dots ,\mathbf{y}_k)$ hypothesis and reference token embeddings, precision in $F_{\mathrm{BERT}}$ is

$$
P_{\mathrm{BERT}} = \frac{1}{m} \sum_{\hat{y}_i \in \hat{y}} \max_{y_j \in y} \cos(\hat{\mathbf{y}}_i, \mathbf{y}_j),
$$

where $\cos (\hat{\mathbf{y}},\mathbf{y}) = \hat{\mathbf{y}}^{\top}\mathbf{y} / \| \hat{\mathbf{y}}\| \| \mathbf{y}\|$ denotes cosine similarity. Each hypothesis token $\hat{y}_i$ is greedily aligned to the reference token $y_{j}$ with the highest corresponding embedding cosine similarity. Unlike in BLEU, $P_{\mathrm{BERT}}$ does not clip the number of times $\hat{y}_i$ can align to a unique $y_{j}$ by its count in $y$. As such, a hypothesis gets excess credit for repeating a reference token beyond this count. While the authors claim greedy alignment has little effect on BERTSCORE's evaluation performance, greedy-alignment rewards perform poorly relative to metrics based on optimized alignments in our experiments.
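
The excess-credit problem is easy to reproduce. Below is a minimal NumPy sketch (random vectors stand in for contextualized embeddings; all names are illustrative): a hypothesis that simply repeats one reference token attains the same perfect $P_{\mathrm{BERT}}$ as one that covers every reference token.

```python
import numpy as np

def p_bert(hyp: np.ndarray, ref: np.ndarray) -> float:
    """Greedy BERTScore-style precision: each hypothesis embedding is
    aligned to its most similar reference embedding."""
    hyp = hyp / np.linalg.norm(hyp, axis=1, keepdims=True)
    ref = ref / np.linalg.norm(ref, axis=1, keepdims=True)
    sim = hyp @ ref.T                       # pairwise cosine similarities
    return float(sim.max(axis=1).mean())    # greedy max over reference tokens

rng = np.random.default_rng(0)
ref = rng.normal(size=(4, 8))               # 4 toy reference "embeddings"
hyp_good = ref.copy()                       # covers all 4 reference tokens
hyp_repeat = np.stack([ref[0]] * 4)         # repeats one token 4 times

# Both hypotheses receive the same perfect precision:
# repetition beyond a token's reference count is not penalized.
print(p_bert(hyp_good, ref), p_bert(hyp_repeat, ref))
```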
# 3 Optimized Token Alignment
Aligning tokens between hypothesis and reference can be seen as an assignment problem, where a token pair $(\hat{y}_i, y_j)$ is highly weighted if it incurs low cost (i.e., distance).
Here, we describe discrete token matching (one-to-one) and soft alignment (one-to-many). For the latter, we extract alignments from the earth mover's distance (EMD; Villani, 2009; Peyré and Cuturi, 2019) transport matrix. We weight pairwise token similarities as in $F_{\mathrm{BERT}}$ using each of these two alignments, yielding the metrics $F_{\mathrm{DISC}}$ and $F_{\mathrm{CONT}}$.
# 3.1 Discrete word matching
To avoid the issues with greedy alignment in $P_{\mathrm{BERT}}$ , we can extract one-to-one alignments between the two sequences. Let $C \in \mathbb{R}^{m \times k}$ denote the pairwise cosine distance matrix such that $C_{ij} = 1 - \cos(\hat{\mathbf{y}}_i, \mathbf{y}_j)$ . For notational clarity, let $\widetilde{C} = 1 - C$ . We wish to find alignments

$$
T^{d} = \underset{T \in \{0,1\}^{m \times k}}{\arg\min} \sum_{i=1}^{m} \sum_{j=1}^{k} T_{ij} C_{ij}, \tag{1}
$$

such that no element in $\mathbf{h} = T\mathbf{1}_k$ and $\mathbf{r} = T^{\top}\mathbf{1}_m$ exceeds one. In other words, each $\hat{y}_i$ can align to at most one $y_j$ (exactly one when $m = k$ ), and vice versa. This linear sum assignment problem can be solved in low-order polynomial time (Crouse, 2016), making it suitable for use during training.
Metric The updated precision is found as

$$
P_{\mathrm{DISC}} = \frac{1}{m} \sum_{i=1}^{m} \sum_{j=1}^{k} T^{d}_{ij} \widetilde{C}_{ij}. \tag{2}
$$

Recall $R_{\mathrm{DISC}}$ takes an analogous form and is combined with $P_{\mathrm{DISC}}$ to produce an F1 score, $F_{\mathrm{DISC}}$ .
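
As a sketch of how $F_{\mathrm{DISC}}$ might be computed in practice, the snippet below solves Eq. 1 with SciPy's `linear_sum_assignment` (the paper states that SciPy is used for Eq. 1; the toy embeddings and helper name are illustrative). The one-to-one constraint now caps the credit a repeated token can collect:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def f_disc(hyp: np.ndarray, ref: np.ndarray) -> float:
    """F1 from one-to-one token alignments (Eqs. 1-2): a linear sum
    assignment over the cosine-distance matrix C = 1 - sim."""
    hyp = hyp / np.linalg.norm(hyp, axis=1, keepdims=True)
    ref = ref / np.linalg.norm(ref, axis=1, keepdims=True)
    sim = hyp @ ref.T                        # this is C-tilde = 1 - C
    rows, cols = linear_sum_assignment(1.0 - sim)   # minimize total cost
    matched = sim[rows, cols].sum()          # similarity of aligned pairs
    p = matched / hyp.shape[0]               # P_DISC
    r = matched / ref.shape[0]               # analogous R_DISC
    return float(2 * p * r / (p + r))

rng = np.random.default_rng(0)
ref = rng.random(size=(4, 8))                # 4 toy reference "embeddings"
full_score = f_disc(ref.copy(), ref)         # one-to-one cover of all tokens
rep_score = f_disc(np.stack([ref[0]] * 4), ref)  # repeated token: capped credit
print(full_score, rep_score)
```

Unlike the greedy precision, the repeated-token hypothesis can no longer align all four of its tokens to the same reference position, so its score drops below that of the full cover.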
# 3.2 Continuous word alignment
We also experiment with soft alignments, where weights in $T$ are continuous. In the case of $P_{\mathrm{BERT}}$ , one-to-many alignments between each hypothesis token $\hat{y}_i$ and those in $\{y_j\}_{j\in [k]}$ are permitted.
Inspired by work applying EMD to semantic text similarity (Kusner et al., 2015; Clark et al., 2019), we frame alignment as minimizing the transportation cost between token embeddings from the hypothesis and reference distributions. The amount of token-level mass to transport between the two distributions is $\mathbf{h}$ and $\mathbf{r}$, respectively. Instead of assigning IDF as the mass per token (Zhao et al., 2019), we use the norm of its embedding (i.e., $\|\mathbf{y}\|$; Yokoi et al., 2020) for simplicity.
The EMD, or optimal transport, problem is

$$
T^{c} = \underset{T \in \mathbb{R}_{\geq 0}^{m \times k}}{\arg\min} \sum_{i=1}^{m} \sum_{j=1}^{k} T_{ij} C_{ij}, \quad \text{s.t. } \mathbf{h} = T\mathbf{1}_k, \; \mathbf{r} = T^{\top}\mathbf{1}_m. \tag{3}
$$

Intuitively, if we view $T_{ij}$ as the joint probability of aligning $\hat{y}_i$ with $y_j$ , the row and column sums are marginals (Cuturi, 2013).
Metric To compute $F_{\mathrm{CONT}}$ , we normalize the alignment weights such that the rows of $T$ sum to one for precision, and the columns for recall.

$$
P_{\mathrm{CONT}} = \frac{1}{m} \sum_{i=1}^{m} \frac{1}{h_i} \sum_{j=1}^{k} T^{c}_{ij} \widetilde{C}_{ij}, \tag{4}
$$

$$
R_{\mathrm{CONT}} = \frac{1}{k} \sum_{j=1}^{k} \frac{1}{r_j} \sum_{i=1}^{m} T^{c}_{ij} \widetilde{C}_{ij}. \tag{5}
$$

# 4 Semantic Similarity Rewards
We propose to fine-tune on our optimized F1 metrics, applying a weighted average of cross-entropy and RL objectives. Given source sequence $x$ (e.g., a linearized AMR), the former is computed as

$$
\mathcal{L}_e = -\sum_{i=1}^{k} \log p(y_i \mid y_{<i}, x).
$$

To encourage close evaluation scores between sampled $\bar{y}$ and reference $y$ , the RL objective is

$$
\mathcal{L}_r = \left(\Delta(\bar{y}_g, y) - \Delta(\bar{y}, y)\right) \sum_{i=1}^{k} \log p(\bar{y}_i \mid \bar{y}_{<i}, x),
$$

where $\Delta$ is the chosen evaluation metric and $\bar{y}_g$ is a greedily decoded baseline relative to $\bar{y}$ . This baseline helps reduce variance in REINFORCE (Williams, 1992). The combined cross-entropy and RL loss is

$$
\mathcal{L} = \lambda \mathcal{L}_r + (1 - \lambda) \mathcal{L}_e,
$$

where $\lambda$ is empirically set to 0.3.
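
Schematically, once the per-token log-probabilities and metric scores are computed, the combined objective reduces to a few lines. The sketch below is a hypothetical illustration (`mixed_loss` and its arguments are invented names, not the paper's implementation):

```python
def mixed_loss(logp_ref, logp_sample, delta_greedy, delta_sample, lam=0.3):
    """Weighted cross-entropy + self-critical RL loss of Sec. 4.
    logp_ref:    per-token log p(y_i | y_<i, x) for the reference y,
    logp_sample: per-token log p for the sampled output y-bar,
    delta_*:     metric scores Delta of the greedy baseline / the sample."""
    loss_xent = -sum(logp_ref)                                  # L_e
    # Baseline subtraction: a sample scoring above the greedy decode
    # gets a negative coefficient, so minimizing raises its likelihood.
    loss_rl = (delta_greedy - delta_sample) * sum(logp_sample)  # L_r
    return lam * loss_rl + (1 - lam) * loss_xent                # L

# A sample beating the greedy baseline (0.8 > 0.6) is reinforced.
print(mixed_loss([-0.2, -0.1], [-0.5, -0.4], delta_greedy=0.6, delta_sample=0.8))
```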
# 5 Experiments
We examine the performance of our proposed metrics as RL rewards on AMR-to-text generation.
| | BLEU | METEOR | CHRF | BLEURT |
| --- | --- | --- | --- | --- |
| XENT | 36.37 | 39.94 | 65.68 | 56.30 |
| BL-R | 37.06 | 40.30 | 66.19 | 56.08 |
| $F_{\mathrm{BERT}}$ | 36.06 | 39.85 | 65.23 | 55.45 |
| $F_{\mathrm{CONT}}$ | 36.91 | 40.34 | 66.07 | 55.96 |
| $F_{\mathrm{DISC}}$ | 37.65 | 40.61 | 66.55 | 57.01 |

Table 1: Results on the LDC2017T10 test set.
Figure 1: Development set BLEU during fine-tuning.
# 5.1 Setup
Dataset The LDC2017T10 dataset that we experiment on contains $\sim 36\mathrm{K}$ training and $\sim 1.4\mathrm{K}$ each of development and test AMR-sentence pairs. To leverage strong pre-trained language models, the AMRs are linearized as in Ribeiro et al. (2021).
Evaluation We report results in terms of BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), CHRF (Popović, 2015), and BLEURT (Sellam et al., 2020). Only the latter metric makes use of pre-trained contextualized embeddings.
Baselines For all experiments, we fine-tune the small capacity T5 model (Raffel et al., 2020) from Ribeiro et al. (2021). The model has 60M parameters and features a Transformer-based encoder and decoder. We compare our $F_{\mathrm{DISC}}$ and $F_{\mathrm{CONT}}$ metrics for RL-based training against three baseline approaches. XENT is a pure cross-entropy objective. For RL-based approaches, we include a BLEU reward (BL-R) and one with $F_{\mathrm{BERT}}$ —computed on the lowest level token embeddings in T5. $^{1}$ The $\lambda$ scaling factor for the RL objective is set to 0.3 across all RL-based experiments.
Implementation details Adam (Kingma and Ba, 2015) is used to optimize the model with an initial
<table><tr><td rowspan="4">(1)</td><td>REF</td><td>There are 12 teams totally participating in the competition.</td></tr><tr><td>XENT</td><td>The competition was part of a total of 12 teams.</td></tr><tr><td>FBERT</td><td>The competition is part of a total of 12 teams.</td></tr><tr><td>FDISC</td><td>The total of 12 teams participated in competition.</td></tr><tr><td rowspan="4">(2)</td><td>REF</td><td>Raymond zilinskas stated that in the worst case the bacteria would be defrosted from minus 70 degrees and it would be a real mess to clean up afterward because it would not be known for certain whether all the bacteria was dead.</td></tr><tr><td>XENT</td><td>Raymond Zilinskas stated that the bacterium was defrost in the worst case and that afterward cleaning up was a real mess because there is certainly no known cause of death for all the bacteriums.</td></tr><tr><td>FBERT</td><td>Raymond Zilinskas stated that the bacterium was defrosting in the worst case and the afterward cleaning up was a real mess because the bacterium was certainly not known to die of all the bacteriums.</td></tr><tr><td>FDISC</td><td>Raymond Zilinskas stated that the bacterium was defrost in the worst case and the afterward cleaning up was a real mess because the bacterium was certainly not known to have all died.</td></tr></table>
Table 2: Model-generated examples from three of the five explored systems.
learning rate of $1 \cdot 10^{-4}$ and a batch size of 16. Following Ribeiro et al. (2021), we use a linearly decreasing schedule for the learning rate and no warm-up. Since Ribeiro et al. (2021) do not release their training methodology, we train until validation BLEU does not increase for three epochs—an approach found in previous work fine-tuning T5 for AMR-to-text generation (Hoyle et al., 2021). We use $\mathrm{SciPy}^2$ and the Python Optimal Transport library to solve Eqs. 1 and 3.
# 5.2 Results
Table 1 shows that $F_{\mathrm{DISC}}$ achieves the highest scores on all metrics, surpassing $F_{\mathrm{CONT}}$ as well. It scores higher than XENT by 1.28 BLEU and 0.71 BLEURT points. Although BL-R was specially trained to optimize BLEU, $F_{\mathrm{DISC}}$ still outperforms it by over half a point on that metric.
There is a clear hierarchy among the approaches based on F1 score, with $F_{\mathrm{DISC}}$ above $F_{\mathrm{CONT}}$ , followed by $F_{\mathrm{BERT}}$ at the bottom. This dynamic suggests that the optimized alignments may provide higher quality reward signals during training.
We note that although $F_{\mathrm{CONT}}$ performed comparably to BL-R, it could exploit tensor operations and was far faster to compute than BLEU. On the other hand, $F_{\mathrm{BERT}}$ achieved significantly lower scores than BL-R. As noted in §2, perhaps the clipped precision counts in BLEU gave BL-R an advantage over the greedy nature of $F_{\mathrm{BERT}}$ .
# 5.3 Analysis
Training stability As shown in Fig. 1, $F_{\mathrm{DISC}}$ continues to improve on validation BLEU long after XENT overfits at epoch 18. This runs counter to the expectation of unstable RL-based training.
It is also interesting that while $F_{\mathrm{CONT}}$ validation performance looks fairly low relative to BL-R, it achieves similar scores at test time. This may be due to irrelevant differences between the validation and test sets, however.
Manual inspection Table 2 lists a few examples of model outputs for detailed analysis. In example (1), both XENT and $F_{\mathrm{BERT}}$ make the error of predicting "part" instead of "participating". Only $F_{\mathrm{DISC}}$ approaches the meaning of the reference. This may be a side-effect of weighting lexical over semantic similarity in the former two systems. In (2), $F_{\mathrm{BERT}}$ repeats the word "bacterium", while XENT takes an anthropomorphic view of the bacterium. The repetition may be a result of $F_{\mathrm{BERT}}$ rewarding multiple instances of the same token by mistake during greedy alignment.
# 6 Conclusion
This paper proposes new F1 score metrics based on optimized rather than greedy alignments between predicted and reference tokens. Instead of letting hypotheses align to reference tokens without regard to their frequencies (and vice versa), we extract alignments as a constrained optimization problem. In the discrete case, we treat alignment as a matching problem between hypothesis and reference tokens. In the continuous case, we find alignments that minimize earth mover's distance between the two token embedding distributions.
We apply new metrics as rewards during RL-based training for AMR-to-text generation, with $F_{\mathrm{DISC}}$ outperforming both a cross-entropy baseline and one optimizing BLEU rewards. Despite being computed on a downstream model's token embeddings, the metrics still provide informative rewards during training without signs of overfitting.
Acknowledgments Research supported by NSF awards IIS-1813823 and CCF-1934962.
# References
Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffith, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178-186.
Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72.
Elizabeth Clark, Asli Celikyilmaz, and Noah A. Smith. 2019. Sentence mover's similarity: Automatic evaluation for multi-sentence texts. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2748-2760.
David F. Crouse. 2016. On implementing 2D rectangular assignment algorithms. IEEE Transactions on Aerospace and Electronic Systems, 52(4):1679-1696.
Marco Cuturi. 2013. Sinkhorn distances: Lightspeed computation of optimal transport. Advances in Neural Information Processing Systems, 26:2292-2300.
Sergey Edunov, Myle Ott, Michael Auli, David Grangier, and Marc'Aurelio Ranzato. 2018. Classical structured prediction losses for sequence to sequence learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 355-364.
Alexander Miserlis Hoyle, Ana Marasovic, and Noah A Smith. 2021. Promoting graph awareness in linearized graph-to-text generation. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 944-956.
Inigo Jauregi Unanue, Jacob Parnell, and Massimo Piccardi. 2021. BERTune: Fine-tuning neural machine translation with BERTScore. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 915-924.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR-15).
Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From word embeddings to document distances. In Proceedings of the 32nd International Conference on Machine Learning, pages 957-966.
Siyao Li, Deren Lei, Pengda Qin, and William Yang Wang. 2019. Deep reinforcement learning with distributional semantic rewards for abstractive summarization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6038-6044.
Chin-Yew Lin and Eduard Hovy. 2002. Manual and automatic evaluation of summaries. In Proceedings of the ACL-02 Workshop on Automatic Summarization, pages 45-51.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.
Gabriel Peyré and Marco Cuturi. 2019. Computational optimal transport: With applications to data science. Foundations and Trends in Machine Learning, 11(5-6):355-607.
Maja Popović. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392–395.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In 4th International Conference on Learning Representations.
Leonardo F. R. Ribeiro, Martin Schmitt, Hinrich Schütze, and Iryna Gurevych. 2021. Investigating pretrained language models for graph-to-text generation. In Proceedings of the 3rd Workshop on Natural Language Processing for Conversational AI, pages 211-227.
Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881-7892.
Cédric Villani. 2009. Optimal Transport: Old and New. Springer, Berlin.
John Wieting, Taylor Berg-Kirkpatrick, Kevin Gimpel, and Graham Neubig. 2019. Beyond BLEU: Training neural machine translation with semantic similarity. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4344-4355.
Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256.
Sho Yokoi, Ryo Takahashi, Reina Akama, Jun Suzuki, and Kentaro Inui. 2020. Word rotator's distance. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2944-2960.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. In International Conference on Learning Representations.
Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M Meyer, and Steffen Eger. 2019. MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 563-578.