| { |
| "File Number": "1032", |
| "Title": "Going Beyond Sentence Embeddings: A Token-Level Matching Algorithm for Calculating Semantic Textual Similarity", |
| "abstractText": "Semantic Textual Similarity (STS) measures the degree to which the underlying semantics of paired sentences are equivalent. State-of-the-art methods for the STS task use language models to encode sentences into embeddings. However, these embeddings are limited in representing semantics because they mix all the semantic information together in fixed-length vectors, which makes fine-grained semantics difficult to recover and limits explainability. This paper presents a token-level matching inference algorithm, which can be applied on top of any language model to improve its performance on the STS task. Our method calculates pairwise token-level similarity and token matching scores, and then aggregates them with pretrained token weights to produce sentence similarity. Experimental results on seven STS datasets show that our method improves the performance of almost all language models, with up to 12.7% gain in Spearman’s correlation. We also demonstrate that our method is highly explainable and computationally efficient.", |
| "1 Introduction": "Measuring the similarity between two sentences is an important task in many natural language processing (NLP) applications. This makes Semantic Textual Similarity (STS) a crucial preliminary step in various domains, such as information retrieval (Wang et al., 2020), machine translation (Castillo and Estrella, 2012), plagiarism detection (Foltỳnek et al., 2019), semantic search (Mangold, 2007), and conversational systems (Santos et al., 2020).\nLarge pretrained language models (Devlin et al., 2018; Liu et al., 2019) have achieved the state-of-the-art performance on the STS task (Reimers and Gurevych, 2019; Gao et al., 2021; Chuang et al., 2022). These approaches typically use language models to encode input sentences into embeddings and then calculate STS using similarity metrics such as the cosine function. However, sentence embeddings have limitations in representing sentences, as all the information of a sentence is aggregated and mixed together in a fixed-length embedding. This problem is especially pronounced for the STS task, which requires fine-grained, low-level semantic understanding and comparison (Majumder et al., 2016). As a result, methods based on sentence embeddings are often difficult to train well and lack explainability for their predictions.\nGoing beyond sentence embeddings, we propose a token-level matching algorithm for STS. Our algorithm works in the inference stage, so it can be applied on top of any trained language model to improve its performance. Specifically, given a trained language model (also called the base model), we use it to generate token embeddings for the two input sentences and calculate their pairwise token similarity. We then design a novel scoring function to calculate the matching score for each token. The sentence similarity score is calculated by averaging all the token matching scores with token weights, which are learned in an unsupervised manner from a large corpus. 
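For reference, the conventional sentence-embedding approach mentioned above reduces STS to a single cosine between pooled vectors. A minimal sketch (ours, not the paper's code; the embedding vectors are assumed to be precomputed by some encoder):

```python
import numpy as np

def baseline_sts(e1, e2):
    """Conventional STS: cosine similarity between two fixed-length
    sentence embeddings, which collapses all token-level detail."""
    return float(e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2)))
```

This single scalar comparison is exactly what the token-level algorithm below replaces.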
Our method captures fine-grained, token-level information, which is more indicative, robust, and explainable than sentence embeddings.\nWe conducted experiments on seven standard STS datasets using six language models and their variants as base models. Our method improves the performance of almost all existing language models, especially “poor” ones (up to 12.7% improvement in Spearman’s correlation). Specifically, it improves SimCSE by 0.8% to 2.2%, and improves ESimCSE, the current state-of-the-art model on the STS task, by 0.6% to 1.2%. We also demonstrate the explainability of our model by identifying the semantically similar parts between two input sentences.", |
| "2 Related Work": "Existing work on STS can be broadly divided into two categories: lexicon-based and semantic-based. Lexicon-based approaches (Richardson and Smeaton, 1995; Niwattanakul et al., 2013; Opitz et al., 2021) calculate the correlation between the character streams of the two sentences being compared, which can be applied at the level of characters or words. Semantic-based approaches can be further divided into three categories: word-based methods (Wang et al., 2016), which treat a sentence as a list of words and compare the correlations between words; structure-based methods, which use language tools such as grammar (Lee et al., 2014), part-of-speech (Batanović and Bojić, 2015), and word order (Li et al., 2004) to process sentences and compare their structure; and vector-based methods (Reimers and Gurevych, 2019; Liu et al., 2021; Gao et al., 2021; Wu et al., 2021; Chuang et al., 2022), which describe each sentence as an embedding vector and have achieved the state-of-the-art performance on STS.\nOur method is conceptually similar to BERTScore (Zhang et al., 2019), a token-level evaluation metric for text generation. However, there are two significant differences between the two approaches: (1) BERTScore is an evaluation metric, while our method is an algorithm for calculating STS; (2) the key designs for the token matching score and token weights are also different.", |
| "3 The Proposed Method": "Given a pair of sentences s = ⟨t1, t2, · · · , t|s|⟩ and ŝ = ⟨t̂1, t̂2, · · · , t̂|ŝ|⟩, where ti (t̂i) is the i-th token of sentence s (ŝ), our goal is to learn a function f(s, ŝ) ∈ R that calculates the semantic similarity between s and ŝ.\nToken-Level Similarity Matrix We can calculate token embeddings for s and ŝ using any language model, including pretrained language models (Devlin et al., 2018; Liu et al., 2019) or language models specifically finetuned for the STS task (Li et al., 2020; Gao et al., 2021; Chuang et al., 2022). Given sentences s and ŝ, the language model generates the token embedding matrices X ∈ R|s|×d and X̂ ∈ R|ŝ|×d, where each row corresponds to a d-dimensional token embedding. The token-level similarity matrix for s and ŝ is then calculated as S = XX̂⊤, in which the entry Sij indicates the similarity between tokens ti and t̂j.\nToken Matching Score The token matching score measures the likelihood that a given token in one sentence can be matched to a token in the other sentence. This score takes into account two aspects: (1) Significance. Similar to BERTScore (Zhang et al., 2019), we match a token to its most similar token in the other sentence. For example, the significance score of ti ∈ s is sig(ti) = max_{t̂j∈ŝ} Sij. (2) Uniqueness. A high score for sig(ti) does not necessarily mean that ti can be matched to a specific token in ŝ; it may simply be that Sij is high for all t̂j ∈ ŝ. To measure how unique sig(ti) is, we define the uniqueness score of ti as uni(ti) = max_{t̂j∈ŝ} Sij − 2nd-max_{t̂j∈ŝ} Sij, i.e., the difference between the maximum and the second maximum value of row Si·. We provide an ablation study on the two parts in our experiments.\nThe token matching score is defined as the sum of the above two scores:\nS(ti) = sig(ti) + uni(ti) = 2 · max_{t̂j∈ŝ} Sij − 2nd-max_{t̂j∈ŝ} Sij. (1)\nSimilarly, for t̂j ∈ ŝ, we have S(t̂j) = 2 · max_{ti∈s} Sij − 2nd-max_{ti∈s} Sij. 
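A minimal sketch of the full inference pipeline (ours, not the authors' released code), assuming L2-normalized token embedding matrices from the base model and a per-token weight vector as introduced in the Token Weighting paragraph next; uniform weights illustrate the unweighted case:

```python
import numpy as np

def sts_score(X, X_hat, w, w_hat):
    """Token-level STS (Eqs. 1-2).

    X (|s| x d) and X_hat (|s_hat| x d): L2-normalized token embeddings,
    each with at least 2 rows (the 2nd-max term needs two candidates).
    w, w_hat: nonnegative per-token weights."""
    S = X @ X_hat.T                      # pairwise token similarity matrix
    # Eq. 1: score = 2*max - 2nd-max, over each row (tokens of s)
    # and each column (tokens of s_hat)
    rows = np.sort(S, axis=1)            # ascending: last two = 2nd-max, max
    cols = np.sort(S, axis=0)
    score_s = 2 * rows[:, -1] - rows[:, -2]
    score_s_hat = 2 * cols[-1, :] - cols[-2, :]
    # Eq. 2: weighted average of token scores, halved per sentence
    return (score_s @ w) / (2 * w.sum()) + (score_s_hat @ w_hat) / (2 * w_hat.sum())
```

Since the scores only need a sort per row and column of S, the overhead on top of a single encoder forward pass is small, consistent with the running time analysis below.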
Token Weighting Tokens typically have different levels of semantic importance. Previous work (Zhang et al., 2019) uses inverse document frequency (IDF) as token weights, as rare words can be more indicative than common words. However, in many cases, high-frequency words can be semantically important (e.g., “not”), while low-frequency words may be semantically unimportant (e.g., specific numbers). To address this mismatch between token importance and token frequency, we propose learning token weights from plain text.\nSpecifically, we choose unsupervised SimCSE (Gao et al., 2021) as the training model, which takes an input sentence and predicts itself with a contrastive objective, using dropout as noise. During the training stage of SimCSE, instead of using the last-layer embedding of the CLS token as the sentence embedding, we assign a trainable weight parameter wi to each token i in the vocabulary and calculate the weighted average Σ_{i∈s} wi ti as the sentence embedding of s, where ti is the last-layer embedding of token i ∈ s. In this way, the token weights w are trained together with the model parameters of SimCSE on a large unsupervised corpus, which we find to be more semantically precise than frequency-based token weights.\nThe final STS score for the input sentences (s, ŝ) is the weighted average of all token scores:\nf(s, ŝ) = Σ_{ti∈s} S(ti) w_{ti} / (2 Σ_{ti∈s} w_{ti}) + Σ_{t̂i∈ŝ} S(t̂i) w_{t̂i} / (2 Σ_{t̂i∈ŝ} w_{t̂i}). (2)", |
| "4.1 Evaluation Setup": "We evaluate our method on seven STS datasets: STS 2012-2016 (Agirre et al., 2012, 2013, 2014, 2015, 2016), STS Benchmark (Cer et al., 2017), and SICK-Relatedness (Marelli et al., 2014). Each dataset consists of sentence pairs and their corresponding ground-truth similarity scores. We use Spearman’s correlation to evaluate the predicted results of our method and all baseline methods on the test set. Baseline methods include Sentence-BERT (Reimers and Gurevych, 2019), ConSERT (Yan et al., 2021), Mirror-BERT (Liu et al., 2021), SimCSE (Gao et al., 2021), ESimCSE (Wu et al., 2021), and DiffCSE (Chuang et al., 2022). We use the pretrained models released by the authors as our base models, then compare the performance of our method with them, as shown in Table 2. The Sentence-BERT and ConSERT models were downloaded from https://github.com/yym6472/ConSERT, while the other pretrained models can be directly loaded by name using the HuggingFace API. We use the last hidden layer representation of the [CLS] token as the sentence embedding, because it performs much better than the representation after the pooling layer in almost all cases.", |
| "4.2 Main Result": "Table 1 shows the Spearman’s correlation results on the seven STS datasets. In each entry, the number before “/” is the result of the original model (using the embedding of the CLS token as the sentence embedding), while the number after “/” is the result of applying our method on top of the original model. The last column shows the average absolute gain of our method compared to the baseline method across all tasks.\nOur method improves the results for almost all models. The improvement is particularly significant if the original model does not perform well, e.g., Sentence-BERT (+5.2% and +4.9%) and ConSERT (+4.9% and +3.0%). From another perspective, our method can be seen as a universal booster for language models on the STS task. For example, it can improve the Spearman’s correlation of all base models to around 80 or even higher on the STS-B dataset, regardless of the original performance of the base model. This indicates that even “poor” language models can still generate high-quality token embeddings that preserve token similarity information very well. However, existing language models use only a single embedding to represent a sentence, which mixes all the information of the sentence together and makes it difficult for language models to be well-trained.", |
| "4.3 Ablation Study": "We investigate the impact of different token matching functions and token weights, the two key components of our method. The base model here is SimCSE-RoBERTabase, but the conclusions are similar for other base models. The results are reported in Table 3. For the token matching function, we find that the performance slightly drops when using only max and substantially drops when using only max − 2nd-max. This suggests that significance is more important in measuring token matching scores, while considering uniqueness further improves the performance. For token weights, we observe that IDF weights do not perform well and are even worse than the variant with no token weights. We also evaluate the variant of max + IDF weights, which is the same design as BERTScore (Zhang et al., 2019). Our model outperforms BERTScore by around 2% on both datasets.", |
| "4.4 Running Time Analysis": "We investigate the running time of our method. We set the base model as SimCSE-RoBERTabase or SimCSE-RoBERTalarge, and then run the original inference method and our inference method on STS-B dataset with batch size ranging from 4 to 128 on an Nvidia Tesla P40 GPU. The results, shown in Figure 1, indicate that our method only incurs an average time overhead of 12.9% and 9.1% on the two base models, respectively.", |
| "4.5 Case Study": "As a case study, we consider a sentence pair from the STS-B dataset: “a man is performing a card trick” and “a man is doing trick with play cards”, whose ground-truth similarity is at the highest level. The token similarity matrix for this pair is shown in Figure 2, with a dark/light blue background indicating the first/second highest scores in each row, and bold/black numbers indicating the first/second highest scores in each column.\nExactly matched tokens (“a”, “man”, “is”, and “trick”) receive the highest scores, indicated by dark green. Tokens that cannot be matched (“a”, “with”, and “play”) receive the lowest scores, indicated by light green. Tokens that are not exactly the same but are semantically equivalent (“performing”-“doing”, “card”-“cards”) receive scores in the middle. While using only max (i.e., significance) as the token matching score may produce a similar result, adding the term max − 2nd-max (i.e., uniqueness) improves the reliability and distinguishability of those scores. This is why our model performs slightly better than the max-only variant, as shown in Table 3.\nAdditionally, Figure 2 demonstrates that pretrained token weights are more accurate than IDF token weights. Some semantically unimportant tokens, such as “a”, “is”, and “with”, are given too much weight by the IDF method, which affects the overall accuracy of the prediction. As a result, the STS predicted using pretrained token weights (0.967) is also more accurate than that using IDF token weights (0.959).", |
| "5 Conclusion": "This paper presents a token-level matching algorithm for calculating STS between pairs of sentences. Unlike previous approaches that use pretrained language models to encode sentences into embeddings, our method calculates pairwise token similarities and then applies a token matching function to these similarities. The resulting scores are averaged with pretrained token weights to produce the final sentence similarity. Our model consistently improves the performance of existing language models and is also highly explainable, with minimal extra time overhead during inference.\nLimitations\nOur model does not follow existing sentence embedding models that encode sentences into embeddings. Therefore, one limitation of our method is that it is specifically designed for the STS task (or, more precisely, sentence comparison tasks) and cannot be easily transferred to other tasks, such as sentence classification.\nAdditionally, our approach incurs a slight extra time overhead of approximately 10%, which may be unacceptable for applications that require high time efficiency.\nOur method only takes into account the semantic comparison of individual tokens, rather than the meaning of combinations of tokens or phrases. A possible direction for future work is to incorporate compositional semantics, for example by grouping tokens into phrases and applying a similar phrase-level matching algorithm.", |
| "A For every submission:": "✓ A1. Did you describe the limitations of your work? Section 6\n✓ A2. Did you discuss any potential risks of your work? Section 6\n✓ A3. Do the abstract and introduction summarize the paper’s main claims? Sections 1 and 2\n✗ A4. Have you used AI writing assistants when working on this paper? Left blank.\nB ✗ Did you use or create scientific artifacts? Left blank.\nB1. Did you cite the creators of artifacts you used? No response.\nB2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response.\nB3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response.\nB4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response.\nB5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response.\nB6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. No response.\nC ✓ Did you run computational experiments? Section 4\n✓ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? Section 4\n✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Section 4\n✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Section 4\n✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Section 4\nD ✗ Did you use human annotators (e.g., crowdworkers) or research with human participants? Left blank.\nD1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response.\nD2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants’ demographic (e.g., country of residence)? No response.\nD3. Did you discuss whether and how consent was obtained from people whose data you’re using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response.\nD4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response.\nD5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.\nThe Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance." |
| } |