Add Batch 86244a9b-f9e1-4076-a4ad-cef07bdfc34f
This view is limited to 50 files because it contains too many changes. See raw diff.
- 3drpnet3drelativepositionawarenetworkfor3dvisualgrounding/8ce34976-f6dd-4c0e-b16c-725a08557f5d_content_list.json +3 -0
- 3drpnet3drelativepositionawarenetworkfor3dvisualgrounding/8ce34976-f6dd-4c0e-b16c-725a08557f5d_model.json +3 -0
- 3drpnet3drelativepositionawarenetworkfor3dvisualgrounding/8ce34976-f6dd-4c0e-b16c-725a08557f5d_origin.pdf +3 -0
- 3drpnet3drelativepositionawarenetworkfor3dvisualgrounding/full.md +399 -0
- 3drpnet3drelativepositionawarenetworkfor3dvisualgrounding/images.zip +3 -0
- 3drpnet3drelativepositionawarenetworkfor3dvisualgrounding/layout.json +3 -0
- 4and7bitlabelingforprojectiveandnonprojectivedependencytrees/f0dc7714-3e2d-4539-b5f1-b4652fcb70f7_content_list.json +3 -0
- 4and7bitlabelingforprojectiveandnonprojectivedependencytrees/f0dc7714-3e2d-4539-b5f1-b4652fcb70f7_model.json +3 -0
- 4and7bitlabelingforprojectiveandnonprojectivedependencytrees/f0dc7714-3e2d-4539-b5f1-b4652fcb70f7_origin.pdf +3 -0
- 4and7bitlabelingforprojectiveandnonprojectivedependencytrees/full.md +308 -0
- 4and7bitlabelingforprojectiveandnonprojectivedependencytrees/images.zip +3 -0
- 4and7bitlabelingforprojectiveandnonprojectivedependencytrees/layout.json +3 -0
- abenchmarkforreasoningwithspatialprepositions/de6e802d-d566-4dfe-a8a2-54cb2d33626d_content_list.json +3 -0
- abenchmarkforreasoningwithspatialprepositions/de6e802d-d566-4dfe-a8a2-54cb2d33626d_model.json +3 -0
- abenchmarkforreasoningwithspatialprepositions/de6e802d-d566-4dfe-a8a2-54cb2d33626d_origin.pdf +3 -0
- abenchmarkforreasoningwithspatialprepositions/full.md +217 -0
- abenchmarkforreasoningwithspatialprepositions/images.zip +3 -0
- abenchmarkforreasoningwithspatialprepositions/layout.json +3 -0
- achallengingmultimodalvideosummarysimultaneouslyextractingandgeneratingkeyframecaptionpairsfromvideo/8b71e37d-a8d0-4e33-866c-2549da4c9967_content_list.json +3 -0
- achallengingmultimodalvideosummarysimultaneouslyextractingandgeneratingkeyframecaptionpairsfromvideo/8b71e37d-a8d0-4e33-866c-2549da4c9967_model.json +3 -0
- achallengingmultimodalvideosummarysimultaneouslyextractingandgeneratingkeyframecaptionpairsfromvideo/8b71e37d-a8d0-4e33-866c-2549da4c9967_origin.pdf +3 -0
- achallengingmultimodalvideosummarysimultaneouslyextractingandgeneratingkeyframecaptionpairsfromvideo/full.md +0 -0
- achallengingmultimodalvideosummarysimultaneouslyextractingandgeneratingkeyframecaptionpairsfromvideo/images.zip +3 -0
- achallengingmultimodalvideosummarysimultaneouslyextractingandgeneratingkeyframecaptionpairsfromvideo/layout.json +3 -0
- acheaperandbetterdiffusionlanguagemodelwithsoftmaskednoise/c338ad66-499d-436a-a664-bf1ab1e0729f_content_list.json +3 -0
- acheaperandbetterdiffusionlanguagemodelwithsoftmaskednoise/c338ad66-499d-436a-a664-bf1ab1e0729f_model.json +3 -0
- acheaperandbetterdiffusionlanguagemodelwithsoftmaskednoise/c338ad66-499d-436a-a664-bf1ab1e0729f_origin.pdf +3 -0
- acheaperandbetterdiffusionlanguagemodelwithsoftmaskednoise/full.md +323 -0
- acheaperandbetterdiffusionlanguagemodelwithsoftmaskednoise/images.zip +3 -0
- acheaperandbetterdiffusionlanguagemodelwithsoftmaskednoise/layout.json +3 -0
- acomprehensiveevaluationofbiomedicalentitylinkingmodels/5940b58c-25de-42f4-a138-332d2d2b1178_content_list.json +3 -0
- acomprehensiveevaluationofbiomedicalentitylinkingmodels/5940b58c-25de-42f4-a138-332d2d2b1178_model.json +3 -0
- acomprehensiveevaluationofbiomedicalentitylinkingmodels/5940b58c-25de-42f4-a138-332d2d2b1178_origin.pdf +3 -0
- acomprehensiveevaluationofbiomedicalentitylinkingmodels/full.md +468 -0
- acomprehensiveevaluationofbiomedicalentitylinkingmodels/images.zip +3 -0
- acomprehensiveevaluationofbiomedicalentitylinkingmodels/layout.json +3 -0
- adeeperautoregressiveapproachtononconvergentdiscourseparsing/2931b51a-77cc-409b-9983-2291fbcc79ba_content_list.json +3 -0
- adeeperautoregressiveapproachtononconvergentdiscourseparsing/2931b51a-77cc-409b-9983-2291fbcc79ba_model.json +3 -0
- adeeperautoregressiveapproachtononconvergentdiscourseparsing/2931b51a-77cc-409b-9983-2291fbcc79ba_origin.pdf +3 -0
- adeeperautoregressiveapproachtononconvergentdiscourseparsing/full.md +438 -0
- adeeperautoregressiveapproachtononconvergentdiscourseparsing/images.zip +3 -0
- adeeperautoregressiveapproachtononconvergentdiscourseparsing/layout.json +3 -0
- adiachronicanalysisofparadigmshiftsinnlpresearchwhenhowandwhy/b6e1a2bb-ab7c-450f-97b5-787f451f20d1_content_list.json +3 -0
- adiachronicanalysisofparadigmshiftsinnlpresearchwhenhowandwhy/b6e1a2bb-ab7c-450f-97b5-787f451f20d1_model.json +3 -0
- adiachronicanalysisofparadigmshiftsinnlpresearchwhenhowandwhy/b6e1a2bb-ab7c-450f-97b5-787f451f20d1_origin.pdf +3 -0
- adiachronicanalysisofparadigmshiftsinnlpresearchwhenhowandwhy/full.md +378 -0
- adiachronicanalysisofparadigmshiftsinnlpresearchwhenhowandwhy/images.zip +3 -0
- adiachronicanalysisofparadigmshiftsinnlpresearchwhenhowandwhy/layout.json +3 -0
- adiachronicperspectiveonusertrustinaiunderuncertainty/9e11161d-a922-4d24-b432-d5ebe718bf88_content_list.json +3 -0
- adiachronicperspectiveonusertrustinaiunderuncertainty/9e11161d-a922-4d24-b432-d5ebe718bf88_model.json +3 -0
3drpnet3drelativepositionawarenetworkfor3dvisualgrounding/8ce34976-f6dd-4c0e-b16c-725a08557f5d_content_list.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:89d35a29a90c8f12ffc93a9dc2c62536780c18056841ad3263d2cd3144b6af8f
+size 93449
3drpnet3drelativepositionawarenetworkfor3dvisualgrounding/8ce34976-f6dd-4c0e-b16c-725a08557f5d_model.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cc6d10fdf5c9a0348e7f7686df83c0c77ba48f2efdf12e7424e1887560c08f33
+size 112845
3drpnet3drelativepositionawarenetworkfor3dvisualgrounding/8ce34976-f6dd-4c0e-b16c-725a08557f5d_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:513bc571572ae8d152b7fde853df085c77b8212bd167a3ce5c335a34c64a181c
+size 7818163
3drpnet3drelativepositionawarenetworkfor3dvisualgrounding/full.md
ADDED
@@ -0,0 +1,399 @@
# 3DRP-Net: 3D Relative Position-aware Network for 3D Visual Grounding

Zehan Wang$^{1*}$ Haifeng Huang$^{1*}$ Yang Zhao$^{2}$ Linjun Li$^{1}$ Xize Cheng$^{1}$ Yichen Zhu$^{1}$ Aoxiong Yin$^{1}$ Zhou Zhao$^{1\dagger}$

$^{1}$Zhejiang University $^{2}$ByteDance {wangzehan01, huanghaifeng}@zju.edu.cn

# Abstract

3D visual grounding aims to localize the target object in a 3D point cloud given a free-form language description. Typically, the sentences describing the target object tend to provide information about its relative relations with other objects and its position within the whole scene. In this work, we propose a relation-aware one-stage framework, named 3D Relative Position-aware Network (3DRP-Net), which can effectively capture the relative spatial relationships between objects and enhance object attributes. Specifically, 1) we propose a 3D Relative Position Multi-head Attention (3DRP-MA) module to analyze relative relations from different directions in the context of object pairs, which helps the model focus on the specific object relations mentioned in the sentence. 2) We design a soft-labeling strategy to alleviate the spatial ambiguity caused by redundant points, which further stabilizes and enhances the learning process through a constant and discriminative distribution. Extensive experiments conducted on three benchmarks (i.e., ScanRefer and $\mathrm{Nr3D / Sr3D}$) demonstrate that our method outperforms all the state-of-the-art methods in general.
# 1 Introduction

Visual grounding aims to localize the desired objects based on a given natural language description. With the rapid development and wide application of 3D vision (Xia et al., 2018; Savva et al., 2019; Zhu et al., 2020; Wang et al., 2019) in recent years, the 3D visual grounding task has received more and more attention. Compared to the well-studied 2D visual grounding (Yang et al., 2019; Kamath et al., 2021; Yang et al., 2022; Li and Sigal, 2021; Deng et al., 2021; Plummer et al., 2015; Kazemzadeh et al., 2014), the input sparse point clouds in the 3D visual grounding task are more irregular and more complex in terms of spatial positional relationships, which makes it much more challenging to locate the target object.

Figure 1: 3D visual grounding is the task of grounding a description in a 3D scene. In the sentences, all the words indicating the relative positions of the target object are bolded. Notice that relative position relations between objects are crucial for distinguishing the target object, and the relative position-related descriptions in 3D space are complex (e.g., "above", "on the left", "in front of", and "next to").

In the field of 3D visual grounding, previous methods can be mainly categorized into two groups: two-stage approaches (Chen et al., 2020; Achlioptas et al., 2020; Zhao et al., 2021b; Yuan et al., 2021; Huang et al., 2022; Cai et al., 2022; Huang et al., 2021; Wang et al., 2023) and one-stage approaches (Luo et al., 2022). The former follow the detection-and-rank paradigm and, thanks to the flexibility of this architecture, mainly explore the benefits of different object relation modeling methods for discriminating the target object. The latter fuse visual and text features to predict the bounding boxes of the target objects directly, and enhance the object attribute representation by removing the unreliable proposal generation phase.

However, both kinds of methods still have limitations. For two-stage methods, model performance is highly dependent on the quality of the object proposals: due to the sparsity and irregularity of the input 3D point cloud, sparse proposals may leave out the target object, while dense proposals bring redundant computational costs and make the matching stage too complicated to distinguish the target object. As for one-stage methods, although the existing approach (Luo et al., 2022) achieves better performance, it cannot capture the relative spatial relationships between objects, so it often fails on samples that rely on relative relation reasoning. As shown in Fig.1, the majority of sentences in 3D visual grounding contain relative spatial relation descriptions. Furthermore, due to the spatial complexity of 3D scenes, there are various relative position-related descriptions from different orientations. To further illustrate that relative position is a general and fundamental issue in 3D visual grounding, we analyze the frequency of relative position words in ScanRefer and $\mathrm{Nr3D / Sr3D}$; the results show that at least $90\%$ of the sentences describe the relative position of objects, and most of them contain multiple spatial relations. Detailed statistics can be found in the supplementary materials.
To alleviate the above problems, we propose a one-stage 3D visual grounding framework, named 3D Relative Position-aware Network (3DRP-Net). Our 3DRP-Net combines and enhances the advantages of the two-stage approaches for relation modeling and of the one-stage approaches for proposal-free detection, while avoiding the shortcomings of both. For relation modeling, we devise a novel 3D Relative Position Multi-head Attention (3DRP-MA) module, which can capture object relations along multiple directions and fully considers the interaction between the relative position and the object pair, which is ignored in previous two-stage methods (Yuan et al., 2021; Zhao et al., 2021b; Huang et al., 2021).

Specifically, we first extract features from the point cloud and the description, and select key points. Then, the language and visual features interact while considering the relative relations between objects. For the relation modeling, we introduce learnable relative position encodings in different heads of the multi-head attention to capture object pair relations from different orientations. Moreover, in sentences, the relative relations between objects are usually described as "Object 1-Relation-Object 2", such as "tv is on the tv cabinet" and "curtain is hanging on the window" in Fig.1. The relation is meaningful only in the context of object pairs, so our relative position encoding interacts with the object pairs' features to better capture and focus on the mentioned relations.

Besides, as discussed in (Qi et al., 2019), point clouds only capture the surfaces of objects, and the 3D object centers are likely to be far from any point. To accurately reflect the location of objects and learn comprehensive object relation knowledge, we sample multiple key points for each object. However, redundant key points may lead to ambiguity. To achieve disambiguation while promoting a more stable and discriminative learning process, we propose a soft-labeling strategy that uses a constant and discriminative distribution as the target label instead of relying on unstable and polarized hard labels or IoU scores.

Our main contributions can be summarized as follows:
- We propose a novel single-stage 3D visual grounding model, called 3D Relative Position-aware Network (3DRP-Net), which for the first time captures relative position relationships in the context of object pairs for better spatial relation reasoning.
- We design a 3D Relative Position Multi-head Attention (3DRP-MA) module for simultaneously modeling spatial relations from different orientations of 3D space. Besides, we devise a soft-labeling strategy that alleviates ambiguity while further enhancing the model's ability to discriminate the optimal key point and stabilizing the learning process.
- Extensive experiments demonstrate the effectiveness of our method. Our 3DRP-Net achieves state-of-the-art performance on three mainstream benchmark datasets: ScanRefer, Nr3D, and Sr3D.
# 2 Related Work

# 2.1 3D Visual Grounding

Recent works in 3D visual grounding fall into two categories: two-stage and one-stage methods. We briefly review them in the following.

Two-stage Methods. Two-stage approaches follow the detection-and-rank scheme. In the first stage, 3D object proposals are generated by a pretrained 3D object detector (Chen et al., 2020) or taken from the ground truth (Achlioptas et al., 2020). In the second stage, the best matching proposals are selected by leveraging the language description. Advanced two-stage methods achieve good performance by better modeling the relationships among objects. Referit3D (Achlioptas et al., 2020) and TGNN (Huang et al., 2021) make use of graph neural networks (Scarselli et al., 2008) to model the relationships between objects. 3DVG-Transformer (Zhao et al., 2021b) utilizes attention mechanisms (Vaswani et al., 2017) to enable interactions between proposals, and its similarity matrix can be adjusted based on the relative Euclidean distances between each pair of proposals.

Figure 2: 3DRP-Net is a transformer-based one-stage 3D VG model which takes a 3D point cloud and a description as inputs and outputs the bounding box of the object most relevant to the input expression. In the stacked transformer layer, the 3DRP-MA captures the relative relations between points in the 3D perspective. Specifically, the two self-attentions based on 3DRP-MA capture the relative relations between objects, while the cross-attention between key points and seed points enhances the global position information.

One-stage Methods. One-stage approaches avoid the unstable and time-consuming object proposal generation stage of the detection-and-rank paradigm. The visual features extracted by the backbone are directly and densely fused with the language features, and the fused features are leveraged to predict the bounding boxes and referring scores. 3D-SPS (Luo et al., 2022) first addresses the 3D visual grounding problem with a one-stage strategy. It first filters out the key points of language-relevant objects and performs inter-modal interaction to progressively down-sample the key points.

Our work builds on the advanced one-stage framework and introduces a novel relative relation module to effectively capture the intricate relations between objects, enabling our model to achieve superior performance.
# 2.2 Position Encoding in Attention

The attention mechanism is the primary component of the transformer (Vaswani et al., 2017). Since the attention mechanism is order-independent, information about the position should be injected for each token. In general, there are two mainstream encoding methods: absolute and relative position encoding.

Absolute Position Encoding. The original transformer (Vaswani et al., 2017) considers absolute positions, and the encodings are generated from sinusoids of varying frequency. Recent 3D object detection studies also use absolute position encodings. In Group-free (Liu et al., 2021b), the encodings are learned from the center and size of the predicted bounding box, while a Fourier function is used in 3DETR (Misra et al., 2021).

Relative Position Encoding. Recently, some advanced works in natural language processing (He et al., 2020; Raffel et al., 2020; Shaw et al., 2018) and image understanding (Liu et al., 2021a; Hu et al., 2019, 2018) generate position encodings based on the relative distance between tokens. Relative relation representations are important for tasks where the relative ordering or distance matters.

Our method extends relative position encoding to 3D Euclidean space and enhances relative relation reasoning ability in 3D visual grounding.
# 3 Method

This section introduces the proposed 3D Relative Position-aware Network (3DRP-Net) for 3D visual grounding. In Sec.3.1, we present an overview of our method. In Sec.3.2, we dive into the technical details of the 3D Relative Position Multi-head Attention (3DRP-MA) module and how to comprehensively and efficiently exploit spatial position relations in the context of object pairs. In Sec.3.3 and Sec.3.4, we introduce our soft-labeling strategy and the training objective of our method.

# 3.1 Overview

The 3D visual grounding task aims to find the object most relevant to a given textual query, so the task has two inputs. One is the 3D point cloud, represented by the 3D coordinates and auxiliary features (RGB values and normal vectors in our setting) of $N$ points. The other is a free-form natural language description with $L$ words.

The overall architecture of our 3DRP-Net is illustrated in Fig.2. Firstly, we adopt the pretrained PointNet++ (Qi et al., 2017) to sample $S$ seed points and $K$ key points from the input 3D point cloud and extract $C$-dimensional enriched point features. For the language input, we encode the $L$-word sentence into $D$-dimensional word features using a pre-trained language encoder (Radford et al., 2021). Secondly, a stack of transformer layers is applied for multimodal fusion. The key point features interact with the language and seed point features to gather the scene and language information needed for detection and localization. Our new 3D relative position multi-head attention in each layer enables the model to understand vital relative relations among objects in the context of each object pair. Eventually, we use two standard multi-layer perceptrons to regress the bounding box and predict the referring confidence score from the feature of each key point. As shown in Fig.2, in the training phase, we generate the target labels of the referring scores based on the IoUs of the predicted boxes. During inference, we select only the key point with the highest referring score to regress the target bounding box.
# 3.2 3D Relative Position Multi-head Attention

When describing an object in 3D space, relations between objects are essential for distinguishing objects of the same class. Given the spatial complexity of 3D space and the potentially misleading similar relative positions between different object pairs, a precise and thorough comprehension of relative position relationships is crucial for 3D visual grounding. However, existing 3D visual grounding methods fail to effectively address complex spatial reasoning challenges, which compromises their performance. To address this limitation, we propose a novel 3D relative position multi-head attention that models object relations in the context of the corresponding object pairs within an advanced one-stage framework.

# 3.2.1 Relative Position Attention

Before detailing our relative position attention, we briefly review the original attention mechanism (Vaswani et al., 2017). Given an input sequence $x = \{x_{1},\ldots ,x_{n}\}$ of $n$ elements with $x_{i}\in \mathbb{R}^{d_{x}}$ and an output sequence $z = \{z_{1},\dots,z_{n}\}$ of the same length with $z_{i}\in \mathbb{R}^{d_{z}}$, single-head attention can be formulated as:
$$
q_{i} = x_{i} W^{Q}, \quad k_{j} = x_{j} W^{K}, \quad v_{i} = x_{i} W^{V} \tag{1}
$$

$$
a_{i,j} = \frac{q_{i} k_{j}^{T}}{\sqrt{d}}, \quad z_{i} = \sum_{j=1}^{n} \frac{\exp\left(a_{i,j}\right)}{\sum_{k=1}^{n} \exp\left(a_{i,k}\right)} v_{j} \tag{2}
$$

where $W^{Q}, W^{K}, W^{V} \in \mathbb{R}^{d_{x} \times d_{z}}$ are the projection matrices and $a_{i,j}$ is the attention weight from element $i$ to element $j$.

Based on the original attention mechanism, we propose a novel relative position attention that incorporates relative position encodings between elements. Since the semantic meaning of a relative relation "Object 1-Relation-Object 2" is highly dependent on the object pair involved, it is essential for the position encoding to fully interact with the object features in order to accurately capture the specific relative relations mentioned in the description. To this end, the attention weight $a_{i,j}$ in our proposed relative position attention is calculated as follows:
$$
a_{i,j} = \frac{q_{i} k_{j}^{T} + q_{i} \left(r^{k}_{p(d_{ij})}\right)^{T} + r^{q}_{p(d_{ji})} k_{j}^{T}}{\sqrt{3d}} \tag{3}
$$

where $d_{ij}$ represents the relative distance from element $i$ to element $j$, while $d_{ji}$ is the opposite. $p(d)\in [-k,k]$ is an index function that maps a continuous distance to a discrete value, as detailed in Eq.4. $r^{k}_{p(\cdot)}, r^{q}_{p(\cdot)}\in \mathbb{R}^{(2k + 1)\times d_{z}}$ are the learnable relative position encodings. Considering a typical object relation expression "Object 1-Relation-Object 2", our attention weight can be understood as the sum of three attention scores over the object pair and the relation: Object 1-to-Object 2, Object 1-to-Relation, and Relation-to-Object 2.
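As a concrete illustration, the single-head attention weight of Eq.3 can be sketched in NumPy. The function name and the convention of shifting the discretized indices into $[0, 2K]$ so they can index rows of the encoding tables are our own assumptions, not details from the paper.

```python
import numpy as np

def rel_pos_attention_weights(q, k, r_k, r_q, idx_ij, idx_ji):
    """Unnormalized attention weights of Eq. 3 for one head (a sketch).

    q, k: (n, d) query/key features.
    r_k, r_q: (2K+1, d) learnable relative position encoding tables.
    idx_ij, idx_ji: (n, n) discretized relative distance indices, assumed
        shifted into [0, 2K] so they index rows of r_k / r_q.
    """
    d = q.shape[-1]
    content = q @ k.T                                    # Object 1-to-Object 2
    q_to_rel = np.einsum("id,ijd->ij", q, r_k[idx_ij])   # Object 1-to-Relation
    rel_to_k = np.einsum("ijd,jd->ij", r_q[idx_ji], k)   # Relation-to-Object 2
    return (content + q_to_rel + rel_to_k) / np.sqrt(3 * d)
```

A softmax over the last axis would then produce the attention distribution, exactly as in the standard mechanism of Eq.2.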
# 3.2.2 Piecewise Index Function

The points in a 3D point cloud are unevenly distributed in Euclidean space, and the relative distances are continuous. To enhance the relative spatial information and reduce computation costs, we propose to map the continuous 3D relative distances to discrete integers in a finite set. Inspired by (Wu et al., 2021), we use the following piecewise index function:

$$
p(d) = \begin{cases} [d], & |d| \leq \alpha \\ \mathrm{sign}(d) \times \min\left(k, \left[\alpha + \frac{\ln(|d| / \alpha)}{\ln(\beta / \alpha)}(k - \alpha)\right]\right), & |d| > \alpha \end{cases} \tag{4}
$$

where $[\cdot]$ is the rounding operation and $\mathrm{sign}(\cdot)$ returns 1 for positive input, -1 for negative input, and 0 otherwise.

Eq.4 performs a fine-grained mapping within the range $\alpha$; beyond $\alpha$ the mapping becomes progressively coarser, and distances beyond $\beta$ are mapped to the same value. In the 3D understanding field, many studies (Zhao et al., 2021a; Misra et al., 2021) have demonstrated that neighboring points are much more important than farther ones. Therefore, mapping from continuous space to discrete values by Eq.4 does not lose much semantic information while significantly reducing computational costs.
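The index function of Eq.4 is simple to implement directly; a minimal NumPy sketch follows, where the $\alpha$, $\beta$, and $k$ defaults are illustrative, not the paper's settings.

```python
import numpy as np

def piecewise_index(d, alpha=2.0, beta=8.0, k=16):
    """Map continuous relative distances to discrete indices in [-k, k] (Eq. 4).

    alpha, beta, and k here are illustrative defaults, not the paper's values.
    """
    d = np.asarray(d, dtype=np.float64)
    out = np.round(d)                      # fine mapping for |d| <= alpha
    far = np.abs(d) > alpha
    # logarithmic (progressively coarser) mapping for |d| > alpha, capped at k
    log_idx = alpha + np.log(np.abs(d[far]) / alpha) / np.log(beta / alpha) * (k - alpha)
    out[far] = np.sign(d[far]) * np.minimum(k, np.round(log_idx))
    return out.astype(np.int64)
```

Note that all distances beyond $\beta$ saturate at $\pm k$, matching the observation that far-away points carry little extra relational information.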
# 3.2.3 Multi-head Attention for 3D Position

So far, our relative position attention module can handle the interaction between object features and relative position information in continuous space. However, points in 3D space have much more complicated spatial relations than pixels in 2D images or words in 1D sentences. As shown in Table 4, relying on a single relative distance metric leads to insufficient and partial capture of inter-object relations, which makes it difficult to distinguish the target object when multiple spatial relations are described in the language expression. Therefore, we capture object relations from multiple directions. Specifically, we encode the relative distances along the x, y, and z coordinates and under the Euclidean metric, denoted as $D_x$, $D_y$, $D_z$, and $D_e$, respectively. These four relative position metrics cover most object relations found in language descriptions (e.g., $D_x$ for "left, right", $D_y$ for "front, behind", $D_z$ for "top, bottom", $D_e$ for "near, far"). Based on the architecture of multi-head attention, each relative position encoding is injected into the relative position attention module of one head. Such a 3DRP-MA allows the model to jointly attend to information from different relative relations in 3D space.

Figure 3: Comparison of various labeling strategies.
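The four metrics $D_x$, $D_y$, $D_z$, $D_e$ are simple pairwise quantities over the point coordinates; a minimal sketch (the function name is ours) is:

```python
import numpy as np

def relative_distance_matrices(xyz):
    """Per-axis and Euclidean relative distances between n points of shape (n, 3).

    Returns D_x, D_y, D_z (signed) and D_e (unsigned), each of shape (n, n).
    """
    diff = xyz[:, None, :] - xyz[None, :, :]   # (n, n, 3) pairwise offsets
    d_x, d_y, d_z = diff[..., 0], diff[..., 1], diff[..., 2]
    d_e = np.linalg.norm(diff, axis=-1)        # Euclidean metric
    return d_x, d_y, d_z, d_e
```

Each matrix would then be discretized with the piecewise index function of Eq.4 and fed to a different attention head.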
# 3.3 Soft-labeling Strategy

Since the object centers are often not contained in the given point clouds, we select multiple key points for each object to better reflect its location. Therefore, as shown in Fig.3, there will be many accurately predicted boxes achieving a high Intersection over Union (IoU) with the target object. Previous methods (Chen et al., 2020; Zhao et al., 2021b; Luo et al., 2022) use one-hot or multi-hot labels to supervise the referring score: the key points whose predicted boxes have the $N_{s}$ highest IoUs are labeled 1, and the others 0, which encourages the model to select the highest-IoU proposals. However, this simple hard-labeling strategy causes two problems. Firstly, proposals with similarly high IoUs may be labeled differently as 1 and 0, which destabilizes training. Secondly, it becomes difficult to distinguish between optimal and sub-optimal proposals, hurting the model's ability to identify the most accurate proposal.

To tackle these issues, we introduce a soft-labeling strategy that smooths the label distribution and encourages the model to effectively distinguish the optimal proposal. Specifically, the soft-labeling function is calculated as follows:

$$
\hat{s}_{i} = \exp \left(- \frac{i^{2}}{2 \sigma^{2}} + 1\right) \tag{5}
$$

where $i \in \{0, \dots, N_s\}$ is the rank of the $i$-th highest IoU. We set $\sigma$ to $[N_s / 3]$ to control the smoothness of the distribution. The target label of a key point whose predicted box has the $i$-th highest IoU, provided that IoU is greater than 0.25, is set to $\hat{s}_i$; all other labels are set to 0.
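The label assignment above can be sketched as follows; the function name, the $N_s$ default, and the exact tie-free ranking by `argsort` are our own assumptions for illustration.

```python
import numpy as np

def soft_labels(ious, n_s=8, iou_thresh=0.25):
    """Soft target labels per Eq. 5: the i-th best proposal (by IoU) receives
    exp(-i^2 / (2 sigma^2) + 1); proposals below the IoU threshold receive 0.

    n_s is an illustrative default, not the paper's setting.
    """
    ious = np.asarray(ious, dtype=np.float64)
    sigma = max(round(n_s / 3), 1)             # sigma = [N_s / 3]
    labels = np.zeros_like(ious)
    order = np.argsort(-ious)[: n_s + 1]       # ranks 0..N_s by descending IoU
    for rank, idx in enumerate(order):
        if ious[idx] > iou_thresh:
            labels[idx] = np.exp(-rank**2 / (2 * sigma**2) + 1)
    return labels
```

The resulting distribution is constant across samples (it depends only on rank, not on the raw IoU values), which is exactly the stability property argued for below.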
Although this strategy is simple, it accomplishes several goals at once, and the insight it provides is non-trivial.

For discriminative ability, the soft labels enhance the difference between the optimal and sub-optimal proposals, which forces the model to accurately identify the best key point for regressing the detection box. In contrast, when hard labels or IoU scores are used as the target labels, there is little difference between optimal and sub-optimal proposals from the perspective of the learning objective. For stability, compared to hard labels, our soft labels cover a broader range of accurate proposals with a smoother label distribution, and excluding proposals with low IoU further stabilizes learning. Additionally, compared to directly using IoU scores, the constant distribution of the soft labels provides a more stable loss across samples. For example, consider two samples with vastly different target objects, such as a large bed and a small chair: the bed sample would have significantly more key points selected, resulting in more proposals of the target object. Using IoU scores as labels would ultimately lead to a much larger loss for the bed sample than for the chair sample, which is clearly unreasonable.
# 3.4 Training and Inference
We apply a multi-task loss function to train our 3DRP-Net in an end-to-end manner.
Referring Loss. The referring loss $L_{ref}$ is computed between the target labels $\hat{S}$ discussed in Sec. 3.3 and the predicted referring scores $S$ of the $K$ keypoints, using focal loss (Lin et al., 2017).
Keypoints Sampling Loss. Following the loss used in (Luo et al., 2022), we apply the keypoint sampling loss $L_{ks}$ to ensure that the selected keypoints are relevant to objects whose categories are mentioned in the description.
Detection Loss. To supervise the predicted bounding boxes, we use the detection loss $L_{det}$ as an auxiliary loss. Following (Luo et al., 2022), $L_{det}$ consists of a semantic classification loss, an objectness binary classification loss, a center-offset regression loss, and a bounding box regression loss.
Language Classification Loss. Similar to (Chen et al., 2020), we introduce the language classification loss $L_{text}$ to enhance the language encoder.
Finally, the overall loss function for training can be summarized as

$$
L = \alpha_1 L_{ref} + \alpha_2 L_{ks} + \alpha_3 L_{det} + \alpha_4 L_{text} \tag{6}
$$

where the balancing factors $\alpha_1, \alpha_2, \alpha_3, \alpha_4$ are set by default to 0.05, 0.8, 5, and 0.1, respectively, and $L_{ref}$ and $L_{det}$ are applied at all decoder stages following the setting in (Qi et al., 2019).
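The weighted combination of Eq. 6 can be written as a one-liner; the function and argument names below are ours, with the paper's default balancing factors:

```python
def total_loss(l_ref, l_ks, l_det, l_text, alphas=(0.05, 0.8, 5.0, 0.1)):
    # Weighted multi-task objective of Eq. 6; `alphas` holds the default
    # balancing factors (alpha_1..alpha_4) reported in the paper.
    a1, a2, a3, a4 = alphas
    return a1 * l_ref + a2 * l_ks + a3 * l_det + a4 * l_text
```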
# 4 Experiment
# 4.1 Datasets and Metrics
ScanRefer. The ScanRefer dataset (Chen et al., 2020) annotates 800 scenes with 51,583 language descriptions based on ScanNet dataset (Dai et al., 2017). Following the ScanRefer benchmark, we split the train/val/test set with 36,655, 9,508, and 5,410 samples, respectively.
Nr3D/Sr3D. Nr3D and Sr3D are two sub-datasets of ReferIt3D (Achlioptas et al., 2020). They are also annotated on the indoor 3D scene dataset ScanNet (Dai et al., 2017). Nr3D contains 41,503 human utterances collected via ReferItGame, and Sr3D contains 83,572 synthetic descriptions generated from a "target-spatial relationship-anchor object" template.
Evaluation Metric. For ScanRefer (Chen et al., 2020), following previous work, we use Acc@mIoU as the evaluation metric, where $m \in \{0.25, 0.5\}$. This metric is the ratio of predicted bounding boxes whose Intersection over Union (IoU) with the ground-truth (GT) bounding box is larger than $m$. For Sr3D and Nr3D (Achlioptas et al., 2020), the ground-truth bounding boxes are available, and the model only needs to identify the described object among all the bounding boxes. The evaluation metric for these two datasets is therefore accuracy, i.e., the percentage of correctly selected target objects.
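For concreteness, the Acc@m metric can be sketched as below. This is a simplified illustration that assumes axis-aligned boxes given as (xmin, ymin, zmin, xmax, ymax, zmax); the helper names are ours.

```python
import numpy as np

def iou_3d(box_a, box_b):
    """IoU of two axis-aligned 3D boxes, each (xmin, ymin, zmin, xmax, ymax, zmax)."""
    lo = np.maximum(box_a[:3], box_b[:3])      # intersection lower corner
    hi = np.minimum(box_a[3:], box_b[3:])      # intersection upper corner
    inter = np.prod(np.clip(hi - lo, 0, None)) # zero if the boxes do not overlap
    vol_a = np.prod(box_a[3:] - box_a[:3])
    vol_b = np.prod(box_b[3:] - box_b[:3])
    return inter / (vol_a + vol_b - inter)

def acc_at_m(pred_boxes, gt_boxes, m):
    """Acc@m: fraction of predictions whose IoU with the GT box exceeds m."""
    ious = [iou_3d(p, g) for p, g in zip(pred_boxes, gt_boxes)]
    return sum(iou > m for iou in ious) / len(ious)
```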
# 4.2 Quantitative Comparison
We compare our 3DRP-Net with other state-of-the-art methods on these three 3D visual grounding benchmarks.
ScanRefer. Table 1 shows the performance on ScanRefer. 3DRP-Net outperforms the best two-stage method by $+4.20$ at Acc@0.25 and $+4.40$ at Acc@0.5, and exceeds the best one-stage method by $+2.45$ at Acc@0.25 and $+2.47$ at Acc@0.5. Even when compared to 3DJCG, which utilizes the extra Scan2Cap (Chen et al., 2021) dataset to assist its training, our 3DRP-Net still shows superiority in all metrics. Specifically, on the "Multiple" subset, 3DRP-Net achieves $+2.66$ and $+2.34$ gains over the advanced one-stage model in terms of Acc@0.25 and Acc@0.5, which validates that the proposed 3DRP-MA module is powerful for modeling complex relative position relations in 3D space and contributes significantly to distinguishing the described target object from multiple interfering objects.
Table 1: Comparisons with state-of-the-art methods on ScanRefer. We highlight the best performance in bold.
<table><tr><td rowspan="2" colspan="2">Methods</td><td rowspan="2">Extra</td><td colspan="2">Unique</td><td colspan="2">Multiple</td><td colspan="2">Overall</td></tr><tr><td>Acc@0.25</td><td>Acc@0.5</td><td>Acc@0.25</td><td>Acc@0.5</td><td>Acc@0.25</td><td>Acc@0.5</td></tr><tr><td rowspan="8">Two-stage:</td><td>ScanRefer</td><td>-</td><td>67.64</td><td>46.19</td><td>32.06</td><td>21.26</td><td>38.97</td><td>26.10</td></tr><tr><td>TGNN</td><td>-</td><td>68.61</td><td>56.80</td><td>29.84</td><td>23.18</td><td>37.37</td><td>29.70</td></tr><tr><td>InstanceRefer</td><td>-</td><td>77.45</td><td>66.83</td><td>31.27</td><td>24.77</td><td>40.23</td><td>32.93</td></tr><tr><td>SAT</td><td>2D assist</td><td>73.21</td><td>50.83</td><td>37.64</td><td>25.16</td><td>44.54</td><td>30.14</td></tr><tr><td>3DVG-Transformer</td><td>-</td><td>77.16</td><td>58.47</td><td>38.38</td><td>28.70</td><td>45.90</td><td>34.47</td></tr><tr><td>MVT</td><td>-</td><td>77.67</td><td>66.45</td><td>31.92</td><td>25.26</td><td>40.80</td><td>33.26</td></tr><tr><td>3DJCG</td><td>Scan2Cap</td><td>78.75</td><td>61.30</td><td>40.13</td><td>30.08</td><td>47.62</td><td>36.14</td></tr><tr><td>ViL3DRel</td><td>-</td><td>81.58</td><td>68.62</td><td>40.30</td><td>30.71</td><td>47.94</td><td>37.73</td></tr><tr><td rowspan="2">One-stage:</td><td>3D-SPS</td><td>-</td><td>81.63</td><td>64.77</td><td>39.48</td><td>29.61</td><td>47.65</td><td>36.43</td></tr><tr><td>3DRP-Net (Ours)</td><td>-</td><td>83.13</td><td>67.74</td><td>42.14</td><td>31.95</td><td>50.10</td><td>38.90</td></tr></table>
Table 2: Comparisons with state-of-the-art methods on $Nr3D$ and $Sr3D$ . We highlight the best performance in **bold**.
<table><tr><td rowspan="2">Method</td><td colspan="5">Nr3D</td><td colspan="5">Sr3D</td></tr><tr><td>Easy</td><td>Hard</td><td>View Dep</td><td>View Indep</td><td>Overall</td><td>Easy</td><td>Hard</td><td>View Dep</td><td>View Indep</td><td>Overall</td></tr><tr><td>ReferIt3DNet</td><td>43.6</td><td>27.9</td><td>32.5</td><td>37.1</td><td>35.6</td><td>44.7</td><td>31.5</td><td>39.2</td><td>40.8</td><td>40.8</td></tr><tr><td>InstanceRefer</td><td>46.0</td><td>31.8</td><td>34.5</td><td>41.9</td><td>38.8</td><td>51.1</td><td>40.5</td><td>45.4</td><td>48.1</td><td>48.0</td></tr><tr><td>3DVG-Transformer</td><td>48.5</td><td>34.8</td><td>34.8</td><td>43.7</td><td>40.8</td><td>54.2</td><td>44.9</td><td>44.6</td><td>51.7</td><td>51.4</td></tr><tr><td>LanguageRefer</td><td>51.0</td><td>36.6</td><td>41.7</td><td>45.0</td><td>43.9</td><td>58.9</td><td>49.3</td><td>49.2</td><td>56.3</td><td>56.0</td></tr><tr><td>SAT</td><td>56.3</td><td>42.4</td><td>46.9</td><td>50.4</td><td>49.2</td><td>61.2</td><td>50.0</td><td>49.2</td><td>58.3</td><td>57.9</td></tr><tr><td>3D-SPS</td><td>58.1</td><td>45.1</td><td>48.0</td><td>53.2</td><td>51.5</td><td>65.4</td><td>56.2</td><td>49.2</td><td>63.2</td><td>62.6</td></tr><tr><td>MVT</td><td>61.3</td><td>49.1</td><td>54.3</td><td>55.4</td><td>55.1</td><td>66.9</td><td>58.8</td><td>58.4</td><td>64.7</td><td>64.5</td></tr><tr><td>ViL3DRel</td><td>70.2</td><td>57.4</td><td>62.0</td><td>64.5</td><td>64.4</td><td>74.9</td><td>67.9</td><td>63.8</td><td>73.2</td><td>72.8</td></tr><tr><td>3DRP-Net(Ours)</td><td>71.4</td><td>59.7</td><td>64.2</td><td>65.2</td><td>65.9</td><td>75.6</td><td>69.5</td><td>65.5</td><td>74.9</td><td>74.1</td></tr></table>
Nr3D/Sr3D. Note that the task of Nr3D/Sr3D differs from ScanRefer: the goal is to identify the described target object among all the given ground-truth bounding boxes. Therefore, the soft-labeling strategy and the keypoint sampling module are removed, and we only verify the effectiveness of 3DRP-MA on these two datasets. Besides, the data augmentation methods of ViL3DRel (Chen et al., 2022) are also used in our training phase for a fair comparison. The accuracy of our method, together with other state-of-the-art methods, is reported in Table 2. 3DRP-Net achieves overall accuracies of $65.9\%$ and $74.1\%$ on Nr3D and Sr3D, respectively, outperforming all existing methods by a large margin. On the more challenging "Hard" subsets, 3DRP-Net improves accuracy by $+2.3\%$ on Nr3D and $+1.6\%$ on Sr3D, again demonstrating that our method is beneficial for distinguishing objects by capturing relative spatial relations.
# 4.3 Ablation Study
We conduct ablation studies to investigate the contribution of each component. All the ablation study results are reported on the ScanRefer validation set.
Relation Modeling Module. We compare our proposed 3DRP-MA with the relation modules of other 3D visual grounding methods. For a fair comparison, we also introduce distances along the x, y, z coordinates and in Euclidean space to the other relation modules. The results are provided in Table 3. Comparing rows 1, 2, and 6, our 3DRP-MA is far superior to the relation modules in 3DVG-Trans and 3DJCG, and the performance improvement mainly comes from the subsets that rely on relative relation reasoning for localization, namely the "One-Rel" and "Multi-Rel" subsets.
Relative Position Encoding. In Sec. 3.2.3, we discuss the complexity of relative relations in 3D space and propose four relative position encodings based on the relative distances along the x, y, and z coordinates ($D_{xyz}$) and the Euclidean distance ($D_e$). As Table 3 shows, both $D_{xyz}$ and $D_e$ bring significant improvements on the subsets that require relative relation reasoning. Row 6 demonstrates that considering relative relations from multiple directions further helps capture comprehensive and sufficient object relations and distinguish the target object from multiple distractors.
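The raw quantities behind these encodings can be sketched as follows; this is a minimal illustration of the pairwise distances, not the paper's encoding module, and the function name is ours.

```python
import numpy as np

def relative_distances(points):
    """Pairwise relative distances feeding the encodings: per-axis offsets
    (for D_xyz) and the Euclidean distance (for D_e).
    `points`: (N, 3) array of keypoint coordinates."""
    diff = points[:, None, :] - points[None, :, :]   # (N, N, 3): dx, dy, dz
    d_e = np.linalg.norm(diff, axis=-1)              # (N, N): Euclidean distance
    return diff, d_e
```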
Table 3: Ablation studies on relative position encoding and different relation modeling modules. None-Rel/One-Rel/Multi-Rel denote subsets of the original Multiple set of ScanRefer that contain zero/one/multiple relation descriptions; the relative percentage improvements of the full model (row 6) over each setting are marked in green.
<table><tr><td>Row</td><td>De</td><td>Dxyz</td><td>Rel Module</td><td>Overall</td><td>Multiple</td><td>None-Rel</td><td>One-Rel</td><td>Multi-Rel</td></tr><tr><td>1</td><td>✓</td><td>✓</td><td>3DVG-Transformer</td><td>36.85</td><td>30.16</td><td>34.89(+2.95%)</td><td>32.51(+5.51%)</td><td>28.03(+6.60%)</td></tr><tr><td>2</td><td>✓</td><td>✓</td><td>3DJCG</td><td>36.43</td><td>29.62</td><td>35.51(+1.15%)</td><td>31.87(+7.62%)</td><td>27.35(+9.25%)</td></tr><tr><td>3</td><td>×</td><td>×</td><td>3DRP-MA</td><td>32.74</td><td>26.39</td><td>34.18(+5.09%)</td><td>28.39(+20.82%)</td><td>23.94(+24.81%)</td></tr><tr><td>4</td><td>✓</td><td>×</td><td>3DRP-MA</td><td>36.43</td><td>30.26</td><td>35.47(+1.27%)</td><td>32.54(+5.41%)</td><td>28.10(+6.33%)</td></tr><tr><td>5</td><td>×</td><td>✓</td><td>3DRP-MA</td><td>37.13</td><td>30.56</td><td>35.30(+1.76%)</td><td>32.87(+4.35%)</td><td>28.46(+4.99%)</td></tr><tr><td>6</td><td>✓</td><td>✓</td><td>3DRP-MA</td><td>38.90</td><td>31.91</td><td>35.92</td><td>34.30</td><td>29.88</td></tr></table>
Table 4: Ablation studies on 3DRP-MA in each transformer layer and pair-aware relation attention.
<table><tr><td>Row</td><td>O1-R</td><td>R-O2</td><td>SA1</td><td>CA</td><td>SA2</td><td>Acc@0.25</td><td>Acc@0.5</td></tr><tr><td>1</td><td>×</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>48.83</td><td>38.46</td></tr><tr><td>2</td><td>✓</td><td>×</td><td>✓</td><td>✓</td><td>✓</td><td>48.30</td><td>37.56</td></tr><tr><td>3</td><td>✓</td><td>✓</td><td>✓</td><td>×</td><td>×</td><td>46.70</td><td>36.10</td></tr><tr><td>4</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>×</td><td>48.72</td><td>37.59</td></tr><tr><td>5</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>50.10</td><td>38.90</td></tr></table>
Pair-aware relation attention. The typical description of a spatial relation can be expressed as "Object 1-Relation-Object 2". Our pair-aware relation attention can be considered as the sum of two scores: Object 1-to-Relation (O1-R) and Relation-to-Object 2 (R-O2). To further verify the superiority of capturing the relation in the context of an object pair, we ablate the two scores, and the results are illustrated in Table 4. From rows 1, 2 and 5, both O1-R and R-O2 terms benefit the 3D visual grounding task by capturing the relative relations, and the joint use of O1-R and R-O2 provides a more comprehensive understanding of spatial relation description and leads to the best performance.
3DRP-MA in each layer. We study the effect of each 3DRP-MA module in the transformer layer. $SA_{1}$, $CA$, and $SA_{2}$ denote, respectively, whether to replace the self-attention before interacting with seed points, the cross-attention between keypoints and seed points, and the self-attention before interacting with language. Rows 3 to 5 in Table 4 add each 3DRP-MA in turn, and the performance gradually improves to $50.10\%$ and $38.90\%$.
Soft-labeling Strategy. Table 5 presents the performance of different labeling strategies. In hard-labeling, $N_{s}$ denotes the number of keypoints whose IoU is in the top $N_{s}$ and greater than 0.25; these are labeled as 1. In soft-labeling, $N_{s}$ is the hyperparameter in Eq. 5, which controls the num-
Table 5: Ablation studies on the labeling strategies.
<table><tr><td>Strategy</td><td>Ns</td><td>Acc@0.25</td><td>Acc@0.5</td></tr><tr><td rowspan="2">IoUs</td><td>Original</td><td>48.20</td><td>38.06</td></tr><tr><td>Linear</td><td>48.82</td><td>37.50</td></tr><tr><td rowspan="3">Hard</td><td>1</td><td>47.36</td><td>37.25</td></tr><tr><td>4</td><td>47.29</td><td>37.68</td></tr><tr><td>8</td><td>47.30</td><td>37.26</td></tr><tr><td rowspan="3">Soft</td><td>12</td><td>49.13</td><td>38.46</td></tr><tr><td>24</td><td>50.10</td><td>38.90</td></tr><tr><td>36</td><td>49.64</td><td>38.55</td></tr></table>
ber of soft labels. To further demonstrate that our proposed strategy improves stability and discrimination, we also use IoU scores as labels. The "Original" setting directly uses the IoU scores as labels, while the "Linear" setting linearly stretches the IoU scores to the range [0, 1] to enhance discrimination. Compared to the hard-labeling and IoU-based methods, our soft-labeling strategy improves both discrimination and stability. The "Original" IoU method lacks discriminative power and stability due to the unbalanced loss across samples, and even linear scaling to enhance discriminative power cannot eliminate this instability. Our method alleviates these problems with a discriminative constant distribution and shows comprehensive superiority in Table 5.
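The "Linear" baseline can be sketched as a simple min-max rescaling. The paper does not spell out the exact normalization, so this is an assumed form:

```python
def linear_stretch(ious):
    # "Linear" baseline: stretch the IoU scores of a sample to [0, 1]
    # so that the best and worst proposals are maximally separated.
    lo, hi = min(ious), max(ious)
    return [(x - lo) / (hi - lo) for x in ious]
```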
# 5 Conclusion
In this paper, we propose a relation-aware one-stage model for 3D visual grounding, referred to as 3D Relative Position-aware Network (3DRP-Net). 3DRP-Net contains novel 3DRP-MA modules to exploit complex 3D relative relations within point clouds. Besides, we devise a soft-labeling strategy to achieve disambiguation while promoting a stable and discriminative learning process. Comprehensive experiments reveal that our 3DRP-Net outperforms other methods.
# 6 Limitations
The datasets for the 3D visual grounding task all stem from the original ScanNet dataset, which calls generalization to other scene types into question. More diverse benchmarks are important for the further development of the field of 3D visual grounding.
# Acknowledgments
This work was supported in part by National Natural Science Foundation of China under Grant No.62222211, Grant No.61836002 and Grant No.62072397.
# References
Panos Achlioptas, Ahmed Abdelreheem, Fei Xia, Mohamed Elhoseiny, and Leonidas Guibas. 2020. Referit3d: Neural listeners for fine-grained 3d object identification in real-world scenes. In European Conference on Computer Vision, pages 422-440. Springer.

Daigang Cai, Lichen Zhao, Jing Zhang, Lu Sheng, and Dong Xu. 2022. 3djcg: A unified framework for joint dense captioning and visual grounding on 3d point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16464-16473.

Dave Zhenyu Chen, Angel X Chang, and Matthias Nießner. 2020. Scanrefer: 3d object localization in rgb-d scans using natural language. In European Conference on Computer Vision, pages 202-221. Springer.

Shizhe Chen, Pierre-Louis Guhur, Makarand Tapaswi, Cordelia Schmid, and Ivan Laptev. 2022. Language conditioned spatial relation reasoning for 3d object grounding. arXiv preprint arXiv:2211.09646.

Zhenyu Chen, Ali Gholami, Matthias Nießner, and Angel X Chang. 2021. Scan2cap: Context-aware dense captioning in rgb-d scans. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3193-3203.

Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. 2017. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5828-5839.

Jiajun Deng, Zhengyuan Yang, Tianlang Chen, Wengang Zhou, and Houqiang Li. 2021. Transvg: End-to-end visual grounding with transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1769-1779.

Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. arXiv preprint arXiv:2006.03654.

Han Hu, Jiayuan Gu, Zheng Zhang, Jifeng Dai, and Yichen Wei. 2018. Relation networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3588-3597.

Han Hu, Zheng Zhang, Zhenda Xie, and Stephen Lin. 2019. Local relation networks for image recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3464-3473.

Pin-Hao Huang, Han-Hung Lee, Hwann-Tzong Chen, and Tyng-Luh Liu. 2021. Text-guided graph neural networks for referring 3d instance segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 1610-1618.

Shijia Huang, Yilun Chen, Jiaya Jia, and Liwei Wang. 2022. Multi-view transformer for 3d visual grounding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15524-15533.

Aishwarya Kamath, Mannat Singh, Yann LeCun, Gabriel Synnaeve, Ishan Misra, and Nicolas Carion. 2021. Mdetr - modulated detection for end-to-end multi-modal understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1780-1790.

Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. 2014. Referitgame: Referring to objects in photographs of natural scenes. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 787-798.

Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Muchen Li and Leonid Sigal. 2021. Referring transformer: A one-step approach to multi-task visual grounding. Advances in Neural Information Processing Systems, 34:19652-19664.

Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. 2017. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, pages 2980-2988.

Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. 2021a. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10012-10022.

Ze Liu, Zheng Zhang, Yue Cao, Han Hu, and Xin Tong. 2021b. Group-free 3d object detection via transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2949-2958.

Junyu Luo, Jiahui Fu, Xianghao Kong, Chen Gao, Haibing Ren, Hao Shen, Huaxia Xia, and Si Liu. 2022. 3d-sps: Single-stage 3d visual grounding via referred point progressive selection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16454-16463.

Ishan Misra, Rohit Girdhar, and Armand Joulin. 2021. An end-to-end transformer model for 3d object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2906-2917.

Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In Proceedings of the IEEE International Conference on Computer Vision, pages 2641-2649.

Charles R Qi, Or Litany, Kaiming He, and Leonidas J Guibas. 2019. Deep hough voting for 3d object detection in point clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9277-9286.

Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. 2017. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Advances in Neural Information Processing Systems, 30.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763. PMLR.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140):1-67.

Junha Roh, Karthik Desingh, Ali Farhadi, and Dieter Fox. 2022. Languagerefer: Spatial-language model for 3d visual grounding. In Conference on Robot Learning, pages 1046-1056. PMLR.

Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, et al. 2019. Habitat: A platform for embodied ai research. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9339-9347.

Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. 2008. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80.

Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. arXiv preprint arXiv:1803.02155.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.

Xin Wang, Qiuyuan Huang, Asli Celikyilmaz, Jianfeng Gao, Dinghan Shen, Yuan-Fang Wang, William Yang Wang, and Lei Zhang. 2019. Reinforced cross-modal matching and self-supervised imitation learning for vision-language navigation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6629-6638.

Zehan Wang, Haifeng Huang, Yang Zhao, Linjun Li, Xize Cheng, Yichen Zhu, Aoxiong Yin, and Zhou Zhao. 2023. Distilling coarse-to-fine semantic matching knowledge for weakly supervised 3d visual grounding. arXiv preprint arXiv:2307.09267.

Kan Wu, Houwen Peng, Minghao Chen, Jianlong Fu, and Hongyang Chao. 2021. Rethinking and improving relative position encoding for vision transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10033-10041.

Fei Xia, Amir R Zamir, Zhiyang He, Alexander Sax, Jitendra Malik, and Silvio Savarese. 2018. Gibson env: Real-world perception for embodied agents. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9068-9079.

Li Yang, Yan Xu, Chunfeng Yuan, Wei Liu, Bing Li, and Weiming Hu. 2022. Improving visual grounding with visual-linguistic verification and iterative reasoning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9499-9508.

Zhengyuan Yang, Boqing Gong, Liwei Wang, Wenbing Huang, Dong Yu, and Jiebo Luo. 2019. A fast and accurate one-stage approach to visual grounding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4683-4693.

Zhengyuan Yang, Songyang Zhang, Liwei Wang, and Jiebo Luo. 2021. Sat: 2d semantics assisted training for 3d visual grounding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1856-1866.

Zhihao Yuan, Xu Yan, Yinghong Liao, Ruimao Zhang, Sheng Wang, Zhen Li, and Shuguang Cui. 2021. Instancerefer: Cooperative holistic understanding for visual grounding on point clouds through instance multi-level contextual referring. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1791-1800.

Hengshuang Zhao, Li Jiang, Jiaya Jia, Philip HS Torr, and Vladlen Koltun. 2021a. Point transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 16259-16268.

Lichen Zhao, Daigang Cai, Lu Sheng, and Dong Xu. 2021b. 3dvg-transformer: Relation modeling for visual grounding on point clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2928-2937.

Fengda Zhu, Yi Zhu, Xiaojun Chang, and Xiaodan Liang. 2020. Vision-language navigation with self-supervised auxiliary reasoning tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10012-10022.
# A Qualitative Analysis
In this section, we provide some visualization results in ScanRefer (Chen et al., 2020) for qualitative analysis.
# A.1 Analysis on Success Cases
To better understand our 3DRP-Net, we visualize some success cases and comparisons with the other one-stage method (Luo et al., 2022) in Figure 4. From (a, b, c), both 3D-SPS (Luo et al., 2022) and our 3DRP-Net accurately locate the target object when the description does not involve many relative position relations and there are few interfering objects in the scene. However, as shown in (d, e, f), when the relative position relation between objects is necessary for distinguishing the target object from multiple objects of the same category, the previous one-stage method 3D-SPS is often confused by distractors. By modeling relative positions in 3D space, our 3DRP-Net is able to fully leverage the relative position descriptions in the sentence for reasoning, which brings more precise localization.
# A.2 Analysis on Failure Cases
To conduct a comprehensive qualitative evaluation, we further elaborate on the failure cases and discuss them in detail. The reasons for 3DRP-Net's prediction errors can be roughly summarized into three categories:
- Ambiguous annotations. Due to the complexity and irregularity of 3D scenes, ambiguous descriptions are difficult to avoid completely in 3D visual grounding datasets. There may be multiple objects in a scene that match the description, but only one of them is considered correct by the annotation. As shown in cases (1, 2, 3) of Figure 5, both the ground-truth objects and our predicted objects semantically match the natural language descriptions, but according to the ground-truth box annotations, our predictions are completely wrong.
- Challenging target object. In 3D point clouds, some objects are inherently difficult to identify because of obscured or missing surfaces. In case 4 of Figure 5, the described target object is a cabinet, but the point cloud inside the ground-truth box is severely incomplete, which makes it very difficult to identify the cabinet in the scene.
- Challenging auxiliary objects. The 3D visual grounding task often requires the relations between the target object and auxiliary objects to assist localization, and challenging auxiliary objects may result in an incorrect prediction. As shown in case 5 of Figure 5, the target table is on "the left of the bed", but the left and right sides of a bed are difficult to distinguish: doing so requires identifying the direction of the bed from the position of the pillows. This reasoning process is too complex for our model, and our prediction actually found the table on the right side of a bed. In case 6, the auxiliary object is the "chair of the cubicles", which is challenging for the model to recognize.

Figure 4: The visualization results of some success cases. The blue/green/red colors indicate the ground truth/correct/incorrect boxes. Each case compares Ground-Truth, 3D-SPS, and 3DRP-Net (Ours) for the descriptions: a) "This is a toilet. It is made of porcelain." b) "A rectangular wooden office table. It is surrounded with black office chairs." c) "This bathroom vanity is brown. It is smooth." d) "A green and black chair is pulled out from a desk. There is a keyboard to the left of it." e) "This chair is facing up and left. It is brown." f) "The object is a cabinet. It is directly to your left as you enter through the door."

Figure 5: The visualization results of some failure cases. The ground-truth boxes are labeled in blue and the incorrectly predicted boxes are marked in red. The descriptions are: 1) "This is the long table. There are monitors on top of it." 2) "The computer desk has metal legs. It is in the corner of the room." 3) "This is a wooden desk with L shape. The desk is close to wall." 4) "This is a white cabinet. It is in the corner of the room." 5) "It is a brown wooden table. It is placed to the left of the bed." 6) "This is a blue chair. It is on the round table right in front of the chairs of the cubicles."
# B Statistics of Relative Position Words
To further illustrate that relative position relations are a general and fundamental issue in the 3D visual grounding task, we count some common words representing relative spatial relations in three 3D visual grounding datasets (i.e., ScanRefer (Chen et al., 2020), Nr3D (Achlioptas et al., 2020), and Sr3D (Achlioptas et al., 2020)) in Figures 6 and 7. From Figure 6, in ScanRefer at least $97\%$ of descriptions contain relative position relations, and more than $63\%$ of sentences use multiple relative position relations to indicate the target object. Besides, about $90\%$ of sentences in Nr3D utilize relative position words, and almost all samples in Sr3D require relative position relations between objects for localization. As shown in Figure 7, in ScanRefer and Nr3D, which collect human utterances as descriptions, most of the commonly used relative position words appear in the sentences. This further demonstrates the importance of modeling relative position relations from different perspectives.
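The counting procedure above can be sketched as simple keyword matching. The word list below is a hypothetical illustration; the paper's exact vocabulary of relative position words is not given here.

```python
# Hypothetical vocabulary of common relative-position words (the paper's
# actual list is not reproduced in this section).
REL_WORDS = {"left", "right", "above", "below", "behind", "front",
             "between", "next", "near", "under", "corner", "closest"}

def count_rel_words(sentence):
    """Count occurrences of common relative-position words in a description."""
    tokens = sentence.lower().replace(".", " ").replace(",", " ").split()
    return sum(t in REL_WORDS for t in tokens)
```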
# C Implementation Details

We adopt the pre-trained PointNet++ (Qi et al., 2017) and the language encoder of CLIP (Radford et al., 2021) to extract features from the point clouds and the language descriptions, respectively, while the rest of the network is trained from scratch. We set the dimension $d$ of all transformer layers to 384 and the number of transformer layers to 4. Our model is trained in an end-to-end manner with the AdamW (Kingma and Ba, 2014) optimizer and a batch size of 15 for 36 epochs. The initial learning rates of the transformer layers and of the rest of the model are set to $1e-4$ and $1e-3$, respectively, and we use a cosine learning rate decay strategy to schedule them. The seed point number $M$ and the keypoint number $M_0$ are set to 1024 and 256. For the soft-labeling strategy, the label number $N_{s}$ is set to 24. In the piecewise index function, we set $\alpha : \beta : \gamma = 1:2:4$ with $\beta = 20$. When calculating the relative position index, the coordinates of all points are linearly scaled to $[0, 100]$.

Figure 6: Ratio of sentences containing the specific number of relative position words in three 3D visual grounding datasets.
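The piecewise index function itself is not written out here. The sketch below assumes the logarithmic piecewise function popularized by iRPE (Wu et al., 2021), which is consistent with the stated ratio $\alpha : \beta : \gamma = 1:2:4$ and $\beta = 20$ (i.e., $\alpha = 10$, $\gamma = 40$); treat it as a plausible reading, not the exact implementation:

```python
import math

def piecewise_index(x: float, alpha: float = 10, beta: float = 20,
                    gamma: float = 40) -> int:
    """Map a scaled relative coordinate to a bounded integer index.

    Assumption: follows the piecewise function of iRPE (Wu et al., 2021);
    the text states only the ratio alpha:beta:gamma = 1:2:4 and beta = 20.
    Near offsets (|x| <= alpha) are indexed exactly; far offsets are
    compressed logarithmically and clipped to +/- beta.
    """
    if abs(x) <= alpha:
        return round(x)
    idx = alpha + math.log(abs(x) / alpha) / math.log(gamma / alpha) * (beta - alpha)
    return int(math.copysign(min(beta, round(idx)), x))
```

With coordinates scaled to $[0, 100]$, the largest possible offsets are clipped to the boundary index $\pm\beta$, keeping the number of relative-position buckets bounded.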
In the ablation study, we further divide the "Multiple" set of ScanRefer into "Non-Rel/One-Rel/Multi-Rel" subsets according to the number of relational descriptions in each sentence. Specifically, we follow the statistical method of Sec. B and count the common words representing relative spatial relations.
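The resulting partition rule amounts to a simple threshold on that count (function name hypothetical, for illustration only):

```python
def rel_subset(num_rel_words: int) -> str:
    """Assign a ScanRefer 'Multiple' sample to an ablation subset
    by its count of relative-position words (illustrative sketch)."""
    if num_rel_words == 0:
        return "Non-Rel"
    return "One-Rel" if num_rel_words == 1 else "Multi-Rel"
```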
# D Prior Methods for Comparison

To validate the effectiveness of the proposed 3DRP-Net, Sec. 4.2 comprehensively compares it to many previous state-of-the-art methods: 1) ReferIt3DNet (Achlioptas et al., 2020); 2) ScanRefer (Chen et al., 2020); 3) TGNN (Huang et al., 2021); 4) InstanceRefer (Yuan et al., 2021); 5) LanguageRefer (Roh et al., 2022); 6) SAT (Yang et al., 2021); 7) 3DVG-Trans (Zhao et al., 2021b); 8) MVT (Huang et al., 2022); 9) 3D-SPS (Luo et al., 2022); 10) 3DJCG (Cai et al., 2022); 11) ViL3DRel (Chen et al., 2022).
(a) ScanRefer

(b) Nr3D

(c) Sr3D

Figure 7: Frequency of some commonly used relative position words in three 3D visual grounding datasets.
3drpnet3drelativepositionawarenetworkfor3dvisualgrounding/images.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:937af4f6deed6b96cd022a6e626022002d49a583e91d8e5b3e79a93c16233c1d
+size 692647
3drpnet3drelativepositionawarenetworkfor3dvisualgrounding/layout.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e6bb97f0d2d6bcb837145686074b30fbeb6c3416e0aa0ed5289bea40716ef4f3
+size 478047
4and7bitlabelingforprojectiveandnonprojectivedependencytrees/f0dc7714-3e2d-4539-b5f1-b4652fcb70f7_content_list.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8446ea78a2d9d051dcffbbce5167bbf2c5d9085afddaed90c488e8609e6e45d2
+size 83744
4and7bitlabelingforprojectiveandnonprojectivedependencytrees/f0dc7714-3e2d-4539-b5f1-b4652fcb70f7_model.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:db1bd5abec3a86a67443cfd97c7388dc75eede38d57d692c0d66a83ee618716c
+size 95974
4and7bitlabelingforprojectiveandnonprojectivedependencytrees/f0dc7714-3e2d-4539-b5f1-b4652fcb70f7_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:56c2b8268db5fb7ba6b6483b7e0eaeb98657889ec470180525a301b062fffe21
+size 291776
4and7bitlabelingforprojectiveandnonprojectivedependencytrees/full.md
ADDED
@@ -0,0 +1,308 @@
# 4 and 7-bit Labeling for Projective and Non-Projective Dependency Trees

Carlos Gómez-Rodríguez, Diego Roca and David Vilares

Universidade da Coruña, CITIC

Departamento de Ciencias de la Computación y Tecnologías de la Información

Campus de Elviña s/n, 15071

A Coruña, Spain

{carlos.gomez, d.rocal, david.vilares}@udc.es

# Abstract

We introduce an encoding for syntactic parsing as sequence labeling that can represent any projective dependency tree as a sequence of 4-bit labels, one per word. The bits in each word's label represent (1) whether it is a right or left dependent, (2) whether it is the outermost (left/right) dependent of its parent, (3) whether it has any left children and (4) whether it has any right children. We show that this provides an injective mapping from trees to labels that can be encoded and decoded in linear time. We then define a 7-bit extension that represents an extra plane of arcs, extending the coverage to almost full non-projectivity (over $99.9\%$ empirical arc coverage). Results on a set of diverse treebanks show that our 7-bit encoding obtains substantial accuracy gains over the previously best-performing sequence labeling encodings.
# 1 Introduction

Approaches that cast parsing as sequence labeling have gathered interest, as they are simple, fast (Anderson and Gómez-Rodríguez, 2021), highly parallelizable (Amini and Cotterell, 2022) and produce outputs that are easy to feed to other tasks (Wang et al., 2019). Their main ingredient is the encoding that maps trees into sequences of one discrete label per word. Thus, various such encodings have been proposed both for constituency (Gómez-Rodríguez and Vilares, 2018; Amini and Cotterell, 2022) and dependency parsing (Strzyz et al., 2019; Lacroix, 2019; Gómez-Rodríguez et al., 2020).

Most such encodings have an unbounded label set, whose cardinality grows with sentence length. An exception for constituent parsing is tetra-tagging (Kitaev and Klein, 2020). For dependency parsing, to our knowledge, no bounded encodings were known. Simultaneously to this work, Amini et al. (2023) have just proposed one: hexatagging, where projective dependency trees are represented by tagging each word with one of a set of 8 tags.<sup>1</sup>

Figure 1: A dependency tree and its 4-bit encoding.

Contribution We present a bounded sequence-labeling encoding that represents any projective dependency tree with 4 bits (i.e., 16 distinct labels) per word. While this requires one more bit than hexatagging, it is arguably more straightforward, as the bits directly reflect properties of each node in the dependency tree, without the intermediate constituent structure that hexatagging requires. It also has a clear relation to existing bracketing encodings, and a straightforward non-projective extension using 7 bits with almost full non-projective coverage. Empirical results show that our encoding provides more accurate parsers than the existing unbounded bracketing encodings, which had the best previous results among sequence-labeling encodings, although it underperforms hexatagging.
# 2 Projective Encoding

Let $T_{n}$ be a set of unlabeled dependency trees for sentences of length $n$. A sequence-labeling encoding defines a function $\Phi_{n}: T_{n} \to L^{n}$, for a label set $L$. Thus, each tree for a sentence $w_{1} \ldots w_{n}$ is encoded as a sequence of labels $l_{1} \ldots l_{n}$ that assigns a label $l_{i} \in L$ to each word $w_{i}$.

We define the 4-bit projective encoding as an encoding where $T_{n}$ is the set of projective dependency trees, and we assign to each word $w_{i}$ a label $l_{i} = b_{0}b_{1}b_{2}b_{3}$, such that $b_{j}$ is a boolean as follows:

- $b_{0}$ is true if $w_{i}$ is a right dependent, and false if it is a left dependent. Root nodes are considered right dependents for this purpose (i.e., we assume that they are linked as dependents of a dummy root node $w_{0}$ located to the left).
- $b_{1}$ is true iff $w_{i}$ is the outermost right (or left) dependent of its parent node.
- $b_{2}$ (respectively, $b_{3}$) is true iff $w_{i}$ has one or more left (right) dependents.

All combinations of the four bits are possible, so we have 16 possible labels.
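As a concrete illustration (a sketch under our own conventions, not the authors' code), all four bits can be read off a head array:

```python
def encode_4bit(heads):
    """Compute the 4-bit label (b0, b1, b2, b3) of each word.

    heads[i-1] is the head of word i (1-based words; head 0 is the
    dummy root located to the left of the sentence).
    """
    n = len(heads)
    labels = []
    for i in range(1, n + 1):
        h = heads[i - 1]
        b0 = i > h  # right dependent (the root, with h = 0, counts as right)
        # siblings attached to the same parent, on the same side as i
        same_side = [j for j in range(1, n + 1)
                     if heads[j - 1] == h and (j > h) == (i > h)]
        # outermost dependent: the sibling farthest from the parent
        b1 = i == (max(same_side) if i > h else min(same_side))
        b2 = any(heads[j - 1] == i for j in range(1, i))          # left deps
        b3 = any(heads[j - 1] == i for j in range(i + 1, n + 1))  # right deps
        labels.append((b0, b1, b2, b3))
    return labels
```

For example, for heads = [2, 0, 2] (word 2 is the root, with words 1 and 3 as its only dependents), word 1 is an outermost left dependent with no children and word 3 an outermost right dependent with no children, while word 2 has both kinds of children.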
For easier visualization and comparison to existing bracketing encodings, we will represent the values of $b_{0}$ as $>$ (right dependent) or $<$ (left dependent), $b_{1}$ as $*$ (true) or blank (false), and $b_{2}$ and $b_{3}$ respectively as $\backslash$ and $/$ (true) or blank (false). We will use these representations with set notation to make claims about a label's bits, e.g., $>* \in l$ means that label $l$ has $b_{0} = 1, b_{1} = 1$. Figure 1 shows a sample tree encoded with this method.

We will now show how to encode and decode trees, and prove that the encoding is a total, injective map from projective trees to label sequences.

Encoding and Totality Encoding a tree is trivial: one just needs to traverse each word and apply the definition of each bit to obtain the label. This also means that our encoding from trees to labels is a total function, as the labels are well defined for any dependency tree (and thus, for any projective tree).

Decoding and Injectivity Assuming a well-formed sequence of labels, we can decode it to a tree. We can partition the arcs of any tree $t \in T_{n}$ into a subset of left arcs, $t_{l}$, and a subset of right arcs, $t_{r}$. We will decode these subsets separately. Algorithm 1 shows how to obtain the arcs of $t_{r}$.
The idea of the algorithm is as follows: we read the labels from left to right. When we find a label containing /, we know that the corresponding node will be the source of one or more right arcs, and we push it onto the stack. When we find a label with >, we know that its node is the target of a right arc, so we link it to the / on top of the stack. Additionally, if the label contains *, the node is a rightmost sibling, so we pop the stack, because no more arcs will be created from the same head. Otherwise, we do not pop, as we expect more arcs from the same origin.<sup>3</sup>

Algorithm 1 To decode right arcs in the 4-bit encoding.
1: function DECODERIGHTARCS $(l_{1}..l_{n})$
2: s ← empty stack
3: a ← empty set of arcs
4: s.push(0) ▷ corresponding to dummy root
5: for i ← 1 to n do
6: if $> \in l_{i}$ then
7: a.addArc(s.peek() → i)
8: if $* \in l_{i}$ then
9: s.pop()
10: end if
11: end if
12: if $/ \in l_{i}$ then
13: s.push(i)
14: end if
15: end for
16: return a
17: end function
Intuitively, this lets us generate all the possible non-crossing combinations of right arcs: the stack enforces projectivity (to cover a / label with a dependency we need to remove it from the stack, so crossing arcs from inside the covering dependency to its right are not allowed), and the distinction between $>$ with and without $*$ allows us to link a new node to any of the previous, non-covered nodes.

To decode left arcs, we use a symmetric algorithm DecodeLeftArcs (not shown, as it is analogous), which traverses the labels from right to left, operating on the elements $\backslash$ and $<$ rather than $/$ and $>$, with the difference that the stack is not initialized with the dummy root node (as the arc originating in it is a right arc). By the same reasoning as above, this algorithm can obtain all the possible non-crossing configurations of left arcs, and hence the mapping is injective. The decoding is trivially linear-time with respect to sequence length.
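Algorithm 1 translates almost line by line into code. A sketch, representing each label as the set of its bracket symbols:

```python
def decode_right_arcs(labels):
    """Decode right arcs from 4-bit labels (Algorithm 1 sketch).

    labels: one set of symbols per word, drawn from {'<', '>', '*', '/', '\\'};
    words are 1-based and position 0 is the dummy root.
    """
    stack = [0]                       # initialized with the dummy root
    arcs = set()
    for i, label in enumerate(labels, start=1):
        if '>' in label:
            arcs.add((stack[-1], i))  # arc head -> dependent i
            if '*' in label:
                stack.pop()           # rightmost sibling: head takes no more deps
        if '/' in label:
            stack.append(i)           # i will head one or more right arcs
    return arcs
```

For the tree with heads [2, 0, 2] — labels {'<', '*'}, {'>', '*', '\\', '/'}, {'>', '*'} — this recovers the right arcs (0, 2) and (2, 3), while the symmetric DecodeLeftArcs pass recovers (2, 1).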
A sketch of an injectivity proof can be based on showing that the set of right arcs generated by Algorithm 1 (and the analogous set of left arcs) is the only possible one that meets the conditions of the labels and does not have crossing arcs (hence, we cannot have two projective trees with the same encoding). To prove this, we can show that at each iteration, the arc added by line 7 of Algorithm 1 is the only possible alternative that can lead to a legal projective tree (i.e., that s.peek() is the only possible parent of node $i$). This is true because (1) if we choose a parent to the left of s.peek(), then we cover s.peek() with a dependency while it has not yet found all of its right dependents (as otherwise it would have been popped from the stack), so a crossing arc will be generated later; (2) if we choose a parent to the right of s.peek() and to the left of $i$, its label must contain / (otherwise, by definition, it could not have right dependents) and it cannot be on the stack (as the stack is always ordered from left to right), so it must have been removed from the stack after finding all its right dependents, and adding one more would violate the conditions of the encoding; and finally (3) a parent to the right of $i$ cannot be chosen, as the algorithm only considers right arcs. Together with the analogous proof for the symmetric algorithm, this shows injectivity.

Coverage While we have defined and proved this encoding for projective trees, its coverage is actually larger: it can encode any dependency forest (i.e., it does not require connectedness) such that arcs in the same direction do not cross (i.e., it can handle some non-projective structures where arcs only cross in opposite directions, as the processes of encoding and decoding left and right arcs are independent). This is just like the unbounded bracketing encodings of Strzyz et al. (2019), but this extra coverage is not very large in practice, and we will define a better non-projective extension later.
Non-surjectivity Just like other sequence-labeling encodings (Strzyz et al., 2019; Lacroix, 2019; Strzyz et al., 2020, inter alia), ours is not surjective: not every label sequence corresponds to a valid tree, so heuristics are needed to fix cases where the sequence labeling component generates an invalid sequence. This can happen regardless of whether we only consider a tree to be valid if it is projective, or we accept the extra coverage mentioned above. For example, a sequence where the last word is marked as a left child ($<$) is invalid in either case. Trying to decode an invalid label sequence will result in popping an empty stack or leaving material in the stack after finishing Algorithm 1 or its symmetric counterpart. In practice, we can skip dependency creation when the stack is empty, ignore material left in the stack after decoding, break cycles and (if we require connectedness) attach any unconnected nodes to a neighbor.

Figure 2: A non-projective tree and its 7-bit encoding.

# 3 Non-Projective Encoding

For a wider coverage of non-projective dependency trees (including the overwhelming majority of trees found in treebanks), we use the same technique as defined for unbounded brackets by Strzyz et al. (2020): we partition dependency trees into two subsets (planes) of arcs (details in Appendix D), which lets us define a 7-bit non-projective encoding by assigning each word $w_{i}$ a label $l_{i} = b_{0}\dots b_{6}$ where:
- $b_{0}b_{1}$ can take the values $<0$ ($w_{i}$ is a left dependent in the first plane), $>0$ (right dependent in the first plane), $<1$ or $>1$ (the same for the second plane).
- $b_{2}$ is true iff $w_{i}$ is the outermost right (or left) dependent of its parent (regardless of plane). We represent it as $*$ if true, or blank if false.
- $b_{3}$ (respectively, $b_{4}$) is true iff $w_{i}$ has one or more left (right) dependents in the first plane. We denote it as $\backslash 0$ ($/0$) if true, blank if false.
- $b_{5}$ and $b_{6}$ are analogous to $b_{3}$ and $b_{4}$, but for the second plane, represented as $\backslash 1$ or $/1$.

Every 7-bit combination is possible, leading to 128 distinct labels. Figure 2 shows an example of a non-projective tree represented with this encoding.

The encoding is able to cover every possible dependency tree whose arc set can be partitioned into two subsets (planes), such that arcs with the same direction and plane do not cross.
This immediately follows from defining the decoding with a set of four algorithms: two for decoding left and right arcs in the first plane (defined as Algorithm 1 and its symmetric counterpart, but considering only the symbols that refer to first-plane arcs), and two identical decoding passes for the second plane. With this, injectivity is shown in the same way as for the 4-bit encoding. Decoding is still linear-time.

<table><tr><td rowspan="2">Treebank</td><td colspan="2">B</td><td colspan="2">B-2P</td><td colspan="2">4-bit</td><td colspan="2">7-bit</td></tr>
<tr><td>L</td><td>C</td><td>L</td><td>C</td><td>L</td><td>C</td><td>L</td><td>C</td></tr>
<tr><td>PTB</td><td>114</td><td>>99.99</td><td>124</td><td>100</td><td>16</td><td>>99.99</td><td>28</td><td>100</td></tr>
<tr><td>Russian-GSD</td><td>104</td><td>99.76</td><td>166</td><td>>99.99</td><td>16</td><td>99.61</td><td>70</td><td>>99.99</td></tr>
<tr><td>Finnish-TDT</td><td>121</td><td>99.72</td><td>172</td><td>>99.99</td><td>16</td><td>99.35</td><td>65</td><td>>99.99</td></tr>
<tr><td>Ancient-Greek-Perseus</td><td>259</td><td>95.81</td><td>527</td><td>99.24</td><td>16</td><td>88.93</td><td>128</td><td>99.24</td></tr>
<tr><td>Chinese-GSD</td><td>101</td><td>99.91</td><td>152</td><td>>99.99</td><td>16</td><td>99.84</td><td>46</td><td>>99.99</td></tr>
<tr><td>Hebrew-HTB</td><td>97</td><td>99.98</td><td>125</td><td>100</td><td>16</td><td>99.98</td><td>36</td><td>100</td></tr>
<tr><td>Tamil-TTB</td><td>51</td><td>99.94</td><td>58</td><td>100</td><td>16</td><td>99.98</td><td>22</td><td>100</td></tr>
<tr><td>Uyghur-UDT</td><td>78</td><td>99.43</td><td>150</td><td>>99.99</td><td>16</td><td>99.85</td><td>58</td><td>>99.99</td></tr>
<tr><td>Wolof-WTB</td><td>74</td><td>99.83</td><td>111</td><td>>99.99</td><td>16</td><td>99.06</td><td>46</td><td>>99.99</td></tr>
<tr><td>English-EWT</td><td>110</td><td>99.88</td><td>174</td><td>>99.99</td><td>16</td><td>99.75</td><td>63</td><td>>99.99</td></tr>
<tr><td>Macro average</td><td>110.9</td><td>99.43</td><td>165.9</td><td>99.92</td><td>16</td><td>98.62</td><td>56.2</td><td>99.92</td></tr></table>
Note that the set of trees covered by the encoding, described above, is a variant of the set of 2-Planar trees (Yli-Jyrä, 2003; Gómez-Rodríguez and Nivre, 2010), which are trees that can be split into two planes such that arcs within the same plane do not cross, regardless of direction. Compared to 2-Planar trees, and just like the encodings of Strzyz et al. (2020), our set is extended in that it allows arcs with opposite directions to cross within the same plane. However, it also loses some trees, because the dummy root arc is also counted when restricting crossings, whereas in 2-Planar trees it is ignored.

# 4 Experiments

We compare our 4-bit and 7-bit encodings to their unbounded analogs, the bracketing (Strzyz et al., 2019) and 2-planar bracketing (Strzyz et al., 2020) encodings, which overall are the best performing in previous work (Muñoz-Ortiz et al., 2021). We use MaChAmp (van der Goot et al., 2021) as a sequence labeling library, with default hyperparameters (Appendix B). We use XLM-RoBERTa (Conneau et al., 2020) followed by two separate one-layer feed-forward networks, one for syntactic labels and another for dependency types. We evaluate on the Penn Treebank (Stanford Dependencies 3.3.0 conversion) and on UD 2.9: a set of 9 linguistically diverse treebanks taken from Anderson and Gómez-Rodríguez (2020), and a low-resource set of 7 (Anderson et al., 2021). We consider multiple subsets of treebanks because conclusions drawn from a single subset could be fragile (Alonso-Alonso et al., 2022).
Table 1: Number of labels (L) and coverage (C) for each treebank and encoding. B and B-2P are the baselines.

Table 1 compares the compactness of the encodings by showing the number of unique syntactic labels needed to encode the (unlabeled) trees in the training set (i.e., the label set of the first task). The new encodings yield clearly smaller label set sizes, as predicted in theory. In particular, the 4-bit encoding always uses its 16 distinct labels. The 7-bit encoding only needs its theoretical maximum of 128 labels for the Ancient Greek treebank (the most non-projective one). On average, it uses around a third as many labels as the 2-planar bracketing encoding, and half as many as the basic bracketing. Regarding coverage, the 7-bit encoding covers over $99.9\%$ of arcs, like the 2-planar bracketing. The 4-bit encoding has lower coverage than basic brackets: both cover all projective trees, but they differ in their coverage of non-projectivity (see Appendix C for an explanation of the reasons). More detailed data (e.g., coverage and label set size for the low-resource treebanks) is in Appendix A.

<table><tr><td>Treebank</td><td>B</td><td>B-2P</td><td>4-bit</td><td>7-bit</td></tr>
<tr><td>PTB</td><td>94.62</td><td>92.03</td><td>94.72</td><td>94.66</td></tr>
<tr><td>Russian-GSD</td><td>87.84</td><td>87.36</td><td>88.04</td><td>89.58</td></tr>
<tr><td>Finnish-TDT</td><td>92.45</td><td>92.37</td><td>92.19</td><td>92.74</td></tr>
<tr><td>Ancient-Greek-Perseus</td><td>71.84</td><td>71.76</td><td>67.63</td><td>75.36</td></tr>
<tr><td>Chinese-GSD</td><td>85.23</td><td>84.38</td><td>85.36</td><td>85.70</td></tr>
<tr><td>Hebrew-HTB</td><td>90.25</td><td>90.21</td><td>90.81</td><td>90.58</td></tr>
<tr><td>Tamil-TTB</td><td>63.65</td><td>61.68</td><td>65.16</td><td>65.69</td></tr>
<tr><td>Uyghur-UDT</td><td>67.22</td><td>65.49</td><td>67.17</td><td>69.10</td></tr>
<tr><td>Wolof-WTB</td><td>75.04</td><td>74.59</td><td>76.24</td><td>75.57</td></tr>
<tr><td>English-EWT</td><td>91.03</td><td>91.30</td><td>89.48</td><td>91.78</td></tr>
<tr><td>Macro average</td><td>81.92</td><td>81.12</td><td>81.68</td><td>83.08</td></tr></table>

Table 2: LAS for the linguistically-diverse test sets.

<table><tr><td>Treebank</td><td>B</td><td>B-2P</td><td>4-bit</td><td>7-bit</td></tr>
<tr><td>Belarusian-HSE</td><td>85.21</td><td>86.83</td><td>86.77</td><td>88.23</td></tr>
<tr><td>Galician-TreeGal</td><td>78.32</td><td>77.94</td><td>81.54</td><td>81.22</td></tr>
<tr><td>Lithuanian-HSE</td><td>52.26</td><td>49.53</td><td>55.56</td><td>56.02</td></tr>
<tr><td>Marathi-UFAL</td><td>62.13</td><td>55.19</td><td>66.50</td><td>67.19</td></tr>
<tr><td>Old-East-Slavic-RNC</td><td>64.15</td><td>63.43</td><td>68.96</td><td>68.84</td></tr>
<tr><td>Welsh-CCG</td><td>81.17</td><td>80.91</td><td>82.31</td><td>82.00</td></tr>
<tr><td>Tamil-TTB</td><td>63.65</td><td>61.68</td><td>65.16</td><td>65.69</td></tr>
<tr><td>Macro average</td><td>69.56</td><td>67.93</td><td>72.40</td><td>72.74</td></tr></table>

Table 3: LAS for the low-resource test sets.
Table 2 shows the models' performance in terms of LAS. The 4-bit encoding has mixed performance, excelling in highly projective treebanks like the PTB or Hebrew-HTB but falling behind in non-projective ones like Ancient Greek, which is consistent with its lower non-projective coverage. The 7-bit encoding, however, does not exhibit this problem (given the almost total arc coverage mentioned above), and it outperforms both baselines for every treebank: the basic bracketing by 1.16 and the 2-planar one by 1.96 LAS points on average.<sup>5</sup>

If we focus on low-resource corpora (Table 3), label set sparsity is especially relevant, so compactness further boosts accuracy. The new encodings obtain large improvements, with the 7-bit one surpassing the best baseline by over 3 average LAS points.
# 4.1 Additional results: splitting bits and external parsers

We perform additional experiments to test implementation variants of our encodings, as well as to put our results into context with respect to non-sequence-labeling parsers and simultaneous work. In the previous tables, both for the 4-bit and 7-bit experiments, all bits were predicted as a single, atomic task. We contrast this with a multi-task version where certain groups of bits are predicted separately. We only explore a preliminary division of bits. For the 4-bit encoding, instead of predicting a label of the form $b_{0}b_{1}b_{2}b_{3}$, the model predicts two labels of the form $b_{0}b_{1}$ and $b_{2}b_{3}$, respectively. We call this method 4-bit-s. For the 7-bit encoding, we predict the bits corresponding to each plane as a separate task, i.e., $b_{0}b_{2}b_{3}b_{4}$ and $b_{1}b_{5}b_{6}$. We call this method 7-bit-s. We acknowledge that other divisions could be better; however, exploring them falls outside the scope of this paper.

We additionally compare our results with other relevant models. As mentioned earlier, alongside this work, Amini et al. (2023) introduced a parsing-as-tagging method called hexatagging, which we abbreviate as 6tg in what follows. We implement 6tg under the same framework as our encodings for a homogeneous comparison, and we predict the hexatags through two separate linear layers, one for the arc representation and another for the dependency type. We also consider a split version, 6tg-s, where the two components of the arc representation are predicted separately. For a better understanding of their method, we refer the reader to Amini et al. and Appendix E. Finally, we include a comparison against the biaffine graph-based parser by Dozat et al. (2017). For this, we trained the implementation in SuPar<sup>6</sup> using xlm-roberta-large as the encoder, which is often taken as a strong upper-bound baseline.
|
| 142 |
+
|
| 143 |
+
Table 4 compares the performance of external parsers with our bit encodings. First, the results show that the choice of whether to split labels into components or not has a considerable influence, both for 6tg (where splitting is harmful across the board) and for our encodings (where it is mostly
|
| 144 |
+
|
| 145 |
+
<table><tr><td>Treebank</td><td>4-bit</td><td>7-bit</td><td>6tg</td><td>6tg-s</td><td>4-bit-s</td><td>7-bit-s</td><td>biaffine</td></tr><tr><td>PTB</td><td>94.72</td><td>94.66</td><td>96.13</td><td>96.04</td><td>94.92</td><td>94.88</td><td>95.32</td></tr><tr><td>RussianGSD</td><td>88.04</td><td>89.58</td><td>91.83</td><td>90.95</td><td>88.78</td><td>90.18</td><td>90.17</td></tr><tr><td>FinnishTDT</td><td>92.19</td><td>92.74</td><td>94.12</td><td>92.66</td><td>92.11</td><td>93.10</td><td>93.33</td></tr><tr><td>Anc-GreekPerseus</td><td>67.63</td><td>75.36</td><td>73.12</td><td>72.78</td><td>68.02</td><td>76.12</td><td>79.81</td></tr><tr><td>ChineseGSD</td><td>85.36</td><td>85.70</td><td>87.39</td><td>87.32</td><td>85.99</td><td>86.13</td><td>88.67</td></tr><tr><td>HebrewHTB</td><td>90.81</td><td>90.58</td><td>92.82</td><td>91.27</td><td>90.81</td><td>91.05</td><td>91.88</td></tr><tr><td>TamilTTB</td><td>65.16</td><td>65.69</td><td>78.33</td><td>76.32</td><td>66.99</td><td>67.19</td><td>67.52</td></tr><tr><td>UyghurUDT</td><td>67.17</td><td>69.10</td><td>71.11</td><td>65.23</td><td>67.55</td><td>69.13</td><td>72.33</td></tr><tr><td>WolofwTF</td><td>76.24</td><td>75.57</td><td>76.04</td><td>72.11</td><td>76.85</td><td>76.24</td><td>76.73</td></tr><tr><td>EnglishewT</td><td>89.48</td><td>91.78</td><td>92.62</td><td>90.06</td><td>89.48</td><td>92.15</td><td>92.72</td></tr><tr><td>Macro avg</td><td>81.68</td><td>83.08</td><td>85.35</td><td>83.47</td><td>82.15</td><td>83.62</td><td>84.85</td></tr></table>
|
| 146 |
+
|
| 147 |
+
Table 4: LAS comparison against related parsers, for the linguistically-diverse test sets.
|
| 148 |
+
|
| 149 |
+
<table><tr><td>Treebank</td><td>4bit</td><td>7bit</td><td>6tg</td><td>6tg-s</td><td>4bit-s</td><td>7bit-s</td><td>biaffine</td></tr><tr><td>BelarusianHSE</td><td>86.77</td><td>88.23</td><td>89.14</td><td>89.01</td><td>87.01</td><td>88.52</td><td>93.83</td></tr><tr><td>GalicianTTreeGal</td><td>81.54</td><td>81.22</td><td>82.03</td><td>81.94</td><td>81.97</td><td>81.31</td><td>86.81</td></tr><tr><td>LithuanianHSE</td><td>55.56</td><td>56.02</td><td>64.47</td><td>64.74</td><td>55.97</td><td>57.31</td><td>56.75</td></tr><tr><td>MarathiUFL</td><td>66.50</td><td>67.19</td><td>75.00</td><td>74.66</td><td>66.92</td><td>67.57</td><td>61.22</td></tr><tr><td>Old-East-SlavicRNC</td><td>68.96</td><td>68.84</td><td>71.35</td><td>71.37</td><td>69.02</td><td>68.86</td><td>72.06</td></tr><tr><td>WelshCCG</td><td>82.31</td><td>82.00</td><td>87.05</td><td>86.92</td><td>82.62</td><td>82.13</td><td>85.05</td></tr><tr><td>TamilTTB</td><td>65.16</td><td>65.69</td><td>78.33</td><td>77.91</td><td>65.27</td><td>65.82</td><td>76.12</td></tr><tr><td>Macro average</td><td>72.40</td><td>72.74</td><td>78.19</td><td>78.07</td><td>72.68</td><td>73.07</td><td>75.97</td></tr></table>
Table 5: LAS comparison against related parsers, for the low-resource test sets.
beneficial, perhaps because the structure of the encoding in bits with independent meanings naturally lends itself to multi-task learning). Second, on average, the best (multi-task) version of our 7-bit encoding is about 1.7 LAS points behind the state-of-the-art 6tg parser and 1.2 points behind the biaffine parser. However, the difference between the versions with and without multi-task learning suggests that there may be room for improvement by investigating different splitting techniques. Additionally, Table 14 in Appendix F compares the processing speeds of these parsers (on a single CPU), and Tables 15 and 16 in Appendix G show how often heuristics are applied in decoding.
Finally, Table 5 shows the external comparison on the low-resource treebanks, where our encodings lag further behind biaffine and especially 6tg, which surpasses 7-bit-s by over 5 points.
# 5 Conclusion
We have presented two new bracketing encodings for dependency parsing as sequence labeling, which use a bounded number of labels. The 4-bit encoding, designed for projective trees, excels in projective treebanks and low-resource setups. The 7-bit encoding, designed to accommodate non-projectivity, clearly outperforms the best prior sequence-labeling encodings across a diverse set of treebanks. The source code is available at https://github.com/Polifack/CoDeLin/releases/tag/1.25.
# Limitations
In our experiments, we do not perform any hyperparameter optimization or other task-specific tweaks to try to bring the raw accuracy figures as close as possible to the state of the art. We do this for several reasons: (1) limited resources; (2) the paper has a mainly theoretical focus, with the experiments serving to demonstrate that our encodings are useful compared to the alternatives (the baselines), rather than chasing state-of-the-art accuracy; and (3) we believe that one of the primary advantages of parsing as sequence labeling is its ease of use for practitioners, as one can perform parsing with any off-the-shelf sequence labeling library, and our results directly reflect this kind of usage. We note that, even under such a setup, the raw accuracies are remarkably good.
# Ethics Statement
This is a primarily theoretical paper that presents new encodings for the well-known task of dependency parsing. We conduct experiments with the sole purpose of evaluating the new encodings, and we use publicly-available standard datasets that have long been in wide use among the NLP community. Hence, we do not think this paper raises any ethical concern.
# Acknowledgments
This work has received funding from the European Research Council (ERC), under the Horizon Europe research and innovation programme (SALSA, grant agreement No 101100615), ERDF/MICINN-AEI (SCANNER-UDC, PID2020-113230RB-C21), Xunta de Galicia (ED431C 2020/11), Grant GAP (PID2022-139308OA-I00) funded by MCIN/AEI/10.13039/501100011033/ and by ERDF "A way of making Europe", and Centro de Investigación de Galicia "CITIC", funded by the Xunta de Galicia through the collaboration agreement between the Consellería de Cultura, Educación, Formación Profesional e Universidades and the Galician universities for the reinforcement of the research centres of the Galician University System (CIGUS).
# References

Iago Alonso-Alonso, David Vilares, and Carlos Gómez-Rodríguez. 2022. The fragility of multi-treebank parsing evaluation. In Proceedings of the 29th International Conference on Computational Linguistics, pages 5345–5359, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.

Afra Amini and Ryan Cotterell. 2022. On parsing as tagging. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 8884–8900, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Afra Amini, Tianyu Liu, and Ryan Cotterell. 2023. Hexatagging: Projective dependency parsing as tagging. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1453–1464, Toronto, Canada. Association for Computational Linguistics.

Mark Anderson, Mathieu Dehouck, and Carlos Gómez-Rodríguez. 2021. A falta de pan, buenas son tortas: The efficacy of predicted UPOS tags for low resource UD parsing. In Proceedings of the 17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021), pages 78–83, Online. Association for Computational Linguistics.

Mark Anderson and Carlos Gómez-Rodríguez. 2020. Distilling neural networks for greener and faster dependency parsing. In Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies, pages 2–13, Online. Association for Computational Linguistics.

Mark Anderson and Carlos Gómez-Rodríguez. 2021. A modest Pareto optimisation analysis of dependency parsers in 2021. In Proceedings of the 17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021), pages 119–130, Online. Association for Computational Linguistics.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440–8451, Online. Association for Computational Linguistics.

Timothy Dozat, Peng Qi, and Christopher D. Manning. 2017. Stanford's graph-based neural dependency parser at the CoNLL 2017 shared task. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 20–30, Vancouver, Canada. Association for Computational Linguistics.

Carlos Gómez-Rodríguez and Joakim Nivre. 2010. A transition-based parser for 2-planar dependency structures. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1492–1501, Uppsala, Sweden. Association for Computational Linguistics.

Carlos Gómez-Rodríguez and Joakim Nivre. 2013. Divisible transition systems and multiplanar dependency parsing. Computational Linguistics, 39(4):799–845.

Carlos Gómez-Rodríguez, Michalina Strzyz, and David Vilares. 2020. A unifying theory of transition-based and sequence labeling parsing. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3776–3793, Barcelona, Spain (Online). International Committee on Computational Linguistics.

Carlos Gómez-Rodríguez and David Vilares. 2018. Constituent parsing as sequence labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1314–1324, Brussels, Belgium. Association for Computational Linguistics.

Nikita Kitaev and Dan Klein. 2020. Tetra-tagging: Word-synchronous parsing with linear-time inference. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6255–6261, Online. Association for Computational Linguistics.

Ophélie Lacroix. 2019. Dependency parsing as sequence labeling with head-based encoding and multitask learning. In Proceedings of the Fifth International Conference on Dependency Linguistics (Depling, SyntaxFest 2019), pages 136–143, Paris, France. Association for Computational Linguistics.

Alberto Muñoz-Ortiz, Michalina Strzyz, and David Vilares. 2021. Not all linearizations are equally data-hungry in sequence labeling parsing. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 978–988, Held Online. INCOMA Ltd.

Michalina Strzyz, David Vilares, and Carlos Gómez-Rodríguez. 2019. Viable dependency parsing as sequence labeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 717–723, Minneapolis, Minnesota. Association for Computational Linguistics.

Michalina Strzyz, David Vilares, and Carlos Gómez-Rodríguez. 2020. Bracketing encodings for 2-planar dependency parsing. In Proceedings of the 28th International Conference on Computational Linguistics, pages 2472–2484, Barcelona, Spain (Online). International Committee on Computational Linguistics.

Rob van der Goot, Ahmet Üstün, Alan Ramponi, Ibrahim Sharaf, and Barbara Plank. 2021. Massive choice, ample tasks (MaChAmp): A toolkit for multi-task learning in NLP. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, pages 176–197, Online. Association for Computational Linguistics.

Yufei Wang, Mark Johnson, Stephen Wan, Yifang Sun, and Wei Wang. 2019. How to best use syntax in semantic role labelling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5338–5343, Florence, Italy. Association for Computational Linguistics.

Anssi Yli-Jyrä. 2017. Bounded-depth high-coverage search space for noncrossing parses. In Proceedings of the 13th International Conference on Finite State Methods and Natural Language Processing (FSMNLP 2017), pages 30–40, Umeå, Sweden. Association for Computational Linguistics.

Anssi Mikael Yli-Jyrä. 2003. Multiplanarity: a model for dependency structures in treebanks. In TLT 2003. Proceedings of the Second Workshop on Treebanks and Linguistic Theories, volume 9 of Mathematical Modelling in Physics, Engineering and Cognitive Sciences, pages 189–200, Växjö, Sweden. Växjö University Press.

Anssi Mikael Yli-Jyrä. 2019. How to embed noncrossing trees in universal dependencies treebanks in a low-complexity regular language. Journal of Language Modelling, 7(2):177–232.
# A Further Data
Tables 6 and 7 show treebank statistics for the general and low-resource set of treebanks, respectively.
<table><tr><td>Treebank</td><td>projective</td><td>1-planar</td><td>r arcs</td><td>avg d</td></tr><tr><td>PTB</td><td>99.89%</td><td>99.89%</td><td>48.74%</td><td>2.295</td></tr><tr><td>RussianGSD</td><td>93.87%</td><td>93.89%</td><td>49.03%</td><td>2.263</td></tr><tr><td>FinnishTDT</td><td>93.85%</td><td>93.88%</td><td>52.88%</td><td>2.365</td></tr><tr><td>Anc-GreekPerseus</td><td>37.66%</td><td>37.67%</td><td>52.81%</td><td>2.447</td></tr><tr><td>ChineseGSD</td><td>97.75%</td><td>97.87%</td><td>63.67%</td><td>2.440</td></tr><tr><td>HebrewHTB</td><td>96.26%</td><td>96.28%</td><td>49.21%</td><td>2.242</td></tr><tr><td>TamilTTB</td><td>98.33%</td><td>98.33%</td><td>68.56%</td><td>2.262</td></tr><tr><td>UyghurUDT</td><td>95.02%</td><td>96.03%</td><td>64.31%</td><td>2.140</td></tr><tr><td>WolofWTF</td><td>97.01%</td><td>97.10%</td><td>48.21%</td><td>2.519</td></tr><tr><td>EnglishEWT</td><td>97.47%</td><td>97.63%</td><td>57.18%</td><td>2.525</td></tr></table>
Table 6: Statistics for the linguistically-diverse set of treebanks: percentage of projective trees, 1-planar trees, percentage of rightward arcs (r arcs), and average dependency distance (avg d).
<table><tr><td>Treebank</td><td>projective</td><td>1-planar</td><td>r arcs</td><td>avg d</td></tr><tr><td>BelarusianHSE</td><td>94.92%</td><td>95.22%</td><td>46.92%</td><td>2.232</td></tr><tr><td>GalicianTreeGal</td><td>88.80%</td><td>89.20%</td><td>53.02%</td><td>2.530</td></tr><tr><td>LithuanianHSE</td><td>85.93%</td><td>86.69%</td><td>58.40%</td><td>2.321</td></tr><tr><td>Old-East-SlavicRNC</td><td>66.26%</td><td>66.35%</td><td>58.21%</td><td>2.433</td></tr><tr><td>MarathiUFAL</td><td>95.92%</td><td>96.35%</td><td>50.81%</td><td>2.362</td></tr><tr><td>WelshCCG</td><td>98.24%</td><td>98.24%</td><td>43.94%</td><td>2.324</td></tr><tr><td>TamilTTB</td><td>98.33%</td><td>98.33%</td><td>68.56%</td><td>2.262</td></tr></table>
Table 7: Statistics for the low-resource set of treebanks: percentage of projective trees, 1-planar trees, percentage of rightward arcs (r arcs), and average dependency distance (avg d).
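As an illustration (our own hypothetical helper, not code from the paper), two of these statistics can be computed directly from head indices, where `heads[i]` is the 1-based head of word `i+1` and 0 denotes the dummy root:

```python
def arc_stats(sentences):
    """Return (% of rightward arcs, average dependency distance).

    sentences: list of head-index lists, one per sentence.
    """
    right = total = dist = 0
    for heads in sentences:
        for dep, head in enumerate(heads, start=1):
            if head == 0:
                continue  # skip the dummy-root attachment
            total += 1
            if head < dep:
                right += 1  # head precedes dependent: rightward arc
            dist += abs(head - dep)
    return 100 * right / total, dist / total
```

For example, `arc_stats([[0, 1, 2]])` (a left-to-right chain of three words) returns `(100.0, 1.0)`.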
Table 8 shows the number of labels and the arc coverage of each considered encoding for the low-resource treebank set of Anderson et al. (2021), in the same notation as in Table 1. As can be seen in the table, the trends are analogous to those for the other treebanks (Table 1 in the main text).
<table><tr><td rowspan="2">Treebank</td><td colspan="2">B</td><td colspan="2">B-2P</td><td colspan="2">4bit</td><td colspan="2">7bit</td></tr><tr><td>L</td><td>C</td><td>L</td><td>C</td><td>L</td><td>C</td><td>L</td><td>C</td></tr><tr><td>BelarusianHSE</td><td>133</td><td>99.53</td><td>228</td><td>>99.99</td><td>16</td><td>99.46</td><td>89</td><td>>99.99</td></tr><tr><td>GalicianTreeGal</td><td>79</td><td>99.51</td><td>129</td><td>>99.99</td><td>16</td><td>99.52</td><td>60</td><td>>99.99</td></tr><tr><td>LithuanianHSE</td><td>64</td><td>98.88</td><td>84</td><td>99.98</td><td>16</td><td>98.82</td><td>45</td><td>99.98</td></tr><tr><td>MarathiUFAL</td><td>46</td><td>99.44</td><td>58</td><td>100</td><td>16</td><td>99.32</td><td>36</td><td>100</td></tr><tr><td>Old-East-SlavicRNC</td><td>134</td><td>97.66</td><td>230</td><td>99.94</td><td>16</td><td>97.46</td><td>86</td><td>99.94</td></tr><tr><td>WelshCCG</td><td>53</td><td>99.90</td><td>71</td><td>100</td><td>16</td><td>99.93</td><td>38</td><td>100</td></tr><tr><td>TamilTTB</td><td>51</td><td>99.82</td><td>58</td><td>100</td><td>16</td><td>99.84</td><td>22</td><td>100</td></tr><tr><td>Macro average</td><td>80.0</td><td>99.25</td><td>122.6</td><td>99.99</td><td>16</td><td>99.19</td><td>53.7</td><td>99.99</td></tr></table>
Table 8: Number of labels (L) and arc coverage (C) for each low-resource treebank and encoding. B and B-2P are the baselines.

Tables 9 and 10 show the coverage of the encodings in terms of full trees, rather than arcs (i.e., what percentage of the dependency trees in each treebank can be fully encoded and decoded back by each of the encodings).
<table><tr><td>Treebank</td><td>B</td><td>B-2P</td><td>4bit</td><td>7bit</td></tr><tr><td>PTB</td><td>>99.99%</td><td>100%</td><td>>99.99%</td><td>100%</td></tr><tr><td>RussianGSD</td><td>96.94%</td><td>99.92%</td><td>95.65%</td><td>99.92%</td></tr><tr><td>FinnishTDT</td><td>99.43%</td><td>100%</td><td>99.35%</td><td>100%</td></tr><tr><td>Anc-GreekPerseus</td><td>72.25%</td><td>90.63%</td><td>50.48%</td><td>90.63%</td></tr><tr><td>ChineseGSD</td><td>99.30%</td><td>100%</td><td>98.54%</td><td>100%</td></tr><tr><td>HebrewHTB</td><td>98.26%</td><td>99.89%</td><td>97.20%</td><td>99.89%</td></tr><tr><td>TamilTTB</td><td>99.50%</td><td>100%</td><td>98.67%</td><td>100%</td></tr><tr><td>UyghurUDT</td><td>97.80%</td><td>100%</td><td>97.19%</td><td>100%</td></tr><tr><td>WolofWTF</td><td>97.86%</td><td>99.95%</td><td>97.25%</td><td>99.95%</td></tr><tr><td>EnglishEWT</td><td>98.73%</td><td>99.98%</td><td>98.18%</td><td>99.98%</td></tr><tr><td>Macro average</td><td>96.01%</td><td>99.04%</td><td>93.25%</td><td>99.04%</td></tr></table>
Table 9: Full tree coverage for each encoding on the linguistically-diverse set of treebanks.
<table><tr><td>Treebank</td><td>B</td><td>B-2P</td><td>4bit</td><td>7bit</td></tr><tr><td>BelarusianHSE</td><td>96.36%</td><td>99.95%</td><td>96.22%</td><td>99.95%</td></tr><tr><td>GalicianTreeGal</td><td>92.90%</td><td>99.80%</td><td>92.60%</td><td>99.80%</td></tr><tr><td>LithuanianHSE</td><td>88.97%</td><td>99.62%</td><td>88.97%</td><td>99.62%</td></tr><tr><td>Old-East-SlavicRNC</td><td>72.15%</td><td>97.75%</td><td>72.05%</td><td>97.75%</td></tr><tr><td>MarathiUFAL</td><td>97.63%</td><td>100%</td><td>97.42%</td><td>100%</td></tr><tr><td>WelshCCG</td><td>98.88%</td><td>100%</td><td>98.88%</td><td>100%</td></tr><tr><td>TamilTTB</td><td>99.50%</td><td>100%</td><td>98.67%</td><td>100%</td></tr><tr><td>Macro average</td><td>92.34%</td><td>99.59%</td><td>92.12%</td><td>99.59%</td></tr></table>
Table 10: Full tree coverage for each encoding on the low-resource set of treebanks.

Tables 11 and 12 show the total number of labels needed to encode the training set for each encoding and treebank, when considering full labels (i.e., the number of combinations of syntactic labels and dependency type labels). This can be relevant for implementations that generate such combinations as atomic labels (in our implementation, label components are generated separately instead).
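As a toy illustration (with made-up label components), treating each (structural label, dependency type) pair as a single atomic tag multiplies the two inventories, whereas predicting the components separately keeps two small tag sets:

```python
# Hypothetical structural components and dependency types, for illustration only.
structural = ["b0", "b1", "b2", "b3"]
deprels = ["nsubj", "obj", "det", "amod"]

# Atomic labeling: every observed combination is its own tag.
atomic = {(s, d) for s in structural for d in deprels}

print(len(structural) + len(deprels), "separate tags vs", len(atomic), "atomic tags")
# prints: 8 separate tags vs 16 atomic tags
```

The gap widens quickly on real treebanks, where dozens of structural labels meet dozens of dependency types.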
<table><tr><td>Treebank</td><td>B</td><td>B-2P</td><td>4bit</td><td>7bit</td></tr><tr><td>PTB</td><td>1216</td><td>1233</td><td>396</td><td>408</td></tr><tr><td>RussianGSD</td><td>802</td><td>961</td><td>400</td><td>614</td></tr><tr><td>FinnishTDT</td><td>1054</td><td>1223</td><td>435</td><td>685</td></tr><tr><td>Anc-GreekPerseus</td><td>1469</td><td>2401</td><td>304</td><td>1167</td></tr><tr><td>ChineseGSD</td><td>804</td><td>912</td><td>321</td><td>406</td></tr><tr><td>HebrewHTB</td><td>754</td><td>798</td><td>317</td><td>357</td></tr><tr><td>TamilTTB</td><td>262</td><td>274</td><td>153</td><td>164</td></tr><tr><td>UyghurUDT</td><td>553</td><td>683</td><td>353</td><td>475</td></tr><tr><td>WolofWTF</td><td>585</td><td>643</td><td>318</td><td>382</td></tr><tr><td>EnglishEWT</td><td>1089</td><td>1281</td><td>487</td><td>709</td></tr><tr><td>Macro average</td><td>858.8</td><td>1040.9</td><td>348.4</td><td>536.7</td></tr></table>
Table 11: Unique labels generated when encoding the training sets of the linguistically-diverse set of treebanks, including dependency types as a component of the labels.
<table><tr><td>Treebank</td><td>B</td><td>B-2P</td><td>4bit</td><td>7bit</td></tr><tr><td>BelarusianHSE</td><td>1136</td><td>1479</td><td>477</td><td>926</td></tr><tr><td>GalicianTreeGal</td><td>512</td><td>601</td><td>270</td><td>376</td></tr><tr><td>LithuanianHSE</td><td>398</td><td>432</td><td>256</td><td>306</td></tr><tr><td>Old-East-SlavicRNC</td><td>910</td><td>1181</td><td>378</td><td>715</td></tr><tr><td>MarathiUFAL</td><td>275</td><td>291</td><td>197</td><td>223</td></tr><tr><td>WelshCCG</td><td>474</td><td>514</td><td>265</td><td>312</td></tr><tr><td>TamilTTB</td><td>262</td><td>274</td><td>153</td><td>164</td></tr><tr><td>Macro average</td><td>566.7</td><td>681.7</td><td>285.1</td><td>431.7</td></tr></table>
Table 12: Unique labels generated when encoding the training sets of the low-resource set of treebanks, including dependency types as a component of the labels.

# B Hyperparameters

We did not perform hyperparameter search; we simply used MaChAmp's defaults, shown in Table 13.
<table><tr><td>Parameter</td><td>Value</td></tr><tr><td>dropout</td><td>0.1</td></tr><tr><td>max input length</td><td>128</td></tr><tr><td>batch size</td><td>8</td></tr><tr><td>training epochs</td><td>50</td></tr><tr><td>optimizer</td><td>adam</td></tr><tr><td>learning rate</td><td>0.0001</td></tr><tr><td>weight decay</td><td>0.01</td></tr></table>
Table 13: Hyperparameter settings.
# C Coverage Differences
It is worth noting that, while the 7-bit encoding has exactly the same coverage as the 2-planar bracketing encoding (see Tables 1, 8, 9 and 10), the 4-bit encoding has less coverage than the basic bracketing. As mentioned in the main text, both have full coverage of projective trees, but there are subtle differences in how they behave when applied to non-projective trees, which we did not enumerate in detail for space reasons. In particular, the differences are the following:
- Contrary to basic bracketing, the 4-bit encoding needs to encode the arc originating from the dummy root explicitly. This means that it cannot encode non-projective but planar trees where the dummy root arc crosses a right arc (or, equivalently, where the syntactic root is covered by a right arc).
- In the basic bracketing, a dependency involving words $w_{i}$ and $w_{j}$ ( $i < j$ ) is not encoded in the labels of $w_{i}$ and $w_{j}$ , but in the labels of $w_{i+1}$ and $w_{j}$ (see Strzyz et al., 2019), as a technique to alleviate sparsity (in the particular case of that encoding, it guarantees that the worst-case number of labels is linear, rather than quadratic, with respect to sentence length). In the 2-planar, 4- and 7-bit encodings, this is not needed, so dependencies are encoded directly in the labels of the intervening words.
- Contrary to basic bracketing, in the 4-bit encoding a single / or \ element is shared by several arcs. Thus, if an arc cannot be successfully encoded due to unsupported non-projectivity, the problem can propagate to sibling dependencies. In other words, due to being more compact, the 4-bit encoding has less redundancy than basic bracketing.
# D Plane Assignment
The 2-planar and 7-bit encodings need a strategy to partition trees into two planes. We used the second-plane-averse strategy based on restriction propagation on the crossings graph (Strzyz et al., 2020). It can be summarized as follows:
1. The crossings graph is defined as an undirected graph where each node corresponds to an arc in the dependency tree, and there is an edge between nodes $a$ and $b$ if arc $a$ crosses arc $b$ in the dependency tree.
2. Initially, both planes are marked as allowed for every arc in the dependency tree.
3. The arcs are sorted by their right endpoint, from left to right, giving priority to shorter arcs when they share a right endpoint; we then iterate through the arcs in this order.
4. Whenever we assign an arc $a$ to a given plane $p$ , we immediately propagate restrictions in the following way: we forbid plane $p$ for the arcs that cross $a$ (its neighbors in the crossings graph), we forbid the other plane $(p')$ for the neighbors of its neighbors, plane $p$ for the neighbors of those, and so on.
5. Plane assignment is made by traversing arcs. For each new arc $a$ , we look at the restrictions and assign it to the first plane if allowed, otherwise to the second plane if allowed, and finally to no plane if none is allowed (for non-2-planar structures).
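The steps above can be sketched as follows. This is our own illustrative code, not the authors' implementation; arcs are given as (head, dependent) position pairs, and `None` marks arcs that fit in neither plane (non-2-planar structures):

```python
from collections import deque

def crosses(a, b):
    # Two arcs cross iff exactly one endpoint of one lies strictly
    # between the endpoints of the other.
    (l1, r1), (l2, r2) = a, b
    return l1 < l2 < r1 < r2 or l2 < l1 < r2 < r1

def assign_planes(arcs):
    # arcs: list of (head, dependent) positions; output aligns with the input.
    spans = [tuple(sorted(arc)) for arc in arcs]
    n = len(spans)
    # Step 1: crossings graph, one node per arc, an edge per crossing pair.
    neighbors = [[j for j in range(n) if j != i and crosses(spans[i], spans[j])]
                 for i in range(n)]
    # Step 2: initially both planes are allowed for every arc.
    allowed = [{1, 2} for _ in range(n)]
    # Step 3: visit arcs by right endpoint, shorter arcs first on ties.
    order = sorted(range(n),
                   key=lambda i: (spans[i][1], spans[i][1] - spans[i][0]))
    plane = [None] * n
    for i in order:
        # Step 5: second-plane-averse choice, first plane if still allowed.
        p = 1 if 1 in allowed[i] else 2 if 2 in allowed[i] else None
        plane[i] = p
        if p is None:
            continue  # non-2-planar arc: assigned to no plane
        # Step 4: propagate restrictions over the crossings graph, forbidding
        # p for neighbors, the other plane for their neighbors, and so on.
        queue, seen = deque([(i, p)]), {i}
        while queue:
            j, forbid = queue.popleft()
            for k in neighbors[j]:
                if k not in seen:
                    seen.add(k)
                    allowed[k].discard(forbid)
                    queue.append((k, 3 - forbid))
    return plane
```

For instance, `assign_planes([(0, 2), (1, 3)])` places the two crossing arcs on different planes, returning `[1, 2]`.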
# E Hexatagging
Amini et al. (2023) use an intermediate representation, called binary head trees, that acts as a proxy between dependency trees and hexatags. These trees have a structure akin to binary constituent trees, which makes it possible to apply the tetra-tagging encoding (Kitaev and Klein, 2020). In addition, non-terminal intermediate nodes are labeled with 'L' or 'R' depending on whether the head of the constituent is in its left or right subtree. We direct the reader to the paper for specifics. A mapping from projective dependency trees to this structure can be obtained by starting at the sentence's root and conducting a depth-first traversal of the tree. The arc representation components of each hexatag encode: (i) the original label corresponding to the tetratag, and (ii) the value of the non-terminal symbol in the binary head tree.
# F Speed Comparison
Table 14 compares the speed of the models in a run on a single CPU. It is important to note that, while SuPar is an optimized parser, we used MaChAmp as a general sequence labeling framework without specific optimization for speed. With a more optimized implementation, practical processing speeds in the range of 100 sentences per second on CPU, or 1,000 on a consumer-grade GPU, should be achievable (cf. the figures for sequence-labeling parsing implementations in Anderson and Gómez-Rodríguez (2021)).
<table><tr><td>Treebank</td><td>biaffine</td><td>6tg</td><td>4bit</td><td>7bit</td></tr><tr><td>Penn-Treebank</td><td>28.34</td><td>14.65</td><td>14.28</td><td>14.42</td></tr><tr><td>UD-Russian-GSD</td><td>28.15</td><td>14.27</td><td>14.63</td><td>14.30</td></tr><tr><td>UD-Finnish-TDT</td><td>34.68</td><td>18.22</td><td>17.56</td><td>17.82</td></tr><tr><td>UD-Ancient-Greek-Perseus</td><td>24.12</td><td>12.53</td><td>12.93</td><td>12.15</td></tr><tr><td>UD-Chinese-GSD</td><td>22.64</td><td>10.78</td><td>11.05</td><td>10.86</td></tr><tr><td>UD-Hebrew-HTB</td><td>27.06</td><td>13.46</td><td>13.15</td><td>13.71</td></tr><tr><td>UD-Tamil-TTB</td><td>29.19</td><td>11.98</td><td>12.17</td><td>12.87</td></tr><tr><td>UD-Uyghur-UDT</td><td>34.87</td><td>18.69</td><td>18.01</td><td>18.93</td></tr><tr><td>UD-Wolof-WTF</td><td>28.14</td><td>12.61</td><td>12.31</td><td>12.61</td></tr><tr><td>UD-English-EWT</td><td>35.02</td><td>20.03</td><td>19.87</td><td>20.17</td></tr></table>

Table 14: Speed (sentences per second) for the linguistically-diverse test sets.

# G Non-Surjectivity in Decoding

As mentioned in the main text, all encodings explored in this paper are non-surjective, meaning that there are label sequences that do not correspond to a valid tree. In these cases, the labels are decoded using simple heuristics (e.g. skipping dependency creation if the stack is empty, ignoring material remaining in the stack after decoding, attaching unconnected nodes and breaking cycles). Table 15 shows the percentage of trees in the test set for which the labels output by the tagger do not directly correspond to a valid tree, so that at least one of these heuristics has to be applied. Table 16 shows the same information in terms of the percentage of dependency arcs affected by said heuristics.

<table><tr><td>Treebank</td><td>6tg</td><td>4-bit</td><td>7-bit</td></tr><tr><td>PTB</td><td>4.01%</td><td>8.24%</td><td>4.13%</td></tr><tr><td>RussianGSD</td><td>14.42%</td><td>19.34%</td><td>16.57%</td></tr><tr><td>FinnishTDT</td><td>3.75%</td><td>10.01%</td><td>8.84%</td></tr><tr><td>Anc-GreekPerseus</td><td>12.66%</td><td>20.08%</td><td>18.81%</td></tr><tr><td>ChineseGSD</td><td>12.31%</td><td>22.06%</td><td>21.81%</td></tr><tr><td>HebrewHTB</td><td>10.82%</td><td>16.76%</td><td>16.79%</td></tr><tr><td>TamilTTB</td><td>29.06%</td><td>36.12%</td><td>37.67%</td></tr><tr><td>UyghurUDT</td><td>18.13%</td><td>22.19%</td><td>18.52%</td></tr><tr><td>WolofWTF</td><td>30.01%</td><td>42.15%</td><td>50.54%</td></tr><tr><td>EnglishEWT</td><td>4.01%</td><td>12.24%</td><td>6.48%</td></tr><tr><td>Macro average</td><td>13.92%</td><td>20.92%</td><td>20.02%</td></tr></table>

Table 15: Percentage of trees in the linguistically-diverse test sets where the label sequence output by the tagger does not correspond to a valid tree, and heuristics need to be applied to deal with unconnected nodes, cycles or out-of-bounds indexes.

<table><tr><td>Treebank</td><td>6tg</td><td>4-bit</td><td>7-bit</td></tr><tr><td>PTB</td><td>0.531%</td><td>0.941%</td><td>0.566%</td></tr><tr><td>RussianGSD</td><td>0.930%</td><td>1.479%</td><td>1.200%</td></tr><tr><td>FinnishTDT</td><td>0.291%</td><td>0.987%</td><td>0.780%</td></tr><tr><td>Anc-GreekPerseus</td><td>0.563%</td><td>2.291%</td><td>1.917%</td></tr><tr><td>ChineseGSD</td><td>0.705%</td><td>1.622%</td><td>1.593%</td></tr><tr><td>HebrewHTB</td><td>0.550%</td><td>0.965%</td><td>0.958%</td></tr><tr><td>TamilTTB</td><td>2.728%</td><td>3.819%</td><td>4.280%</td></tr><tr><td>UyghurUDT</td><td>2.052%</td><td>2.801%</td><td>2.191%</td></tr><tr><td>WolofWTF</td><td>1.853%</td><td>3.043%</td><td>3.868%</td></tr><tr><td>EnglishEWT</td><td>0.554%</td><td>1.523%</td><td>0.726%</td></tr><tr><td>Macro average</td><td>1.075%</td><td>1.947%</td><td>1.807%</td></tr></table>

Table 16: Percentage of dependency arcs in the linguistically-diverse test sets where heuristics need to be applied to deal with unconnected nodes, cycles or out-of-bounds indexes.
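The kinds of repairs described in this appendix (out-of-bounds indexes, unconnected nodes, cycles) can be sketched on predicted head indices as follows. This is our own illustrative code, not the paper's decoder:

```python
def repair_heads(heads):
    # heads[i] is the predicted 1-based head of word i+1; 0 is the dummy root.
    n = len(heads)
    # Clip out-of-bounds indexes and self-loops by reattaching to the root.
    heads = [0 if h < 0 or h > n or h == i + 1 else h
             for i, h in enumerate(heads)]
    if 0 not in heads:
        heads[0] = 0  # no root predicted: promote the first word
    # Break cycles: walking up from each word, the first repeated node
    # necessarily lies on a cycle and is reattached to the root.
    for i in range(1, n + 1):
        seen, j = set(), i
        while j != 0:
            if j in seen:
                heads[j - 1] = 0
                break
            seen.add(j)
            j = heads[j - 1]
    return heads
```

For example, `repair_heads([2, 3, 2, 0])` returns `[2, 0, 2, 0]`: the cycle between words 2 and 3 is broken by reattaching word 2 to the root.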
4and7bitlabelingforprojectiveandnonprojectivedependencytrees/images.zip ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:23545ad6f76be7161d60728a20539f2d01a1dadc1027e75806671880a536a1f6
+size 559248
4and7bitlabelingforprojectiveandnonprojectivedependencytrees/layout.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fd6b8f367e986541add0085c787dcb291ba42baa597d11eeb2cb9ca6108fe1eb
+size 373664
abenchmarkforreasoningwithspatialprepositions/de6e802d-d566-4dfe-a8a2-54cb2d33626d_content_list.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:482418522068e99efa5129966881197fbc75f77666e14914892f7f86788a45c8
+size 57524
abenchmarkforreasoningwithspatialprepositions/de6e802d-d566-4dfe-a8a2-54cb2d33626d_model.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:76f7970222b2fff318385eba332dff9c47b1628fbbb6c4af740560ca77246cd3
+size 66787
abenchmarkforreasoningwithspatialprepositions/de6e802d-d566-4dfe-a8a2-54cb2d33626d_origin.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:13789a4d6c71fe5f4034d6cfcb11b75fb8b0f3b7ee65881feb3a0dfddfae70a9
+size 194684
abenchmarkforreasoningwithspatialprepositions/full.md ADDED
@@ -0,0 +1,217 @@
# A Benchmark for Reasoning with Spatial Prepositions

Iulia-Maria Comsa
Google DeepMind
iuliacomsa@gmail.com

Srini Narayanan
Google DeepMind
srinin@google.com

# Abstract

Spatial reasoning is a fundamental building block of human cognition, used in representing, grounding, and reasoning about physical and abstract concepts. We propose a novel benchmark focused on assessing the inferential properties of statements with spatial prepositions. The benchmark includes original datasets in English and Romanian and aims to probe the limits of reasoning about spatial relations in large language models. We use prompt engineering to study the performance of two families of large language models, PaLM and GPT-3, on our benchmark. Our results show considerable variability in the performance of smaller and larger models, as well as across prompts and languages. However, none of the models reaches human performance.
# 1 Introduction

Large language models (LLMs) are becoming increasingly human-like in their performance on many tasks, but are still not on par with more advanced aspects of human cognition (Choi, 2022). On the other hand, they are showing emergent capabilities that were previously thought beyond their limits, such as grounding conceptual spaces (Patel and Pavlick, 2022). Currently, many questions remain open regarding the limits of reasoning in LLMs and how LLMs compare to humans in cognitive domains that require a deeper understanding of the world.

One such domain is spatial reasoning, a fundamental part of human cognition (Regier, 1996; Herskovits, 2009; Gärdenfors, 2014). This type of reasoning is relevant not only for the representation, prediction and manipulation of physical objects, but also for representing and performing inferences with abstract concepts. This is reflected in common uses of spatial prepositions, which traditionally indicate location but are also used to refer to abstract states, forces or goals. For example, one can be "in Paris" or "under a tree" (physical locations), but one can also be "in trouble" or "under sedation" (abstract concepts).

Given their lack of embodied spatial experience and the scarcity of commonsense knowledge in training data (Gordon and Van Durme, 2013), we hypothesise that LLMs may have difficulties reasoning about physical and abstract spatial relations.
We investigate this using a novel benchmark for assessing inferences on sentences containing spatial prepositions. The sentences are designed to be easy for humans, but non-trivial for models that cannot differentiate between uses of prepositions with different concepts. Our task has similarities with other NLI tasks (Bowman et al., 2015).

This paper makes the following contributions:

- We propose a novel benchmark, available in English and Romanian, to probe a model's ability to reason with spatial prepositions in physical and abstract domains, through compositional statements.

- We assess two families of large language models, PaLM (Chowdhery et al., 2022) and GPT-3 (Brown et al., 2020), and compare them with each other and against human performance on the benchmark. We find that performance varies considerably with model size, prompt setup and language. However, none of the models reaches human performance.
# 2 Related Work

To investigate commonsense spatial reasoning, Liu et al. (2022) introduced a benchmark focused on assessing the relative size of objects, as well as positional relationships between humans and objects during various actions. Yatskar et al. (2016) extracted a dataset of commonsense spatial relationships from a large corpus where this information appears implicitly. Weston et al. (2015) proposed a set of toy tasks for question answering, including positional reasoning, while Mirzaee et al. (2021) introduced SpartQA, a challenging textual dataset of commonsense spatial relationships.

<table><tr><td>First premise</td><td>Second premise</td><td>Potential conclusion</td><td>Holds?</td></tr><tr><td>John is in the crib</td><td>the crib is in the living room</td><td>John is in the living room</td><td>✓</td></tr><tr><td>John is in the newspaper</td><td>the newspaper is in the kitchen</td><td>John is in the kitchen</td><td>✗</td></tr><tr><td>the helmet is above the scooter</td><td>the scooter is above the parking lot</td><td>the helmet is above the parking lot</td><td>✓</td></tr><tr><td>the helmet is above the scooter</td><td>the scooter is above my pay grade</td><td>the helmet is above my pay grade</td><td>✗</td></tr><tr><td>the robot is in the tent</td><td>the tent is under the bridge</td><td>the robot is under the bridge</td><td>✓</td></tr><tr><td>the robot is in the building</td><td>the building is under construction</td><td>the robot is under construction</td><td>✗</td></tr></table>

Table 1: Examples showcasing our benchmark on reasoning with spatial prepositions. Each example consists of two premises and a conclusion. The composition of the premises can be transitive (the conclusion holds) or intransitive (the conclusion does not hold). Similar examples are present in the Romanian version of the dataset.

In contrast to these studies, our benchmark poses the additional challenge of using spatial prepositions to refer to abstract concepts in addition to physical relationships. Reasoning with metaphorical and literal statements has been studied previously (Comsa et al., 2022), but here we focus specifically on spatial prepositions.
# 3 Dataset

We create small, manually curated datasets, intended to be used as a benchmark and not for training purposes. Each dataset consists of 400 class-balanced items. As illustrated in Table 1, each item consists of:

- premise1: "X is $[prep_1]$ Y"
- premise2: "Y is $[prep_2]$ Z"
- conclusion: "X is $[prep_3]$ Z"

where each $prep_i$ is a spatial preposition such as "in" or "on" and $prep_3 \in \{prep_1, prep_2\}$. Given the premises, the conclusion may or may not hold.
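Concretely, an item can be represented as a small record holding the two premises, the candidate conclusion, and the gold label. The sketch below uses examples from Table 1; the `Item` class and its field names are our illustration, not the released data format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Item:
    premise1: str    # "X is [prep1] Y"
    premise2: str    # "Y is [prep2] Z"
    conclusion: str  # "X is [prep3] Z"
    holds: bool      # True for congruent items, False for incongruent ones

    def question(self) -> str:
        # The question template used for both humans and LLMs in the paper.
        return (f"If {self.premise1} and {self.premise2}, "
                f"does that imply that {self.conclusion}?")

# Two items from Table 1: a congruent and an incongruent composition of "in".
congruent = Item("John is in the crib", "the crib is in the living room",
                 "John is in the living room", True)
incongruent = Item("John is in the newspaper", "the newspaper is in the kitchen",
                   "John is in the kitchen", False)
```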
In the case of congruent compositions, the conclusion holds, typically indicating a similar type of spatial relationship. For example, if "John is in the crib" and "the crib is in the living room", the conclusion "John is in the living room" holds.

On the other hand, in incongruent compositions, the spatial preposition in each premise refers to a different type of relation, for instance through a conceptual metaphor, and the conclusion does not hold. However, the items are designed such that, without a deep understanding of the commonsense semantics of the spatial prepositions, a mistaken interpretation is possible, leading to the false impression that the conclusion holds. For example, if "John is in the newspaper" and "the newspaper is in the kitchen", the conclusion "John is in the kitchen" does not hold. In this example, the spatial preposition "in" is used differently in the two premises: in the first premise, it refers to an abstract concept (inclusion as content in a newspaper), while in the second premise it refers to a physical location. Hence, combining the premises does not validate the conclusion.

The items are class-balanced: for every congruent item that uses prepositions $\{prep_1, prep_2, prep_3\}$ there is an incongruent item containing the same prepositions in sequence.

We release datasets in English and in Romanian. For both languages, each item was created by a native or proficient speaker of the language and independently verified by another native speaker. In creating the items, we aimed to cover common cases for each chosen spatial preposition in order to obtain a representative sample of spatial preposition semantics. The creation of items was assisted by standard dictionaries with usage examples for each preposition. For a discussion of the limitations of the data generation process, please refer to Section 7.

In English, we use the spatial prepositions "in", "at", "on", "with", "under", "above" and "behind". In Romanian, we use the prepositions "în", "la", "pe", "cu" and "sub". The use of prepositions differs between the two languages, and hence the datasets are not direct translations of each other, but reflect the semantics of each language. The distribution of prepositions is shown in Table 2.
To validate the benchmark, we asked English-speaking and Romanian-speaking adults to answer dataset questions of the form "if {premise1} and {premise2}, does that imply that {conclusion}?" with "yes" or "no". The respondents were told that the aim was to collect a set of commonsense responses from humans and compare them to LLM responses, and they consented to this use. The respondents were not paid. Each respondent answered 20 randomly selected questions from the dataset. As a response quality measure, we only included responses where the accuracy on congruent questions, which we consider easier, was above $75\%$. We thus obtained responses from 27 English-speaking and 23 Romanian-speaking adults. This allows the detection of an effect size of 0.56 and 0.61, respectively, at alpha 0.05 and power 0.8. The results are shown in Table 3.

<table><tr><td rowspan="2">Prep.</td><td rowspan="2">Count</td><td colspan="5">PaLM</td><td colspan="4">GPT-3</td><td rowspan="2">Avg. LLM</td><td rowspan="2">Human</td></tr><tr><td>8b</td><td>62b</td><td>62b-1.3</td><td>540b</td><td>Flan</td><td>Ada</td><td>Babb.</td><td>Curie</td><td>DaVinci</td></tr><tr><td>above</td><td>186</td><td>52.3</td><td>62.9</td><td>72.8</td><td>80.3</td><td>88.5</td><td>49.2</td><td>52.2</td><td>51.1</td><td>75.8</td><td>64.2</td><td>94.5</td></tr><tr><td>at</td><td>146</td><td>51.8</td><td>68.0</td><td>71.7</td><td>85.4</td><td>88.1</td><td>51.6</td><td>52.7</td><td>53.7</td><td>83.4</td><td>66.9</td><td>92.6</td></tr><tr><td>behind</td><td>148</td><td>54.7</td><td>59.2</td><td>68.0</td><td>76.4</td><td>70.9</td><td>52.5</td><td>51.1</td><td>50.7</td><td>76.1</td><td>62.1</td><td>89.8</td></tr><tr><td>in</td><td>250</td><td>56.5</td><td>72.3</td><td>75.9</td><td>89.1</td><td>86.4</td><td>51.2</td><td>54.0</td><td>51.6</td><td>88.2</td><td>68.9</td><td>96.2</td></tr><tr><td>on</td><td>228</td><td>52.6</td><td>69.0</td><td>70.0</td><td>82.6</td><td>86.1</td><td>51.3</td><td>55.1</td><td>51.0</td><td>81.5</td><td>66.1</td><td>91.1</td></tr><tr><td>under</td><td>202</td><td>53.8</td><td>60.2</td><td>65.5</td><td>79.9</td><td>80.9</td><td>50.0</td><td>53.5</td><td>45.2</td><td>75.0</td><td>62.3</td><td>94.7</td></tr><tr><td>with</td><td>40</td><td>52.5</td><td>68.3</td><td>69.2</td><td>89.2</td><td>90.0</td><td>53.8</td><td>56.7</td><td>50.0</td><td>85.0</td><td>68.0</td><td>100.0</td></tr><tr><td>cu</td><td>126</td><td>57.4</td><td>50.8</td><td>61.3</td><td>64.6</td><td>82.8</td><td>56.6</td><td>56.9</td><td>52.4</td><td>78.6</td><td>61.9</td><td>90.8</td></tr><tr><td>la</td><td>220</td><td>60.5</td><td>50.2</td><td>62.8</td><td>72.4</td><td>88.2</td><td>52.4</td><td>57.7</td><td>52.0</td><td>76.7</td><td>63.2</td><td>93.3</td></tr><tr><td>pe</td><td>222</td><td>58.1</td><td>50.2</td><td>63.4</td><td>72.8</td><td>84.8</td><td>58.6</td><td>54.7</td><td>51.8</td><td>81.5</td><td>63.5</td><td>91.0</td></tr><tr><td>sub</td><td>242</td><td>53.6</td><td>50.9</td><td>53.6</td><td>71.2</td><td>82.5</td><td>58.0</td><td>56.2</td><td>51.9</td><td>76.0</td><td>60.9</td><td>90.5</td></tr><tr><td>în</td><td>390</td><td>60.0</td><td>50.6</td><td>60.6</td><td>78.5</td><td>85.6</td><td>55.6</td><td>55.6</td><td>50.3</td><td>81.6</td><td>63.7</td><td>95.0</td></tr></table>

Table 2: The number of occurrences of each preposition in our dataset, alongside the accuracy (in percent) of humans and LLMs on items containing each preposition.
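The detectable effect sizes quoted above can be approximated with a standard power calculation. The sketch below uses a normal approximation for a one-sample comparison, d = (z_{1-α/2} + z_{power}) / √n; this is our illustration, not the authors' exact procedure, and it gives values slightly below the reported 0.56 and 0.61 (an exact t-based computation is slightly larger):

```python
from statistics import NormalDist

def min_detectable_effect(n, alpha=0.05, power=0.8):
    """Normal-approximation minimal detectable effect size (Cohen's d)
    for a one-sample comparison with n respondents."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) / n ** 0.5

d_en = min_detectable_effect(27)  # ~0.54 (paper reports 0.56)
d_ro = min_detectable_effect(23)  # ~0.58 (paper reports 0.61)
```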
# 4 Large Language Model Evaluation

We evaluated the performance of PaLM (Chowdhery et al., 2022) in different sizes: 540b, 62b (the original model, as well as the model trained to 1.3T tokens, as explained in their Appendix F) and 8b, as well as Flan-PaLM-540b (Chung et al., 2022). We also evaluated the GPT-3 (Brown et al., 2020) models Ada (text-ada-001), Babbage (text-babbage-001), Curie (text-curie-001) and DaVinci (text-davinci-003).

We prompted the models with questions of the form "if {premise1} and {premise2}, does that imply that {conclusion}?". We tested the LLMs with 0-shot, 1-shot and 5-shot prompts. In the few-shot settings, each example was prefixed with 1 or 5 different randomly selected examples from the dataset, each followed by its correct answer ("yes" or "no").
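The few-shot prompt construction described above can be sketched as follows. The `TEMPLATE` string and the `(premise1, premise2, conclusion, answer)` tuple representation are our assumptions for illustration:

```python
import random

TEMPLATE = "If {p1} and {p2}, does that imply that {c}? "

def build_prompt(target, pool, n_shots, rng=None):
    """Prefix the target question with n_shots solved examples drawn at
    random from the rest of the dataset, each followed by its gold answer.
    Items are (premise1, premise2, conclusion, answer) tuples,
    with answer in {"yes", "no"}."""
    rng = rng or random.Random(0)
    shots = rng.sample([it for it in pool if it != target], n_shots)
    lines = [TEMPLATE.format(p1=p1, p2=p2, c=c) + ans
             for (p1, p2, c, ans) in shots]
    p1, p2, c, _ = target
    lines.append(TEMPLATE.format(p1=p1, p2=p2, c=c).rstrip())
    return "\n".join(lines)
```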
We assessed LLMs in a binary-choice setup of the benchmark. The models were asked to score the strings "yes" and "no" (and their Romanian equivalents) as candidate continuations of the above prompt. An example was labelled as correct if the log likelihood score of the correct continuation string was higher than that of the incorrect continuation.
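The binary-choice scoring rule amounts to an argmax over two continuation log likelihoods. In the sketch below, `score_fn(prompt, continuation)` stands in for a model API that returns the log likelihood of a continuation given a prompt (an assumed interface, not a specific library call):

```python
def binary_choice_label(score_fn, prompt, yes="yes", no="no"):
    """Return whichever candidate continuation the model scores higher."""
    return yes if score_fn(prompt, yes) > score_fn(prompt, no) else no

def accuracy(score_fn, dataset):
    """`dataset` holds (prompt, gold) pairs with gold in {"yes", "no"}."""
    correct = sum(binary_choice_label(score_fn, p) == gold
                  for p, gold in dataset)
    return correct / len(dataset)
```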
To mitigate prompt sensitivity (Lu et al., 2022; Cao et al., 2021), we used multiple prompt variations, as detailed in Appendix A. We report the best prompt performance for each model and setup. For each best prompt, we obtained confidence intervals by randomly sampling sets of 20 responses, mirroring the format of the human responses.
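The resampling-based confidence interval can be sketched as below. This is one plausible reading of the procedure (repeatedly drawing 20-response sets, as humans answered 20 questions each, and taking percentiles of the resulting accuracies); the exact method is not specified in the text:

```python
import random

def resampled_ci(correct_flags, set_size=20, n_sets=1000, alpha=0.05, rng=None):
    """Approximate a (1 - alpha) confidence interval on accuracy by
    repeatedly sampling `set_size` responses without replacement and
    taking percentiles of the per-set accuracies."""
    rng = rng or random.Random(0)
    accs = sorted(sum(rng.sample(correct_flags, set_size)) / set_size
                  for _ in range(n_sets))
    lo = accs[int(alpha / 2 * n_sets)]
    hi = accs[int((1 - alpha / 2) * n_sets) - 1]
    return lo, hi
```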
As a baseline, we ran the same experiment using only the conclusion as a prompt, in the form "{conclusion}?". This probes whether the performance might be explained by the likelihood of the conclusion alone. We report the results for the highest-scoring baseline value across all models.
As an alternative to the binary-choice setup, our benchmark can also be used in a generative setting. This can be useful for assessing LLMs intended for open-ended or conversational applications. To illustrate this use of the benchmark, we performed a generative assessment of the largest model, PaLM-540b. The setup was identical to the above, except that the model was asked to generate 10 tokens in response to the given prompt, and the responses were scored accordingly (see Appendix B for details).
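Mapping a short free-form generation back to a yes/no label might look like the sketch below. The scoring details live in the paper's Appendix B, which is not reproduced here; the first-word heuristic is our assumption:

```python
def score_generation(text, yes_tokens=("yes",), no_tokens=("no",)):
    """Map a free-form generation to "yes", "no", or None (unscorable)
    by inspecting its first word, ignoring case and punctuation."""
    stripped = text.strip()
    if not stripped:
        return None
    first = stripped.lower().split()[0].strip(".,!?\"'")
    if first in yes_tokens:
        return "yes"
    if first in no_tokens:
        return "no"
    return None
```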
An additional experiment involving the negation of congruent sentences is presented in Appendix C.
# 5 Results

As shown in Table 3, human accuracy was $93.51\%$ for English and $92.6\%$ for Romanian. LLM performance varied considerably across models, with the number of shots and across languages. The highest LLM accuracies were recorded from PaLM-540b with 5-shot prompting at $85.67\%$ in English, and Flan-PaLM-540b with 5-shot prompting at $84.83\%$ for Romanian. We also observed strong performance in the 5-shot generative setting, at $87.67\%$ for English and $80\%$ for Romanian.

<table><tr><td rowspan="3">Model</td><td colspan="6">Mean accuracy [95% C.I.]</td></tr><tr><td colspan="3">English</td><td colspan="3">Romanian</td></tr><tr><td>0-shot</td><td>1-shot</td><td>5-shot</td><td>0-shot</td><td>1-shot</td><td>5-shot</td></tr><tr><td>PaLM-8b</td><td>53.00[48.0-58.0]</td><td>53.00[48.0-58.0]</td><td>55.25[50.2-60.2]</td><td>60.25[55.3-65.1]</td><td>55.25[50.2-60.2]</td><td>59.00[54.0-63.9]</td></tr><tr><td>PaLM-62b</td><td>56.25[51.2-61.2]</td><td>69.25[64.5-73.7]</td><td>72.25[67.6-76.6]</td><td>50.25[45.2-55.3]</td><td>50.50[45.5-55.5]</td><td>51.00[46.0-56.0]</td></tr><tr><td>PaLM-62b-1.3</td><td>60.50[55.5-65.3]</td><td>74.00[69.4-78.2]</td><td>78.00[73.6-82.0]</td><td>58.50[53.5-63.4]</td><td>54.25[49.2-59.2]</td><td>64.00[59.1-68.7]</td></tr><tr><td>PaLM-540b</td><td>78.25[73.9-82.2]</td><td>83.50[79.5-87.0]</td><td>87.00[83.3-90.1]</td><td>65.75[60.9-70.4]</td><td>70.25[65.5-74.7]</td><td>84.25[80.3-87.7]</td></tr><tr><td>Flan-PaLM-540b</td><td>83.00[79.0-86.6]</td><td>82.75[78.7-86.3]</td><td>86.75[83.0-89.9]</td><td>83.25[79.2-86.8]</td><td>86.25[82.5-89.5]</td><td>85.50[81.7-88.8]</td></tr><tr><td>GPT-3-Ada</td><td>50.00[45.0-55.0]</td><td>50.75[45.7-55.8]</td><td>55.25[50.2-60.2]</td><td>54.50[49.5-59.5]</td><td>52.50[47.5-57.5]</td><td>61.50[56.5-66.3]</td></tr><tr><td>GPT-3-Babbage</td><td>50.25[45.2-55.3]</td><td>53.00[48.0-58.0]</td><td>57.00[52.0-61.9]</td><td>60.50[55.5-65.3]</td><td>53.75[48.7-58.7]</td><td>54.00[49.0-59.0]</td></tr><tr><td>GPT-3-Curie</td><td>50.25[45.2-55.3]</td><td>48.25[43.3-53.3]</td><td>52.75[47.7-57.7]</td><td>51.25[46.2-56.2]</td><td>51.25[46.2-56.2]</td><td>51.75[46.7-56.7]</td></tr><tr><td>GPT-3-DaVinci</td><td>83.00[79.0-86.6]</td><td>81.75[77.6-85.4]</td><td>78.25[73.9-82.2]</td><td>80.25[76.0-84.0]</td><td>79.75[75.5-83.6]</td><td>77.75[73.4-81.7]</td></tr><tr><td>Baseline (conclusion only)</td><td>71.75[67.1-76.1]</td><td>66.25[61.4-70.9]</td><td>71.00[66.3-75.4]</td><td>65.25[60.4-69.9]</td><td>65.25[60.4-69.9]</td><td>68.50[63.7-73.0]</td></tr><tr><td>Generative (PaLM-540b)</td><td>72.75[68.10-77.06]</td><td>82.00[77.88-85.64]</td><td>88.25[84.69-91.24]</td><td>62.38[57.30-67.02]</td><td>60.25[55.27-65.08]</td><td>82.00[77.88-85.64]</td></tr><tr><td>Human</td><td>93.51[91.8-95.3]</td><td></td><td></td><td>92.60[90.1-95.1]</td><td></td><td></td></tr></table>

Table 3: Performance of LLMs and humans on the spatial preposition reasoning task in English and in Romanian. The best performance for each LLM across prompts is shown. Models with the best overlapping accuracy are highlighted. We include results for a baseline where the models responded to the conclusion only, and for a generative experiment where PaLM-540b freely generated responses to the questions.
The largest models (PaLM-540b, Flan-PaLM-540b and GPT-3-DaVinci) performed consistently better than the smaller models. Interestingly, PaLM-540b benefited greatly from 5-shot prompting in Romanian, whereas GPT-3-DaVinci showed slightly worse results with more shots.

Smaller GPT-3 models and PaLM-8b almost always performed close to chance level, whereas the other PaLM models benefited from few-shot prompts in English. We observed that some of the smaller models had a class bias, consistently answering "no" and thus scoring correctly predominantly on incongruent items.
The performance of the models on the baseline examples suggests that a small part of their performance can be explained by the likelihood of the conclusion alone, rather than by reasoning capacity. However, as the baseline performance never approaches that on the original examples, the likelihood of the conclusion is not sufficient to explain the performance of the models.

The overall performance was better on the English dataset than on the Romanian one, particularly in the case of the PaLM models, including in the generative experiment. We expected this gap, in line with results from other multilingual tasks (Dumitrescu et al., 2021; Artetxe et al., 2020).
As shown in Table 2, performance varied across models for individual prepositions. There was only partial alignment in preposition accuracies between humans and LLMs. Humans performed best on items containing "with" and "in" in English, and "în" and "la" in Romanian, while performing worst on "behind" in English, which partially reflects the performance averaged across models. In contrast, the models made relatively more mistakes on "under". While Flan-PaLM-540b had better overall accuracy, its performance on "in" was slightly lower compared to the other large models, and it had relatively more difficulty with "behind". Meanwhile, GPT-3-DaVinci had relatively more difficulty with "above" and "under". Other prepositions show less clear agreement across models. Given these results, the distribution of prepositions in the dataset should be considered a factor that influences the reported accuracies.
# 6 Conclusions

We have introduced a novel and challenging benchmark for commonsense reasoning with spatial prepositions in multiple conceptual domains, and provided initial results on two families of LLMs. The task is part of our efforts to investigate the limits of foundational reasoning in LLMs.

Our task captures highly variable performance across LLMs, with smaller LLMs typically performing at chance level and larger models approaching, but not reaching, human performance. The range of performance on this task makes it suitable as a checkpoint for examining trade-offs between model size and performance, particularly when complex or abstract reasoning is involved. We hope to encourage the development of more tasks that capture the building blocks of reasoning in LLMs.
# 7 Limitations

Our benchmark aims to provide a representative assessment of the capability of LLMs to operate across different meanings of spatial prepositions. We used a wide range of examples that cover an exemplary but not exhaustive portion of spatial language; it was not in the scope of the study to capture all prepositions or constructions that indicate spatiality, but rather a representative set.

Due to the richness and uniqueness of the many expressions involving spatial prepositions, a rigorous description of the lexical meanings of prepositions has been a long-standing challenge in linguistics (Herskovits, 2009) and is beyond the scope of this study. Nevertheless, for reference, we provide in Table 4 an estimation of preposition frequency in a Wikipedia corpus, alongside the number of dictionary entries as a proxy for the number of senses of each preposition. As can be observed, the number of senses is not proportional to corpus frequency. Moreover, each preposition may preferentially collocate with different verbs, and hence be more difficult to use in our dataset, where we chose the standard format "X is [prep1] Y". This is one reason why the preposition "with" is relatively underrepresented in our dataset. Future extensions of our dataset could introduce more flexibility in the form of the items and allow for additional types of constructions.

<table><tr><td>Prep.</td><td>Wiki. count</td><td>Dict. entries</td></tr><tr><td>in</td><td>516438</td><td>28</td></tr><tr><td>with</td><td>151830</td><td>25</td></tr><tr><td>at</td><td>82579</td><td>15</td></tr><tr><td>on</td><td>136415</td><td>44</td></tr><tr><td>above</td><td>5775</td><td>5</td></tr><tr><td>under</td><td>14618</td><td>8</td></tr><tr><td>behind</td><td>2789</td><td>3</td></tr><tr><td>în</td><td>657525</td><td>20</td></tr><tr><td>pe</td><td>176677</td><td>43</td></tr><tr><td>la</td><td>293601</td><td>27</td></tr><tr><td>cu</td><td>217508</td><td>28</td></tr><tr><td>sub</td><td>19903</td><td>13</td></tr></table>

Table 4: The frequency of each preposition based on a Wikipedia corpus estimation (Goldhahn et al., 2012), alongside the number of entries as determined from a standard dictionary: Cambridge Dictionary (https://dictionary.cambridge.org/) for English and Dexonline (https://dexonline.ro/) for Romanian.
Finally, prepositions cue space and concepts differently across languages. As there is no bijective correspondence between spatial prepositions across languages, an absolute performance comparison between languages is not possible with the approach proposed here. We are investigating a more geometric grounding approach, training multimodal classifiers similar to Patel and Pavlick (2022), which would sharpen the cross-linguistic comparison in geometric space.

In spite of these limitations, we believe that our benchmark provides an insightful measure of the ability of LLMs to handle spatial prepositions used in different semantic registers, and a challenge that scales well across model size and task setup.
# 8 Ethical Risks

The authors manually ensured that the items included in the proposed datasets do not contain offensive, unfair or otherwise unethical content. Prior to release, the datasets were reviewed by at least three other NLP researchers, who did not raise any concerns regarding the content.

# Acknowledgements

We thank Julian Eisenschlos, Yasemin Altun and Fernando Pereira, as well as our anonymous reviewers and meta-reviewers, for valuable feedback.
# References

Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of monolingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4623–4637, Online. Association for Computational Linguistics.

Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc.

Boxi Cao, Hongyu Lin, Xianpei Han, Le Sun, Lingyong Yan, Meng Liao, Tong Xue, and Jin Xu. 2021. Knowledgeable or educated guess? Revisiting language models as knowledge bases. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1860–1874, Online. Association for Computational Linguistics.

Yejin Choi. 2022. The Curious Case of Commonsense Intelligence. Daedalus, 151(2):139–155.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. PaLM: Scaling language modeling with pathways. arXiv:2204.02311. Version 5.

Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models. Version 1.

Iulia Comsa, Julian Eisenschlos, and Srini Narayanan. 2022. MiQA: A benchmark for inference on metaphorical questions. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 373–381, Online only. Association for Computational Linguistics.

Stefan Dumitrescu, Petru Rebeja, Beata Lorincz, Mihaela Gaman, Andrei Avram, Mihai Ilie, Andrei Pruteanu, Adriana Stan, Lorena Rosa, Cristina Iacobescu, Luciana Morogan, George Dima, Gabriel Marchidan, Traian Rebedea, Madalina Chitez, Dani Yogatama, Sebastian Ruder, Radu Tudor Ionescu, Razvan Pascanu, and Viorica Patraucean. 2021. LiRo: Benchmark and leaderboard for Romanian language tasks. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks, volume 1. Curran.

Dirk Goldhahn, Thomas Eckart, and Uwe Quasthoff. 2012. Building large monolingual dictionaries at the Leipzig corpora collection: From 100 to 200 languages. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 759–765, Istanbul, Turkey. European Language Resources Association (ELRA).

Jonathan Gordon and Benjamin Van Durme. 2013. Reporting bias and knowledge acquisition. In Proceedings of the 2013 Workshop on Automated Knowledge Base Construction, pages 25–30.

Peter Gärdenfors. 2014. The Geometry of Meaning: Semantics Based on Conceptual Spaces (Chapter 11). The MIT Press.

Annette Herskovits. 2009. Language and Spatial Cognition: An Interdisciplinary Study of the Prepositions in English. Cambridge University Press.

Xiao Liu, Da Yin, Yansong Feng, and Dongyan Zhao. 2022. Things not written in text: Exploring spatial commonsense from visual signals. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2365–2376, Dublin, Ireland. Association for Computational Linguistics.

Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8086–8098, Dublin, Ireland. Association for Computational Linguistics.

Roshanak Mirzaee, Hossein Rajaby Faghihi, Qiang Ning, and Parisa Kordjamshidi. 2021. SPARTQA: A textual question answering benchmark for spatial reasoning. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4582–4598, Online. Association for Computational Linguistics.

Roma Patel and Ellie Pavlick. 2022. Mapping language models to grounded conceptual spaces. In International Conference on Learning Representations.

Terry Regier. 1996. The Human Semantic Potential: Spatial Language and Constrained Connectionism. The MIT Press.

Jason Weston, Antoine Bordes, Sumit Chopra, Tomas Mikolov, Alexander M. Rush, and Bart van Merrienboer. 2015. Towards AI-complete question answering: A set of prerequisite toy tasks. Version 10.

Mark Yatskar, Vicente Ordonez, and Ali Farhadi. 2016. Stating the obvious: Extracting visual common sense knowledge. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 193–198, San Diego, California. Association for Computational Linguistics.
|
| 183 |
+
|
| 184 |
+
# A Appendix: Prompts

We consider the following types of prompts for assessing LLM performance on the preposition transitivity benchmark:

1. "If {premise1} and {premise2}, does that imply that {conclusion}?"
2. "Q: If {premise1} and {premise2}, does that imply that {conclusion}? A:"
3. "Question: If {premise1} and {premise2}, does that imply that {conclusion}? Answer:"
4. "QUESTION: If {premise1} and {premise2}, does that imply that {conclusion}? ANSWER:"

We made small variations to these four prompts (e.g., by adding quotes of different types around the premises and conclusions, and spaces or delimiters at the end of the prompt) to obtain up to 48 prompts.
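The enumeration of prompt variants described above can be sketched as follows. The helper name, the particular quote styles, and the trailing delimiters are assumptions for illustration; the base templates are the four listed above.

```python
from itertools import product

def make_prompt_variants(premise1, premise2, conclusion):
    """Combine the four base templates with quote styles and trailing
    delimiters (hypothetical choices) to enumerate prompt variants."""
    bases = [
        "If {p1} and {p2}, does that imply that {c}?",
        "Q: If {p1} and {p2}, does that imply that {c}? A:",
        "Question: If {p1} and {p2}, does that imply that {c}? Answer:",
        "QUESTION: If {p1} and {p2}, does that imply that {c}? ANSWER:",
    ]
    quotes = [("", ""), ("'", "'"), ('"', '"')]  # how premises/conclusions are quoted
    suffixes = ["", " ", "\n"]                   # delimiters at the end of the prompt
    variants = []
    for base, (ql, qr), suf in product(bases, quotes, suffixes):
        variants.append(
            base.format(p1=ql + premise1 + qr,
                        p2=ql + premise2 + qr,
                        c=ql + conclusion + qr) + suf
        )
    return variants

prompts = make_prompt_variants("the cup is on the table",
                               "the table is in the kitchen",
                               "the cup is in the kitchen")
```

With 4 base templates, 3 quote styles, and 3 suffixes this yields 36 distinct prompts; an extra suffix or quote style brings the count up to the 48 mentioned above.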

For an initial assessment of the performance differences among different prompts, we performed two-sample Kolmogorov-Smirnov tests on the performance of the prompts on the original three PaLM models. For the baseline prompts, only $0.44\%$ of all pairwise prompt combinations had a p-value smaller than 0.05 before correction for multiple comparisons. For the task questions, we found an overlap of $6.96\%$. The small overlap between prompt performance suggests that the models are highly sensitive to prompts.
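The two-sample Kolmogorov-Smirnov statistic used in these comparisons is the maximum gap between two empirical CDFs; a minimal pure-Python sketch (in practice `scipy.stats.ks_2samp` also supplies the p-value compared against the 0.05 threshold):

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two empirical CDFs."""
    def ecdf(sample, x):
        return sum(v <= x for v in sample) / len(sample)
    grid = sorted(set(sample_a) | set(sample_b))
    return max(abs(ecdf(sample_a, x) - ecdf(sample_b, x)) for x in grid)

# Accuracies of one prompt vs. another across runs (illustrative numbers).
d = ks_statistic([0.61, 0.63, 0.64, 0.66], [0.62, 0.64, 0.65, 0.67])
```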

# B Appendix: Generative Experiment

The generative experiment is intended to illustrate an alternative, open-ended way in which our benchmark can be used to explore LLM responses.

A preliminary analysis of the responses to the benchmark questions revealed that most answers consisted of either "yes" or "no", or an undetermined response, such as generating a new similar question without providing an answer. In most cases, the responses did not attempt to reason through the question in a meaningful way; this was expected, because the questions do not lend themselves to reasoning steps.

Based on the preliminary inspection of the generated responses, we defined the following scoring scheme. We labelled a response as correct if the correct label ("yes" or "no") appeared among the generated tokens and the incorrect label did not. If neither or both labels were present in the response, it was labelled as ambiguous. Otherwise, if only the incorrect label appeared in the response, we labelled it as incorrect. We scored the responses by assigning scores of 1, 0.5 and 0 to correct, ambiguous and incorrect responses, respectively.
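The scoring scheme above can be written down directly; this is a minimal sketch (the token matching via a regular expression is an assumption about how "appeared among the generated tokens" is checked):

```python
import re

def score_response(response, gold):
    """Score a generated answer against the gold label ("yes" or "no"):
    1.0 if only the gold label appears, 0.5 if both or neither appear
    (ambiguous), 0.0 if only the wrong label appears."""
    other = "no" if gold == "yes" else "yes"
    tokens = re.findall(r"[a-z]+", response.lower())
    has_gold, has_other = gold in tokens, other in tokens
    if has_gold and not has_other:
        return 1.0
    if has_gold == has_other:
        return 0.5
    return 0.0
```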

We ran this experiment with five different temperature values between 0 and 1. We found that a lower temperature produced the best results most of the time, and hence report the results for a temperature value of 0.

# C Appendix: Negated Congruent Sentences

As an additional baseline and diagnostic tool, we assessed the performance of the PaLM models on a dataset consisting of the congruent sentences and their negations only. In negated form, sentences of the form "If John is in the crib and the crib is in the living room, does that imply that John is in the living room?" became "If John is in the crib and the crib is in the living room, does that imply that John is not in the living room?". This dataset is class-balanced, as the answer for the congruent sentences is always "yes", and the answer to their negations is always "no".

The results are shown in Table 5. In most cases, the models show visibly better performance compared to the original benchmark. This performance gap suggests that the models have additional difficulty with incongruent questions, in which an individual spatial preposition refers to distinct types of spatial relationships.

<table><tr><td rowspan="3">Model</td><td colspan="6">Mean accuracy [95% C.I.]</td></tr><tr><td colspan="3">English</td><td colspan="3">Romanian</td></tr><tr><td>0-shot</td><td>1-shot</td><td>5-shot</td><td>0-shot</td><td>1-shot</td><td>5-shot</td></tr><tr><td>PaLM-8b</td><td>69.83[65.1-74.3]</td><td>74.06[69.5-78.3]</td><td>66.08[61.2-70.7]</td><td>70.00[65.2-74.5]</td><td>71.25[66.5-75.6]</td><td>64.00[59.1-68.7]</td></tr><tr><td>PaLM-62b</td><td>59.10[54.1-64.0]</td><td>64.59[59.7-69.3]</td><td>80.05[75.8-83.9]</td><td>50.75[45.7-55.8]</td><td>57.00[52.0-61.9]</td><td>53.25[48.2-58.2]</td></tr><tr><td>PaLM-62b-1.3</td><td>60.10[55.1-64.9]</td><td>78.30[73.9-82.2]</td><td>86.03[82.3-89.3]</td><td>72.00[67.3-76.3]</td><td>66.50[61.6-71.1]</td><td>75.00[70.5-79.2]</td></tr><tr><td>PaLM-540b</td><td>80.30[76.1-84.1]</td><td>86.53[82.8-89.7]</td><td>92.27[89.2-94.7]</td><td>65.00[60.1-69.7]</td><td>77.75[73.4-81.7]</td><td>89.50[86.1-92.3]</td></tr><tr><td>Flan-PaLM-540b</td><td>97.01[94.8-98.4]</td><td>97.26[95.1-98.6]</td><td>96.51[94.2-98.1]</td><td>95.25[92.7-97.1]</td><td>99.00[97.5-99.7]</td><td>99.75[98.6-100.0]</td></tr></table>

Table 5: Performance of LLMs on the negated congruent sentences experiment, described in Appendix C.
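The negation step described above is a mechanical rewrite of the conclusion clause; a simplified sketch (it assumes the conclusion uses the copula "is", as in all the examples shown):

```python
def negate_conclusion(question):
    """Produce the negated counterpart of a congruent benchmark question
    by inserting "not" into the conclusion clause."""
    prefix, conclusion = question.rsplit("does that imply that ", 1)
    subject, rest = conclusion.split(" is ", 1)
    return prefix + "does that imply that " + subject + " is not " + rest

q = ("If John is in the crib and the crib is in the living room, "
     "does that imply that John is in the living room?")
```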
abenchmarkforreasoningwithspatialprepositions/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2b91563a1ec9f4c4be2a1bd0852e32fad05d7cac93a1ada525b26ad2f867d46c
size 402386

abenchmarkforreasoningwithspatialprepositions/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:acc74c11254c6d88f627e40166463a93770a908606986b547cd5301b5eb97670
size 214437

achallengingmultimodalvideosummarysimultaneouslyextractingandgeneratingkeyframecaptionpairsfromvideo/8b71e37d-a8d0-4e33-866c-2549da4c9967_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b6761272ccccc9e25b0f70f8c2140d778967325e36cf6e9cced2d9dffe8ee6e3
size 166363

achallengingmultimodalvideosummarysimultaneouslyextractingandgeneratingkeyframecaptionpairsfromvideo/8b71e37d-a8d0-4e33-866c-2549da4c9967_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aa202ab89e8516dd2fe17b1e0a85ccdb65719f900f9ed6e084818167484c247f
size 201038

achallengingmultimodalvideosummarysimultaneouslyextractingandgeneratingkeyframecaptionpairsfromvideo/8b71e37d-a8d0-4e33-866c-2549da4c9967_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:84b0a2d8f96dac4adda06f69a4c57e2a4d79665f37c89a4fea842f1f55cb2aeb
size 3719491

achallengingmultimodalvideosummarysimultaneouslyextractingandgeneratingkeyframecaptionpairsfromvideo/full.md
ADDED
The diff for this file is too large to render. See raw diff

achallengingmultimodalvideosummarysimultaneouslyextractingandgeneratingkeyframecaptionpairsfromvideo/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1a1711fe4fb9266a9569866b4794abd03f291c48633dcf3986b6029f7b87d1ff
size 680253

achallengingmultimodalvideosummarysimultaneouslyextractingandgeneratingkeyframecaptionpairsfromvideo/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:54f7a02b2cf1bb44d31513bf6b53f121740da3aec7ef41637186f4797c0e8b36
size 805265

acheaperandbetterdiffusionlanguagemodelwithsoftmaskednoise/c338ad66-499d-436a-a664-bf1ab1e0729f_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:520342b6cd54b81fcc5947b8a2e5fe22ee274d48a788eb76d06e61adae4e796f
size 76574

acheaperandbetterdiffusionlanguagemodelwithsoftmaskednoise/c338ad66-499d-436a-a664-bf1ab1e0729f_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:94c79df72343152b643ab089e3df397dd9c843102cb2f62878d8d9879ac75f7f
size 96948

acheaperandbetterdiffusionlanguagemodelwithsoftmaskednoise/c338ad66-499d-436a-a664-bf1ab1e0729f_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c7588fb14e8c9adef83ccbbbaa5c15bf149d8efc6a1ebbb4cb92ea51d4c29b18
size 439328
acheaperandbetterdiffusionlanguagemodelwithsoftmaskednoise/full.md
ADDED
@@ -0,0 +1,323 @@
# A Cheaper and Better Diffusion Language Model with Soft-Masked Noise

Jiaao Chen\*, Aston Zhang, Mu Li, Alex Smola, Diyi Yang

†Georgia Institute of Technology, ‡Meta GenAI, ℓStanford University

# Abstract
Diffusion models based on iterative denoising have recently been proposed and leveraged in various generation tasks such as image generation. However, as methods inherently built for continuous data, existing diffusion models still have limitations in modeling discrete data such as language. For example, the commonly used Gaussian noise cannot handle discrete corruption well, and objectives in continuous spaces fail to be stable for textual data in the diffusion process, especially when the dimension is high. To alleviate these issues, we introduce a novel diffusion model for language modeling, Masked-Diffusion LM, with lower training cost and better performance, inspired by linguistic features of languages. Specifically, we design a linguistic-informed forward process that corrupts the text through strategic soft-masking to noise the textual data more effectively. Also, we directly predict the categorical distribution with a cross-entropy loss at every diffusion step to connect the continuous space and the discrete space in a more efficient and straightforward way. Through experiments on 5 controlled generation tasks, we demonstrate that our Masked-Diffusion LM achieves better generation quality than state-of-the-art diffusion models with better efficiency. Code is available at https://github.com/SALT-NLP/Masked_Diffusioin_LM.

# 1 Introduction
We present a novel diffusion method for modeling languages, Masked-Diffusion LM (language model), which uses strategic soft-masking informed by linguistic features to corrupt both the discrete and continuous space, and then iteratively denoises them back by predicting the categorical distribution. Specifically, we design a strategic soft-masking process that gradually adds perturbation to the input text, in an order from harder or more informative words to simpler or less informative words. As a result, the models are encouraged to recover and generate the text following an easy-first-generation nature (Dieleman et al., 2022), improving the generation structure and quality with more flexibility. Also, during the diffusion process, we directly predict the discrete tokens with a cross-entropy loss that maps the continuous space to the discrete textual space, stabilizing the intermediate diffusion steps. Our experiments show that, with the proposed Masked-Diffusion LM, application-specific performance metrics as well as training efficiency improve significantly over current diffusion language models.

Our work is inspired by recent advances in diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2021; Yang et al., 2022; Ramesh et al., 2022; Rombach et al., 2022) that are introduced as a new generative modeling approach based on iterative denoising and have achieved high-quality generations for visual and audio modalities (Ramesh et al., 2022; Rombach et al., 2022; Saharia et al., 2022; Nichol and Dhariwal, 2021; Kong et al., 2020).

Although these approaches have received growing attention and achieved impressive success, applying diffusion models to the textual domain is still challenging and under-explored due to the discrete nature of text (e.g., one-hot vectors) compared to continuous data like images (e.g., RGB values) (Li et al., 2022). The few prior works (Li et al., 2022; Gong et al., 2022; He et al., 2022; Austin et al., 2021; Hoogeboom et al., 2021b) that explore using diffusion models on textual data can be divided into two lines. The first is to extend diffusion models to discrete state spaces (Austin et al., 2021; Hoogeboom et al., 2021b,a). The second is to perform the diffusion process and its reverse process in the continuous domain and bridge the continuous and the discrete domain through embedding and rounding (Li et al., 2022; He et al., 2022), for example, Diffusion-LM (Li et al., 2022). Despite the improvements, most previous works fail to leverage linguistic features (e.g., words in sentences carry different importance) to noise the input textual data and recover it back in a more suitable way. Besides, they usually neglect or fail to adapt large pre-trained language models (PLMs) (Devlin et al., 2019; Liu et al., 2019; Yang et al., 2019; Joshi et al., 2019; Sun et al., 2019; Clark et al., 2019; Lewis et al., 2020; Bao et al., 2020; He et al., 2020; Raffel et al., 2020), an unmissable treasure in the NLP community: their adopted $k$-nearest-neighbor rounding technique that maps continuous space to discrete space cannot handle high-dimensional data in a stable and efficient way (Li et al., 2022). As a result, a corruption process tailored for languages, together with an objective that allows efficient and straightforward transformation between the discrete and continuous space, is greatly needed. Our Masked-Diffusion LM realizes this extension.

To demonstrate the effectiveness of our introduced Masked-Diffusion LM, we perform experiments on the E2E dataset (Novikova et al., 2017) and 5 controllable generation tasks (Li et al., 2022): Semantic Content, Parts-of-speech, Syntax Tree, Syntax Spans, and Length. We observe that our Masked-Diffusion LM (i) achieves state-of-the-art performance compared to recent baseline models, and (ii) allows more efficient training and inference compared to the previous Diffusion-LM.

To summarize, our contributions are: (1) We introduce a strategic masking noise strategy guided by linguistic features to corrupt the textual data in diffusion models for modeling languages. (2) We use linear layers and cross-entropy objectives to bridge the continuous and discrete spaces in the diffusion process for efficiency and stability. (3) We conduct experiments on different controllable generation tasks to demonstrate the effectiveness of our proposed methods compared to previous diffusion language models.

# 2 Related Work

**Diffusion Models for Language** There has been growing attention on deep generative diffusion models, which are latent-variable generative methods based on iterative denoising (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2021). Through a forward and a reverse diffusion process, diffusion models have shown state-of-the-art sample quality when generating in continuous domains such as images and audio (Ramesh et al., 2022; Rombach et al., 2022; Kong et al., 2020; Savinov et al., 2022). Despite their huge success, it is still challenging and under-explored to adapt diffusion models to discrete domains like languages. A few recent works have modified diffusion models for textual data. For example, discrete forward processes, such as categorical transition kernels (Hoogeboom et al., 2021a; Ye et al., 2023), uniform transition kernels, and absorbing kernels (Hoogeboom et al., 2021b), have been introduced. However, replacing continuous diffusion with a discrete corruption process sacrifices some flexibility (Dieleman et al., 2022; Zheng et al., 2023; Reid et al., 2022). Other works have made efforts to model text in the continuous embedding space, applying Gaussian noise uniformly to every token (Li et al., 2022; He et al., 2022; Chen and Yang, 2023), which is closer to the settings of previous diffusion models. However, they neglect the inherent linguistic features of text (e.g., different words play different roles in sentences), so the generated text often lacks coherence (He et al., 2022). Besides, the $k$-nearest-neighbor rounding technique (Li et al., 2022; Gao et al., 2022) holds up the decoding and convergence speed, especially when the vocabulary is large or the hidden dimension is high, thus limiting the potential of combining large pre-trained language models (Devlin et al., 2019; Liu et al., 2019; Yang et al., 2019; Joshi et al., 2019; Sun et al., 2019; Clark et al., 2019; Lewis et al., 2020; Bao et al., 2020; He et al., 2020; Raffel et al., 2020). To alleviate these issues, in our work we introduce a linguistic-informed soft-masking process to corrupt the discrete and continuous space with structure, and then use linear projections and cross-entropy objectives to directly map the latent variables to textual data for better efficiency and better generated text.

**Non-Autoregressive Text Generation** Most language models (Chowdhery et al., 2022; Brown et al., 2020) and text generation models (Vaswani et al., 2017a; Eikema and Aziz, 2021; Chen and Yang, 2020, 2021) follow a left-to-right autoregressive manner. However, the fixed generation order prevents the models from editing earlier text based on later generation results, especially in global controllable generation settings. To overcome these limitations, non-autoregressive text modeling has been proposed (Ghazvininejad et al., 2019; Ren et al., 2020; Gu et al., 2018; Saharia et al., 2020; Savinov et al., 2022) through masked language models (Ghazvininejad et al., 2019), iterative sequence alignment (Saharia et al., 2020), insertion and deletion (Gu et al., 2018), or unrolling the generation path (Savinov et al., 2022). Our Masked-Diffusion LM achieves non-autoregressive generation by gradually recovering the intermediate latent variables in a sequence planned by the forward process.

**Plug-and-Play Controllable Generation** Our work is also closely related to the line of research on plug-and-play controllable generation methods (Yang and Klein, 2021; Dathathri et al., 2020; Krause et al., 2021; Liu et al., 2021), which modify the outputs based on extra guidance such as classifiers, without changing or fine-tuning the pre-trained language models. Dathathri et al. (2020) used gradients to edit the autoregressive language model's hidden representations to fulfill the control guidance. Yang and Klein (2021) proposed to reweight the predicted tokens from the language models, while Krause et al. (2021) and Liu et al. (2021) further fine-tuned a smaller LM to reweight the token predictions. In this work, we apply the gradient-based plug-and-play approach to our Masked-Diffusion LM for controllable generation by making classifier-guided gradient updates to the intermediate latent variables during the diffusion.

# 3 Method: the Masked-Diffusion LM

In this section, we describe our Masked-Diffusion LM. The overall diagram is shown in Figure 1 and Algorithms 1 and 2. Different from recent diffusion models for languages, e.g., Diffusion-LM (Li et al., 2022), which are based on continuous diffusion, we propose to apply corruptions in both the discrete and the continuous space to help model the textual data. Specifically, we formulate a novel corruption process as an alternative to Gaussian diffusion (Section 3.2), and we directly map continuous vectors to discrete inputs at every diffusion step with cross-entropy objectives (Section 3.3). Moreover, our approach can easily integrate pre-trained language models (Section 3.4).

# 3.1 Embedding

For the input sentence $d$ with $l$ tokens, $d = \hat{w}_{1:l}$, we first map the discrete tokens to the continuous space and form the initial latent variable, $X_0$, through a learnable embedding layer or an encoder $e(\cdot)$:

$$
X_0 = w_{1:l} = e(\hat{w}_{1:l}). \tag{1}
$$

This bridges the discrete space and the continuous space. We then add the designed soft-masked noise to the tokens' representations in the subsequent diffusion process.

# 3.2 Forward Process with Soft-Masking
Different words in sentences play different roles. As a result, when corrupting and recovering sentences, words of varying importance should be treated differently. Thus, in this work, instead of evenly adding Gaussian noise to all the token embeddings as in Diffusion-LM (Li et al., 2022), we add soft-masked noise to different tokens of the input text at different stages, corrupting the text gradually and with structure. Intuitively, more important words are perturbed with soft-masks at an earlier stage, so that the model is encouraged to generate them in the later phase, following the easy-first-generation nature of language planning and generation.

In this work, we consider the following aspects to measure and define the importance of words in one sentence:

**Word Relevancy** We use the tf-idf weight (Dessí et al., 2020), $w_{\mathrm{tf\text{-}idf}}$, of the word as one way to measure the relevance of word $w$ in one sentence $d$:

$$
w_{\mathrm{tf\text{-}idf}}(w, d) = \frac{f_{w,d}}{\sum_{w^{\prime} \in d} f_{w^{\prime},d}} \cdot \log \frac{N}{1 + |\{d \in D : w \in d\}|}, \tag{2}
$$

where $f_{w,d}$ is the number of times that word $w$ occurs in sentence $d$, $N$ is the number of sentences in the corpus, $D$ is the set of sentences, and $|\{d \in D : w \in d\}|$ is the number of sentences in which the word $w$ appears. A higher tf-idf weight for word $w$ in sentence $d$ means that the word might be more important in the sentence.
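Eq. (2) can be computed directly from token lists; a minimal sketch (function and variable names are illustrative, not from the paper's code):

```python
import math
from collections import Counter

def tf_idf(word, sentence, corpus):
    """Eq. (2): term frequency of `word` in `sentence` (a token list),
    times the smoothed inverse sentence frequency over `corpus`
    (a list of token lists)."""
    counts = Counter(sentence)
    tf = counts[word] / len(sentence)
    n_containing = sum(word in doc for doc in corpus)
    return tf * math.log(len(corpus) / (1 + n_containing))

corpus = [["nlp", "is", "fun"], ["math", "is", "hard"], ["i", "like", "nlp"]]
```

Under this formula a word occurring in every sentence (like "is" above) gets weight at most zero, while rarer content words get positive weight.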

**Entropy** We also consider measuring the amount of information with entropy $H$ (Bentz and Alikaniotis, 2016; He et al., 2022) in the word $w$ to reflect the importance of that word:

$$
H(w) = -p(w) \log(p(w)), \tag{3}
$$

where $p(w) = \frac{f_w}{\sum_{j=1}^{V} f_j}$ represents the probability of word $w$ and $f$ is the word frequency in the corpus. A word with lower entropy contains less information and is thus less important compared to words with higher entropy.

Figure 1: The overall process of our Masked-Diffusion LM. In the forward process, soft-mask is added to more informative words earlier to gradually corrupt the input text. For example, *NLP* is soft-masked prior to stop words like *is*. Then in the diffusion process, models learn to generate easy words like *is* first and then fill in more important words such as *fun* and *NLP*.
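The word-level entropy of Eq. (3) is likewise a corpus statistic; a minimal sketch under the same token-list representation as above:

```python
import math
from collections import Counter

def word_entropy(word, corpus):
    """Eq. (3): H(w) = -p(w) log p(w), where p(w) is the word's
    relative frequency over the whole corpus (a list of token lists)."""
    counts = Counter(w for doc in corpus for w in doc)
    p = counts[word] / sum(counts.values())
    return -p * math.log(p)

corpus = [["nlp", "is", "fun"], ["nlp", "is", "easy"], ["we", "like", "nlp"]]
```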

In practice, we combine these two measures (with normalization) to decide the importance $I$ of the word $w$ in one sentence $d$:

$$
I(w) = \frac{w_{\mathrm{tf\text{-}idf}}(w, d)}{\sum_{w^{\prime} \in d} w_{\mathrm{tf\text{-}idf}}(w^{\prime}, d)} + \frac{H(w)}{\sum_{w^{\prime} \in d} H(w^{\prime})}. \tag{4}
$$

Based on the importance $I$ of the words in a sentence, we first divide the words into $m$ buckets $\{W_{1:m}\}$, where buckets with lower indices contain words of higher importance. We add soft-masked noise to words of higher importance before words of lower importance. In this way, the models learn to generate the easier words first and the harder words later in the reversed denoising process, for better generation quality. Specifically, at every step $t$, we add a small amount of Gaussian noise to the hidden representation of each word $w_i$ in bucket $W_{\lceil tm/T \rceil}$:

$$
q(w_{i,t+1} \mid w_{i,t}) = \mathcal{N}\left(w_{i,t+1}; \sqrt{1 - \beta_t}\, w_{i,t},\, \beta_t I\right), \tag{5}
$$

where $\beta_t$ is the amount of noise added at diffusion step $t$.
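The bucketing and the per-token noising step of Eq. (5) can be sketched as follows; the even split into buckets and the helper names are assumptions for illustration:

```python
import math
import random

def importance_buckets(words, importance, m):
    """Split words into m buckets by descending importance; bucket 0
    (the most important words) is soft-masked earliest in the forward
    process."""
    ranked = sorted(words, key=lambda w: -importance[w])
    size = -(-len(ranked) // m)  # ceiling division
    return [ranked[i * size:(i + 1) * size] for i in range(m)]

def soft_mask_step(h, beta_t, rng=random):
    """One forward step of Eq. (5) on a token's hidden vector h:
    sample h_{t+1} ~ N(sqrt(1 - beta_t) * h_t, beta_t * I)."""
    return [math.sqrt(1.0 - beta_t) * x + math.sqrt(beta_t) * rng.gauss(0.0, 1.0)
            for x in h]

buckets = importance_buckets(["nlp", "is", "fun"],
                             {"nlp": 2.0, "fun": 1.5, "is": 0.1}, m=3)
```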

We further apply a square-root noise schedule following Li et al. (2022):

$$
\beta_t = 1 - \sqrt{t/T + s}, \tag{6}
$$

where $s$ is a small constant that corresponds to the starting noise level. Thus, less noise is added to harder words, stabilizing the training. By performing the above noising steps, the initial latent variable $X_0$ is gradually corrupted into a series of noisy latent variables $X_{1:T}$.

# Algorithm 1 Forward Process

Input: A sentence $X = [x_0, \ldots, x_n]$.
Output: Corrupted hidden representations $H_T = [h_0, \ldots, h_n]$.

1: Encode the sentence into hidden representations via an encoder $e(\cdot)$: $H_0 = e(X)$
2: for $t = 1, \ldots, K$ do
3: Add soft-masking noise to $H_t$ based on the importance of tokens (from higher importance to lower importance): $H_{t+1} = \text{soft-masking}(H_t)$
4: end for
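A note on Eq. (6): as printed, $1 - \sqrt{t/T + s}$ decreases with $t$; in Li et al. (2022) the same expression defines the cumulative product $\bar{\alpha}_t$, from which a per-step $\beta_t$ that grows over time follows. A sketch under that reading (the function names are ours):

```python
import math

def alpha_bar(t, T, s=1e-4):
    """Cumulative signal level of the square-root schedule:
    abar_t = 1 - sqrt(t/T + s), decaying from ~1 toward 0."""
    return 1.0 - math.sqrt(t / T + s)

def beta(t, T, s=1e-4):
    """Per-step noise implied by the cumulative schedule:
    beta_t = 1 - abar_t / abar_{t-1}, which increases with t."""
    return 1.0 - alpha_bar(t, T, s) / alpha_bar(t - 1, T, s)
```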

# 3.3 Diffusion Process

After the forward process corrupts the input tokens of sentence $d$ into latent variables $X_{1:T}$, we gradually denoise $X_T$ back to $X_0$ through diffusion steps $\hat{X}_{t-1} \sim p_{\theta}(X_{t-1} \mid \hat{X}_t)$, where $\theta$ is the learned parameter that models the state transition. In practice, we model the transition with Transformers (Vaswani et al., 2017b).

# Algorithm 2 Diffusion Process

Input: Corrupted hidden representations $H = [h_0, \ldots, h_n]$.
Output: A sentence $X = [x_0, \ldots, x_n]$.

1: Utilize a transition network $f(\cdot)$ to recover the last state: $H_{t-1} = f(H_t)$
2: Utilize a linear layer to map hidden representations to actual tokens: $X_{t-1} = g(H_{t-1})$
3: Compute the loss $\mathcal{L}_t$ and update the transition network.
4: Repeat the above steps until the sentence is recovered.

After every diffusion step $t \in (0, T]$, instead of minimizing the distance between the hidden representations of $\hat{X}_{t-1}$ and $X_0$ (Li et al., 2022), we first directly map the continuous space to the discrete space using a learnable linear layer $f(\cdot)$ and then minimize a weighted cross-entropy between the predicted sentence and (i) the original sentence $d$ and (ii) the masked sentence $\hat{d}$ at time step $t-1$:
$$
\mathcal{L}_t = \gamma_t \, CE(f(\hat{X}_{t-1}), d; \theta) + (1 - \gamma_t) \, CE(f(\hat{X}_{t-1}), \hat{d}; \theta), \quad t \in (0, T],
$$

where $\gamma_t = \frac{T - t}{T}$. In other words, we put higher weights on the tokens that are masked at this time step during the forward process and lower weights on the other tokens, so the models learn to generate the corresponding masked tokens first at every time step.
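The weighted cross-entropy above can be sketched per token position; `pred_probs` holds one predicted distribution per position, and the helper names are illustrative:

```python
import math

def step_loss(pred_probs, clean_ids, masked_ids, t, T):
    """Weighted cross-entropy of Sec. 3.3: gamma_t = (T - t)/T weights
    the CE against the original tokens d, and 1 - gamma_t the CE
    against the soft-masked targets d_hat."""
    gamma = (T - t) / T
    def ce(target_ids):
        return -sum(math.log(p[i])
                    for p, i in zip(pred_probs, target_ids)) / len(target_ids)
    return gamma * ce(clean_ids) + (1.0 - gamma) * ce(masked_ids)

probs = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]
```

At $t = 0$ the loss reduces to the cross-entropy against the clean sentence, and at $t = T$ to the cross-entropy against the masked targets.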

# 3.4 Adapting Pre-trained Language Models

Our Masked-Diffusion LM also allows the use of large pre-trained language models (Devlin et al., 2019; Liu et al., 2019; Yang et al., 2019; Joshi et al., 2019; Sun et al., 2019; Clark et al., 2019; Lewis et al., 2020; Bao et al., 2020; He et al., 2020; Raffel et al., 2020). In this work, we use BERT (Devlin et al., 2019) as an example. To incorporate the prior knowledge in large language models, it is straightforward to directly replace the embedding layer $e(\cdot)$ with the pre-trained model and use it to obtain the hidden representations of the input tokens as the initial state of the diffusion model. We use the final linear layers of the pre-trained model to predict the tokens. For efficiency, when using pre-trained models in our experiments, we freeze their parameters and only learn the transition model $\theta$ in our Masked-Diffusion LM.
# 4 Controllable Text Generation with Masked-Diffusion LM
In this section, we illustrate how we apply our Masked-Diffusion LM to controllable text generation. Inspired by recent plug-and-play methods (Yang and Klein, 2021; Dathathri et al., 2020; Krause et al., 2021; Liu et al., 2021), we apply controls $c$ from external modules (e.g., classifiers) directly on the latent variables $X_{t}$ at every intermediate step $t \in [0,T]$ of our Masked-Diffusion LM:
$$
p(X_{0:T} \mid c) = \prod_{t=1}^{T} p(X_{t-1} \mid X_t, c). \tag{7}
$$

We follow the conditional independence assumption (Yang and Klein, 2021; Dathathri et al., 2020; Krause et al., 2021; Liu et al., 2021) and decompose the joint probability above into a sequence of control tasks at every time step $t$:
$$
\begin{aligned} p(X_{t-1} \mid X_t, c) &\propto p(X_{t-1} \mid X_t) \cdot p(c \mid X_{t-1}, X_t) \\ &= p(X_{t-1} \mid X_t) \cdot p(c \mid X_{t-1}). \end{aligned} \tag{8}
$$
As a result, for the $t$ -th step, we run gradient updates on $X_{t}$ to generate $X_{t-1}$ :
$$
\nabla_{X_{t-1}} \log p(X_{t-1} \mid X_t, c) = \lambda \nabla_{X_{t-1}} \log p(X_{t-1} \mid X_t) + \nabla_{X_{t-1}} \log p(c \mid X_{t-1}), \tag{9}
$$
where both $\log p(X_{t - 1}|X_t)$ and $\log p(c|X_{t - 1})$ are differentiable: the first term is parametrized by the transition Transformer $\theta$ in Masked-Diffusion LM, and the second term is parametrized by extra neural network classifiers. Note that the extra classifiers are trained with the diffusion latent variables as input to allow direct gradient updates on the latent space. The hyper-parameter $\lambda$ regularizes fluency, balancing the gradient updates from Masked-Diffusion LM (fluency) against those from the classifiers (control) to further improve generation quality.
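A minimal scalar sketch of this guided update (the quadratic log-densities and step size are illustrative assumptions standing in for the transition Transformer and the latent classifier):

```python
def guided_update(x, grad_log_p_lm, grad_log_p_cls, lam, step_size=0.1):
    # one gradient-ascent step on the latent X_{t-1}: lambda trades off
    # fluency (diffusion transition) against control (classifier)
    return x + step_size * (lam * grad_log_p_lm(x) + grad_log_p_cls(x))

# toy Gaussian log-densities: the "LM" prefers latents near 0,
# the "classifier" prefers latents near 2
grad_lm = lambda x: -(x - 0.0)
grad_cls = lambda x: -(x - 2.0)

x = 0.0
for _ in range(300):
    x = guided_update(x, grad_lm, grad_cls, lam=1.0)
# x settles where the two gradients cancel, at 2 / (1 + lambda)
```

With a larger $\lambda$, the fixed point moves toward the LM's preference, mirroring the fluency/control trade-off described above.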
For the decoding strategy, following Li et al. (2022), we use Minimum Bayes Risk (MBR) decoding (Kumar and Byrne, 2004) to aggregate samples from the Masked-Diffusion LM and select the one with the lowest expected loss under a specified loss function.
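A sketch of MBR selection over a handful of samples (the word-level disagreement loss below is a toy stand-in for whatever loss is actually specified, e.g. negative BLEU):

```python
def mbr_select(candidates, loss_fn):
    # pick the candidate whose expected loss against all sampled
    # candidates is lowest
    def risk(c):
        return sum(loss_fn(c, other) for other in candidates) / len(candidates)
    return min(candidates, key=risk)

def disagreement(a, b):
    # toy loss: word-level mismatch rate plus a length penalty
    wa, wb = a.split(), b.split()
    n = max(len(wa), len(wb))
    return (sum(x != y for x, y in zip(wa, wb)) + abs(len(wa) - len(wb))) / n

samples = ["the mill is an indian restaurant",
           "the mill is an indian restaurant",
           "mill is a restaurant"]
best = mbr_select(samples, disagreement)  # the consensus sample wins
```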
# 5 Experiments

# 5.1 Datasets
In this work, we train our Masked-Diffusion LM on the E2E dataset (Novikova et al., 2017), which
<table><tr><td rowspan="2">Methods</td><td colspan="2">Semantic Content</td><td colspan="2">POS</td><td colspan="2">Syntax Tree</td><td colspan="2">Syntax Spans</td><td colspan="2">Length</td></tr><tr><td>Acc</td><td>Fluency</td><td>Acc</td><td>Fluency</td><td>Acc</td><td>Fluency</td><td>Acc</td><td>Fluency</td><td>Acc</td><td>Fluency</td></tr><tr><td>PPLM</td><td>9.9</td><td>5.32</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>FUDGE</td><td>69.9</td><td>2.83</td><td>27.0</td><td>7.96</td><td>17.9</td><td>3.39</td><td>54.2</td><td>4.03</td><td>46.9</td><td>3.11</td></tr><tr><td>Diffusion-LM</td><td>81.2</td><td>2.55</td><td>90.0</td><td>5.16</td><td>86.0</td><td>3.71</td><td>93.8</td><td>2.53</td><td>99.9</td><td>2.16</td></tr><tr><td>+ BERT</td><td>77.4</td><td>2.68</td><td>86.2</td><td>5.43</td><td>82.3</td><td>3.92</td><td>89.3</td><td>3.13</td><td>99.9</td><td>2.68</td></tr><tr><td>Masked-Diffusion LM †</td><td>81.9</td><td>2.35</td><td>91.6</td><td>5.03</td><td>86.6</td><td>3.66</td><td>94.7</td><td>2.48</td><td>99.9</td><td>2.13</td></tr><tr><td>+ BERT †</td><td>82.9</td><td>2.30</td><td>92.9</td><td>4.78</td><td>89.7</td><td>3.44</td><td>95.8</td><td>2.33</td><td>100</td><td>2.08</td></tr></table>
Table 1: Main Results. The Accuracy (↑) and the Fluency (↓) of different methods on five controllable generation tasks including semantic content, POS, syntax tree, syntax spans and length. † indicates our methods.
<table><tr><td>Methods</td><td>Training (h)</td><td>Inference (s)</td></tr><tr><td>Diffusion-LM</td><td>8.0</td><td>80</td></tr><tr><td>+BERT</td><td>15.2</td><td>920</td></tr><tr><td>Masked-Diffusion LM</td><td>3.4</td><td>68</td></tr><tr><td>+BERT</td><td>4.8</td><td>700</td></tr></table>
Table 2: Training time and inference time (generating 50 samples) for different models.
consists of 50K restaurant reviews together with the labels in terms of food type, price, and customer ratings.
Following Li et al. (2022), we conduct 5 control tasks to evaluate the learned Masked-Diffusion language model:
- Semantic Content. For a given field (e.g., food) and value (e.g., Japanese), sentences that cover field=value need to be generated. We evaluate the accuracy of a generated sentence by examining the exact match rate of "value" (word mention).
- Parts-of-speech. For a given sequence of parts-of-speech (POS) tags (e.g., Noun Verb Determiner Noun), the models need to produce a sentence of the same length that follows the exact given POS tag sequence (e.g., Birds eat the warmer). We evaluate the accuracy of the generation by checking the word-level POS tag exact match (under an oracle POS tagger).
- Syntax Tree. For a given syntactic parse tree, the generated sentence should have the same parse tree. We evaluate the accuracy by first parsing the generated sentence with an off-the-shelf parser and report the F1 scores compared to the given parse.
- Syntax Spans. For a given (span, syntactic category) pair (e.g., $(2,5,VP)$ ), the parse tree of the generated sentence should match the given syntactic category over the given spans. We evaluate the accuracy of the sentence by the exact match rate of the given spans.
- Length. For a given target length (e.g., 20), the models need to generate a sentence within $\pm 2$ of the given target. We evaluate the accuracy by the match rate of the sentence lengths.
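As a concrete example of one of these metrics, the semantic-content accuracy above reduces to an exact word-mention match; a minimal sketch (the function name and toy sentences are ours):

```python
def semantic_content_accuracy(sentences, value):
    # exact-match rate of the controlled "value" word across generations
    # (a simple whitespace split; real evaluation would also strip punctuation)
    hits = sum(value.lower() in s.lower().split() for s in sentences)
    return hits / len(sentences)

generated = ["We serve Japanese food in the city",
             "A cheap pub near the river"]
acc = semantic_content_accuracy(generated, "Japanese")  # 1 of 2 mentions
```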
For every control task, we sample 200 control targets $c$ from the validation splits, and we generate 50 samples for each control target. The first four tasks rely on a classifier to guide the diffusion, and the last task is classifier-free. To further evaluate the fluency of the generated sentences, we use a teacher LM (i.e., a carefully fine-tuned GPT-2 model) and report the perplexity of the generated text under the teacher LM. A lower perplexity indicates better sample quality and fluency.
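Perplexity under the teacher LM is the exponentiated average negative log-likelihood it assigns to the generated tokens; a sketch (in practice the log-probabilities would come from the fine-tuned GPT-2):

```python
import math

def perplexity(token_logprobs):
    # exp of the mean negative log-likelihood the teacher LM assigns to
    # the generated tokens; lower means more fluent under the teacher
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# if every token gets probability 1/10, perplexity is exactly 10
uniform_ppl = perplexity([math.log(0.1)] * 6)
```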
# 5.2 Baselines
We compare our Masked-Diffusion LM with the following state-of-the-art baselines on controllable generation tasks:
- PPLM (Dathathri et al., 2020) runs gradient ascent on the pre-trained language models' hidden representations to increase the classifier probabilities and language model probabilities.
- FUDGE (Yang and Klein, 2021) reweights the predicted tokens from the pre-trained language models by a discriminator which takes in a prefix sequence and predicts whether the complete sequence would satisfy the constraint.
- Diffusion-LM (Li et al., 2022) learns an embedding to map discrete text into the continuous space where it performs Gaussian
<table><tr><td>Methods</td><td>Semantic Content</td><td>POS</td><td>Syntax Tree</td><td>Syntax Spans</td><td>Length</td></tr><tr><td>Diffusion-LM</td><td>2.89</td><td>2.76</td><td>3.16</td><td>2.88</td><td>2.46</td></tr><tr><td>+BERT</td><td>3.87</td><td>3.46</td><td>3.72</td><td>3.68</td><td>3.34</td></tr><tr><td>Masked-Diffusion LM</td><td>2.56</td><td>2.48</td><td>2.88</td><td>2.35</td><td>2.18</td></tr><tr><td>+BERT</td><td>1.32</td><td>1.28</td><td>1.16</td><td>1.55</td><td>1.86</td></tr></table>
Table 3: The average ranking every method receives from human evaluation (lower is better).
<table><tr><td rowspan="2">Noise Type</td><td colspan="2">Semantic Content</td></tr><tr><td>Acc</td><td>Fluency</td></tr><tr><td>Gaussian</td><td>75.3</td><td>3.01</td></tr><tr><td>Random Mask</td><td>78.8</td><td>2.67</td></tr><tr><td>Mask w. POS</td><td>80.4</td><td>2.58</td></tr><tr><td>Mask w. Entropy</td><td>81.1</td><td>2.44</td></tr><tr><td>Mask w. Rel</td><td>80.8</td><td>2.52</td></tr><tr><td>Mask w. Entropy+Rel †</td><td>81.6</td><td>2.38</td></tr></table>
Table 4: Performances on Semantic Content of Masked-Diffusion LM with different types of noise applied in forward noising process. $\dagger$ indicates our method.
diffusion process. Also, a rounding step is designed to map the embeddings back into discrete texts. For every control task, the Diffusion-LM infuses the controlling signals in every diffusion step.
# 5.3 Experimental Setting
We use a Transformer with 80M parameters to parameterize our Masked-Diffusion LM, with a sequence length $n = 64$, $T = 500$ diffusion steps, and a square-root noise schedule. For Masked-Diffusion LM, we set the hidden dimension to 128 and the number of word buckets to $m = 3$. When combining with pre-trained models, we incorporate BERT-base (Devlin et al., 2019) with about 110M parameters: we use BERT to encode the input text into 768-dimensional vectors and freeze BERT's parameters. We train Masked-Diffusion LM with the AdamW optimizer (Loshchilov and Hutter, 2019) for 20,000 steps with a learning rate of 3e-4, a dropout probability of 0.1, and a batch size of 32, using a linear warmup schedule with 1,000 warmup steps. All experiments are conducted on NVIDIA A100 Tensor Core GPUs; we use 4 GPUs for training and a single GPU for sampling.
# 5.4 Results
We show the main results on the five controllable generation tasks in Table 1. When the diffusion process is engaged, the performance on all the controlled generation tasks receives a significant boost (e.g., 81.2 for Diffusion-LM vs. 69.9 for FUDGE on the Semantic Content task), suggesting the superiority of diffusion models on controllable generation tasks. However, the previous Diffusion-LM cannot be combined well with large language models like BERT (e.g., a $5\%$ drop on Semantic Content accuracy), largely because its way of bridging the continuous and discrete spaces (rounding) suffers in significantly higher dimensions. Compared to Diffusion-LM, our proposed Masked-Diffusion LM consistently outperforms the previous models in all tasks (e.g., a $1.7\%$ improvement on the POS task), indicating the effectiveness of our introduced linguistic-informed noise forward process. Also, when combined with large language models like BERT, our method significantly outperforms the previous methods, demonstrating that our approach aligns well with pre-trained models.
Efficiency We also report the training and inference costs in Table 2. Compared to the previous Diffusion-LM, our method requires significantly less training time to converge and less inference time to generate sentences. This is because our introduced noise process is more stable and better suited to modeling language, and the objectives we introduce are more efficient than the rounding techniques in previous work.
Human Evaluation We then conduct a human evaluation to assess the generated sentences qualitatively. We ask native speakers of English from Amazon Mechanical Turk to rank the quality of 50 randomly sampled generated sentences from the different models for every control task. Specifically, annotators rank the system outputs based on (i) fluency (whether the
<table><tr><td rowspan="2">Methods</td><td colspan="2">Semantic Content</td><td colspan="2">POS</td><td colspan="2">Syntax Tree</td><td colspan="2">Syntax Spans</td><td colspan="2">Length</td></tr><tr><td>Acc</td><td>Fluency</td><td>Acc</td><td>Fluency</td><td>Acc</td><td>Fluency</td><td>Acc</td><td>Fluency</td><td>Acc</td><td>Fluency</td></tr><tr><td>L2</td><td>81.1</td><td>2.44</td><td>90.6</td><td>5.17</td><td>86.2</td><td>3.68</td><td>94</td><td>2.51</td><td>99.8</td><td>2.14</td></tr><tr><td>L2-BERT</td><td>80.1</td><td>2.48</td><td>89.4</td><td>5.82</td><td>84.1</td><td>3.91</td><td>93.2</td><td>2.88</td><td>99.9</td><td>2.89</td></tr><tr><td>CE †</td><td>81.9</td><td>2.35</td><td>91.6</td><td>5.03</td><td>86.6</td><td>3.66</td><td>94.7</td><td>2.48</td><td>99.9</td><td>2.13</td></tr><tr><td>CE-BERT †</td><td>82.9</td><td>2.30</td><td>92.9</td><td>4.78</td><td>89.7</td><td>3.44</td><td>95.8</td><td>2.33</td><td>100</td><td>2.08</td></tr></table>
Table 5: Performances of Masked-Diffusion LM trained with different objectives on controllable generation tasks. $\dagger$ indicates our method.
<table><tr><td>Case Study</td><td>Sentences</td></tr><tr><td>Input</td><td>7</td></tr><tr><td>t = 500</td><td>[mask] [mask] [mask] [mask] [mask] [mask] [mask]</td></tr><tr><td>t = 400</td><td>[mask] is an [mask] restaurant.</td></tr><tr><td>t = 200</td><td>The [mask] is an Indian restaurant.</td></tr><tr><td>t = 0</td><td>The Mill is an Indian restaurant.</td></tr><tr><td>Input</td><td>name: Travellers Rest Beefeater</td></tr><tr><td>t = 500</td><td>[mask] [mask] [mask] [mask] [mask] [mask] [mask] [mask] [mask] [mask] [mask]</td></tr><tr><td>t = 400</td><td>[mask] Rest [mask] is a [mask] [mask] [mask] that is [mask].</td></tr><tr><td>t = 200</td><td>Travellers Rest [mask] is a reasonably [mask] restaurant that is awesome.</td></tr><tr><td>t = 0</td><td>Travellers Rest Beefeater is a reasonably priced restaurant that is awesome.</td></tr></table>
Table 6: Examples of the intermediate generated text of our Masked-Diffusion LM on the Length and Semantic Content tasks.
given sentence is readable and fluent) and (ii) controllability (whether the given sentence matches the given control conditions). To increase annotation quality, we require turkers to have a $98\%$ approval rate with over 10,000 approved tasks for their previous work. The pay rate was $0.15 per HIT. Every example is assessed by 3 annotators, and the rank for every sentence is aggregated by majority voting. The intra-class correlation $(ICC1k)$ was 0.63, indicating moderate agreement (Koo and Li, 2016). The results are shown in Table 3. As it shows, our proposed Masked-Diffusion LM and its variation with BERT receive the best average ranks, suggesting the effectiveness of our proposed diffusion modeling strategy for languages.
# 5.5 Ablation Studies
We then perform ablation studies to demonstrate the effectiveness of our introduced linguistic-informed noise and the cross entropy objectives.
Noise Strategy We first demonstrate the performances on Semantic Content task of Masked-Diffusion LM with different types of noise strategy in Table 4. Gaussian adds Gaussian noise to all the tokens in the input sentence in the forward process following Li et al. (2022). We also compare different masking noise strategies: (i) Random Mask, where the soft-mask is added to tokens in a random
order. (ii) Mask with POS, where the soft-mask perturbs the tokens in an order (noun $\rightarrow$ verb $\rightarrow$ other words) based on POS tags. Our introduced noise strategy (Mask with Entropy and Relevancy) shows significantly better performance on semantic content generation. This indicates that our noise strategy, which considers the linguistic features of sentences, provides more appropriate perturbations of the textual data for the diffusion process.
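A rough sketch of ordering tokens for soft-masking by an information measure (corpus surprisal here is a simple stand-in for the paper's entropy and relevance scores, and the corpus is toy data):

```python
from collections import Counter
import math

def masking_order(sentence, corpus):
    # rank tokens so that more informative (rarer) words are soft-masked
    # earlier in the forward process and thus recovered later in denoising
    counts = Counter(w for s in corpus for w in s.split())
    total = sum(counts.values())
    def surprisal(w):
        return -math.log(counts.get(w, 1) / total)
    return sorted(sentence.split(), key=surprisal, reverse=True)

corpus = ["the mill is an indian restaurant",
          "the bar is a cheap restaurant",
          "the mill is near the river"]
order = masking_order("the mill is an indian restaurant", corpus)
# content words like "indian" are masked before frequent words like "the"
```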
Objectives We further show the impact of different objectives in Table 5. We compare our cross-entropy objectives with the $L_{2}$ objective used in Li et al. (2022), which minimizes the distance between the latent intermediate variables and the initial latent variable instead of directly predicting the text. We observe that the cross-entropy objectives perform slightly better than $L_{2}$ when the pre-trained model is not used. After combining with large language models, CE-BERT significantly outperforms $L_{2}$-BERT, indicating the effectiveness of our introduced objectives for incorporating large language models.
# 5.6 Case Studies
We also include some examples of the intermediate steps of Masked-Diffusion LM in Table 6. In the denoising diffusion process, easy words such as "is", "an", and "restaurant" are generated first.
With more diffusion steps, sentences are enriched with more informative words such as "Mill" and "Indian". It shows that our Masked-Diffusion LM encourages the generation to follow an easy-first order for stable and better generation quality.
# 6 Conclusion
In this work, we present a novel diffusion model for language, Masked-Diffusion LM, which corrupts discrete text with a linguistic-informed soft-masking strategy and then iteratively denoises it by directly predicting the text. Specifically, in the forward process we gradually soft-mask the tokens of a sentence in an order from more informative words to less informative words. This preserves the flexibility of diffusion models and encourages the easy-first-generation nature of the denoising process for better generation quality. Also, we directly predict the discrete tokens during the diffusion process with a cross-entropy loss to stabilize the intermediate diffusion steps and make our approach orthogonal to large pre-trained language models. Experiments on the E2E dataset and five controllable generation tasks (Semantic Content, Parts-of-speech, Syntax Tree, Syntax Spans, and Length) show that our Masked-Diffusion LM can (i) achieve state-of-the-art performance compared to recent baseline models and (ii) allow more efficient training and inference than the previous Diffusion-LM.
# 7 Limitations
In this work, we mainly leverage linguistic soft-masking based on word relevancy and word entropy; we encourage future work to explore how to incorporate other linguistic structures into the design of the noising process. We also mainly test with smaller models, namely simple Transformer models and BERT-based models; future work might test with larger pre-trained models to evaluate whether diffusion methods still work well. Finally, we focused on controllable generation to evaluate the models; future work may study other downstream tasks.
# References
Jacob Austin, Daniel D. Johnson, Jonathan Ho, Daniel Tarlow, and Rianne van den Berg. 2021. Structured denoising diffusion models in discrete state-spaces.

Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Songhao Piao, Jianfeng Gao, Ming Zhou, et al. 2020. UniLMv2: Pseudo-masked language models for unified language model pre-training. arXiv preprint arXiv:2002.12804.

Christian Bentz and Dimitrios Alikaniotis. 2016. The word entropy of natural languages.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners.

Jiaao Chen and Diyi Yang. 2020. Multi-view sequence-to-sequence models with conversational structure for abstractive dialogue summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4106-4118, Online. Association for Computational Linguistics.

Jiaao Chen and Diyi Yang. 2021. Structure-aware abstractive conversation summarization via discourse and action graphs. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1380-1391, Online. Association for Computational Linguistics.

Jiaao Chen and Diyi Yang. 2023. Controllable conversation generation with conversation structures via diffusion models. In Findings of the Association for Computational Linguistics: ACL 2023, pages 7238-7251.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. PaLM: Scaling language modeling with pathways.

Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2019. ELECTRA: Pre-training text encoders as discriminators rather than generators. In International Conference on Learning Representations.

Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2020. Plug and play language models: A simple approach to controlled text generation. In International Conference on Learning Representations.

Danilo Dessí, Rim Helaoui, Vivek Kumar, Diego Reforgiato Recupero, and Daniele Riboni. 2020. TF-IDF vs word embeddings for morbidity identification in clinical notes: An initial study.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT.

Sander Dieleman, Laurent Sartran, Arman Roshannai, Nikolay Savinov, Yaroslav Ganin, Pierre H. Richemond, Arnaud Doucet, Robin Strudel, Chris Dyer, Conor Durkan, Curtis Hawthorne, Rémi Leblond, Will Grathwohl, and Jonas Adler. 2022. Continuous diffusion for categorical data.

Bryan Eikema and Wilker Aziz. 2021. Sampling-based approximations to minimum Bayes risk decoding for neural machine translation.

Zhujin Gao, Junliang Guo, Xu Tan, Yongxin Zhu, Fang Zhang, Jiang Bian, and Linli Xu. 2022. Difformer: Empowering diffusion models on the embedding space for text generation. arXiv preprint arXiv:2212.09412.

Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Mask-predict: Parallel decoding of conditional masked language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6112-6121, Hong Kong, China. Association for Computational Linguistics.

Shansan Gong, Mukai Li, Jiangtao Feng, Zhiyong Wu, and Lingpeng Kong. 2022. DiffuSeq: Sequence to sequence text generation with diffusion models.

Jiatao Gu, James Bradbury, Caiming Xiong, Victor O.K. Li, and Richard Socher. 2018. Non-autoregressive neural machine translation. In International Conference on Learning Representations.

Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. DeBERTa: Decoding-enhanced BERT with disentangled attention. arXiv preprint arXiv:2006.03654.

Zhengfu He, Tianxiang Sun, Kuanning Wang, Xuanjing Huang, and Xipeng Qiu. 2022. DiffusionBERT: Improving generative masked language models with diffusion models.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models.

Emiel Hoogeboom, Alexey A. Gritsenko, Jasmijn Bastings, Ben Poole, Rianne van den Berg, and Tim Salimans. 2021a. Autoregressive diffusion models.

Emiel Hoogeboom, Didrik Nielsen, Priyank Jaini, Patrick Forré, and Max Welling. 2021b. Argmax flows and multinomial diffusion: Learning categorical distributions.

Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2019. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77.

Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. 2020. DiffWave: A versatile diffusion model for audio synthesis.

Terry K Koo and Mae Y Li. 2016. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. Journal of Chiropractic Medicine, 15(2):155-163.

Ben Krause, Akhilesh Deepak Gotmare, Bryan McCann, Nitish Shirish Keskar, Shafiq Joty, Richard Socher, and Nazneen Fatema Rajani. 2021. GeDi: Generative discriminator guided sequence generation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4929-4952, Punta Cana, Dominican Republic. Association for Computational Linguistics.

Shankar Kumar and William Byrne. 2004. Minimum Bayes-risk decoding for statistical machine translation. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, pages 169-176, Boston, Massachusetts, USA. Association for Computational Linguistics.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In ACL.

Xiang Lisa Li, John Thickstun, Ishaan Gulrajani, Percy Liang, and Tatsunori B. Hashimoto. 2022. Diffusion-LM improves controllable text generation.

Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. Smith, and Yejin Choi. 2021. DExperts: Decoding-time controlled text generation with experts and anti-experts. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6691-6706, Online. Association for Computational Linguistics.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.

Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Conference on Learning Representations.

Alex Nichol and Prafulla Dhariwal. 2021. Improved denoising diffusion probabilistic models.

Jekaterina Novikova, Ondřej Dušek, and Verena Rieser. 2017. The E2E dataset: New challenges for end-to-end generation. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 201-206, Saarbrücken, Germany. Association for Computational Linguistics.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer.

Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical text-conditional image generation with CLIP latents.

Machel Reid, Vincent J Hellendoorn, and Graham Neubig. 2022. DiffusER: Discrete diffusion via edit-based reconstruction. arXiv preprint arXiv:2210.16886.

Yi Ren, Jinglin Liu, Xu Tan, Zhou Zhao, Sheng Zhao, and Tie-Yan Liu. 2020. A study of non-autoregressive model for sequence generation.

Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High-resolution image synthesis with latent diffusion models. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10674-10685.

Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet, and Mohammad Norouzi. 2022. Photorealistic text-to-image diffusion models with deep language understanding.

Chitwan Saharia, William Chan, Saurabh Saxena, and Mohammad Norouzi. 2020. Non-autoregressive machine translation with latent alignments. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1098-1108, Online. Association for Computational Linguistics.

Nikolay Savinov, Junyoung Chung, Mikolaj Binkowski, Erich Elsen, and Aaron van den Oord. 2022. Step-unrolled denoising autoencoders for text generation. In International Conference on Learning Representations.

Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. 2015. Deep unsupervised learning using nonequilibrium thermodynamics. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 2256-2265, Lille, France. PMLR.

Jiaming Song, Chenlin Meng, and Stefano Ermon. 2021. Denoising diffusion implicit models. In International Conference on Learning Representations.

Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. ERNIE: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017a. Attention is all you need.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017b. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.

Kevin Yang and Dan Klein. 2021. FUDGE: Controlled text generation with future discriminators. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3511-3535, Online. Association for Computational Linguistics.

Ling Yang, Zhilong Zhang, Yang Song, Shenda Hong, Runsheng Xu, Yue Zhao, Yingxia Shao, Wentao Zhang, Bin Cui, and Ming-Hsuan Yang. 2022. Diffusion models: A comprehensive survey of methods and applications.

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, pages 5754-5764.

Jiasheng Ye, Zaixiang Zheng, Yu Bao, Lihua Qian, and Mingxuan Wang. 2023. DINOISER: Diffused conditional sequence learning by manipulating noises. arXiv preprint arXiv:2302.10025.

Lin Zheng, Jianbo Yuan, Lei Yu, and Lingpeng Kong. 2023. A reparameterized discrete diffusion model for text generation. arXiv preprint arXiv:2302.05737.
acheaperandbetterdiffusionlanguagemodelwithsoftmaskednoise/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f20e03bec724324e7a81690b6a9ccbfa212d015827284cfa7c7c619f682cbd5e
size 334981

acheaperandbetterdiffusionlanguagemodelwithsoftmaskednoise/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:96a3f3723e4996298d55ba257387bd0f92935380b122f8d025fca37d7a728145
size 387779

acomprehensiveevaluationofbiomedicalentitylinkingmodels/5940b58c-25de-42f4-a138-332d2d2b1178_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:97b23866096e9db728c76a037ffca2217261e4c8718e77579debae9b457715c5
size 116695

acomprehensiveevaluationofbiomedicalentitylinkingmodels/5940b58c-25de-42f4-a138-332d2d2b1178_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:867279558c8fe511cc9ce186433cb2b27b8ae4b8c4b3803b7005a7958f9af43c
size 134741

acomprehensiveevaluationofbiomedicalentitylinkingmodels/5940b58c-25de-42f4-a138-332d2d2b1178_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:88635ba03be3787ce896b28808332c154cc04c9e669225b922edb936853d1529
size 4388716

acomprehensiveevaluationofbiomedicalentitylinkingmodels/full.md
ADDED
@@ -0,0 +1,468 @@
# A Comprehensive Evaluation of Biomedical Entity Linking Models

David Kartchner<sup>1,2*</sup>, Jennifer Deng<sup>2</sup>, Shubham Lohiya<sup>2</sup>, Tejasri Kopparthi<sup>2</sup>, Prasanth Bathala, Daniel Domingo-Fernández<sup>†</sup>, Cassie S. Mitchell<sup>2†</sup>

<sup>1</sup>Enveda Biosciences
<sup>2</sup>Georgia Institute of Technology

david.kartchner@gatech.edu, dani@enedabio.com, cassie.mitchell@bme.gatech.edu

# Abstract
Biomedical entity linking (BioEL) is the process of connecting entities referenced in documents to entries in biomedical databases such as the Unified Medical Language System (UMLS) or Medical Subject Headings (MeSH). The study objective was to comprehensively evaluate nine recent state-of-the-art biomedical entity linking models under a unified framework. We compare these models along axes of (1) accuracy, (2) speed, (3) ease of use, (4) generalization, and (5) adaptability to new ontologies and datasets. We additionally quantify the impact of various preprocessing choices such as abbreviation detection. Systematic evaluation reveals several notable gaps in current methods. In particular, current methods struggle to correctly link genes and proteins and often have difficulty effectively incorporating context into linking decisions. To expedite future development and baseline testing, we release our unified evaluation framework and all included models on GitHub at https://github.com/davidkartchner/biomedical-entity-linking.
# 1 Introduction
Biomedical entity linking (BioEL) is the process of identifying biomedical concepts (e.g. diseases, chemicals, cell types, etc.) in text and connecting them to a unique identifier in a knowledge base (KB). Entity linking (EL) is critical in text mining, as it allows concepts to be connected across disparate literature. This "harmonization" enables quick access to connected information in the knowledge base and allows for unified reasoning regarding diverse surface forms and mentions.
While entity linking is a critical task for text mining, BioEL remains an unsolved problem with diverse challenges. First, biomedical literature has complex, specialized jargon that may differ between biomedical subspecialties. This leads to large, varied sets of synonyms that can be used to reference the same entity. For example, the entity ncbigene:37970 can be referred to by the aliases "ORC", "ORC4", "origin recognition complex subunit 4", "CG2917", "rDmORC", "dmOrc4", etc. Moreover, the entity referenced by a particular surface form is context-dependent and may require specialized domain expertise to disambiguate. For instance, within the Unified Medical Language System (UMLS), "AD" could refer to Alzheimer's Disease, Atopic Dermatitis, Actinomycin D, or Admitting Diagnosis.
Second, annotating a biomedical corpus is a time-consuming task that requires specialized domain expertise, and the experts able to label data are in short supply. Concretely, the largest labeled BioEL dataset, MedMentions (Mohan and Li, 2019), covers approximately $1\%$ of the candidate entities in its reference ontology while annotating $0.17\%$ of the abstracts in PubMed.
Third, though dozens of ontologies and terminologies have been curated in recent years, concepts are often not cross-referenced, leading to a lack of interoperability. Furthermore, even carefully unified collections such as UMLS lack synonyms and definitions for the vast majority of concepts.
Most biomedical concepts are not labeled in any gold-standard EL corpus. Thus, robust zero-shot performance is critical for performing EL effectively at scale. However, the same scarcity of expert-labeled data makes it

Figure 1: Overview of BioEL evaluation framework.
difficult to accurately assess the capacity of current models to generalize to unseen data.
While some BioEL surveys have been published (French and McInnes, 2022), they do not evaluate models in a consistent way or on a uniform collection of datasets. Rather than a traditional survey, we contend that a systematic evaluation of current BioEL models is needed to: 1) accurately compare current models; 2) identify strengths and weaknesses; 3) prioritize directions for future research; and 4) provide a framework to expedite future BioEL development. To address these needs, this paper contributes the following:
- We release a synthesized collection of current BioEL models, which can be uniformly evaluated on a large collection of biomedical datasets.
- We present a systematic framework to evaluate entity linking models along axes of scalability, adaptability, and zero-shot robustness (Section 5).
- We conduct, to our knowledge, the largest and most comprehensive comparative evaluation of BioEL models to date.
- We highlight strengths and pitfalls of current BioEL modeling techniques and suggest directions for future improvement (Section 7).
- We provide our unified framework as an open-source repository to expedite future BioEL method development and baseline testing.
# 2 Problem Definition
We assume that we are given a corpus $\mathcal{D} = \{d_i\}_{i=1}^N$ of text, where each $d_i$ is a document in the corpus (e.g. a clinical note, biomedical research abstract, etc.). Each document is annotated with mention spans $m_{ij} \in d_i$, where every mention span $m_{ij} = t_{ij}^{(1)}, \ldots, t_{ij}^{(\ell)}$ is a sequence of tokens corresponding to a single entity. Every mention is given with surrounding contextual information $c_{ij}^{-}$ and $c_{ij}^{+}$, which correspond to token spans before and after the entity mention $m_{ij}$. Define the collection of contextual mentions for a document as $M_i = \{c_{ij}^- m_{ij} c_{ij}^+\}_{j=1}^{n_i}$. Subsequently, we discuss mentions within the context of a single document and thus drop the document subscript $i$ from mention and context annotations.

<table><tr><td>Symbol</td><td>Definition</td></tr><tr><td>$\mathcal{D}$</td><td>Corpus of documents</td></tr><tr><td>$d_i$</td><td>Individual document in corpus</td></tr><tr><td>$m_{ij}$</td><td>An entity mention in document $i$</td></tr><tr><td>$c_{ij}^{-}$ ($c_{ij}^{+}$)</td><td>Left (right) context of entity mention $m_{ij}$</td></tr><tr><td>$\mathcal{M}$</td><td>Collection of all mentions in context</td></tr><tr><td>$\mathcal{E}$</td><td>Database of entities</td></tr><tr><td>$e_k$</td><td>Individual entity</td></tr></table>

Table 1: Notation used throughout the paper
We assume that a database of entities $\mathcal{E} = \{e_k\}_{k=1}^K$ is provided. Each entity is identified by a unique identifier and may also contain informational metadata such as entity type(s), definition, aliases, etc. Most entity linkers assume access to ground truth entity mention spans. However, these can be determined programmatically via a named entity recognition algorithm.
The task of entity linking is to learn a function $f: \mathcal{M} \to \mathcal{E}$ that maps each mention $m_j$ to the correct entity $e_j \in \mathcal{E}$ .
Most entity linkers use a two-stage approach to find the correct entity link for a given mention span.
The first stage is Candidate Generation (CG), which defines a function $f_{CG} : \mathcal{M} \to \mathcal{E}^n$ that filters $\mathcal{E}$ down to a set of $n$ high-quality candidate entities. Once a set of entity candidates has been generated, it is passed into a Named Entity Disambiguation (NED) module $f_{NED} : \mathcal{E}^n \times \mathcal{M} \to \mathcal{E}$, which chooses the best candidate for a final entity link. In practice, $f_{CG}$ is chosen to be a computationally inexpensive algorithm with high recall, while $f_{NED}$ is more costly and precise. The final entity linker is defined as $f = f_{NED} \circ f_{CG}$.
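As a concrete illustration, the two-stage decomposition $f = f_{NED} \circ f_{CG}$ can be sketched in a few lines of Python. The alias table and both stage functions below are hypothetical toy stand-ins, not any of the evaluated systems:

```python
from typing import Callable, List

def make_entity_linker(
    f_cg: Callable[[str], List[str]],        # cheap, high-recall candidate generation
    f_ned: Callable[[str, List[str]], str],  # costly, precise disambiguation
) -> Callable[[str], str]:
    """Compose the final linker f = f_NED o f_CG."""
    def link(mention: str) -> str:
        candidates = f_cg(mention)           # M -> E^n
        return f_ned(mention, candidates)    # E^n x M -> E
    return link

# Toy alias table mapping entity IDs to surface forms (IDs are illustrative).
ALIASES = {
    "MESH:D000544": {"alzheimer's disease", "ad"},
    "MESH:D003876": {"atopic dermatitis", "ad"},
}

def toy_cg(mention: str) -> List[str]:
    m = mention.lower()
    return sorted(e for e, al in ALIASES.items() if m in al)

def toy_ned(mention: str, candidates: List[str]) -> str:
    # A real NED module would score candidates against the mention's context;
    # here we simply take the first candidate, or NIL when none exist.
    return candidates[0] if candidates else "NIL"

link = make_entity_linker(toy_cg, toy_ned)
```

The composition keeps the expensive disambiguator off the full entity database: it only ever sees the $n$ candidates the cheap stage produces.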
# 3 Datasets
We evaluate included BioEL methods on a variety of biomedical datasets (Table 2), with detailed descriptions of each in the Appendix A. All datasets used were taken from BigBio (Fries et al., 2022). Additionally, Table 10 in the appendix describes the extent to which entities and mentions overlap between the training and testing data. Entity overlap is defined as the proportion of entities in the testing data that are in the training data. Mention overlap represents the proportion of mentions in the testing data whose entity is present in the training data (e.g. if an entity is mentioned more than once in the test set).
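Concretely, both overlap statistics can be computed from the gold entity IDs of the train and test mentions. The function below is an illustrative sketch under that representation, not the code used to produce Table 10:

```python
from typing import Dict, List

def overlap_stats(train_gold: List[str], test_gold: List[str]) -> Dict[str, float]:
    """train_gold/test_gold hold one gold entity ID per annotated mention.

    entity_overlap:  proportion of unique test entities also seen in training
    mention_overlap: proportion of test mentions whose entity was seen in training
    """
    train_ents = set(train_gold)
    test_ents = set(test_gold)
    return {
        "entity_overlap": len(test_ents & train_ents) / len(test_ents),
        "mention_overlap": sum(e in train_ents for e in test_gold) / len(test_gold),
    }
```

Mention overlap exceeds entity overlap whenever frequently mentioned entities are the ones shared with the training split.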
# 3.1 Data Preprocessing
In order to simplify data processing, we pulled all included datasets from BigBio (Fries et al., 2022), a recent effort to unify the format of biomedical text datasets for improved consistency and ease of use. Any bug and error fixes for included datasets were contributed directly to BigBio. We also downloaded the KBs to which each dataset is linked, namely UMLS (Bodenreider, 2004), MeSH (Lipscomb, 2000), Entrez Gene (Maglott et al., 2005), and the MEDIC dictionary (Davis et al., 2019), which contains disease entities from MeSH and OMIM (Hamosh et al., 2005). The KBs used for each dataset are listed in Table 2.
We removed any entity mentions whose Concept Unique Identifiers (CUIs) were no longer available in the corresponding ontology or remapped them to the updated CUIs when possible. We used Ab3P (Sohn et al., 2008) to identify and (optionally) resolve abbreviations at train/inference time.
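Ab3P itself is a standalone C++ tool; the sketch below is a deliberately crude stand-in that illustrates the resolve-at-inference-time step. It pairs a parenthesized short form with the immediately preceding words (one word per short-form character, an assumption of this sketch) and accepts the pair only when the short form is an in-order subsequence of that long form:

```python
import re
from typing import Dict

def build_abbrev_map(text: str) -> Dict[str, str]:
    """Detect 'long form (SF)' definitions with a simple heuristic."""
    abbrevs: Dict[str, str] = {}
    for match in re.finditer(r"\(([A-Za-z][A-Za-z0-9-]{1,9})\)", text):
        short = match.group(1)
        words = text[: match.start()].rstrip().split()
        long_form = " ".join(words[-len(short):])
        chars = iter(long_form.lower())
        if all(c in chars for c in short.lower()):  # in-order subsequence check
            abbrevs[short] = long_form
    return abbrevs

def resolve_abbrevs(text: str, abbrevs: Dict[str, str]) -> str:
    """Expand each detected short form back to its long form."""
    for short, long_form in abbrevs.items():
        text = re.sub(rf"\b{re.escape(short)}\b", long_form, text)
    return text
```

Real abbreviation detectors use stricter alignment rules and precision estimates; this sketch only conveys why resolution can change a linking decision (e.g. "AD" becoming "Alzheimer disease" before candidate generation).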
In Entrez Gene, we additionally dropped "tRNA" and "hypothetical protein" gene types that were not
used for entity linking. For methods able to process additional metadata (ArboEL, ClusterEL), we add species information for each gene in the entity description. For alias matching methods, we added the species name of each gene to its canonical name when the canonical name was not unique. We did not augment other aliases with species information.
# 3.2 Excluded Datasets
This evaluation focuses on entity linking in biomedical scientific research articles (BioEL). Therefore, this systematic evaluation excludes EL in non-scientific texts. Additionally, text extracted from electronic health records (EHR), such as notes or discharge summaries, is also excluded. EL for EHRs is distinct from BioEL in its scope, purpose, and accessibility. Previous EHR EL efforts for informal, patient-generated text include CADEC (Karimi et al., 2015), AskAPatient (Limsopatham and Collier, 2016), and PsyTAR (Zolnoori et al., 2019). These EHR EL platforms link diseases, symptoms, and adverse drug reaction mentions to a variety of relevant ontologies. Similarly, COMETA (Basaldella et al., 2020) links a diverse array of entities in Reddit posts to SNOMED-CT.
# 4 Models
A wide variety of methods have been used for BioEL. Here we describe families of models used for BioEL and list included models from each category. More detailed descriptions of each individual model are found in Appendix B. We summarize the different models evaluated in Table 3.
Models evaluated were those with near state-of-the-art performance at time of publication on at least one included BioEL dataset. From this pool, we excluded models with no open-source implementation or whose implementation was rendered unusable by a lack of documentation or software updates. With the exception of MetaMap, all models were published in the past five years.
# 4.1 Alias Matching EL
Alias-based entity linking seeks to link entities by matching an entity mention with a correct entity alias in a KB. The simplest form of this is exact string matching, which can be extended using any model that produces similarity scores between a mention and a set of candidate aliases. Evaluated alias matching methods include MetaMap (Aronson and Lang, 2010), SciSpacy (Neumann et al., 2019), BioSyn (Sung et al., 2020), and SapBERT (Liu et al., 2021). Note that BioSyn is included via SapBERT, since the latter is a higher-performing edition of BioSyn.

<table><tr><td>Dataset</td><td>Num Docs</td><td>Mentions</td><td>Unique Ents</td><td>Ent Types</td><td>Doc Type</td><td>Ontology</td></tr><tr><td>MedMentions Full</td><td>4,392</td><td>352,496</td><td>34,724</td><td>127</td><td>PubMed Abstracts</td><td>UMLS</td></tr><tr><td>MedMentions ST21PV</td><td>4,392</td><td>203,282</td><td>25,419</td><td>21</td><td>PubMed Abstracts</td><td>UMLS</td></tr><tr><td>BC5CDR</td><td>1,500</td><td>29,044</td><td>2,348</td><td>2</td><td>PubMed Abstracts</td><td>MeSH</td></tr><tr><td>GNormPlus</td><td>533</td><td>6,252</td><td>1,353</td><td>2</td><td>PubMed Abstracts</td><td>Entrez</td></tr><tr><td>NCBI Disease</td><td>792</td><td>6,881</td><td>789</td><td>4</td><td>PubMed Abstracts</td><td>MEDIC</td></tr><tr><td>NLM Chem</td><td>150</td><td>37,999</td><td>1,787</td><td>1</td><td>PMC Full-Text</td><td>MeSH</td></tr><tr><td>NLM Gene</td><td>550</td><td>15,553</td><td>3,348</td><td>5</td><td>PMC Full-Text</td><td>Entrez</td></tr></table>

Table 2: Summary of datasets used for evaluation.

<table><tr><td rowspan="2">Model</td><td colspan="2">Model Characteristics</td><td colspan="3">Data Requirements</td><td colspan="3">Reproducibility Code</td><td colspan="2">Usability</td></tr><tr><td>Supervised</td><td>Type</td><td>Names</td><td>Definitions</td><td>Aliases</td><td>Preprocessing</td><td>Model Source</td><td>Pretrained Model</td><td>Documentation</td><td>New Dataset</td></tr><tr><td>MedLinker</td><td>Yes</td><td>Contextualized</td><td>Yes</td><td>Yes</td><td>Yes</td><td>No</td><td>Yes</td><td>No</td><td>Fair</td><td>No</td></tr><tr><td>SciSpacy</td><td>Yes</td><td>Alias Match</td><td>Yes</td><td>Optional</td><td>Yes</td><td>N/A</td><td>Yes</td><td>Yes</td><td>Excellent</td><td>Yes</td></tr><tr><td>ClusterEL</td><td>Yes</td><td>Contextualized</td><td>Yes</td><td>Optional</td><td>Optional</td><td>Yes</td><td>Yes</td><td>No</td><td>Good</td><td>No</td></tr><tr><td>ArboEL</td><td>Yes</td><td>Contextualized</td><td>Yes</td><td>Optional</td><td>Optional</td><td>Yes</td><td>Yes</td><td>No</td><td>Good</td><td>No</td></tr><tr><td>KRISSBERT</td><td>Distant</td><td>Contextualized</td><td>Yes</td><td>Optional</td><td>Optional</td><td>No</td><td>Partial</td><td>Yes</td><td>Good</td><td>No</td></tr><tr><td>BioSyn</td><td>Distant</td><td>Alias Match</td><td>Yes</td><td>No</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Good</td><td>No</td></tr><tr><td>SapBERT</td><td>Distant</td><td>Alias Match</td><td>Yes</td><td>No</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Yes</td><td>Good</td><td>Partial</td></tr><tr><td>BioBART</td><td>Yes</td><td>Autoregressive</td><td>Yes</td><td>No</td><td>Yes</td><td>No</td><td>Yes</td><td>Yes</td><td>Poor</td><td>No</td></tr><tr><td>BioGenEL</td><td>Yes</td><td>Autoregressive</td><td>Yes</td><td>No</td><td>Yes</td><td>No</td><td>Yes</td><td>No</td><td>Fair</td><td>No</td></tr></table>

Table 3: Comparison of model characteristics, reproducibility, and usability.
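A minimal alias-matching baseline in this spirit (illustrative only, and far simpler than SapBERT's learned embeddings) backs off from normalized exact match to character-trigram Jaccard similarity; the alias table entries are hypothetical:

```python
from typing import Dict, Set

def char_trigrams(s: str) -> Set[str]:
    s = " " + s.lower().strip() + " "          # pad so word edges count
    return {s[i:i + 3] for i in range(len(s) - 2)}

def best_alias_match(mention: str, alias_to_entity: Dict[str, str]) -> str:
    """Exact match on the normalized surface form when possible,
    otherwise the alias with the highest trigram Jaccard similarity."""
    key = mention.lower().strip()
    if key in alias_to_entity:
        return alias_to_entity[key]
    grams = char_trigrams(mention)
    def jaccard(alias: str) -> float:
        g = char_trigrams(alias)
        return len(grams & g) / len(grams | g)
    return alias_to_entity[max(alias_to_entity, key=jaccard)]

# Hypothetical alias table entries for illustration.
ALIAS_TABLE = {
    "orc4": "ncbigene:37970",
    "origin recognition complex subunit 4": "ncbigene:37970",
    "atopic dermatitis": "MESH:D003876",
}
```

Because similarity is computed over surface forms alone, this kind of matcher has no way to use context, which is one reason ambiguous abbreviations like "AD" are hard for pure alias matching.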
# 4.2 Contextualized EL
Much of the work in transformer-based EL has built upon seminal works in zero-shot EL using semantic similarity between contextualized mentions and entity descriptions (Logeswaran et al., 2019; Wu et al., 2020). These methods use entity description metadata to generate and disambiguate entity candidates without the use of alias tables or large-scale supervised mentions, making it easier to generalize EL beyond the scope of training data. Wu et al. (2020) in particular uses a pretrained BERT bi-encoder (Devlin et al., 2019) model to generate candidates by encoding similarity between mentions and descriptions. It then uses a more expensive cross-encoder model to disambiguate candidates for the final entity link. Our evaluation includes MedLinker (Loureiro and Jorge, 2020), ClusterEL (Angell et al., 2021), ArboEL (Agarwal et al., 2022), and KRISSBERT (Zhang et al., 2021). We also note that Bootleg (Varma et al., 2021; Orr et al., 2021) has been used for biomedical entity linking but do not include it due to lack of code for configuring/running their published BioEL models.
# 4.3 Autoregressive EL
First proposed by Cao et al. (2021), autoregressive EL uses a generative language model to map the text of each mention to its canonical entity name, rather than identifying the index of the correct database entity. This approach can potentially better accommodate additions to a database, because an existing model can normalize to new entity names without needing to re-train a final output layer. Autoregressive EL can also perform alias matching by training on an alias table, potentially reducing the need for hand-labeled training data. Our survey includes BioGenEL (Yuan et al., 2022b) and BioBART (Yuan et al., 2022a).
# 5 Evaluation Strategy
As noted in Zhang et al. (2021), evaluation strategies between different entity linking papers are inconsistent, leading to wide disparities in reported results. Differences primarily revolve around how to score predictions where multiple normalizations are given for a named entity, e.g. because all predicted entities share the same alias. We identified three main strategies for this in the literature.
1. Basic resolves ties by randomly ordering all equally ranked entities.
2. Relaxed counts an entity link as correct if any of the predicted normalizations match any of the ground-truth normalizations for a given entity.
3. Strict counts a normalization as correct only if all predicted normalizations match ground-truth normalizations for a given entity. This is the same as basic when there are no equally ranked normalizations.
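For a set-valued prediction (the equally ranked top normalizations), the three strategies differ only in how that set is compared to the gold normalizations. A sketch, with the strict rule interpreted as "every predicted ID must be a gold ID" (an assumption of this sketch):

```python
import random
from typing import Set

def score_mention(predicted: Set[str], gold: Set[str],
                  strategy: str = "basic", seed: int = 0) -> bool:
    """Score one mention whose model output `predicted` is a set of
    equally ranked top normalizations."""
    if strategy == "relaxed":   # any overlap with the gold set counts
        return bool(predicted & gold)
    if strategy == "strict":    # every predicted normalization must be gold
        return predicted <= gold
    # basic: break the tie by picking one equally ranked prediction at random
    return random.Random(seed).choice(sorted(predicted)) in gold
```

With a singleton prediction set, all three strategies coincide, which is why the strategies only diverge on tied predictions.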
For each dataset, we generate ranked entity candidates from each model in Sec. 4. For models that only natively link to UMLS, links to other KBs are computed by predicting entities in UMLS (Bodenreider, 2004) and mapping these predictions to other KBs using cross references provided by UMLS and the OBO Foundry (Smith et al., 2007). Predictions are ranked and evaluated using recall@k for $k \in \{1, 2, \dots, 10\}$ (note that recall@1 is equivalent to accuracy). We perform our main evaluations using the basic evaluation strategy unless otherwise specified.
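The recall@k computation is straightforward given each model's ranked candidate lists; a sketch (illustrative, not the evaluation code itself):

```python
from typing import Dict, List, Sequence

def recall_at_k(ranked: List[Sequence[str]], gold: List[str],
                ks: Sequence[int] = range(1, 11)) -> Dict[int, float]:
    """Fraction of mentions whose gold entity appears among the top-k
    ranked candidates; recall@1 is ordinary accuracy."""
    n = len(gold)
    return {k: sum(g in cands[:k] for cands, g in zip(ranked, gold)) / n
            for k in ks}
```

The gap between recall@1 and recall@10 for a model indicates how often the candidate generator recovers the gold entity but the ranker places it below the top position.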
# 5.1 Error Analysis
For errors in the dataset, we analyze the following:
Stage of EL failure: For incorrectly linked mentions, did the failure occur in the CG or the NED phase? For failures that occur in the candidate generation phase, what proportion of generated candidates have the correct semantic type/semantic group?
Failure subgroups: When a model fails, can we identify slices with high/low chances of failure? Inspired by Orr et al. (2021) and Chen et al. (2021), we investigate possible failure modes including:
- Entity type. Are entities of particular types frequently linked incorrectly? Are generated candidates in the correct semantic type/group?
- Popularity. How often are incorrectly linked entities present in the training data?
- Available metadata. Do incorrectly linked surface forms match aliases in the KB? Are KB entities with few aliases and/or no definition more likely to be incorrectly linked?
Common Misunderstandings: In some cases, every model in our comparison produces an incorrect entity link. We manually examined these cases and describe common mistakes made by current BioEL models.
# 6 Results
Our main result in Table 4 shows the recall@1 (accuracy) and recall@5 of each model across all of the datasets. This estimates how well models perform both on candidate ranking and overall candidate generation. Here ArboEL outperforms most models across the majority of datasets. An additional visualization of how recall@k changes for $k = 1, \ldots, 10$ is shown in Figure 2.
# 6.1 Performance on specific entity types
While most of the datasets evaluated contain only 1-2 entity types, MedMentions contains 127 distinct entity types split into 10 semantic groups. Similarly, both NLM-Gene and GNormPlus link gene mentions from many different species. We compared whether models perform better on specific semantic groups (MedMentions) or on genes from specific species (NLM-Gene). The results are shown in Tables 5 and 12 (Appendix) respectively.
# 6.2 Performance on entities with limited metadata
We analyzed the models' performance on different data slices, as described in Section 5.1. Annotated entities are biased towards commonly seen entities, so slice-based analysis enables more robust extrapolation of zero-shot performance and of performance on entities with limited metadata (e.g. aliases, definitions, etc.). Results for MedMentions ST21PV are shown in Table 6.
# 7 Discussion
Of the models evaluated, there was no model that clearly performed "best" for all datasets or evaluation metrics. However, ArboEL showed consistently high performance and was always among the highest-performing models on each dataset. SapBERT was arguably the best-performing alias matching method, sometimes surpassing ArboEL in recall@5 for various datasets.
One noteworthy result is the relatively poor performance of all models in Table 4 on gene recognition. For alias matching models, we see significantly larger increases in recall@k as k increases on NLM-Gene and GNormPlus than we do for any other dataset. We hypothesize this is due to gene aliases being poorly differentiated between species. This is supported by the steeply increasing recall@k performance of autoregressive and alias-matching

|
| 166 |
+
|
| 167 |
+

|
| 168 |
+
|
| 169 |
+

|
| 170 |
+
|
| 171 |
+

|
| 172 |
+
|
| 173 |
+

|
| 174 |
+
|
| 175 |
+

|
| 176 |
+
|
| 177 |
+

|
| 178 |
+
Figure 2: Recall@K for all models using basic evaluation.
<table><tr><td></td><td colspan="2">BC5CDR</td><td colspan="2">MM-Full</td><td colspan="2">MM-ST21PV</td><td colspan="2">GNormPlus</td><td colspan="2">NLM-Chem</td><td colspan="2">NLM-Gene</td><td colspan="2">NCBI-Disease</td></tr><tr><td></td><td>1</td><td>5</td><td>1</td><td>5</td><td>1</td><td>5</td><td>1</td><td>5</td><td>1</td><td>5</td><td>1</td><td>5</td><td>1</td><td>5</td></tr><tr><td>SapBERT</td><td>0.883</td><td>0.934</td><td>0.611</td><td>0.786</td><td>0.637</td><td>0.788</td><td>0.234</td><td>0.614</td><td>0.812</td><td>0.889</td><td>0.075</td><td>0.348</td><td>0.753</td><td>0.896</td></tr><tr><td>MetaMap</td><td>0.828</td><td>0.856</td><td>0.588</td><td>0.731</td><td>0.568</td><td>0.699</td><td>0.624</td><td>0.633</td><td>0.680</td><td>0.707</td><td>0.261</td><td>0.263</td><td>0.669</td><td>0.712</td></tr><tr><td>KRISSBERT</td><td>0.735</td><td>0.766</td><td>0.591</td><td>0.755</td><td>0.559</td><td>0.701</td><td>0.079</td><td>0.087</td><td>0.560</td><td>0.596</td><td>0.279</td><td>0.482</td><td>0.752</td><td>0.803</td></tr><tr><td>SciSpacy</td><td>0.780</td><td>0.830</td><td>0.582</td><td>0.759</td><td>0.572</td><td>0.741</td><td>0.471</td><td>0.772</td><td>0.467</td><td>0.503</td><td>0.163</td><td>0.349</td><td>0.680</td><td>0.780</td></tr><tr><td>MedLinker</td><td>0.720</td><td>0.767</td><td>0.568</td><td>0.662</td><td>0.521</td><td>0.627</td><td>0.178</td><td>0.469</td><td>0.514</td><td>0.542</td><td>0.084</td><td>0.255</td><td>0.545</td><td>0.768</td></tr><tr><td>ClusterEL</td><td>0.876</td><td>0.938</td><td>0.696</td><td>0.851</td><td>0.692</td><td>0.849</td><td>0.302</td><td>0.448</td><td>0.758</td><td>0.868</td><td>0.490</td><td>0.676</td><td>0.748</td><td>0.801</td></tr><tr><td>ArboEL</td><td>0.921</td><td>0.958</td><td>NR</td><td>NR</td><td>0.747</td><td>0.890</td><td>0.441</td><td>0.524</td><td>0.828</td><td>0.882</td><td>0.543</td><td>0.734</td><td>0.774</td><td>0.832</td></tr><tr><td>BioBART</td><td>0.572</td><td>0.733</td><td>0.548</td><td>0.764</td><td>0.496</td><td>0.700</td><td>0.175</td><td>0.499</td><td>0.512</td><td>0.650</td><td>0.051</td><td>0.229</td><td>0.423</td><td>0.608</td></tr><tr><td>BioGenEL</td><td>0.909</td><td>0.953</td><td>0.567</td><td>0.763</td><td>0.520</td><td>0.691</td><td>0.081</td><td>0.281</td><td>0.786</td><td>0.879</td><td>0.043</td><td>0.233</td><td>0.518</td><td>0.692</td></tr></table>
Table 4: Recall@1 (accuracy) and recall@5 of all models. NR = not reproducible.
<table><tr><td>Semantic Group</td><td>SapBERT</td><td>MetaMap</td><td>KRISSBERT</td><td>SciSpacy</td><td>ClusterEL</td><td>ArboEL</td><td>BioBART</td><td>BioGenEL</td><td>Prevalence</td></tr><tr><td>Disorders</td><td>0.083‡</td><td>0.065‡</td><td>0.026‡</td><td>0.071‡</td><td>0.038‡</td><td>0.033‡</td><td>0.051‡</td><td>0.073‡</td><td>0.202</td></tr><tr><td>Chemicals & Drugs</td><td>-0.027‡</td><td>-0.011</td><td>-0.103‡</td><td>0.007</td><td>-0.045‡</td><td>-0.034‡</td><td>-0.101‡</td><td>0.000</td><td>0.185</td></tr><tr><td>Procedures</td><td>-0.097‡</td><td>-0.133‡</td><td>0.018*</td><td>-0.127‡</td><td>-0.019†</td><td>-0.009</td><td>-0.039‡</td><td>-0.076‡</td><td>0.165</td></tr><tr><td>Living Beings</td><td>0.063‡</td><td>0.031‡</td><td>0.045‡</td><td>0.043‡</td><td>0.043‡</td><td>0.047‡</td><td>0.100‡</td><td>0.053‡</td><td>0.099</td></tr><tr><td>Physiology</td><td>-0.004</td><td>-0.060‡</td><td>0.046‡</td><td>-0.001</td><td>0.040‡</td><td>0.016</td><td>0.068‡</td><td>0.024*</td><td>0.095</td></tr><tr><td>Concepts & Ideas</td><td>-0.011</td><td>0.049‡</td><td>0.060‡</td><td>-0.019</td><td>-0.014</td><td>-0.029‡</td><td>0.038‡</td><td>-0.018</td><td>0.092</td></tr><tr><td>Anatomy</td><td>0.058‡</td><td>0.125‡</td><td>0.047‡</td><td>0.073‡</td><td>0.035‡</td><td>0.031‡</td><td>0.014</td><td>0.059‡</td><td>0.082</td></tr><tr><td>Genes & Molecular Sequences</td><td>-0.144‡</td><td>-0.098‡</td><td>-0.192‡</td><td>-0.14‡</td><td>-0.152‡</td><td>-0.129‡</td><td>-0.153‡</td><td>-0.249‡</td><td>0.028</td></tr><tr><td>Other</td><td>-0.030*</td><td>0.027</td><td>-0.039‡</td><td>0.008</td><td>-0.039‡</td><td>-0.032†</td><td>-0.040†</td><td>-0.112‡</td><td>0.055</td></tr></table>
Table 5: Performance on different semantic groups within MedMentions. Values represent absolute difference in slice accuracy vs. overall accuracy for each model. *p<0.05; †p<0.01; ‡p<0.001 after Bonferroni correction.
<table><tr><td>Slice</td><td>SapBERT</td><td>MetaMap</td><td>KRISSBERT</td><td>SciSpacy</td><td>ClusterEL</td><td>ArboEL</td><td>BioBART</td><td>BioGenEL</td><td>Prevalence</td></tr><tr><td>is_abbrev</td><td>0.037‡</td><td>0.080‡</td><td>-0.062‡</td><td>0.076‡</td><td>-0.023*</td><td>0.003</td><td>-0.038‡</td><td>0.023*</td><td>0.091</td></tr><tr><td>has_alias_match</td><td>0.280‡</td><td>0.289‡</td><td>0.114‡</td><td>0.298‡</td><td>0.205‡</td><td>0.194‡</td><td>0.064‡</td><td>0.161‡</td><td>0.157</td></tr><tr><td>no_alias_match</td><td>-0.052‡</td><td>-0.054‡</td><td>-0.021‡</td><td>-0.055‡</td><td>-0.038‡</td><td>-0.036‡</td><td>-0.012‡</td><td>-0.030‡</td><td>0.843</td></tr><tr><td>wrong_alias_match</td><td>-0.259‡</td><td>-0.213‡</td><td>-0.129‡</td><td>-0.175‡</td><td>-0.156‡</td><td>-0.150‡</td><td>-0.156‡</td><td>-0.213‡</td><td>0.081</td></tr><tr><td>train_text_match</td><td>0.094‡</td><td>0.082‡</td><td>0.230‡</td><td>0.077‡</td><td>0.124‡</td><td>0.099‡</td><td>0.094‡</td><td>0.077‡</td><td>0.556</td></tr><tr><td>train_entity_match</td><td>0.015‡</td><td>0.023‡</td><td>0.163‡</td><td>0.011‡</td><td>0.058‡</td><td>0.046‡</td><td>0.037‡</td><td>0.017‡</td><td>0.774</td></tr><tr><td>single_alias</td><td>-0.075‡</td><td>-0.117‡</td><td>-0.041‡</td><td>-0.148‡</td><td>0.005</td><td>-0.031‡</td><td>-0.116‡</td><td>-0.133‡</td><td>0.096</td></tr><tr><td>five_alias_or_less</td><td>-0.074‡</td><td>-0.085‡</td><td>-0.055‡</td><td>-0.085‡</td><td>-0.04‡</td><td>-0.051‡</td><td>-0.056‡</td><td>-0.079‡</td><td>0.448</td></tr><tr><td>no_definition</td><td>-0.101‡</td><td>-0.157‡</td><td>-0.262‡</td><td>-0.126‡</td><td>-0.158‡</td><td>-0.144‡</td><td>-0.152‡</td><td>-0.113‡</td><td>0.196</td></tr><tr><td>zero_shot</td><td>-0.051‡</td><td>-0.08‡</td><td>-0.559‡</td><td>-0.038‡</td><td>-0.200‡</td><td>-0.157‡</td><td>-0.128‡</td><td>-0.059‡</td><td>0.226</td></tr></table>
Table 6: Performance differential of models on various slices of data, micro-averaged over all datasets. Values represent absolute difference in slice accuracy vs. overall accuracy for each model. *p<0.05; †p<0.01; ‡p<0.001 after Bonferroni correction.
<table><tr><td rowspan="2">Model</td><td colspan="2">BC5CDR</td><td colspan="2">MM-Full</td><td colspan="2">MM-ST21PV</td><td colspan="2">GNormPlus</td><td colspan="2">NLM-Chem</td><td colspan="2">NLM-Gene</td><td colspan="2">NCBI-Disease</td></tr><tr><td>CG</td><td>NED</td><td>CG</td><td>NED</td><td>CG</td><td>NED</td><td>CG</td><td>NED</td><td>CG</td><td>NED</td><td>CG</td><td>NED</td><td>CG</td><td>NED</td></tr><tr><td>SapBERT</td><td>0.552</td><td>0.448</td><td>0.462</td><td>0.538</td><td>0.546</td><td>0.454</td><td>0.058</td><td>0.942</td><td>0.511</td><td>0.489</td><td>0.141</td><td>0.853</td><td>0.257</td><td>0.743</td></tr><tr><td>MetaMap</td><td>0.836</td><td>0.164</td><td>0.640</td><td>0.360</td><td>0.682</td><td>0.318</td><td>0.976</td><td>0.024</td><td>0.914</td><td>0.086</td><td>0.996</td><td>0.004</td><td>0.868</td><td>0.132</td></tr><tr><td>KRISSBERT</td><td>0.860</td><td>0.140</td><td>0.541</td><td>0.459</td><td>0.628</td><td>0.372</td><td>0.991</td><td>0.009</td><td>0.894</td><td>0.106</td><td>0.668</td><td>0.332</td><td>0.744</td><td>0.256</td></tr><tr><td>SciSpacy</td><td>0.613</td><td>0.383</td><td>0.430</td><td>0.566</td><td>0.441</td><td>0.555</td><td>0.331</td><td>0.669</td><td>0.819</td><td>0.181</td><td>0.729</td><td>0.267</td><td>0.590</td><td>0.407</td></tr><tr><td>MedLinker</td><td>0.783</td><td>0.217</td><td>0.689</td><td>0.311</td><td>0.689</td><td>0.311</td><td>0.323</td><td>0.677</td><td>0.919</td><td>0.081</td><td>0.499</td><td>0.501</td><td>0.410</td><td>0.590</td></tr><tr><td>ClusterEL</td><td>0.310</td><td>0.688</td><td>0.297</td><td>0.698</td><td>0.292</td><td>0.703</td><td>0.669</td><td>0.324</td><td>0.399</td><td>0.599</td><td>0.475</td><td>0.519</td><td>0.620</td><td>0.380</td></tr><tr><td>ArboEL</td><td>0.403</td><td>0.597</td><td>NR</td><td>NR</td><td>0.275</td><td>0.722</td><td>0.780</td><td>0.219</td><td>0.536</td><td>0.464</td><td>0.477</td><td>0.521</td><td>0.677</td><td>0.323</td></tr><tr><td>BioBART</td><td>0.291</td><td>0.709</td><td>0.306</td><td>0.691</td><td>0.325</td><td>0.672</td><td>0.202</td><td>0.795</td><td>0.320</td><td>0.680</td><td>0.375</td><td>0.619</td><td>0.242</td><td>0.747</td></tr><tr><td>BioGenEL</td><td>0.308</td><td>0.692</td><td>0.353</td><td>0.644</td><td>0.417</td><td>0.582</td><td>0.510</td><td>0.481</td><td>0.324</td><td>0.676</td><td>0.358</td><td>0.639</td><td>0.449</td><td>0.544</td></tr></table>
Table 7: Stage of model (CG or NED) at which entity linking failed. Values represent the proportion of errors that occurred in each stage. NR=Not reproducible
models, which cannot differentiate between multiple entities containing the same alias. Comparison to the recall@k curves under a relaxed evaluation (Figure 11, Appendix) reveals that these models are excellent at finding the correct alias but lack the capacity to choose the correct entity from among them.
For datasets focusing on chemicals and diseases (BC5CDR, NCBI-Disease, NLM-Chem), the recall@k curves flatten out quickly between k=1 and k=10; this indicates that when the correct candidate is retrieved, it is generally ranked highly.
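
As a concrete reference for how these recall@k numbers are computed, here is a minimal sketch (the function name and toy entity IDs are illustrative, not the paper's evaluation code):

```python
def recall_at_k(ranked_candidates, gold_ids, k):
    """Fraction of mentions whose gold entity appears among the top-k
    ranked candidates (recall@1 equals top-1 accuracy)."""
    hits = sum(gold in cands[:k]
               for cands, gold in zip(ranked_candidates, gold_ids))
    return hits / len(gold_ids)

# Toy example: three mentions with ranked candidate ID lists.
preds = [["D001", "D007", "D003"],   # gold ranked first
         ["D042", "D009", "D010"],   # gold ranked second
         ["D111", "D112", "D113"]]   # gold never retrieved
gold = ["D001", "D009", "D999"]

r1 = recall_at_k(preds, gold, 1)  # 1/3: only the first mention is a top-1 hit
r5 = recall_at_k(preds, gold, 5)  # 2/3: widening k rescues the second mention
```

A flat curve between k=1 and k=10 means the second case (gold retrieved but misranked) is rare.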
# 7.1 Failure Stage
Most entity linking models consist of two stages, CG and NED, so it is useful to see at which stage each model fails. If the CG stage does not produce a candidate set containing the correct entity, the NED stage can never choose it. Table 7 shows how errors are split between candidate generation and reranking for each model.
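
The attribution in Table 7 reduces to a simple rule: an error is a CG failure when the gold entity never entered the candidate set, and an NED failure otherwise. A sketch (function and ID names are ours, not from any of the evaluated systems):

```python
def failure_stage(gold_id, cg_candidates, ned_prediction):
    """Attribute an entity linking error to candidate generation (CG)
    or disambiguation (NED); returns None for a correct prediction."""
    if ned_prediction == gold_id:
        return None
    # If the gold entity never made it into the candidate set,
    # NED had no chance to select it: the failure belongs to CG.
    return "NED" if gold_id in cg_candidates else "CG"

stages = [failure_stage(g, cands, pred) for g, cands, pred in [
    ("D01", ["D01", "D02"], "D01"),  # correct -> None
    ("D01", ["D01", "D02"], "D02"),  # retrieved but misranked -> NED
    ("D01", ["D03", "D04"], "D03"),  # never retrieved -> CG
]]
```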
Failure stage varies widely by dataset and model. MetaMap and KRISSBERT tend to struggle most with candidate generation, while BioBART and BioGenEL make most of their errors in entity disambiguation. Other models tend to have more evenly distributed errors, with failure stage being highly dataset dependent. Overall, these results indicate that substantial gains can be made in EL through work on both CG and NED.
# 7.2 Impact of Abbreviation Resolution
Abbreviation resolution (AR) is commonly used to improve the performance of EL models. We investigated to what extent this holds by running each model with and without AR. The results, shown in Table 8, indicate that AR has a positive, statistically significant effect on EL performance overall: AR improved performance by up to $69.5\%$ on abbreviated entities in some datasets. However, this was not the case for gene normalization, where AR showed a negative or insignificant effect. We hypothesize this is because genes are more commonly referred to by their abbreviations than by their longer full names, which limits the usefulness of AR.
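
For intuition, AR systems such as Ab3P (Sohn et al., 2008) pair a short form with its defining long form and expand later occurrences. The sketch below uses a much cruder initial-letter heuristic than Ab3P's precision-estimate approach and is illustrative only:

```python
import re

def expand_abbreviations(text):
    """Minimal 'long form (SHORT)' expander, Schwartz-Hearst style:
    walk backwards from a parenthesized short form and keep the shortest
    word suffix whose initials spell it, then expand later occurrences."""
    defs = {}
    for m in re.finditer(r"\(([A-Za-z]{2,6})\)", text):
        short = m.group(1)
        words = text[:m.start()].split()
        for n in range(1, min(len(short) + 2, len(words) + 1)):
            cand = words[-n:]
            if "".join(w[0] for w in cand).upper() == short.upper():
                defs[short] = " ".join(cand)
                break
    # Replace standalone later occurrences, skipping the defining "(SF)".
    for short, long_form in defs.items():
        text = re.sub(rf"\b{re.escape(short)}\b(?!\))", long_form, text)
    return text

demo = expand_abbreviations(
    "Patients with heart failure (HF) were enrolled. HF prevalence rose."
)
# -> "Patients with heart failure (HF) were enrolled. heart failure prevalence rose."
```

An EL model then links the expanded mention "heart failure" instead of the far more ambiguous string "HF".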
# 7.3 Robustness on Slices + Zero Shot
In addition to AR, we evaluated how models performed on different subsets of the data. Some common entity characteristics, along with their performance, are shown in Table 6. A plot of performance on low-data slices (no/wrong alias match in training data; few aliases in the KB; zero-shot instances) for MedMentions is shown in Figure 3. Unsurprisingly, we see that the models performed significantly better on entities that match an alias in the target ontology, appear in the training set, or have definitions. The models performed worse when the mention matches the alias of a different entity, when the ground-truth entity has no definition, and when only a few aliases are present for an entity in the ontology. We also see that performance degrades in zero-shot settings, but this degradation appears smallest for alias matching models. Overall zero-shot performance is highest for ArboEL, followed by SapBERT.

Figure 3: Performance on zero-shot, few alias, and unmatched/mismatched test set instances, evaluated on MedMentions ST21PV.
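
The slice differentials reported in Tables 5 and 6 (slice accuracy minus overall accuracy) can be computed as in this sketch; the boolean masks and slice names are toy illustrations:

```python
def slice_differentials(correct, slices):
    """Per-slice accuracy minus overall accuracy.

    correct: list of booleans, one per mention (linked correctly?).
    slices:  dict mapping slice name -> boolean mask over the mentions.
    """
    overall = sum(correct) / len(correct)
    diffs = {}
    for name, mask in slices.items():
        in_slice = [c for c, m in zip(correct, mask) if m]
        diffs[name] = sum(in_slice) / len(in_slice) - overall
    return diffs

correct = [True, True, False, True, False, False]  # overall accuracy 0.5
slices = {
    "is_abbrev": [True, True, False, False, False, False],  # slice acc 1.0
    "zero_shot": [False, False, True, False, True, True],   # slice acc 0.0
}
diffs = slice_differentials(correct, slices)  # +0.5 and -0.5
```

A positive differential means the model is stronger on that slice than on the dataset as a whole; a negative one flags a weakness, as in the zero-shot rows of Table 6.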
Taken as a whole, these results indicate that "in the wild" entity linking performance will suffer for entities outside of the training distribution, but these effects can be mitigated by model choice.
# 7.4 Scalability
Scalability is critical for deploying models in practice. To measure the scalability of the models, we compared training time (Figure 4) and evaluation time (Figure 5, Appendix) on MedMentions. When a model came pretrained, we included its loading and/or dictionary embedding time as part of its training time. We generally found that simpler alias matching models tended to be faster than autoregressive and contextualized models.
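
Wall-clock comparisons like these can be collected with a timing wrapper of the following kind (a sketch; the paper does not specify its measurement code):

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed wall-clock seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# A model's train()/predict() calls would be wrapped the same way;
# here a cheap stand-in computation demonstrates the interface.
total, seconds = timed(sum, range(1_000_000))
```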
# 7.5 Usability, Adaptability, Reproducibility
We compared the usability and reproducibility of models in Table 3. At the time of our evaluation, most available research models for EL lacked some or all important elements of reproducibility. For example, a surprising number of models lacked instructions on how to test their method on a different dataset, and many had poor or outdated usage documentation. Some were missing critical details needed to reproduce reported experiments or simply to run the baseline model. SciSpacy had the best documentation and usage instructions; MedLinker, BioGenEL, and ArboEL were the most difficult to adapt and reproduce.

Figure 4: Comparison of training time (s) vs. top-1 entity linking accuracy for BioEL models. All experiments were performed on a single NVIDIA A40 GPU.
# 8 Future Work and Conclusion
# 8.1 Future Directions
Large language models (LLMs), such as GPT 3.5 (Ouyang et al., 2022), PaLM (Chowdhery et al., 2022), and BLOOM (Scao et al., 2022), have shown powerful few-shot and zero-shot performance on a variety of tasks. However, these models are known to hallucinate and produce factually incorrect information. To our knowledge, little work has been done to analyze how well they can correctly link entities, especially biomedical entities that may not be well represented in their training distributions. An evaluation of LLM-based EL thus stands to improve both the performance of BioEL models and the quality and factual accuracy of LLM-generated text.
# 8.2 Conclusion
Entity linking is an essential task for knowledge-intensive natural language processing and is particularly important in scientific and biomedical domains. This paper presents a systematic evaluation of BioEL models along the axes of performance, scalability, usability, and robustness, enabling more principled, rigorous development and evaluation of future EL work.

<table><tr><td>Dataset</td><td>SapBERT</td><td>MetaMap</td><td>KRISSBERT</td><td>SciSpacy</td><td>ClusterEL</td><td>ArboEL</td></tr><tr><td>BC5CDR</td><td>0.598‡</td><td>0.588‡</td><td>0.136‡</td><td>0.695‡</td><td>0.329‡</td><td>0.263‡</td></tr><tr><td>MM-Full</td><td>0.426‡</td><td>0.472‡</td><td>0.142‡</td><td>0.408‡</td><td>0.181‡</td><td>N/A</td></tr><tr><td>MM-ST21PV</td><td>0.398‡</td><td>0.454‡</td><td>0.131‡</td><td>0.403‡</td><td>0.187‡</td><td>0.198‡</td></tr><tr><td>GNormPlus</td><td>0.039</td><td>0.004</td><td>0.019</td><td>-0.169‡</td><td>-0.039</td><td>0.004</td></tr><tr><td>NLM-Chem</td><td>0.644‡</td><td>0.602‡</td><td>0.058‡</td><td>0.548‡</td><td>0.33‡</td><td>0.375‡</td></tr><tr><td>NLM-Gene</td><td>0.058</td><td>0.018</td><td>-0.003</td><td>0.003</td><td>-0.063</td><td>-0.087</td></tr><tr><td>NCBI-Dis</td><td>0.139†</td><td>0.468‡</td><td>0.035</td><td>0.381‡</td><td>0.221‡</td><td>0.091</td></tr><tr><td>Overall</td><td>0.447‡</td><td>0.464‡</td><td>0.095‡</td><td>0.426‡</td><td>0.22‡</td><td>0.227‡</td></tr></table>

Table 8: Absolute difference in accuracy for abbreviated entities after abbreviation resolution. *p<0.05; †p<0.01; ‡p<0.001 after Bonferroni correction.
# Limitations
One limitation of our paper is a lack of extensive hyperparameter tuning due to computing constraints. While we did perform early stopping on multiple methods to find the optimal amount of model training, we did not perform an exhaustive hyperparameter search for the models listed. For most models, we followed the parameter choices listed by the authors in their respective papers.
In addition to the general, multi-purpose BioEL models included in this work, there are other models designed to address specific entity types (e.g. genes, chemicals). Such models may be better able to deal with nuances of certain data types, such as species selection for gene/protein BioEL datasets. While these models could offer potential improvements on certain datasets and/or data slices, evaluating them is beyond the scope of this work.
KBs evolve over time with new discoveries and additional curation. While we performed significant manual efforts to identify and either update or remove deprecated entity links within the datasets used, additional curation would be required to ensure that every entity identifier properly aligns with the newer KB versions used when the original was unavailable.
Finally, while there could be benefits from performing multi-task entity linking on a combination of multiple datasets, exploring this option and the challenges associated with aligning multiple KBs is beyond the scope of this work.
# 9 Acknowledgements
This research was funded by the National Science Foundation CAREER grant 1944247 to C.M., the National Institutes of Health grant U19-AG056169 sub-award to C.M., the Morningside Center for Innovative and Affordable Medicine at Emory University via the Brown Innovation to Market Fund to C.M., and by the Chan Zuckerberg Initiative grant 253558 to C.M.
# References
Dhruv Agarwal, Rico Angell, Nicholas Monath, and Andrew McCallum. 2022. Entity linking via explicit mention-mention coreference modeling. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics.
Rico Angell, Nicholas Monath, Sunil Mohan, Nishant Yadav, and Andrew McCallum. 2021. Clustering-based inference for biomedical entity linking. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2598-2608, Online. Association for Computational Linguistics.
Alan R Aronson and Francois-Michel Lang. 2010. An overview of metamap: historical perspective and recent advances. Journal of the American Medical Informatics Association.
Marco Basaldella, Fangyu Liu, Ehsan Shareghi, and Nigel Collier. 2020. Cometa: A corpus for medical entity linking in the social media. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3122-3137, Online. Association for Computational Linguistics.
Olivier Bodenreider. 2004. The unified medical language system (umls): integrating biomedical terminology. *Nucleic acids research*, 32(suppl_1):D267-D270.
Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2021. Autoregressive entity retrieval. In International Conference on Learning Representations.
Anthony Chen, Pallavi Gudipati, Shayne Longpre, Xiao Ling, and Sameer Singh. 2021. Evaluating entity disambiguation and the role of popularity in retrieval-based NLP. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4472-4485, Online. Association for Computational Linguistics.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
Allan Peter Davis, Cynthia J Grondin, Robin J Johnson, Daniela Sciaky, Roy McMorran, Jolene Wiegers, Thomas C Wiegers, and Carolyn J Mattingly. 2019. The comparative toxicogenomics database: update 2019. Nucleic acids research, 47(D1):D948-D954.
Dina Demner-Fushman, Willie J Rogers, and Alan R Aronson. 2017. Metamap lite: an evaluation of a new java implementation of metamap. Journal of the American Medical Informatics Association, 24(4):841-844.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Rezarta Islamaj Dogan, Robert Leaman, and Zhiyong Lu. 2014. Ncbi disease corpus: a resource for disease name recognition and concept normalization. Journal of biomedical informatics, 47:1-10.
Evan French and Bridget T McInnes. 2022. An overview of biomedical entity linking throughout the years. Journal of Biomedical Informatics, page 104252.
Jason Fries, Leon Weber, Natasha Seelam, Gabriel Altay, Debajyoti Datta, Samuele Garda, Sunny Kang, Rosaline Su, Wojciech Kusa, Samuel Cahyawijaya, et al. 2022. Bigbio: A framework for data-centric biomedical natural language processing. Advances in Neural Information Processing Systems, 35:25792-25806.
Ada Hamosh, Alan F Scott, Joanna S Amberger, Carol A Bocchini, and Victor A McKusick. 2005. Online mendelian inheritance in man (omim), a knowledgebase of human genes and genetic disorders. *Nucleic acids research*, 33(suppl_1):D514–D517.
Rezarta Islamaj, Robert Leaman, Sun Kim, Dongseop Kwon, Chih-Hsuan Wei, Donald C Comeau, Yifan Peng, David Cissel, Cathleen Coss, Carol Fisher, et al. 2021a. Nlm-chem, a new resource for chemical entity recognition in pubmed full text literature. Scientific Data, 8(1):1-12.
Rezarta Islamaj, Chih-Hsuan Wei, David Cissel, Nicholas Miliaras, Olga Printseva, Oleg Rodionov, Keiko Sekiya, Janice Ward, and Zhiyong Lu. 2021b. Nlm-gene, a richly annotated gold standard dataset for gene entities that addresses ambiguity and multispecies gene recognition. Journal of Biomedical Informatics, 118:103779.
Sarvnaz Karimi, Alejandro Metke-Jimenez, Madonna Kemp, and Chen Wang. 2015. Cadec: A corpus of adverse drug event annotations. Journal of biomedical informatics, 55:73-81.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.
Jiao Li, Yueping Sun, Robin J Johnson, Daniela Sciaky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J Mattingly, Thomas C Wiegers, and Zhiyong Lu. 2016. Biocreative v cdr task corpus: a resource for chemical disease relation extraction. Database, 2016.
Nut Limsopatham and Nigel Collier. 2016. Normalising medical concepts in social media texts by learning semantic representation. In Proceedings of the 54th annual meeting of the association for computational linguistics (volume 1: long papers), pages 1014-1023.
Carolyn E Lipscomb. 2000. Medical subject headings (mesh). Bulletin of the Medical Library Association, 88(3):265.
Fangyu Liu, Ehsan Shareghi, Zaiqiao Meng, Marco Basaldella, and Nigel Collier. 2021. Self-alignment pretraining for biomedical entity representations. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4228-4238.
Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, and Honglak Lee. 2019. Zero-shot entity linking by reading entity descriptions. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3449-3460, Florence, Italy. Association for Computational Linguistics.
Daniel Loureiro and Alípio Mário Jorge. 2020. Medlinker: Medical entity linking with neural representations and dictionary matching. In European Conference on Information Retrieval, pages 230-237. Springer.
Donna Maglott, Jim Ostell, Kim D Pruitt, and Tatiana Tatusova. 2005. Entrez gene: gene-centered information at ncbi. Nucleic acids research, 33(suppl_1):D54-D58.
Sunil Mohan and Donghui Li. 2019. Medmentions: A large biomedical corpus annotated with UMLS concepts. In 1st Conference on Automated Knowledge Base Construction, AKBC 2019, Amherst, MA, USA, May 20-22, 2019.
Mark Neumann, Daniel King, Iz Beltagy, and Waleed Ammar. 2019. ScispaCy: Fast and robust models for biomedical natural language processing. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 319-327, Florence, Italy. Association for Computational Linguistics.
Laurel J. Orr, Megan Leszczynski, Neel Guha, Sen Wu, Simran Arora, Xiao Ling, and Christopher Ré. 2021. Bootleg: Chasing the tail with self-supervised named entity disambiguation. In 11th Conference on Innovative Data Systems Research, CIDR 2021, Virtual Event, January 11-15, 2021, Online Proceedings. www.cidrdb.org.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730-27744.
Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. 2022. Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100.
Barry Smith, Michael Ashburner, Cornelius Rosse, Jonathan Bard, William Bug, Werner Ceusters, Louis J Goldberg, Karen Eilbeck, Amelia Ireland, Christopher J Mungall, et al. 2007. The obo foundry: coordinated evolution of ontologies to support biomedical data integration. Nature biotechnology, 25(11):1251-1255.
Sunghwan Sohn, Donald C Comeau, Won Kim, and W John Wilbur. 2008. Abbreviation definition identification based on automatic precision estimates. BMC bioinformatics, 9(1):1-10.
Mujeen Sung, Hwisang Jeon, Jinhyuk Lee, and Jaewoo Kang. 2020. Biomedical entity representations with synonym marginalization. In ACL.
Maya Varma, Laurel Orr, Sen Wu, Megan Leszczyński, Xiao Ling, and Christopher Ré. 2021. Cross-domain data integration for named entity disambiguation in biomedical text. In *Findings of the Association for Computational Linguistics: EMNLP* 2021, pages 4566-4575, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Chih-Hsuan Wei, Hung-Yu Kao, and Zhiyong Lu. 2015. Gnormplus: an integrative approach for tagging genes, gene families, and protein domains. *BioMed research international*, 2015.
Ledell Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, and Luke Zettlemoyer. 2020. Scalable zero-shot entity linking with dense entity retrieval. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6397-6407, Online. Association for Computational Linguistics.
Hongyi Yuan, Zheng Yuan, Ruyi Gan, Jiaxing Zhang, Yutao Xie, and Sheng Yu. 2022a. BioBART: Pretraining and evaluation of a biomedical generative language model. In Proceedings of the 21st Workshop on Biomedical Language Processing, pages 97-109, Dublin, Ireland. Association for Computational Linguistics.
Hongyi Yuan, Zheng Yuan, and Sheng Yu. 2022b. Generative biomedical entity linking via knowledge base-guided pre-training and synonyms-aware fine-tuning. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4038-4048, Seattle, United States. Association for Computational Linguistics.
Sheng Zhang, Hao Cheng, Shikhar Vashishth, Cliff Wong, Jinfeng Xiao, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2021. Knowledge-rich self-supervised entity linking. arXiv preprint arXiv:2112.07887.
Maryam Zolnoori, Kin Wah Fung, Timothy B Patrick, Paul Fontelo, Hadi Kharrazi, Anthony Faiola, Nilay D Shah, Yi Shuan Shirley Wu, Christina E Eldredge, Jake Luo, et al. 2019. The psytar dataset: From patients generated narratives to a corpus of adverse drug events and effectiveness of psychiatric medications. Data in brief, 24:103838.
# A Datasets
# A.1 Additional Dataset Statistics
Table 9 presents key statistics about our datasets, particularly the variety of mentions and abbreviations seen in each dataset.
# A.2 Dataset Descriptions
Detailed descriptions of datasets included in our paper are as follows. Table 10 describes overlap of entities and mentions between the train and test sets.
<table><tr><td>Dataset</td><td>Total Mentions</td><td>Unique Mentions</td><td>Total Abbreviations</td><td>Unique Abbreviations</td></tr><tr><td>BC5CDR</td><td>29,018</td><td>5,915</td><td>2,811</td><td>388</td></tr><tr><td>GNormPlus</td><td>6,252</td><td>2,180</td><td>991</td><td>196</td></tr><tr><td>MM-Full</td><td>352,312</td><td>90,842</td><td>22,399</td><td>3,906</td></tr><tr><td>MM-ST21PV</td><td>203,185</td><td>65,947</td><td>18,701</td><td>3,398</td></tr><tr><td>NCBI Disease</td><td>6,881</td><td>2,136</td><td>1,611</td><td>143</td></tr><tr><td>NLM Gene</td><td>15,553</td><td>5,298</td><td>2,356</td><td>462</td></tr><tr><td>NLM Chem</td><td>37,999</td><td>4,706</td><td>8,684</td><td>372</td></tr></table>

MedMentions (MM) (Mohan and Li, 2019) is a collection of 4,392 randomly selected PubMed abstracts linked to the Unified Medical Language System (UMLS). Each abstract is comprehensively annotated with all terms from UMLS, making MedMentions the largest and most comprehensive EL dataset containing span-level annotations. Due to the diversity of UMLS entity types, some categories are not particularly relevant to the majority of biomedical research (e.g. "Professional Group"). Accordingly, MM is most commonly evaluated on the ST21PV subset, which filters candidate entities to come from 18 high-quality ontologies and to fall under 21 semantic type groups.
Biocreative V CDR (BC5CDR) (Li et al., 2016) is a subset of 1,500 abstracts with chemical and disease annotations from the Comparative Toxicogenomics Database. Tagged diseases and chemicals are linked to the MeSH ontology.
GNormPlus (Wei et al., 2015) is a benchmark of 694 PubMed abstracts annotated with gene mentions linked to the Entrez ontology of genes. It contains the BioCreative II gene mention (BC2GM) task as a subset plus an additional set of 151 annotated abstracts.
NLM Chem Corpus (Islamaj et al., 2021a) represents the most diverse gold-standard chemical entity linking corpus. Chemical mentions in 150 PMC full-text articles are normalized to MeSH.
NLM Gene Corpus (Islamaj et al., 2021b) is a corpus of over 500 full-text articles with gene mentions linked to Entrez gene.
NCBI Disease Corpus (Dogan et al., 2014) links disease mentions in PubMed abstracts to the NCBI disease ontology.
# B Additional details on included models
Here we provide additional details about the algorithms used by included models to supplement section 4.
Table 9: Metadata for each dataset
<table><tr><td>Dataset</td><td>Ent. Overlap</td><td>Ment. Overlap</td></tr><tr><td>MedMentions Full</td><td>0.6199</td><td>0.8221</td></tr><tr><td>MedMentions ST21PV</td><td>0.5755</td><td>0.7741</td></tr><tr><td>BC5CDR</td><td>0.5300</td><td>0.7733</td></tr><tr><td>GNormPlus</td><td>0.0789</td><td>0.0838</td></tr><tr><td>NCBI Disease</td><td>0.6700</td><td>0.8156</td></tr><tr><td>NLM Chem</td><td>0.4747</td><td>0.6229</td></tr><tr><td>NLM Gene</td><td>0.4819</td><td>0.5408</td></tr></table>
Table 10: Overlap of entities and mentions between train and test sets. Mention overlap refers to the proportion of test-set mentions whose entities also appear among training-set mentions.
A wide variety of methods have been used for BioEL. Here we describe the families of models used for BioEL and list the included models from each category. We evaluated models with near state-of-the-art performance at their time of publication on at least one of the included BioEL datasets. From this pool, we excluded models with no open-source implementation or whose implementation was rendered unusable by a lack of documentation or software updates. With the exception of MetaMap, all models were published in the past 5 years. We summarize the evaluated models in Table 3.
# B.1 Alias Matching EL
SciSpacy (Neumann et al., 2019) SciSpacy is a widely used, off-the-shelf library which offers a diversity of pipelines and models for identifying and linking entities in biomedical documents. SciSpacy jointly performs named entity recognition and abbreviation detection for end-to-end EL. EL is performed using TF-IDF matching on character 3-grams of entity mentions.
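
To make the alias matching idea concrete, here is a toy TF-IDF character-3-gram matcher in the spirit of SciSpacy's candidate generator. This is our own minimal re-implementation, not SciSpacy's code (which builds an approximate nearest-neighbor index over a much larger alias table); the tiny KB below is invented for illustration:

```python
import math
from collections import Counter

def trigrams(s):
    """Character 3-gram counts of a lowercased string."""
    s = s.lower()
    return Counter(s[i:i + 3] for i in range(len(s) - 2))

class TrigramLinker:
    """Rank KB aliases by TF-IDF-weighted cosine similarity of 3-grams."""

    def __init__(self, alias_to_id):
        self.alias_to_id = alias_to_id
        self.vecs = {a: trigrams(a) for a in alias_to_id}
        n = len(self.vecs)
        df = Counter(g for v in self.vecs.values() for g in v)
        self.idf = {g: math.log(n / df[g]) + 1.0 for g in df}

    def _weight(self, counts):
        v = {g: c * self.idf.get(g, 1.0) for g, c in counts.items()}
        norm = math.sqrt(sum(w * w for w in v.values())) or 1.0
        return {g: w / norm for g, w in v.items()}

    def link(self, mention):
        q = self._weight(trigrams(mention))
        best = max(
            self.alias_to_id,
            key=lambda a: sum(q.get(g, 0.0) * w
                              for g, w in self._weight(self.vecs[a]).items()),
        )
        return self.alias_to_id[best]

kb = {"heart failure": "MESH:D006333",
      "renal failure": "MESH:D051437",
      "aspirin": "MESH:D001241"}
linker = TrigramLinker(kb)
```

Because matching is purely lexical, a misspelled mention like "asprin" still shares enough 3-grams with "aspirin" to land on the right alias; however, entities sharing an alias remain indistinguishable, the limitation noted below for SapBERT.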
MetaMap (Aronson and Lang, 2010) MetaMap is a tool developed by the National Library of Medicine (NLM), first used in 1994. It uses natural language processing to map biomedical entities to concepts in the Unified Medical Language System (UMLS) Metathesaurus. Input undergoes syntactic/lexical analysis, and candidate concepts and mappings are generated from the phrases found. MetaMap's usage is highly configurable in both processing and display options: output can be shown excluding or restricted to particular semantic types, specific vocabularies, concept unique identifiers (CUIs), etc. Its generation of word variants is thorough, and it is domain independent. On the other hand, MetaMap is limited to the English language, and its computational speed is relatively slow, especially when complex phrases are present.
|
| 368 |
+
|
| 369 |
+
BioSyn (Sung et al., 2020) BioSyn performs EL by normalizing each mention surface form to the best alias seen at training time. It does this via a combination of character-level sparse mention features and learned dense vector representations of each mention and entity, which are trained via an alias table such as the UMLS metathesaurus.
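At the level of the scoring function, BioSyn's combination of sparse and dense features amounts to a weighted sum of similarities over candidate aliases. The sketch below fixes the weight and uses toy similarity values (in BioSyn the weight is learned and the similarities come from trained encoders):

```python
# Minimal numpy sketch of BioSyn-style candidate scoring: a character-level
# sparse similarity and a dense embedding similarity are combined with a
# scalar weight. Similarity values below are illustrative, not model output.
import numpy as np

def biosyn_score(sparse_sim, dense_sim, weight=0.5):
    """Combined score over candidate aliases: dense + weight * sparse."""
    return dense_sim + weight * sparse_sim

sparse = np.array([0.9, 0.2, 0.1])  # char-level overlap with each alias
dense = np.array([0.4, 0.8, 0.3])   # embedding similarity with each alias
best = int(np.argmax(biosyn_score(sparse, dense)))
print(best)  # index of the highest-scoring alias
```

Raising the sparse weight shifts the ranking toward surface-form matches, which is exactly the trade-off the learned weight controls in BioSyn.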
|
| 370 |
+
|
| 371 |
+
SapBERT (Liu et al., 2021) SapBERT (for "self-alignment pretraining BERT") fine-tunes a BioBERT model to treat all aliases of an entity equivalently and to map entity mentions to an alias contained in UMLS. Zhang et al. (2021) point out that SapBERT cannot distinguish between aliases shared by multiple entities and returns all entities with an alias matching the normalized surface form.
|
| 372 |
+
|
| 373 |
+
# B.2 Contextualized EL
|
| 374 |
+
|
| 375 |
+
MedLinker (Loureiro and Jorge, 2020) MedLinker was one of the first EL works evaluated on MedMentions. It combines a BiLSTM model pre-trained on biomedical literature with approximate string matching from UMLS to conduct zero-shot EL (Mohan and Li, 2019).
|
| 376 |
+
|
| 377 |
+
ClusterEL (Angell et al., 2021) ClusterEL takes a unique approach to EL by treating linking as a supervised clustering problem. ClusterEL begins by creating a similarity graph of mentions within each document, which is then refined via edge removal until each cluster contains at most one entity. This strategy has the dual benefit of jointly modeling EL and co-reference, enabling the NED model to compensate for failures in the candidate generation phase of EL. Since the original implementation of ClusterEL has been merged into ArboEL, we evaluate ClusterEL
|
| 378 |
+
|
| 379 |
+
as the graph-based reranking of the candidates retrieved by ArboEL's candidate retrieval biencoder (described below).
|
| 380 |
+
|
| 381 |
+
ArboEL (Agarwal et al., 2022) ArboEL extends ClusterEL by improving its scalability and training regimen. While ArboEL uses a bi-encoder similar to that of Wu et al. (2020), it also incorporates a training scheme based on a mention-mention similarity graph to identify hard negatives, which ultimately leads to better model precision.
|
| 382 |
+
|
| 383 |
+
KRISSBERT (Zhang et al., 2021) KRISSBERT presents a self-supervised framework for EL using contrastive learning on distantly supervised entity mentions. After distantly labeling a large number of potential entity links with the UMLS Metathesaurus, KRISSBERT learns a set of "prototypes" for each entity by training the model to separate mentions of different entities. The authors show that this can be extended to a supervised setting without additional fine-tuning by simply swapping noisy prototypes for supervised ones, achieving performance on par with the best supervised EL models.
|
| 384 |
+
|
| 385 |
+
# B.3 Autoregressive EL
|
| 386 |
+
|
| 387 |
+
BioGenEL and BioBART (Yuan et al., 2022b,a) BioGenEL adapts BART (Lewis et al., 2020) to perform entity linking via sequence-to-sequence modeling. It is trained to generate the correct surface form for an entity mention. BioBART uses the same procedure to generate text but additionally provides a BART model with a biomedical vocabulary and pre-trained on biomedical text.
|
| 388 |
+
|
| 389 |
+
# C Framework
|
| 390 |
+
|
| 391 |
+
Our evaluation framework seeks to evaluate biomedical entity linking datasets uniformly by using common protocols for 1) dataset processing, 2) ontology processing, and 3) evaluation. All packages are implemented in Python. We describe each component of our evaluation framework below.
|
| 392 |
+
|
| 393 |
+
Our framework's dataset module builds on the BigBio framework (https://huggingface.co/bigbio) by adding additional preprocessing to prepare entity linking datasets for effective modeling. It provides APIs for stitching passages into whole documents, deduplicating entity mentions, resolving abbreviations, removing deprecated entities, and contextualizing mentions for modeling.
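As a hypothetical sketch of the kind of preprocessing such a dataset module performs, the helpers below stitch passages into one document and window context around a mention; the function names and signatures are illustrative, not the framework's actual API:

```python
# Illustrative preprocessing helpers: passage stitching (with offsets back
# into the stitched document) and mention contextualization.
def stitch_passages(passages):
    """Join passage texts, recording each passage's offset in the document."""
    offsets, text, pos = [], [], 0
    for p in passages:
        offsets.append(pos)
        text.append(p)
        pos += len(p) + 1  # +1 for the joining space
    return " ".join(text), offsets

def contextualize(document, start, end, window=20):
    """Return the mention span with `window` characters of context per side."""
    return document[max(0, start - window):end + window]

doc, offsets = stitch_passages(["EGFR mutations are common.",
                                "They respond to gefitinib."])
print(contextualize(doc, 0, 4, window=10))  # -> "EGFR mutations"
```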
|
| 394 |
+
|
| 395 |
+
The ontology processing module of our framework enables different biomedical ontologies such as UMLS, Entrez, and others to be standardized to share common attributes. These attributes include database identifier, semantic type(s), canonical name, aliases, alternate IDs, descriptions, and other metadata such as species. Some of these ontologies are very large with elements distributed across multiple files. Accordingly, we provide APIs for extracting relevant subsets, particularly from UMLS.
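The shared attributes listed above can be pictured as a simple record type; the class below is an illustrative sketch with toy values, not the framework's actual schema:

```python
# Illustrative standardized ontology entry: field names mirror the shared
# attributes described in the text (identifier, semantic types, canonical
# name, aliases, alternate IDs, description, other metadata).
from dataclasses import dataclass, field

@dataclass
class OntologyEntry:
    cui: str                                     # database identifier
    canonical_name: str
    aliases: list = field(default_factory=list)
    types: list = field(default_factory=list)    # semantic type(s)
    alternate_ids: list = field(default_factory=list)
    description: str = ""
    metadata: dict = field(default_factory=dict)  # e.g., species

entry = OntologyEntry(cui="C0027051",            # toy example values
                      canonical_name="Myocardial infarction",
                      aliases=["heart attack", "MI"],
                      types=["Disease or Syndrome"])
print(entry.canonical_name)
```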
|
| 396 |
+
|
| 397 |
+
The evaluation portion of our framework enables straightforward evaluation of multiple entity linking models across multiple metrics. It creates a standardized format for model outputs as well as an evaluation pipeline that can compute different metrics across the various evaluation strategies described in the paper.
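The central metric computed by such a pipeline, recall@k, can be sketched directly; the standardized model output is assumed here to be a ranked list of candidate IDs per mention (an illustrative format):

```python
# Recall@k: a mention counts as recalled at k if its gold entity appears
# among the model's top-k candidates.
def recall_at_k(gold, ranked_candidates, k):
    hits = sum(g in cands[:k] for g, cands in zip(gold, ranked_candidates))
    return hits / len(gold)

gold = ["D001", "D002", "D003"]
preds = [["D001", "D009"],   # correct at rank 1
         ["D007", "D002"],   # correct at rank 2
         ["D008", "D009"]]   # gold entity missing entirely
print(recall_at_k(gold, preds, 1))  # 1/3
print(recall_at_k(gold, preds, 5))  # 2/3
```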
|
| 398 |
+
|
| 399 |
+
# D Model Evaluation Details
|
| 400 |
+
|
| 401 |
+
# D.1 MetaMap
|
| 402 |
+
|
| 403 |
+
A single-line-delimited input text file was generated with the unique text mentions from each dataset; the metadata are shown in Table 6. MetaMap's highly customizable nature means that many parameters can be altered to see their impact on model performance. Five parameters were adjusted for each dataset: model year, semantic types, vocabularies, strict or relaxed model, and term processing (Demner-Fushman et al., 2017). Term processing was added to relaxed-model runs, as there was otherwise no significant difference between strict and relaxed model performance. For each run, the NLM data version was used, which includes the full UMLS other than a select number of vocabularies (Demner-Fushman et al., 2017). The 2022AA version was used for all datasets except MedMentions, which was originally annotated with the 2017AA UMLS. MetaMap does not handle non-ASCII characters, so we pre-processed the input through a Java file that replaces or removes them. A mapping was generated to keep track of the altered terms so that evaluation could be done correctly.
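The non-ASCII preprocessing step can be illustrated in a few lines; the original pipeline did this in Java, so the Python sketch below is an equivalent of the idea, not the actual code:

```python
# Replace/remove non-ASCII characters before feeding text to MetaMap,
# keeping a mapping from the cleaned term back to the original so that
# evaluation can be aligned afterwards.
def asciify(text, replacement=""):
    cleaned = "".join(ch if ord(ch) < 128 else replacement for ch in text)
    return cleaned, ({cleaned: text} if cleaned != text else {})

cleaned, mapping = asciify("α-synuclein")
print(cleaned)   # "-synuclein"
print(mapping)   # {'-synuclein': 'α-synuclein'}
```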
|
| 404 |
+
|
| 405 |
+
# D.2 Evaluation Strategy for MetaMap
|
| 406 |
+
|
| 407 |
+
We performed a grid search over multiple MetaMap settings, including the strict vs. relaxed model, term processing, and with/without WSD. WSD did not provide significant improvements in model performance and is not included in the repository; adding the flag to the MetaMap command suffices to compare the results. For all datasets, the relaxed model produced the best results. Four methods of evaluation were tested by toggling two options: 1) ranking mappings first, and/or 2) resolving abbreviations. In addition to candidate concepts, MetaMap generates mappings, which are groups of the most promising candidates. A key point of interest when evaluating MetaMap was whether ranking mappings first would improve evaluation metrics over ranking candidates first. Another salient point was the impact of expanding abbreviations. For example, the abbreviation OCT can be expanded to the chemical 22-oxacalcitriol, which may improve MetaMap performance. Abbreviations within the datasets are expanded from mappings for each PMID, and the expanded forms are added to the original text. For each method, we selected the configuration of parameters that maximized recall at 1; the best configurations varied in whether mappings were ranked first but almost always resolved abbreviations.
|
| 410 |
+
|
| 411 |
+
# E Additional Results, Discussion, and Analysis
|
| 412 |
+
|
| 413 |
+
# E.1 Runtime Comparison
|
| 414 |
+
|
| 415 |
+
In addition to training time, we also measured the evaluation time of each included model. The results comparing eval time and accuracy are pictured in Figure 5.
|
| 416 |
+
|
| 417 |
+

|
| 418 |
+
Figure 5: Comparison of evaluation time (s) vs. top-1 entity linking accuracy for each evaluated model.
|
| 419 |
+
|
| 420 |
+
# E.2 Relaxed Evaluation
|
| 421 |
+
|
| 422 |
+
We provide full results for the models evaluated under a relaxed evaluation strategy. A table of
|
| 423 |
+
|
| 424 |
+
<table><tr><td rowspan="2"></td><td colspan="2">BC5CDR</td><td colspan="2">MM-Full</td><td colspan="2">MM-ST21PV</td><td colspan="2">GNormPlus</td><td colspan="2">NLM-Chem</td><td colspan="2">NLM-Gene</td><td colspan="2">NCBI-Disease</td></tr><tr><td>1</td><td>5</td><td>1</td><td>5</td><td>1</td><td>5</td><td>1</td><td>5</td><td>1</td><td>5</td><td>1</td><td>5</td><td>1</td><td>5</td></tr><tr><td>SapBERT</td><td>0.883</td><td>0.934</td><td>0.725</td><td>0.814</td><td>0.695</td><td>0.794</td><td>0.795</td><td>0.944</td><td>0.812</td><td>0.889</td><td>0.716</td><td>0.867</td><td>0.833</td><td>0.929</td></tr><tr><td>MetaMap</td><td>0.828</td><td>0.856</td><td>0.588</td><td>0.731</td><td>0.568</td><td>0.699</td><td>0.624</td><td>0.633</td><td>0.680</td><td>0.707</td><td>0.261</td><td>0.263</td><td>0.669</td><td>0.712</td></tr><tr><td>KRISSBERT</td><td>0.736</td><td>0.766</td><td>0.591</td><td>0.755</td><td>0.559</td><td>0.701</td><td>0.081</td><td>0.087</td><td>0.562</td><td>0.596</td><td>0.286</td><td>0.494</td><td>0.754</td><td>0.803</td></tr><tr><td>SciSpacy</td><td>0.772</td><td>0.797</td><td>0.799</td><td>0.807</td><td>0.778</td><td>0.789</td><td>0.836</td><td>0.854</td><td>0.426</td><td>0.484</td><td>0.396</td><td>0.399</td><td>0.752</td><td>0.752</td></tr><tr><td>MedLinker</td><td>0.720</td><td>0.767</td><td>0.568</td><td>0.662</td><td>0.521</td><td>0.627</td><td>0.178</td><td>0.469</td><td>0.514</td><td>0.542</td><td>0.084</td><td>0.255</td><td>0.545</td><td>0.768</td></tr><tr><td>ClusterEL</td><td>0.876</td><td>0.938</td><td>0.696</td><td>0.851</td><td>0.692</td><td>0.849</td><td>0.302</td><td>0.448</td><td>0.758</td><td>0.868</td><td>0.490</td><td>0.676</td><td>0.748</td><td>0.823</td></tr><tr><td>ArboEL</td><td>0.921</td><td>0.958</td><td>0.000</td><td>0.000</td><td>0.747</td><td>0.890</td><td>0.441</td><td>0.524</td><td>0.828</td><td>0.882</td><td>0.543</td><td>0.734</td><td>0.774</td><td>0.832</td></tr><tr><td>BioBART</td><td>0.572</td><td>0.733</td><td>0.662</td><td>0.800</td><td>0.544</td><td>0.711</td><td>0.696</td><td>0.847</td><td>0.512</td><td>0.650</td><td>0.521</td><td>0.714</td><td>0.457</td><td>0.689</td></tr><tr><td>BioGenEL</td><td>0.909</td><td>0.953</td><td>0.686</td><td>0.793</td><td>0.562</td><td>0.698</td><td>0.350</td><td>0.527</td><td>0.786</td><td>0.879</td><td>0.504</td><td>0.698</td><td>0.582</td><td>0.733</td></tr></table>
|
| 425 |
+
|
| 426 |
+
Table 11: Top-1 and top-5 accuracy of all models using relaxed evaluation.
|
| 427 |
+
|
| 428 |
+

|
| 429 |
+
|
| 430 |
+

|
| 431 |
+
|
| 432 |
+

|
| 433 |
+
|
| 434 |
+

|
| 435 |
+
|
| 436 |
+

|
| 437 |
+
|
| 438 |
+

|
| 439 |
+
|
| 440 |
+

|
| 441 |
+
Figure 6: Recall@K for all models using relaxed evaluation.
|
| 442 |
+
|
| 443 |
+
results is given in Table 11 with a corresponding plot of recall@k in Figure 6.
|
| 444 |
+
|
| 445 |
+
# E.3 Slice-specific Model Performance
|
| 446 |
+
|
| 447 |
+
Here we include additional data on the performance of models on various data slices and entity types. Table 12 presents data on performance differentials for different species included in NLM-Gene.
|
| 448 |
+
|
| 449 |
+
# E.4 Prediction Correlation
|
| 450 |
+
|
| 451 |
+
It is useful to know to what extent models make similar predictions, as this indicates how well they could be ensembled to improve overall results. We accordingly plot, for each pair of models, the correlation of whether their top-1 predictions are correct. The results, pictured in Figure 7, indicate that the models are generally fairly correlated but differ substantially on the gene datasets.
|
| 452 |
+
|
| 453 |
+
<table><tr><td>Taxonomy</td><td>SapBERT</td><td>MetaMap</td><td>KRISSBERT</td><td>SciSpacy</td><td>ClusterEL</td><td>ArboEL</td><td>BioBART</td><td>BioGenEL</td><td>Prevalence</td></tr><tr><td>Homo sapiens</td><td>-0.021</td><td>0.307‡</td><td>0.064‡</td><td>0.201‡</td><td>0.125‡</td><td>0.107‡</td><td>-0.029‡</td><td>-0.014</td><td>0.447</td></tr><tr><td>Mus musculus</td><td>-0.048‡</td><td>-0.246‡</td><td>0.029</td><td>-0.162‡</td><td>-0.010</td><td>0.016</td><td>-0.040‡</td><td>-0.031‡</td><td>0.351</td></tr><tr><td>Rattus norvegicus</td><td>-0.075‡</td><td>-0.244‡</td><td>-0.160‡</td><td>-0.163‡</td><td>-0.249‡</td><td>-0.368‡</td><td>-0.046†</td><td>-0.043†</td><td>0.090</td></tr><tr><td>Saccharomyces cerevisiae</td><td>0.046</td><td>-0.261‡</td><td>-0.204‡</td><td>-0.163‡</td><td>-0.256‡</td><td>-0.216‡</td><td>0.071†</td><td>0.069†</td><td>0.039</td></tr><tr><td>Danio rerio</td><td>0.490‡</td><td>-0.261‡</td><td>-0.279‡</td><td>-0.163†</td><td>-0.316‡</td><td>-0.225†</td><td>0.573‡</td><td>0.551‡</td><td>0.025</td></tr><tr><td>Arabidopsis thaliana</td><td>0.601‡</td><td>-0.261†</td><td>-0.279†</td><td>-0.163</td><td>-0.196</td><td>-0.161</td><td>0.361‡</td><td>0.045</td><td>0.012</td></tr><tr><td>Ovis aries</td><td>-0.038</td><td>-0.261*</td><td>-0.279†</td><td>-0.163</td><td>-0.045</td><td>0.086</td><td>-0.014</td><td>-0.006</td><td>0.010</td></tr><tr><td>Caenorhabditis elegans</td><td>0.675‡</td><td>-0.261</td><td>0.021</td><td>-0.163</td><td>-0.190</td><td>0.157</td><td>0.549‡</td><td>0.507‡</td><td>0.007</td></tr><tr><td>other</td><td>0.365‡</td><td>-0.261‡</td><td>-0.179*</td><td>-0.163*</td><td>-0.410‡</td><td>-0.323‡</td><td>0.309‡</td><td>0.017</td><td>0.018</td></tr></table>
|
| 454 |
+
|
| 455 |
+
Table 12: Performance difference on genes of different species within NLM-Gene compared to overall performance. ${}^{ * }\mathrm{p} < {0.05};{}^{ \dagger }\mathrm{p} < {0.01};{}^{ \ddagger }\mathrm{p} < {0.001}$ after Bonferroni correction.
|
| 456 |
+
|
| 457 |
+

|
| 458 |
+
|
| 459 |
+

|
| 460 |
+
|
| 461 |
+

|
| 462 |
+
|
| 463 |
+

|
| 464 |
+
Figure 7: Correlation of top-1 accuracy across datasets. Low and negative correlations indicate that models are able to correctly link distinct subsets of data.
|
| 465 |
+
|
| 466 |
+

|
| 467 |
+
|
| 468 |
+

|
acomprehensiveevaluationofbiomedicalentitylinkingmodels/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:daa140c49276bedc887f878b5ea3ec0ccf36c02e302ccec26e37ba6af8a63b99
|
| 3 |
+
size 1310400
|
acomprehensiveevaluationofbiomedicalentitylinkingmodels/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:49d5bfa93527cae803b7978a26cae08b64c419883d6f3eb0d7042df83ea351e6
|
| 3 |
+
size 453368
|
adeeperautoregressiveapproachtononconvergentdiscourseparsing/2931b51a-77cc-409b-9983-2291fbcc79ba_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:468660eb13b148d5bad6354194426eeae14c817902c71788ea6139de44f82fae
|
| 3 |
+
size 95421
|
adeeperautoregressiveapproachtononconvergentdiscourseparsing/2931b51a-77cc-409b-9983-2291fbcc79ba_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:44fc93b23e1fc4a0fb6029d3609a26207a2c3b9a70d2243f364cefe43fdb6bb5
|
| 3 |
+
size 113172
|
adeeperautoregressiveapproachtononconvergentdiscourseparsing/2931b51a-77cc-409b-9983-2291fbcc79ba_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:c5de04381ae173431dcc8dba7207d3b9d7ab2b490a0672bfe9c671eeaab386cf
|
| 3 |
+
size 1007084
|
adeeperautoregressiveapproachtononconvergentdiscourseparsing/full.md
ADDED
|
@@ -0,0 +1,438 @@
| 1 |
+
# A Deeper (Autoregressive) Approach to Non-Convergent Discourse Parsing
|
| 2 |
+
|
| 3 |
+
Yoav Tulpan
|
| 4 |
+
|
| 5 |
+
Ben Gurion University of the Negev
|
| 6 |
+
|
| 7 |
+
yoavtu@post.bgu.ac.il
|
| 8 |
+
|
| 9 |
+
Oren Tsur
|
| 10 |
+
|
| 11 |
+
Ben Gurion University of the Negev
|
| 12 |
+
|
| 13 |
+
orentsur@bgu.ac.il
|
| 14 |
+
|
| 15 |
+
# Abstract
|
| 16 |
+
|
| 17 |
+
Online social platforms provide a bustling arena for information-sharing and for multi-party discussions. Various frameworks for dialogic discourse parsing have been developed and used for the processing of discussions and for predicting the productivity of a dialogue. However, most of these frameworks are not suitable for the analysis of contentious discussions that are commonplace on many online platforms. A novel multi-label scheme for contentious dialog parsing was recently introduced by Zakharov et al. (2021). While the schema is well developed, the computational approach they provide is both naive and inefficient, as a different model (architecture), using a different representation of the input, is trained for each of the 31 tags in the annotation scheme. Moreover, all their models assume full knowledge of label collocations and context, which is unlikely in any realistic setting. In this work, we present a unified model for Non-Convergent Discourse Parsing that does not require any additional input other than the previous dialog utterances. We fine-tune a RoBERTa backbone, combining embeddings of the utterance, the context and the labels through GRN layers and an asymmetric loss function. Overall, our model achieves results comparable with SOTA without using label collocations and without training a unique architecture/model for each label. Our proposed architecture makes the labeling feasible at large scale, promoting the development of tools that deepen our understanding of discourse dynamics.
|
| 18 |
+
|
| 19 |
+
# 1 Introduction
|
| 20 |
+
|
| 21 |
+
Online discourse has become a major part of modern communication due to the proliferation of online social platforms that allow people to easily share their ideas with a global audience. However, the ease of communication has also led to more heated debates and arguments that sometimes devolve into personal attacks (Arazy et al., 2013; Kumar et al., 2017; Zhang et al., 2018), and increase political and societal polarization (Kubin and von Sikorski, 2021; Lorenz-Spreen et al., 2022).
|
| 24 |
+
|
| 25 |
+
The ability to parse contentious discussions at a large scale bears practical and theoretical benefits. From a theoretical perspective it would allow the research community at large (social scientists and computational scientists alike) to better track and understand conversational and societal dynamics. From a practical perspective, it was found that early intervention by a human moderator or facilitator can improve the productivity and focus of a discussion (Wise and Chiu, 2011; Chen et al., 2018). Discourse parsing can be the first step in developing assistive moderation tools that can be employed at scale and promote a more productive discourse.
|
| 26 |
+
|
| 27 |
+
It is commonly argued that the convergence of views indicates the success (or productiveness) of a conversation (Barron, 2003; Dillenbourg and Fischer, 2007; Teasley et al., 2008; Lu et al., 2011). This perspective has been reflected in discourse annotation schemes that were proposed through the years (Teasley et al., 2008; Schwarz et al., 2018). However, the equation of productivity with convergence is being challenged on both theoretical and empirical grounds, as non-convergent discussions can be very productive, serving as a fruitful venue for the development of dialogic agency (Parker, 2006; Lu et al., 2011; Trausan-Matu et al., 2014; Kolikant and Pollack, 2015; Hennessy et al., 2016; Kolikant and Pollack, 2017).
|
| 28 |
+
|
| 29 |
+
The non-convergence perspective inspired a novel annotation scheme that was recently introduced by Zakharov et al. (2021). Its organizing principle is responsiveness, rather than acceptance and convergence of ideas – a productive discussion is one in which the interlocutors use speech acts that exhibit high responsiveness, while acts of low responsiveness deem the discussion unproductive.
|
| 30 |
+
|
| 31 |
+

|
| 32 |
+
(a) Low responsiveness snippet
|
| 33 |
+
Figure 1: Two annotated snippets extracted from the CMV dataset, displaying low-responsiveness (claim: no need for privacy regulation) and high-responsiveness discourse (claim: online cancel culture is ineffective). Labels are indicated in the green rectangles to the left/right of each utterance.
|
| 34 |
+
|
| 35 |
+

|
| 36 |
+
(b) High responsiveness snippet
|
| 37 |
+
|
| 38 |
+
It is important to note that responsiveness is not the mere act of producing a response, but the act of responding in good faith. The application of this schema is illustrated by the two snippets in Figure 1. In the short exchange in Figure 1a, the first speaker uses sarcasm<sup>1</sup>, and later responds aggressively to a Counter Argument. The dialogue then goes from bad to worse with a series of Direct No utterances. The other discussion (Figure 1b) demonstrates how Counter Argument and Critical Question push for a reasoned answer, even though the topic is highly divisive. Another interesting observation that applies to many online discussions is the way argumentation tends to introduce sub-topics as rhetorical devices<sup>2</sup>.
|
| 39 |
+
|
| 40 |
+
Subscribing to this annotation scheme, the Conversational Discourse Parsing (CDP) task can be viewed as a sequence-of-utterances to sequence-of-sets task: an utterance can be labeled by multiple labels concurrently. For clarity, we provide a brief explanation of the tagset in Section 3. A formal
|
| 41 |
+
|
| 42 |
+
definition of the computational task is presented in Section 4.1.
|
| 43 |
+
|
| 44 |
+
The need for a dedicated discourse schema and the development of the tagset were well motivated by Zakharov et al. (2021). The authors released an annotated dataset of $\sim 10K$ utterances and demonstrated the feasibility of learning the annotation task. However, their computational approach suffers from a number of drawbacks. First, they cast the prediction task as a binary classification and trained a separate model for each tag. Second, when predicting tag $l'$ for an utterance $u_i$, they assumed access to an oracle providing complete and accurate knowledge of the gold labels of preceding utterances and the correct binary assignment of all other tags for $u_i$. This very strong assumption is unrealistic in any real-world scenario. Finally, the reported results were achieved after feature engineering and an extensive grid search over the classifier and feature space. Consequently, each tag is predicted using a different classification framework based on a uniquely crafted feature set.
|
| 45 |
+
|
| 46 |
+
In this work, we present $N$ -CoDiP - a unified autoregressive transformer for Non-Convergent Discourse Parsing. The model is trained to predict all labels together without using any external knowledge provided by an oracle. N-CoDiP performance (F-score macro and weighted averages) is comparable with the best results reported by Zakharov et al. (2021) without suffering from any of its drawbacks.
|
| 49 |
+
|
| 50 |
+
Our proposed model uses the RoBERTa architecture (Liu et al., 2019) as the backbone. We use SimCSE (Gao et al., 2021) for sentence embedding and feed preceding utterances through a Gated Residual Network (GRN) (Lim et al., 2021). The model is fine-tuned using an asymmetric loss function that was recently demonstrated to improve performance in imbalanced multi-label assignment in vision (Ridnik et al., 2021). To the best of our knowledge, this is the first application of this loss function in this domain. We provide a detailed description of the architecture in Section 4. Results and analysis are provided in Section 6.
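The asymmetric loss of Ridnik et al. (2021) can be sketched for a single multi-label prediction; positives use a mild focusing parameter, while negatives use a harder one plus a probability margin that zeroes out easy negatives. The hyperparameter values below are the defaults reported for vision and serve only as an illustration:

```python
# Numpy sketch of the asymmetric loss (ASL) for multi-label classification:
# L = -[ y (1-p)^g+ log p + (1-y) p_m^g- log(1 - p_m) ],  p_m = max(p - m, 0)
import numpy as np

def asymmetric_loss(p, y, gamma_pos=0.0, gamma_neg=4.0, m=0.05):
    p_m = np.clip(p - m, 0.0, 1.0)  # shifted probability for negatives
    pos = y * (1 - p) ** gamma_pos * np.log(np.clip(p, 1e-8, 1.0))
    neg = (1 - y) * p_m ** gamma_neg * np.log(np.clip(1 - p_m, 1e-8, 1.0))
    return -(pos + neg).mean()

p = np.array([0.9, 0.1, 0.4])  # predicted probability per label
y = np.array([1.0, 0.0, 0.0])  # multi-label ground truth
print(asymmetric_loss(p, y))
```

Note how the easy negative (p = 0.1) contributes almost nothing: after the margin shift, $p_m^{\gamma_-}$ is vanishingly small, which keeps the many easy negative labels from dominating the gradient.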
|
| 51 |
+
|
| 52 |
+
# 2 Related Work
|
| 53 |
+
|
| 54 |
+
Conversational Discourse Parsing Numerous dialog corpora have been collected and labeled with various schemes to model discourse structure. Jurafsky et al. (1997) presented the Switchboard-DAMSL dialog act schema on a dataset of cooperative, task-oriented dialogues between pairs of interlocutors in phone conversations (Godfrey et al., 1992). This was extended by Calhoun et al. (2010) to allow for a more thorough analysis of linguistic features in dialog. Multiple studies have approached the dialog act classification problem with deep neural networks, including transformers, some emphasizing the importance of integrating context from previous utterances (Liu et al., 2017; Saha et al., 2019; Santra et al., 2021; Želasko et al., 2021). Switchboard-DAMSL is a two-party discourse analysis schema, which differs from the multi-party discourse parsing schema presented by Zakharov et al. (2021) and modeled in this work. Multi-party dialog corpora such as STAC (Asher et al., 2016), as well as the unlabeled Ubuntu corpus (Lowe et al., 2015) and its labeled extension, the Molweni discourse relation dataset (Li et al., 2020), are more closely related to the current task, though their discourse is not contentious and the utterances tend to be quite short compared to messages in the CMV forum debates. Another key difference between these and the CDP corpus is that the latter's label scheme is oriented towards a more basic understanding of the components of a productive discourse, while the former are more focused on characterizing basic dialog acts.
|
| 55 |
+
|
| 56 |
+
CMV and Discourse Owing to the high quality of its discussions, CMV discussions are commonly used as a data source for various NLP and social science research, ranging from argument mining to the study of the effects of forum norms and moderation, as well as persuasive text analysis and linguistic style accommodation, e.g., Tan et al. (2016); Khazaei et al. (2017); Musi et al. (2018); Jo et al. (2018); Xiao and Khazaei (2019); Ben-Haim and Tsur (2021); Chandrasekharan et al. (2022).
|
| 57 |
+
|
| 58 |
+
Argumentation and argument mining Argument mining is another related line of research, for a comprehensive survey see (Lawrence and Reed, 2020). Argument mining is done on long-form documents, e.g., Wikipedia pages and scientific papers (Hua and Wang, 2018) or in dialogical contexts, e.g., Twitter, Wikipedia discussion pages, and Reddit-CMV (Tan et al., 2016; Musi et al., 2018; Al Khatib et al., 2018). Argument mining enables a nuanced classification of utterances into discourse acts: socializing, providing evidence, enhancing understanding, act recommendation, question, conclusion, and so forth (Al Khatib et al., 2018). Most of the argument mining work is aimed at identifying stance and opinionated utterance or generating arguments or supportive evidence to end users conducting formal debates (Slonim et al., 2021). Our work is inspired by these works, although our focus is on the way discursive acts reflect and promote responsiveness, rather than simply labeling texts as bearing 'evidence' or posing a 'question'. Moreover, while our focus is contentious non-convergent discussions, we wish to characterize discussions as win-win, rather than a competition.
|
| 59 |
+
|
| 60 |
+
Multi-label classification Existing approaches to imbalanced multi-label classification include over- and under-sampling the relevant classes, as well as adapting the classification architecture with auxiliary tasks to prevent overfitting to the majority classes (Yang et al., 2020; Tarekegn et al., 2021). Another approach is to apply imbalance-aware loss functions, such as weighted cross entropy and focal loss (Lin et al., 2017); the asymmetric loss function incorporated in this work (Ridnik et al., 2021) is closely related to focal loss, apart from some key improvements detailed in Section 4.2.5.
|
| 61 |
+
|
| 62 |
+
# 3 Data
|
| 63 |
+
|
| 64 |
+
Change My View (CMV) data CMV is self-described as "A place to post an opinion you accept may be flawed, in an effort to understand other perspectives on the issue. Enter with a mindset for conversation, not debate." Each discussion thread in CMV evolves around the topic presented in the submission by the Original Poster (OP). Each discussion takes the form of a conversation tree in which nodes are utterances. A directed edge $v \gets u$ denotes that utterance $u$ is a direct reply to utterance $v$ . A full branch from the root to a leaf node is a sequence of utterances which reflects a (possibly multi-participant) discussion. CMV is heavily moderated to maintain a high level of discussion. CMV data has been used in previous research on persuasion and argumentation, see a brief survey in Section 2.
|
| 65 |
+
|
| 66 |
+
Annotation scheme tagset The Contentious Discourse Parsing tag schema developed by Zakharov et al. (2021) consists of 31 labels that fall under four main categories: discursive acts that promote further discussion; discursive acts exhibiting or expected to cause low responsiveness; tone and style; explicit disagreement strategies. For convenience, the full schema and the labels' definitions are provided in Appendix B.
|
| 67 |
+
|
| 68 |
+
The annotation scheme allows a collocation of labels assigned to the same utterance as some labels reflect style while others reflect the argumentative move. For example, the utterance "well you're wrong on both accounts." (Figure 1a) carries an Aggressive tone, providing No Reason for the disagreement it conveys.
The annotated dataset The dataset released by Zakharov et al. (2021) is composed of 101 discussion threads from CMV. These threads (discussion trees) have a total of 1,946 branches composed of 10,599 utterances (nodes) made by 1,610 unique users. The number of labels assigned to the nodes in the dataset is 17,964.
# 4 Computational Approach
# 4.1 Task Definition
We define the discourse parsing classification problem as follows: given a tagset $T$ and a sequence of utterances $U = u_{1},\dots,u_{n}$ , find a corresponding sequence of labels $L = l_{1},\dots,l_{n}$ that maximizes the probability $P(L|U)$ . It is important to note that each $l_{i}$ is actually a set of labels from the tagset, $l_{i}\subset T$ , making this a sequence to sequence-of-sets task. The sequence of utterances is processed sequentially in an autoregressive manner: when tagging $u_{i}$ the model has already processed $u_{1}$ through $u_{i - 1}$ , and $u_{j > i}$ are masked.
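The autoregressive sequence-to-sequence-of-sets loop can be sketched as follows. This is illustrative code, not the authors' implementation: `predict_label_set`, `TAGSET`, and `THRESHOLD` are hypothetical stand-ins for the trained model and its decision rule.

```python
# Illustrative sketch of the sequence -> sequence-of-sets loop. Utterances are
# tagged left to right, so the label sets predicted for u_1 .. u_{i-1} are
# available when tagging u_i. `predict_label_set` is a toy stand-in for the
# trained model; TAGSET and THRESHOLD are hypothetical.
TAGSET = {"CounterArgument", "Aggressive", "Clarification", "NoReasonDisagreement"}
THRESHOLD = 0.5

def predict_label_set(utterance, history, previous_labels):
    """Toy scorer: a real model would score every tag t in TAGSET given the
    utterance, its context, and the labels predicted so far."""
    scores = {tag: (0.9 if tag.lower() in utterance.lower() else 0.1)
              for tag in TAGSET}
    return {tag for tag, score in scores.items() if score >= THRESHOLD}

def parse_discussion(utterances):
    labels = []                       # l_1 .. l_n, each a subset of TAGSET
    for i, u in enumerate(utterances):
        labels.append(predict_label_set(u, utterances[:i], labels))
    return labels
```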
# 4.2 N-CoDiP Architecture and Components
Given a sequence of utterances $u_{1}, \ldots, u_{n}$ , utterance $u_{i}$ is processed along with its context $c_{i}$ - the utterances preceding it ( $u_{1}, \ldots, u_{i-1}$ ). First, we use the pretrained model to get two embedding vectors $\vec{u}_{i}$ and $\vec{c}_{i}$ representing $u_{i}$ and $c_{i}$ , respectively. We then use two GRN blocks: The first combines $\vec{c}_{i}$ with $l^{i-1}$ , the label embeddings vector produced in the previous iteration (processing $u_{i-1}$ ). The second GRN block combines the resulting vector with $\vec{u}_{i}$ for a combined representation. This representation is passed to a block of MLP classifiers which produce $\hat{l}_{i}$ , a vector assigning the likelihood of each tag $t \in T$ for $u_{i}$ . An illustrative figure of the model is provided in Figure 2. In the remainder of the section we present the components of the N-CoDiP architecture in detail.
# 4.2.1 Text Representation
The representations of the target utterance $u_{i}$ and the context utterances $c_{i}$ are produced separately, in slightly different ways. $\vec{u_i}$ , the representation of $u_{i}$ , is simply the [CLS] token vector obtained by passing $u_{i}$ to the pretrained model. The context representation $\vec{c_i}$ is the [CLS] of the concatenated word-tokens of the context utterances, using the [SEP] token to separate utterances in order to allow context utterances to attend to each other. That is, the context utterances are passed as a sequence $u_{i - k}[SEP]u_{i - k + 1}[SEP]\dots [SEP]u_{i - 1}$ , where $k$ is the length of the context and $u_{j}$ is the sequence of tokens in the $j^{th}$ utterance.
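The context assembly described above reduces to plain string manipulation. The literal separator string below is illustrative; the actual special tokens depend on the tokenizer used.

```python
SEP = "[SEP]"  # illustrative; e.g., RoBERTa's actual separator token is "</s>"

def build_context_input(utterances, i, k):
    """Concatenate the (up to) k utterances preceding u_i, separated by [SEP],
    so the context utterances can attend to each other in one encoder pass."""
    start = max(0, i - k)
    return SEP.join(utterances[start:i])

utts = ["u1 text", "u2 text", "u3 text", "u4 text"]
build_context_input(utts, 3, 2)  # "u2 text[SEP]u3 text"
```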
# 4.2.2 Context's Label Embedding
Figure 2: N-CoDiP architecture. Dotted arrows indicate optional components.

We define a label embedding function $Emb(\cdot) \in \mathbb{R}^d$ , where $d$ is the transformer embedding dimension (768 in our case). In cases where a previous utterance is unlabeled, we add an additional embedding that represents an untagged context utterance. We combine the label embeddings of the multiple utterances in the context using mean-pooling.
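A minimal NumPy sketch of this step follows; the dimension and the random initialization are illustrative, whereas in the model the embeddings are learned.

```python
import numpy as np

d = 8                                            # illustrative, not 768
tags = ["CounterArgument", "Aggressive", "UNTAGGED"]
rng = np.random.default_rng(0)
emb = {t: rng.normal(size=d) for t in tags}      # one vector per tag

def context_label_vector(context_label_sets):
    """Mean-pool label embeddings: first over the tags of each context
    utterance (an empty set maps to the UNTAGGED embedding), then over the
    context utterances themselves."""
    per_utterance = []
    for label_set in context_label_sets:
        vecs = [emb[t] for t in (label_set or {"UNTAGGED"})]
        per_utterance.append(np.mean(vecs, axis=0))
    return np.mean(per_utterance, axis=0)
```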
# 4.2.3 Context Integration with GRNs
Gated Residual Networks (GRN) (Lim et al., 2021) were recently proposed in order to combine a primary input vector with context vectors of multiple types and unknown relevance. GRNs were demonstrated to be especially beneficial when the dataset is relatively small and noisy.
Formally, given a vector $x$ and a context vector $c$ :
$$
\mathrm{GRN}(x, c) = \mathrm{LayerNorm}\left(x + \mathrm{GatedLinear}(\eta_{1})\right)
$$

$$
\eta_{1} = W_{1}\eta_{2} + b_{1}
$$

$$
\eta_{2} = \mathrm{ELU}\left(W_{2}x + W_{3}c + b_{2}\right)
$$

$$
\mathrm{GatedLinear}(\gamma) = \sigma\left(W_{4}\gamma + b_{4}\right) \odot \left(W_{5}\gamma + b_{5}\right)
$$
where $W_{i}(\cdot) + b_{i}$ is a linear transformation maintaining the input dimension $d$ , and $ELU(\cdot)$ is an Exponential Linear Unit (Clevert et al., 2015).
We use GRNs to combine the textual embedding of the context $(\vec{c_i})$ with pooled label embeddings $(\vec{l_i})$ , and again to combine the result with $\vec{u_i}$ , the embedding vector of the target utterance.
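A minimal NumPy sketch of the GRN equations above (Lim et al., 2021) follows; the weights are randomly initialized here purely for illustration.

```python
import numpy as np

# GRN(x, c) = LayerNorm(x + GatedLinear(eta1)); eta1 = W1 eta2 + b1;
# eta2 = ELU(W2 x + W3 c + b2); GatedLinear(g) = sigmoid(W4 g + b4) * (W5 g + b5)

def elu(z):
    return np.where(z > 0, z, np.exp(z) - 1.0)

def layer_norm(z, eps=1e-5):
    return (z - z.mean()) / np.sqrt(z.var() + eps)

def grn(x, c, params):
    W1, b1, W2, W3, b2, W4, b4, W5, b5 = params
    eta2 = elu(W2 @ x + W3 @ c + b2)
    eta1 = W1 @ eta2 + b1
    gate = 1.0 / (1.0 + np.exp(-(W4 @ eta1 + b4)))   # sigmoid gate
    return layer_norm(x + gate * (W5 @ eta1 + b5))   # residual + normalization

d = 4
rng = np.random.default_rng(1)
params = (rng.normal(size=(d, d)), rng.normal(size=d),
          rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d),
          rng.normal(size=(d, d)), rng.normal(size=d),
          rng.normal(size=(d, d)), rng.normal(size=d))
out = grn(rng.normal(size=d), rng.normal(size=d), params)
```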
# 4.2.4 Multi-head MLP
In the final layer, the combined representation is passed to $|T|$ independent MLP heads, one per label in the tagset. Given the last hidden layer output $z$ , the model's prediction for the $i$ 'th label is:
$$
\hat{l}_{i} = \sigma\left(W_{i,2}\,\mathrm{ReLU}(W_{i,1}z + b_{i,1}) + b_{i,2}\right)
$$
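The per-label heads can be sketched as follows, under illustrative (not the paper's actual) dimensions; `z` stands for the combined representation.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def mlp_heads(z, heads):
    """heads: one (W1, b1, w2, b2) tuple per label; each head is a two-layer
    MLP over the shared representation z, ending in a sigmoid likelihood."""
    return np.array([sigmoid(w2 @ np.maximum(W1 @ z + b1, 0.0) + b2)
                     for (W1, b1, w2, b2) in heads])

rng = np.random.default_rng(0)
n_labels, d_in, d_hid = 5, 6, 3
heads = [(rng.normal(size=(d_hid, d_in)), rng.normal(size=d_hid),
          rng.normal(size=d_hid), rng.normal())
         for _ in range(n_labels)]
likelihoods = mlp_heads(rng.normal(size=d_in), heads)  # one value per label
```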
# 4.2.5 Asymmetric Loss
The Asymmetric Loss was recently developed to handle unbalanced multi-label classification tasks in the field of computer vision (Ridnik et al., 2021). It applies a scaling decay factor to the loss in order to focus on harder examples, using different decay factors for instances with positive and negative gold labels: a larger decay factor is applied to the negative examples $(\gamma_{-} > \gamma_{+})$ . It also employs a hard lower cutoff $m$ on model confidence scores to discard too-easy examples.
Asymmetric loss was used for relation extraction between entities in a given document by Li et al. (2021), but it is still underexplored in NLP and has never been used for conversational discourse parsing.
It allows the model to learn the task despite positive-to-negative label imbalances, which often hinder neural network performance. The $AL$ (Asymmetric Loss) function is defined over the positive cases $L_{+}$ and the negative cases $L_{-}$ :
$$
AL(\hat{l}_{i}, l_{i}) = \begin{cases} (1 - \hat{l}_{i})^{\gamma_{+}} \log(\hat{l}_{i}) & l_{i} \in L_{+} \\ l_{m}^{\gamma_{-}} \log(1 - l_{m}) & l_{i} \in L_{-} \end{cases}
$$
$l_{m} = \max (\hat{l}_{i} - m,0)$ , and $m$ is the lower hard cutoff of model confidence scores for negative labels.
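A NumPy sketch of the loss as defined above; the clipping epsilon is an implementation detail added here for numerical stability, not part of the definition.

```python
import numpy as np

def asymmetric_loss(pred, gold, gamma_pos=1.0, gamma_neg=4.0, m=0.05):
    """pred: predicted probabilities per label; gold: 0/1 gold labels.
    Positives use focusing factor gamma_pos; negatives use the larger
    gamma_neg plus the hard margin m that zeroes out too-easy negatives."""
    pred = np.clip(pred, 1e-7, 1 - 1e-7)
    l_m = np.maximum(pred - m, 0.0)              # shifted score for negatives
    pos = (1 - pred) ** gamma_pos * np.log(pred)
    neg = l_m ** gamma_neg * np.log(1 - l_m)
    return -np.where(gold == 1, pos, neg).sum()  # negate the log-likelihood terms
```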
# 4.2.6 Auxiliary Next Message Prediction Task
Incorporating an auxiliary prediction task into the training pipeline often improves results, especially over relatively small datasets on which pretrained models tend to overfit (Chronopoulou et al., 2019; Schick and Schütze, 2021). Drawing inspiration from Henderson et al. (2020), we incorporate Next Message Prediction (NMP) as an auxiliary task. In NMP, the model maximizes the cosine similarity of two consecutive messages in the conversation tree and minimizes that of non-consecutive ones. That is, the training objective of this auxiliary task is to minimize $L_{NMP}$ , defined as:
$$
L_{NMP} = \sum_{i = 1}^{k} \sum_{j = 1}^{k'} S(u_{i}, u_{j}) - \sum_{i = 1}^{k} S(u_{i}, u_{i+1})
$$
where $S$ is a similarity function (we use cosine similarity), $u_{i+1}$ is the message consecutive to $u_i$ , $k$ is the batch size for the main Discourse Parsing (DP) task, and $k'$ is the number of negative samples $u_j$ , which are simply the other utterances in the batch. We also attempted to add more challenging negative samples, i.e., samples drawn from the same conversation tree as $u_i$ and therefore assumed to belong to the same semantic domain. The final loss function to be minimized in training is:
$$
L = \alpha L_{DP} + (1 - \alpha) L_{NMP}
$$
$L_{DP}$ is the Asymmetric loss described in Section 4.2.5, and $\alpha \in [0.95, 0.99]$ is a weighting factor for the two objectives.
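The in-batch NMP term can be sketched as follows, assuming precomputed embeddings: `next_embs[i]` is the embedding of the message consecutive to `utt_embs[i]`, and the other next-messages in the batch serve as negatives.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def nmp_loss(utt_embs, next_embs):
    """Reward similarity with the consecutive message, penalize similarity
    with the other (negative) next-messages in the batch."""
    k = len(utt_embs)
    loss = 0.0
    for i in range(k):
        loss -= cosine(utt_embs[i], next_embs[i])      # positive pair
        for j in range(k):
            if j != i:                                  # in-batch negatives
                loss += cosine(utt_embs[i], next_embs[j])
    return loss
```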
# 4.2.7 Speakers' Turn Taking
We expect that the conversational dynamics in a dialogue between only two speakers may differ from those in a multi-speaker dialogue. Moreover, even in a multi-speaker dialogue, the discourse between speakers $A$ and $B$ may differ from the discourse between $A$ and $C$ . We therefore add $k + 1$ one-hot vectors representing the speakers of the target utterance $u_{i}$ and the $k$ preceding utterances used for context. That is, given $k = 3$ and the sequence of utterances $u_{i - 3}^{A}u_{i - 2}^{B}u_{i - 1}^{C}u_{i}^{A}$ (the superscript denotes the speaker), we get the following vectors:
$$
[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [1, 0, 0, 0]
$$
indicating that $u_{i}$ and $u_{i-3}$ were produced by the same speaker (A), while $u_{i-2}$ and $u_{i-1}$ were produced by two other speakers (B and C). These vectors are concatenated and appended to the final combined representation vector.
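The turn-taking encoding above can be sketched directly:

```python
def speaker_one_hots(speakers):
    """Map the k+1 speakers of the context and target utterances to one-hot
    vectors, reusing an index when the same speaker appears again."""
    index = {}
    for s in speakers:
        index.setdefault(s, len(index))
    dim = len(speakers)                       # k + 1 positions
    return [[1 if index[s] == j else 0 for j in range(dim)]
            for s in speakers]

speaker_one_hots(["A", "B", "C", "A"])
# [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [1, 0, 0, 0]]
```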
# 5 Experimental Settings
Baselines We compare our N-CoDiP architecture to results previously reported by Zakharov et al. (2021). We focus on two sets of reported results:
1. Exhaustive Grid (X-Grid) The best results reported by Zakharov et al. (2021), achieved using a different model for each label, extensive feature engineering, external resources (LIWC, PDTB discourse labels), an Oracle providing preceding and collocated labels, and an exhaustive grid search in a binary classification setting (per label).
2. Zakharov Transformer (Z-TF) The same Transformer architecture used by Zakharov et al. (2021), applied in a "clean" setting, that is, without the use of an oracle or special (external) features. This baseline allows a proper evaluation of our model against prior work.
Pretrained Models We consider two pretrained models for text representation: the vanilla RoBERTa (Liu et al., 2019) and RoBERTa-SimCSE, which was optimized for sentence embedding (Gao et al., 2021). We indicate the pretrained model in subscript: $CoDiP_V$ for the vanilla RoBERTa and $CoDiP_{CSE}$ for the SimCSE version.
Evaluation Metrics In line with previous work, we use the F-score $(\mathrm{F}_1)$ for individual labels. We report both macro and weighted F-scores aggregated by label category. The macro F-score is the unweighted mean of the per-label scores, while the weighted F-score weights each label's score by its support:
$$
F_{Macro}(F_{1}, \dots, F_{k}) = \frac{\sum_{i = 1}^{k} F_{i}}{k}
$$

$$
F_{Weighted}(F_{1}, \dots, F_{k}) = \sum_{i = 1}^{k} F_{i} \cdot w_{i}
$$
where $k$ is the number of labels in a particular label category (e.g., Promoting Discourse, Disagreement Strategies), and $w_{i}$ is the prior probability of a specific label $l_{i}$ being true in the dataset, which comprises $n$ samples:
$$
w_{i} = \frac{\sum_{j = 1}^{n} \mathbb{1}\left[l_{i}^{(j)} = 1\right]}{n}
$$
The prior probabilities are presented in Table 3 in Appendix A.
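The two aggregations reduce to a few lines; per-label F-scores and priors are assumed given.

```python
def macro_f(scores):
    """Unweighted mean of per-label F-scores."""
    return sum(scores) / len(scores)

def weighted_f(scores, priors):
    """Each label's F-score weighted by its prior probability w_i."""
    return sum(f * w for f, w in zip(scores, priors))
```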
Execution Settings We trained the N-CoDiP model for 4 epochs, optimized using the AdamW optimizer (Loshchilov and Hutter, 2018) with a batch size of 32. We used linear warm-up and decay of the learning rate: the warm-up period spans the first $30\%$ of the training iterations, reaching a maximal learning rate of $\eta = 10^{-5}$ , which then decays back to zero over the remaining $70\%$ of the iterations. We restrict our experimentation to contexts of up to $k$ utterances and set $k = 4$ . For the Asymmetric loss we used the default parameters $\gamma_{+} = 1; \gamma_{-} = 4; m = 0.05$ .

<table><tr><td>Category</td><td colspan="2">N-CoDiPALCSE</td><td colspan="2">N-CoDiPBCSE</td><td colspan="2">N-CoDiPV</td><td colspan="2">Z-TF</td><td colspan="2">X-Grid</td></tr><tr><td>All</td><td>0.397†</td><td>0.573</td><td>0.371</td><td>0.563</td><td>0.378</td><td>0.565</td><td>0.113</td><td>0.338</td><td>0.382</td><td>0.606†</td></tr><tr><td>Promoting Discussion</td><td>0.461</td><td>0.709</td><td>0.426</td><td>0.692</td><td>0.439</td><td>0.690</td><td>0.158</td><td>0.546</td><td>0.560†</td><td>0.833†</td></tr><tr><td>Low Responsiveness</td><td>0.312†</td><td>0.337†</td><td>0.276</td><td>0.304</td><td>0.284</td><td>0.309</td><td>0.058</td><td>0.057</td><td>0.308</td><td>0.335</td></tr><tr><td>Tone and Style</td><td>0.346†</td><td>0.370†</td><td>0.320</td><td>0.352</td><td>0.334</td><td>0.361</td><td>0.054</td><td>0.064</td><td>0.304</td><td>0.326</td></tr><tr><td>Disagreement Strategies</td><td>0.422†</td><td>0.507†</td><td>0.408</td><td>0.497</td><td>0.407</td><td>0.499</td><td>0.142</td><td>0.170</td><td>0.370</td><td>0.451</td></tr></table>
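The warm-up/decay schedule from the Execution Settings can be sketched as a function of the training step. This is a hypothetical re-implementation, not the authors' code.

```python
def lr_at(step, total_steps, eta_max=1e-5, warmup_frac=0.3):
    """Linear warm-up over the first warmup_frac of steps up to eta_max,
    then linear decay back to zero over the remaining steps."""
    warmup_steps = int(total_steps * warmup_frac)
    if step < warmup_steps:
        return eta_max * (step + 1) / warmup_steps
    remaining = total_steps - warmup_steps
    return eta_max * (total_steps - step - 1) / remaining
```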
Computational Cost We trained our final implementation of the model 20 times (4 model variations $\times$ 5-fold cross validation), as well as additional implementations during its development, each run taking between 2 and 3 hours on an Nvidia GeForce 12GB GPU. The model contains 130,601,503 parameters.
# 6 Results and Analysis
# 6.1 Results
All reported results are the average of a 5-fold cross validation. Partition of the data in each fold was done based on discussion trees rather than conversation branches in order to avoid leakage from the train set to the test set.
Macro and weighted F-scores over the whole tagset and by label category are provided in Table 1. Prior probabilities and detailed results for each label are omitted for clarity and due to space constraints, but are available in Appendix A.
The results that were reported by prior work (X-Grid) are presented as a guide, but are shaded since the X-Grid setting does not allow a fair comparison. We expand on this in the discussion.
N-CoDiP $_{CSE}$ consistently outperforms all other unified models trained to predict all labels without any prior or external knowledge, in both macro and weighted scores. Moreover, N-CoDiP $_{CSE}$ outperforms X-Grid on three of the four label categories (Low Responsiveness, Tone & Style, Disagreement Strategies), and obtains a higher macro-average F-score aggregated over all labels.
Evaluating the impact of the loss function (Asymmetric vs. Binary Cross-Entropy), we find that the Asymmetric loss is consistently better. The most significant improvements are achieved on the Low Responsiveness and Tone & Style categories, for which the priors are relatively low (see Table 3 in Appendix A). This is also evident from comparing the gains in the macro averages vs. the gains in the weighted averages: 0.026 and 0.01, respectively.
Also, for most labels the pretrained RoBERTa-SimCSE achieves better results than the vanilla RoBERTa, gaining 0.019 macro-F points but only 0.012 points in the weighted score.
Table 1: Average F-scores per label category for each model. Values are arranged as (Macro, Weighted) pairs. N-CoDiP architectures differ in the loss function used: Asymmetric Loss (AL) or Binary Cross Entropy (BCE), and the pretrained model used: Contrastive Sentence Embedding (CSE) or the vanilla RoBERTa (V); Z-TF is the BERT architecture used by Zakharov et al. (2021); X-Grid are the best results reported in prior work using an oracle and applying an exhaustive grid search over parameters and models for each of the labels. A $\dagger$ indicates best results overall. Best results achieved by a transformer architecture without an oracle or feature engineering are in bold face.
<table><tr><td>Category</td><td colspan="2">N-CoDiPk=1</td><td colspan="2">N-CoDiPk=4</td></tr><tr><td>All</td><td>0.397</td><td>0.573</td><td>0.389</td><td>0.573</td></tr><tr><td>Promoting Disc.</td><td>0.461</td><td>0.709</td><td>0.426</td><td>0.699</td></tr><tr><td>Low Resp.</td><td>0.312</td><td>0.337</td><td>0.298</td><td>0.328</td></tr><tr><td>Tone and Style</td><td>0.346</td><td>0.370</td><td>0.338</td><td>0.328</td></tr><tr><td>Disagreement Str.</td><td>0.422</td><td>0.507</td><td>0.422</td><td>0.506</td></tr></table>
Table 2: Average F-scores per label category for the N-CoDiP model given $k = 1$ context length and $k = 4$ context length. Values are (Macro, Weighted) pairs.
# 6.2 Discussion
N-CoDiP vs. X-Grid While N-CoDiP achieves the best results in most cases, X-Grid achieves a higher weighted score aggregated over all labels and significantly outperforms CoDiP in the Promoting Discussion category. It is important to reiterate that the X-Grid setting does not allow a fair comparison. Not only was each of the X-Grid results obtained by a different classifier based on a different feature set; the setting also combines heavy feature engineering of external resources such as LIWC categories (Tausczik and Pennebaker, 2010) and PDTB labels (Prasad et al., 2008; Nie et al., 2019), an Oracle providing preceding and collocated labels (classification is binary per label), and an exhaustive grid search over the model family, features, and hyperparameters. In contrast, the rest of the results in Table 1 were achieved using a single unified model without incorporating any auxiliary resources except RoBERTa, and with no Oracle hints.
N-CoDiP vs. Z-TF Although the results presented above establish the effectiveness of a single unified model, we observe a stark difference in performance between all variants of the N-CoDiP architecture and Z-TF. This difference raises the question of what in the architecture makes such an impact, given that both approaches rely on the same pretrained BERT-based architecture. We hypothesize that the combination of the multi-head classifier and the Asymmetric loss objective (Sections 4.2.4 and 4.2.5) drives CoDiP's performance up. The individual classifiers add another layer which enables the model to learn a unique final hidden representation for each label; we have found this to be quite effective in mitigating label bias. Indeed, we observe that even though Z-TF is inferior to CoDiP, it does perform reasonably well on the most frequent label (CounterArgument; $p = 0.635$ , see Table 3 in Appendix A). In addition, the Asymmetric loss function provides significant gains for less common labels, supporting the hypothesis that the poor Z-TF performance stems from label imbalance, a common issue in multi-class neural network classifiers (Xiao et al., 2019).
Finally, unlike the autoregressive architecture of the CoDiP models, Z-TF naively uses the Transformer as a non-autoregressive classifier. Consequently, while it processes preceding utterances to provide context to the target utterance, it does not leverage the labels that were predicted for the context.
Context length and multi-modality Surprisingly, we found that adding as many context utterances as the encoder can take degraded performance compared to using only the single immediate context $(k = 1)$ . A comparison between context lengths of 1 and 4 is provided in Table 2. Similarly, we find it surprising that adding the author turn-taking information (see Section 4.2.7) did not yield any improvement. We believe that the ways contexts (and different contextual signals) are integrated and attended to should be further investigated in order to leverage the full potential of the information encoded in the context.
The unimpressive contribution of the auxiliary task Incorporating an auxiliary prediction task into the training pipeline is reported to often improve results, especially when fine-tuning over relatively small datasets (Chronopoulou et al., 2019; Henderson et al., 2020; Schick and Schütze, 2021). We experimented with a number of settings for utterance proximity prediction to no avail; results were not improved in any significant way. We plan to explore this further in the future.
Broader Impact: Hitting close to home In order to highlight the importance and the broader impact of the discursive framework proposed in this paper, we conclude the discussion with an illustrative account of a real-world example that hits close to home. This manuscript is being completed in the wake of the tragic terrorist attack by Hamas that claimed the lives of over 700 civilians and resulted in 251 hostages, from infants to the elderly. The ensuing toxic online discourse has provoked a range of emotional reactions, from defensiveness to aggression. The following series of tweets was posted by an Israeli user (the original Hebrew is available in Appendix C; the authors of the paper are not familiar with the user):
(i) 'A Czech acquaintance posted graffiti stating no one is free until Palestine is free. I responded that she seems quite free in her protected European home to express strong opinions about a distant conflict she knows little about. I expected her to block me, but she responded.' (ii) $\gg$ 'She apologized for what happened on 7/10 and said she does not condone terrorism against Israel. However, she feels no one is addressing the suffering of innocent people in Gaza. Instead of an argument, we had a conversation, and she promised to be more thoughtful about what she shares. I gained a better understanding.' (iii) $\gg$ 'Reaching out for dialogue is more effective than attacking and blocking. This wasn't an attempt at propaganda; I didn't try to convince her I was right. I'm glad I responded, shared my perspective, and listened to hers. We should all talk more and argue less.'

The account of this conversation highlights the principles of contentious productive discourse. While the first response can be labeled with Sarcasm, Ridicule and Irrelevance claim as its disagreement strategy under the Intensifying tension category, the interlocutor defuses the tension by applying DoubleVoicing and ViableTransformation, leading to a more productive exchange. The user recounting this exchange ends by reflecting on it, demonstrating the principles of Bakhtinian Dialogism (Bakhtin, 1981) in action.
# 7 Conclusion and Future Work
Theoretical frameworks and empirical evidence motivate the need for a discourse annotation schema that reflects discursive moves in contentious discussions. We introduced N-CoDiP, a unified Non-Convergent-Discussion Parser that outperforms previous work on a discourse parsing task based on the scheme recently developed and shared by Zakharov et al. (2021).
We have demonstrated that using GRN layers, previously used for multi-horizon time-series forecasting by Lim et al. (2021), and an asymmetric loss function, previously used in computer vision by Ridnik et al. (2021), is especially beneficial to the task at hand, given the relatively small dataset, the imbalanced tagset, and the multi-label setting.
Future work will take theoretical and computational trajectories. A robust error analysis will be conducted with respect to the theoretical framework behind the annotation scheme. Computationally, we will investigate ways to better leverage the abundance of structured unlabeled data (thousands of discussions on CMV and other platforms) through auxiliary tasks, and to achieve a better integration of the context's turn-taking structure with the model.
# 8 Limitations
The main limitation of this paper is the size of the dataset, given the large and imbalanced tagset and the complex and nuanced discourse annotation scheme. We believe that expanding the dataset, and perhaps reconsidering some nuances of the annotation scheme, would mitigate the issue.
# References
Khalid Al Khatib, Henning Wachsmuth, Kevin Lang, Jakob Herpel, Matthias Hagen, and Benno Stein. 2018. Modeling deliberative argumentation strategies on Wikipedia. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 2545-2555.
Ofer Arazy, Lisa Yeo, and Oded Nov. 2013. Stay on the wikipedia task: When task-related disagreements slip into personal and procedural conflicts. Journal of the American Society for Information Science and Technology, 64(8):1634-1648.
Nicholas Asher, Julie Hunter, Mathieu Morey, Farah Benamara, and Stergos Afantenos. 2016. Discourse structure and dialogue acts in multiparty dialogue: the stac corpus. In 10th International Conference on Language Resources and Evaluation (LREC 2016), pages 2721-2727.
Mikhail M Bakhtin. 1981. The dialogic imagination: Four essays by mm bakhtin (m. holquist, ed.; c. emerson & m. holquist, trans.).
Brigid Barron. 2003. When smart groups fail. The journal of the learning sciences, 12(3):307-359.
Aviv Ben-Haim and Oren Tsur. 2021. Open-mindedness and style coordination in argumentative discussions. In Proceedings of the 16th EACL Conference: Main Volume, pages 1876-1886.
Sasha Calhoun, Jean Carletta, Jason M Brenier, Neil Mayo, Dan Jurafsky, Mark Steedman, and David Beaver. 2010. The nxt-format switchboard corpus: a rich resource for investigating the syntax, semantics, pragmatics and prosody of dialogue. Language resources and evaluation, 44(4):387-419.
Eshwar Chandrasekharan, Shagun Jhaver, Amy Bruckman, and Eric Gilbert. 2022. Quarantined! examining the effects of a community-wide moderation intervention on reddit. ACM Transactions on Computer-Human Interaction (TOCHI), 29(4):1-26.
Bodong Chen, Yu-Hui Chang, Fan Ouyang, and Wanying Zhou. 2018. Fostering student engagement in online discussion through social learning analytics. The Internet and Higher Education, 37:21-30.
Alexandra Chronopoulou, Christos Baziotis, and Alexandros Potamianos. 2019. An embarrassingly simple approach for transfer learning from pretrained language models. In Proceedings of NAACL-HLT, pages 2089-2095.
Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. 2015. Fast and accurate deep network learning by exponential linear units (elus). In 4th International Conference on Learning Representations, ICLR 2016.
Pierre Dillenbourg and Frank Fischer. 2007. Computer-supported collaborative learning: The basics. Zeitschrift für Berufs-und Wirtschaftspädagogik, 21:111-130.
Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. Simcse: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894-6910.
John J Godfrey, Edward C Holliman, and Jane McDaniel. 1992. Switchboard: Telephone speech corpus for research and development. In Acoustics, Speech, and Signal Processing, IEEE International Conference on, volume 1, pages 517-520. IEEE Computer Society.
Matthew Henderson, Inigo Casanueva, Nikola Mrkšić, Pei-Hao Su, Tsung-Hsien Wen, and Ivan Vulić. 2020. Convert: Efficient and accurate conversational representations from transformers. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2161-2174.
Sara Hennessy, Sylvia Rojas-Drummond, Rupert Higham, Ana María Márquez, Fiona Maine, Rosa María Ríos, Rocio García-Carrón, Omar Torreblanca, and María José Barrera. 2016. Developing a coding scheme for analysing classroom dialogue across educational contexts. Learning, Culture and Social Interaction, 9:16-44.
Xinyu Hua and Lu Wang. 2018. Neural argument generation augmented with externally retrieved evidence. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 219-230.
Yohan Jo, Shivani Poddar, Byungsoo Jeon, Qinlan Shen, Carolyn Penstein Rose, and Graham Neubig. 2018. Attentive interaction model: Modeling changes in view in argumentation. In NAACL-HLT.
Daniel Jurafsky, Elizabeth Shriberg, and Debra Biasca. 1997. Switchboard-damsl labeling project coder's manual.
Taraneh Khazaei, Lu Xiao, and Robert Mercer. 2017. Writing to persuade: Analysis and detection of persuasive discourse. IConference 2017 Proceedings.
Yifat Ben-David Kolikant and Sarah Pollack. 2015. The dynamics of non-convergent learning with a conflicting other: Internally persuasive discourse as a framework for articulating successful collaborative learning. Cognition and Instruction, 33(4):322-356.
Yifat Ben-David Kolikant and Sarah Pollack. 2017. Learning to think historically through a conflict-based biethnic collaborative learning environment. In (Re) Constructing Memory: Education, Identity, and Conflict, pages 209-237. Springer.
Emily Kubin and Christian von Sikorski. 2021. The role of (social) media in political polarization: a systematic review. Annals of the International Communication Association, 45(3):188-206.
Srijan Kumar, Justin Cheng, and Jure Leskovec. 2017. Antisocial behavior on the web: Characterization and detection. In Proceedings of the 26th International Conference on World Wide Web Companion, pages 947-950.
John Lawrence and Chris Reed. 2020. Argument mining: A survey. Computational Linguistics, 45(4):765-818.
Jiaqi Li, Ming Liu, Min-Yen Kan, Zihao Zheng, Zekun Wang, Wenqiang Lei, Ting Liu, and Bing Qin. 2020. Molweni: A challenge multiparty dialogues-based machine reading comprehension dataset with discourse structure. In Proceedings of the 28th International Conference on Computational Linguistics, pages 2642-2652.
Jingye Li, Kang Xu, Fei Li, Hao Fei, Yafeng Ren, and Donghong Ji. 2021. Mrn: A locally and globally mention-based reasoning network for document-level relation extraction. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1359-1370.
Bryan Lim, Sercan Ö Arik, Nicolas Loeff, and Tomas Pfister. 2021. Temporal fusion transformers for interpretable multi-horizon time series forecasting. International Journal of Forecasting, 37(4):1748-1764.
Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollar. 2017. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pages 2980-2988.
Yang Liu, Kun Han, Zhao Tan, and Yun Lei. 2017. Using context information for dialog act classification in dnn framework. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2170-2178.
|
| 321 |
+
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
|
| 322 |
+
Philipp Lorenz-Spreen, Lisa Oswald, Stephan Lewandowsky, and Ralph Hertwig. 2022. A systematic review of worldwide causal and correlational evidence on digital media and democracy. Nature human behaviour, pages 1-28.
|
| 323 |
+
Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In International Conference on Learning Representations.
|
| 324 |
+
Ryan Lowe, Nissan Pow, Iulian Vlad Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 285-294.
|
| 325 |
+
Jingyan Lu, Ming Ming Chiu, and Nancy Wai Ying Law. 2011. Collaborative argumentation and justifications: A statistical discourse analysis of online discussions. Computers in Human Behavior, 27(2):946-955.
|
| 326 |
+
Elena Musi, Debanjan Ghosh, and Smaranda Muresan. 2018. Changemyview through concessions: Do concessions increase persuasion? Dialogue & Discourse, 9(1):107-127.
|
| 327 |
+
Allen Nie, Erin Bennett, and Noah Goodman. 2019. DisSent: Learning sentence representations from explicit discourse relations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4497-4510, Florence, Italy. Association for Computational Linguistics.
|
| 328 |
+
|
| 329 |
+
Walter C Parker. 2006. Public discourses in schools: Purposes, problems, possibilities. Educational Researcher, 35(8):11-18.
|
| 330 |
+
Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind K Joshi, and Bonnie L Webber. 2008. The penn discourse treebank 2.0. In LREC.
|
| 331 |
+
Tal Ridnik, Emanuel Ben-Baruch, Nadav Zamir, Asaf Noy, Itamar Friedman, Matan Protter, and Lihi Zelnik-Manor. 2021. Asymmetric loss for multi-label classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 82-91.
|
| 332 |
+
Tulika Saha, Saurabh Srivastava, Mauajama Firdaus, Sriparna Saha, Asif Ekbal, and Pushpak Bhattacharyya. 2019. Exploring machine learning and deep learning frameworks for task-oriented dialogue act classification. In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1-8. IEEE.
|
| 333 |
+
Bishal Santra, Potnuru Anusha, and Pawan Goyal. 2021. Hierarchical transformer for task oriented dialog systems. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5649-5658.
|
| 334 |
+
Timo Schick and Hinrich Schütze. 2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 255-269.
|
| 335 |
+
Baruch B Schwarz, Naomi Prusak, Osama Swidan, Adva Livny, Kobi Gal, and Avi Segal. 2018. Orchestrating the emergence of conceptual learning: A case study in a geometry class. International Journal of Computer-Supported Collaborative Learning, 13(2):189-211.
|
| 336 |
+
Noam Slonim, Yonatan Bilu, Carlos Alzate, Roy BarHaim, Ben Bogin, Francesca Bonin, Leshem Choshen, Edo Cohen-Karlik, Lena Dankin, Lilach Edelstein, et al. 2021. An autonomous debating system. Nature, 591(7850):379-384.
|
| 337 |
+
Chenhao Tan, Vlad Niculae, Cristian Danescu-Niculescu-Mizil, and Lillian Lee. 2016. Winning arguments: Interaction dynamics and persuasion strategies in good-faith online discussions. In Proceedings of the 25th international conference on world wide web, pages 613-624.
|
| 338 |
+
Adane Nega Tarekegn, Mario Giacobini, and Krzysztof Michalak. 2021. A review of methods for imbalanced multi-label classification. Pattern Recognition, 118:107965.
|
| 339 |
+
Yla R. Tausczik and James W. Pennebaker. 2010. The Psychological Meaning of Words: LIWC and Computerized Text Analysis Methods. Journal of Language and Social Psychology, 29(1):24-54.
|
| 340 |
+
Stephanie Teasley, Frank Fischer, Pierre Dillenbourg, Manu Kapur, Michelene Chi, Armin Weinberger, and Karsten Stegmann. 2008. Cognitive convergence in
|
| 341 |
+
|
| 342 |
+
collaborative learning. Proceedings of the Eighth International Conference for the Learning Sciences - ICLS 2008, 3:360-367.
|
| 343 |
+
Stefan Trausan-Matu, Mihai Dascalu, and Traian Rebedea. 2014. Polycafe – automatic support for the polyphonic analysis of csc1 chats. International Journal of Computer-Supported Collaborative Learning, 9(2):127-156.
|
| 344 |
+
Alyssa Friend Wise and Ming Ming Chiu. 2011. Analyzing temporal patterns of knowledge construction in a role-based online discussion. International Journal of Computer-Supported Collaborative Learning, 6(3):445-470.
|
| 345 |
+
Lu Xiao and Taraneh Khazaei. 2019. Changing others' beliefs online: Online comments' persuasiveness. In Proceedings of the 10th International Conference on Social Media and Society, pages 92-101.
|
| 346 |
+
Zheng Xiao, L Wang, and JY Du. 2019. Improving the performance of sentiment classification on imbalanced datasets with transfer learning. IEEE Access, 7:28281-28290.
|
| 347 |
+
Wenshuo Yang, Jiyi Li, Fumiyo Fukumoto, and Yanming Ye. 2020. Hscnn: A hybrid-siamese convolutional neural network for extremely imbalanced multi-label text classification. In Proceedings of the 2020 conference on empirical methods in natural language processing (EMNLP), pages 6716-6722.
|
| 348 |
+
Stepan Zakharov, Omri Hadar, Tovit Hakak, Dina Grossman, Yifat Ben-David Kolikant, and Oren Tsur. 2021. Discourse parsing for contentious, non-convergent online discussions. In Proceedings of the International AAAI Conference on Web and Social Media, volume 15, pages 853-864.
|
| 349 |
+
Piotr Želasko, Raghavendra Pappagari, and Najim Dehak. 2021. What helps transformers recognize conversational structure? importance of context, punctuation, and labels in dialog act recognition. Transactions of the Association for Computational Linguistics, 9:1179-1195.
|
| 350 |
+
Justine Zhang, Jonathan Chang, Cristian Danescu-Niculescu-Mizil, Lucas Dixon, Yiqing Hua, Dario Taraborelli, and Nithum Thain. 2018. Conversations gone awry: Detecting early signs of conversational failure. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1350-1361.
|
| 351 |
+
|
| 352 |
+
# A F-scores by Label
<table><tr><td>Label/Category</td><td>N-CoDiP</td><td>N-CoDiP BCE</td><td>N-CoDiP BASE</td><td>Z-TF</td><td>X-Grid</td><td>Priors</td></tr><tr><td>1. Promotes discussion</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>ViableTransformation</td><td>0.118</td><td>0.09</td><td>0.092</td><td>0</td><td>0.158†</td><td>0.01</td></tr><tr><td>Answer</td><td>0.413</td><td>0.366</td><td>0.397</td><td>0.522†</td><td>0.522†</td><td>0.014</td></tr><tr><td>Extension</td><td>0.286</td><td>0.258</td><td>0.263</td><td>0.507</td><td>0.549†</td><td>0.022</td></tr><tr><td>AttackValidity</td><td>0.506</td><td>0.435</td><td>0.48</td><td>0.143</td><td>0.51†</td><td>0.028</td></tr><tr><td>Moderation</td><td>0.353</td><td>0.277</td><td>0.326</td><td>0.027</td><td>0.42†</td><td>0.036</td></tr><tr><td>RequestClarification</td><td>0.488</td><td>0.482</td><td>0.471</td><td>0.160</td><td>0.731†</td><td>0.038</td></tr><tr><td>Personal</td><td>0.646</td><td>0.644</td><td>0.654†</td><td>0.066</td><td>0.396</td><td>0.046</td></tr><tr><td>Clarification</td><td>0.524</td><td>0.466</td><td>0.459</td><td>0</td><td>0.817†</td><td>0.109</td></tr><tr><td>CounterArgument</td><td>0.818</td><td>0.813</td><td>0.805</td><td>0.775</td><td>0.939†</td><td>0.635</td></tr><tr><td>2. 
Low responsiveness</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>NoReasonDisagreement</td><td>0.349</td><td>0.284</td><td>0.266</td><td>0</td><td>0.4†</td><td>0.01</td></tr><tr><td>AgreeToDisagree</td><td>0.39†</td><td>0.261</td><td>0.3</td><td>0</td><td>0.2</td><td>0.014</td></tr><tr><td>Repetition</td><td>0.118</td><td>0.118</td><td>0.136</td><td>0</td><td>0.161†</td><td>0.016</td></tr><tr><td>BAD</td><td>0.217</td><td>0.256</td><td>0.257†</td><td>0</td><td>0.114</td><td>0.018</td></tr><tr><td>NegTransformation</td><td>0.169</td><td>0.131</td><td>0.151</td><td>0</td><td>0.406†</td><td>0.024</td></tr><tr><td>Convergence</td><td>0.630†</td><td>0.606</td><td>0.593</td><td>0.108</td><td>0.565</td><td>0.028</td></tr><tr><td>3. Tone and Style</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>WQualifiers</td><td>0.351†</td><td>0.274</td><td>0.343</td><td>0.029</td><td>0.118</td><td>0.024</td></tr><tr><td>Ridicule</td><td>0.236†</td><td>0.193</td><td>0.207</td><td>0.029</td><td>0.11</td><td>0.029</td></tr><tr><td>Sarcasm</td><td>0.212</td><td>0.216†</td><td>0.209</td><td>0</td><td>0.164</td><td>0.048</td></tr><tr><td>Aggressive</td><td>0.27†</td><td>0.251</td><td>0.265</td><td>0</td><td>0.17</td><td>0.051</td></tr><tr><td>Positive</td><td>0.532</td><td>0.541†</td><td>0.515</td><td>0.19</td><td>0.336</td><td>0.058</td></tr><tr><td>Complaint</td><td>0.475†</td><td>0.449</td><td>0.467</td><td>0.077</td><td>0.343</td><td>0.064</td></tr><tr><td>4. 
Disagreement Strategies</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>Alternative</td><td>0.192†</td><td>0.178</td><td>0.184</td><td>0</td><td>0.133</td><td>0.018</td></tr><tr><td>RephraseAttack</td><td>0.179</td><td>0.132</td><td>0.183†</td><td>0</td><td>0.077</td><td>0.022</td></tr><tr><td>DoubleVoicing</td><td>0.162</td><td>0.146</td><td>0.179†</td><td>0</td><td>0.179†</td><td>0.026</td></tr><tr><td>Softening</td><td>0.293</td><td>0.265</td><td>0.288</td><td>0.014</td><td>0.379†</td><td>0.029</td></tr><tr><td>Sources</td><td>0.779</td><td>0.774</td><td>0.746</td><td>0.730</td><td>0.884†</td><td>0.045</td></tr><tr><td>AgreeBut</td><td>0.473</td><td>0.481†</td><td>0.459</td><td>0</td><td>0.106</td><td>0.058</td></tr><tr><td>Irrelevance</td><td>0.286†</td><td>0.262</td><td>0.22</td><td>0</td><td>0.172</td><td>0.059</td></tr><tr><td>Nitpicking</td><td>0.760</td><td>0.763</td><td>0.786</td><td>0.447</td><td>0.79†</td><td>0.061</td></tr><tr><td>DirectNo</td><td>0.458†</td><td>0.443</td><td>0.412</td><td>0</td><td>0.259</td><td>0.08</td></tr><tr><td>CriticalQuestion</td><td>0.636</td><td>0.635</td><td>0.618</td><td>0.224</td><td>0.722†</td><td>0.128</td></tr></table>
Table 3: Mean 5-fold cross-validation F-scores for the individual labels in the tag-set. N-CoDiP architectures differ in the loss function used, Asymmetric Loss (AL) or Binary Cross-Entropy (BCE), and in the pretrained model used, Contrastive Sentence Embedding (CSE) or vanilla RoBERTa (V); Z-TF is the BERT architecture used by Zakharov et al. (2021); X-Grid gives the best results reported in prior work, obtained with an oracle and an exhaustive grid search over parameters and models for each label. A $\dagger$ indicates the best result overall. Best results achieved by a transformer architecture without an oracle or feature engineering are in bold face. Prior label probabilities are listed in the rightmost column.
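The N-CoDiP variants in Table 3 differ only in the training loss: Binary Cross-Entropy versus the Asymmetric Loss of Ridnik et al. (2021), which down-weights easy negatives in sparse multi-label settings such as this tag-set. For reference, a minimal sketch of that loss in pure Python with the default hyperparameters from that paper; this is our illustrative implementation, not the authors' code:

```python
import math

def asymmetric_loss(probs, labels, gamma_pos=0.0, gamma_neg=4.0, clip=0.05, eps=1e-8):
    """Asymmetric loss (ASL) summed over the labels of one example.

    probs:  predicted probability per label (after a sigmoid)
    labels: binary ground-truth vector of the same length
    """
    total = 0.0
    for p, y in zip(probs, labels):
        if y == 1:
            # positive term: focal down-weighting with exponent gamma_pos
            total += -((1.0 - p) ** gamma_pos) * math.log(max(p, eps))
        else:
            # probability shifting: negatives with p <= clip contribute nothing
            p_m = max(p - clip, 0.0)
            total += -(p_m ** gamma_neg) * math.log(max(1.0 - p_m, eps))
    return total
```

With `gamma_neg=4` and the shifting margin `clip`, the many near-zero negatives of a rare label contribute essentially nothing, while confident errors still dominate the gradient, which is the imbalance behavior the caption's AL/BCE comparison probes.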
<table><tr><td>Description</td><td>Tag</td></tr><tr><td>1. Discursive moves that potentially promote the discussion</td><td></td></tr><tr><td>Moderating/regulating, e.g. “let's get back to the topic”</td><td>Moderation</td></tr><tr><td>Request for clarification</td><td>RequestClarification</td></tr><tr><td>Attack on the validity of the argument (“Who says?”)</td><td>AttackValidity</td></tr><tr><td>Clarification of previous statement (utterance)</td><td>Clarification</td></tr><tr><td>Informative answer of a question asked (rather than clarifying)</td><td>Answer</td></tr><tr><td>A disagreement which is reasoned, a refutation. Can be accompanied by disagreement strategies</td><td>CounterArgument</td></tr><tr><td>Building/extending previous argument. The speaker takes the idea of the previous speaker and extends it.</td><td>Extension</td></tr><tr><td>A viable transformation of the discussion topic</td><td>ViableTransformation</td></tr><tr><td>Personal statement (“this happened to me”)</td><td>Personal</td></tr><tr><td>2. Moves with low responsiveness</td><td></td></tr><tr><td>Severe low responsiveness: continuous squabbling</td><td>BAD</td></tr><tr><td>Repeating previous argument without any real variation</td><td>Repetition</td></tr><tr><td>Response to ancillary topic / derailing the discussion</td><td>NegTransformation</td></tr><tr><td>Negation/disagreement without reasoning</td><td>NoReasonDisagreement</td></tr><tr><td>Convergence towards previous speaker</td><td>Convergence Agreement</td></tr><tr><td>The issue is deemed unsolvable by the speaker</td><td>AgreeToDisagree</td></tr><tr><td>3. Tone and style</td><td></td></tr><tr><td>3.1 Negative tone and style</td><td></td></tr><tr><td>Aggressive and Blatant “this is stupid”</td><td>Aggressive</td></tr><tr><td>Ridiculing the partner (or her argument)</td><td>Ridicule</td></tr><tr><td>Complaining about a negative approach “you were rude to me”</td><td>Complaint</td></tr><tr><td>Sarcasm/ cynicism /patronizing</td><td>Sarcasm</td></tr><tr><td>3.2 Positive tone and style</td><td></td></tr><tr><td>Attempts to reduce tension: respectful, flattering, etc.</td><td>Positive</td></tr><tr><td>Weakening qualifiers e.g. “I'm not an expert in this topic...”</td><td>WQualifiers</td></tr><tr><td>4. Disagreement strategies</td><td></td></tr><tr><td>4.1 Easing tension</td><td></td></tr><tr><td>Softening the blow of a disagreement.</td><td>Softening</td></tr><tr><td>Partial disagreement “I disagree only with one part of your text”</td><td>AgreeBut</td></tr><tr><td>Explicitly taking into account other participants' voices</td><td>DoubleVoicing</td></tr><tr><td>Using an external source to support a claim</td><td>Sources</td></tr><tr><td>4.2 Intensifying tension</td><td></td></tr><tr><td>Reframing or paraphrasing the previous comment</td><td>RephraseAttack</td></tr><tr><td>Critical question, phrasing the (counter) argument as a question</td><td>CriticalQuestion</td></tr><tr><td>Offering an alternative without direct refutation</td><td>Alternative</td></tr><tr><td>Direct disagreement (“I disagree”, “this is simply not true”)</td><td>DirectNo</td></tr><tr><td>Refutation focuses on the relevance of previous claim</td><td>Irrelevance</td></tr><tr><td>Breaking previous argument to pieces without real coherence</td><td>Nitpicking</td></tr></table>
Table 4: The tag-set and label descriptions, copied from Zakharov et al. (2021).
# C Original Account of a Productive Discussion
The original account of the productive discourse is provided in Figure 3. The English translation is presented in the Ethics and Broader Impact section.

Figure 3: Original Hebrew account of the productive discourse.

adeeperautoregressiveapproachtononconvergentdiscourseparsing/images.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dc44fbf9f2a285f6a4e78bd9b35ff2b4d99157e05d47b0ff85e968ca75dac1be
+size 726984
adeeperautoregressiveapproachtononconvergentdiscourseparsing/layout.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:312650a755dd0e00c680be16fb1b888493cb679f88c4a20287756c22acbcf0ce
+size 451459
adiachronicanalysisofparadigmshiftsinnlpresearchwhenhowandwhy/b6e1a2bb-ab7c-450f-97b5-787f451f20d1_content_list.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e8677795f9f39133c3f2252a7bb909b34be706c415845f701e661a02418d02a6
+size 105356
adiachronicanalysisofparadigmshiftsinnlpresearchwhenhowandwhy/b6e1a2bb-ab7c-450f-97b5-787f451f20d1_model.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0e8d31dd4253a5a6abe3a8dda6f77d9ae35f996141c5ab1436ca98b48b8d8526
+size 125136
adiachronicanalysisofparadigmshiftsinnlpresearchwhenhowandwhy/b6e1a2bb-ab7c-450f-97b5-787f451f20d1_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1e7270d78553d23061e371b4a2fbcca4d01b1c2b84d53266c380bfbfb5f99490
+size 588272
adiachronicanalysisofparadigmshiftsinnlpresearchwhenhowandwhy/full.md
ADDED
@@ -0,0 +1,378 @@
# A Diachronic Analysis of Paradigm Shifts in NLP Research: When, How, and Why?

Aniket Pramanick<sup>1</sup>, Yufang Hou<sup>2</sup>, Saif M. Mohammad<sup>3</sup>, Iryna Gurevych<sup>1</sup>

<sup>1</sup>Ubiquitous Knowledge Processing Lab (UKP Lab), Department of Computer Science and Hessian Center for AI (hessian.AI)
<sup>2</sup>IBM Research Europe, Ireland
<sup>3</sup>National Research Council Canada

www.ukp.tu-darmstadt.de, yhou@ie.ibm.com, saif.mohammad@nrc-cnrc.gc.ca

# Abstract
Understanding the fundamental concepts and trends in a scientific field is crucial for keeping abreast of its continuous advancement. In this study, we propose a systematic framework for analyzing the evolution of research topics in a scientific field using causal discovery and inference techniques. We define three variables to encompass diverse facets of the evolution of research topics within NLP and utilize a causal discovery algorithm to unveil the causal connections among these variables using observational data. Subsequently, we leverage this structure to measure the intensity of these relationships. By conducting extensive experiments on the ACL Anthology corpus, we demonstrate that our framework effectively uncovers evolutionary trends and the underlying causes for a wide range of NLP research topics. Specifically, we show that tasks and methods are primary drivers of research in NLP, with datasets following, while metrics have minimal impact.
# 1 Introduction

Experts in a field sometimes conduct historical studies to synthesize and document the key research ideas, topics of interest, methods, and datasets that shaped a field of study. They document how new research topics eclipsed older ones and contributed to shaping the trajectory of the research area (Kuhn, 1970). Aspiring scientists learn the craft of their discipline by delving into the examination of past scientific accomplishments documented in research papers. However, conducting such a historical study is challenging: Experts in a field rely on years of experience and peruse large amounts of past published articles to determine the chronological progression of a research field. Further, the exponential growth of scientific publications in recent years has rendered it arduous even for domain experts to stay current. Therefore, an automated method to track the temporal evolution of research topics can be beneficial in offering an overview of the field and assisting researchers in staying abreast of advancements more efficiently.
In this work, we propose a systematic framework to examine the evolutionary journey of research topics within the realm of Natural Language Processing (NLP), harnessing causal discovery and inference techniques. Prior research on historical analysis of NLP has predominantly concentrated on scrutinizing metadata associated with research papers (Hall et al., 2008; Mohammad, 2019; Uban et al., 2021; Singh et al., 2023; Wahle et al., 2023) such as number of citations, title, author profile, affiliation, and publication venue. These studies have examined the research trends through unigram or bigram frequency analysis, but they do not provide insights into the underlying causes propelling these research topics.
Our study centers on four distinct fundamental types of entities in NLP research: tasks representing well defined problems; methods, signifying the solutions or approaches employed to tackle the tasks; datasets, indicating the relevant textual resources such as corpora and lexicons; and metrics, encompassing the evaluation techniques tailored to specific tasks. We abbreviate these types as TDMM for short. Specifically, we examine the interplay between an NLP task that is commonly viewed as a focused research topic (e.g., Machine Translation) and the key entities that exert pivotal influence on the target task (such as “BLEU” (Papineni et al., 2002) or “Transformers” (Vaswani et al., 2017)).
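The four entity types above, and the tagged mentions extracted per paper, can be modeled as a small tagged-mention structure; the following is our own illustrative sketch (type and field names are not from the paper):

```python
from dataclasses import dataclass
from enum import Enum

class EntityType(Enum):
    """The four TDMM entity types the study extracts from papers."""
    TASK = "task"
    DATASET = "dataset"
    METRIC = "metric"
    METHOD = "method"

@dataclass(frozen=True)
class TDMMEntity:
    mention: str        # surface form as it appears in the paper
    etype: EntityType   # one of the four TDMM types

# e.g. the Machine Translation entities named in the text
entities = [
    TDMMEntity("Machine Translation", EntityType.TASK),
    TDMMEntity("BLEU", EntityType.METRIC),
    TDMMEntity("Transformers", EntityType.METHOD),
]
metrics = [e.mention for e in entities if e.etype is EntityType.METRIC]
```

Grouping mentions this way makes the later TDMM-Task analysis a matter of pairing each task entity with the non-task entities co-occurring in its papers.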
Our goal is to identify the TDMM entities $(E)$ associated with a specific task $(t)$ and assess their causal influence on the task's research trends (TDMM-Task causal analysis). Specifically, we address the following key research questions associated with a task entity $t$: (a) Which entities $E$ effectively indicate the research trends for this task $t$? (b) Are there discernible causal relationships between $t$ and $E$? (c) What is the extent of the causal impact exerted by $E$ on $t$?

Figure 1: Evolution of Machine Translation (MT) research. Blue line: Number of MT papers (1979-2022). Tables show the top causal entities/types for different periods (excluding 1979-1989 due to limited MT papers).
Unlike Uban et al. (2021) and Koch et al. (2021) that heavily rely on manual annotations and have limited coverage, our analysis is based on TDMM entities automatically extracted from 55K papers in the ACL Anthology<sup>2</sup>. Our framework not only recognizes the key entities driving the research direction of a research topic but also measures the causal effects of these entities on the target topic in an end-to-end fashion. Figure 1 shows the most influential entities for Machine Translation (MT) in different time periods. For instance, "statistical models" used to be the popular method for MT in 1990-2002, and the evaluation metric "BLEU" is one of the top causal entities driving the MT research in 2003-2017. In the era of pre-trained large language models (LLMs) starting from 2018, "transformer" has become the popular method for MT. For another research topic of "Speech recognition", our framework uncovers the influential role of "language modeling" between 1979 to 2022, where speech recognition models utilize probability scores from language models to recognize coherent text from speech (Negri et al., 2014).
In this work, we analyze 16 tasks from a diverse set of research areas identified by ACL 2018 organizers. Our framework is versatile and applicable to other tasks and domains, benefiting both young and experienced researchers. It can aid in literature surveys by identifying related research areas and enable young researchers to delve into new research focuses by establishing connections among different research areas.
In summary, we make three-fold contributions in this study: Firstly, we propose a framework to quantify research activities, including (1) trends and stability of an NLP research task, and (2) relation intensity between TDMM entities and NLP research tasks. Secondly, we employ causal analysis algorithms to uncover causal structures and measure effects between tasks and related TDMM entities (TDMM-Task causal analysis). To the best of our knowledge, this represents the first historical study of a scientific research anthology from a causal perspective. Finally, through extensive experiments on the ACL Anthology, we offer an empirical overview of the NLP research landscape. In the following sections, we will refer to TDMM-Task causal analysis as causal analysis.
# 2 Related Work
Scientific Trends Analysis The analysis of scientific trends has been a research focus since Hall et al. (2008). In the field of "scientometrics", extensive literature explores citation patterns and utilizes topological measures in citation networks for trend analysis (Small, 2006; Shibata et al., 2008; Boyack and Klavans, 2022).
Another line of research focuses on metadata and content analysis. For instance, Prabhakaran et al. (2016) employed rhetorical framing to examine trend patterns. Grudin (2009), Liu et al. (2015), and Mohammad (2019) investigated the interaction between the topics in publications, research grants, author profiles, highly impactful papers, and dataset usage patterns. Additionally, Koch et al. (2021) studied dataset usage patterns among different research communities, while Uban et al. (2021) analyzed relationships between NLP research topics based on their co-occurrence in text and the degree of correlation between their popularity over time. In our work, we develop entity recognition models to extract TDMM entities from NLP research papers and focus on analyzing the causal relations between a task entity and its related TDMM entities.

Figure 2: System architecture.
Causality in NLP Existing works on NLP applying causal analysis algorithms mainly focus on two directions. The first line of work discovers causal relations among textual features or expressions of events in texts and uses them in various downstream tasks, such as question answering (Oh et al., 2016), commonsense reasoning (Bosselut et al., 2019; Sap et al., 2019), and relation extraction (Do et al., 2011; Mirza and Tonelli, 2014; Dunietz et al., 2017).
In another avenue of this field, researchers represent causal elements using textual features (Jin et al., 2021; Fong and Grimmer, 2016; Veitch et al., 2020; Keith et al., 2020) and define the causal graph structure based on domain knowledge. Our work falls within this line of research, where we employ causal algorithms to analyze the trends in NLP research topics and the underlying causes.
# 3 Data Collection
ACL Anthology Corpus Following prior work by Mohammad (2020), we utilize the ACL Anthology as the source of NLP research papers. For this work, we collect 55,366 NLP papers that belong to the "ACL Events" category from the ACL Anthology published between 1979 and 2022. For each paper, we use GROBID (GRO, 2008-2022) and the PDF table parser from Hou et al. (2019) to extract sentences from each of the individual sections as well as from the table and figure captions. In a post-processing step, we remove all the URLs from the extracted sentences. On average, we have 1,258 papers per year and 1,117 sentences per paper.
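The URL-removal post-processing step can be sketched as a single regex pass; the pattern below is our assumption, since the paper only states that URLs are removed from the extracted sentences:

```python
import re

# Hypothetical URL pattern covering http(s) links and bare www. hosts.
URL_RE = re.compile(r"https?://\S+|www\.\S+")

def strip_urls(sentence: str) -> str:
    """Remove URLs from an extracted sentence and normalize whitespace."""
    return " ".join(URL_RE.sub("", sentence).split())

sentences = [
    "Our code is available at https://github.com/example/repo for reproducibility.",
    "See www.ukp.tu-darmstadt.de for details.",
]
cleaned = [strip_urls(s) for s in sentences]
```

Running the cleaner over every extracted sentence keeps the downstream entity taggers from treating URL fragments as candidate mentions.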
It is worth noting that certain NLP paper preprints may become accessible on preprint servers before they are officially published in the ACL Anthology. However, we argue that the peer review process in ACL Anthology serves as a robust quality assurance mechanism. Hence, we consider ACL Anthology a more reliable source compared to preprint servers.
TDMM Entity Extraction To identify task, dataset, metric, and method entities in NLP papers, we develop two entity taggers based on Flair (Akbik et al., 2018). The first tagger is trained on the TDMSci annotations (Hou et al., 2021) to recognize task, dataset, and metric entities. The second tagger is trained on the SciERC dataset (Luan et al., 2018) to extract method entities. On the test sets of TDMSci and SciERC, the two taggers achieve a micro-average F1 of 0.77 and 0.78, respectively, for type partial match (Segura-Bedmar et al., 2013). In type partial match, a predicted entity is considered correct if it partially overlaps with a gold entity and has the same type. For example, "Penn Treebank" is counted as a correct prediction even if the corresponding gold annotation is "Penn Treebank dataset".

<table><tr><td>Period</td><td>Years</td><td>Key Research Themes</td></tr><tr><td>Early Years</td><td>1979–1989</td><td>Foundational work in syntactic parsing, machine translation, and information retrieval.</td></tr><tr><td>Formative Years</td><td>1990–2002</td><td>Advances in language modeling, named entity recognition, and discourse analysis (research focus shifted towards data-driven approaches).</td></tr><tr><td>Statistical Revolution & Neural Networks</td><td>2003–2017</td><td>Focus on statistical techniques (text classification, statistical machine translation, etc.) and resurgence of neural networks (word embeddings, neural machine translation, etc.).</td></tr><tr><td>Deep Learning Era</td><td>2018–2022</td><td>Dominance of transformer-based architectures (BERT and its variants).</td></tr></table>
To further improve the precision of the TDMM taggers, we keep only entities that appear in more than five papers in the dataset. For each paper, we collect the most frequent task mentions appearing in the title, abstract, experiment section, and table and figure captions to approximate the tasks the paper conducts research on.
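
As a sketch, these two filtering steps might look like the following (all entity names and counts below are invented for illustration; the actual pipeline operates on the taggers' output):

```python
from collections import Counter

def filter_entities(paper_counts, min_papers=6):
    """Keep only entities that appear in more than five papers."""
    return {e for e, n in paper_counts.items() if n >= min_papers}

def dominant_tasks(task_mentions, k=1):
    """Most frequent task mention(s) from a paper's title, abstract, and captions."""
    return [t for t, _ in Counter(task_mentions).most_common(k)]

# Invented counts: number of papers each extracted entity appears in.
entity_paper_counts = {"machine translation": 120, "parsing": 4, "NER": 37}
kept = filter_entities(entity_paper_counts)            # drops the rare "parsing" tag
paper_task = dominant_tasks(["NER", "POS tagging", "NER"])
```

The frequency threshold trades recall for precision: rare tagger outputs are disproportionately likely to be extraction noise.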
Taxonomy for Periods of Reference To facilitate in-depth analysis, we adopt a taxonomy that partitions our reference time frame (1979-2022) into four distinct intervals. Table 1 lists the defined intervals. These intervals are designed to approximate the overarching trends in NLP research over the years, aligning with our perspective on the field's evolution. The exact boundaries and thematic emphases may differ depending on one's perspective and specific research area within NLP. However, our framework and methodologies are highly adaptable, allowing end users to apply them to any desired time interval or specific analysis.
# 4 Entity Influence in NLP Research: A Regression Analysis
Before conducting the causal analysis, we aim to identify the key variables that significantly impact the evolution of NLP Research. Specifically, we investigate which types of entities exert the most
Table 1: Chronological Periods of NLP Research.
<table><tr><td>Variables</td><td>R-Squared (↑)</td></tr><tr><td>unique tasks</td><td>0.87</td></tr><tr><td>+ unique datasets</td><td>0.91</td></tr><tr><td>+ unique methods</td><td>0.93</td></tr><tr><td>+ unique metrics</td><td>0.97</td></tr></table>
Table 2: Variable Selection for Regression.
influence on the research direction of NLP. To achieve this understanding, we employ Multiple Linear Regression (see Appendix D for details), a widely utilized tool in economics research (Barrios and Hochberg, 2020). Figure 2 (step1/step2) illustrates the framework.
Our analysis assumes that if the TDMM entities have played a role in the emergence or disappearance of task entities, this influence will be reflected in the number of unique task entities in subsequent years, which can be captured through regression analysis. While the study does not provide specific information on the precise influence of each TDMM entity on individual task entities, the partial regression coefficients shed light on the types of entities responsible for influencing the overall task entity landscape.
Method. Mathematically, we predict the number of task entities $Y^{t}$ in a given year $t$ as a function of the cumulative counts $\{X_{i}^{t-1}\}$ of all types of entities (TDMM entities) up to the previous year $t-1$: $Y^{t} = r_{0} + \sum_{i} r_{i} X_{i}^{t-1}$. The coefficients $\{r_i\}$ quantify the strength of the relationship between the predicted variable (number of task entities) and the independent variables (numbers of TDMM entities).
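
A minimal sketch of this regression on synthetic counts (the numbers below are invented for illustration; the paper fits real yearly entity counts):

```python
import numpy as np

# Synthetic yearly data: columns are cumulative counts of task, dataset,
# method, and metric entities up to the previous year (invented numbers).
X = np.array([
    [20, 10,  5,  3], [26, 14,  8,  4], [35, 21, 12,  5], [47, 30, 18,  6],
    [62, 42, 25,  8], [80, 55, 33,  9], [101, 70, 42, 11], [125, 88, 52, 12],
], dtype=float)
# Number of unique task entities observed in each following year.
y = np.array([25.0, 34.5, 46.0, 63.0, 83.0, 108.0, 134.0, 166.0])

# Fit Y^t = r0 + sum_i r_i X_i^{t-1} by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])      # intercept column + regressors
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
r0, r = coef[0], coef[1:]

# Goodness of fit: coefficient of determination R^2, as reported in Table 2.
resid = y - A @ coef
r2 = 1.0 - (resid @ resid) / np.sum((y - y.mean()) ** 2)
```

With all four regressors the synthetic fit is near-perfect, mirroring the pattern in Table 2 where $R^2$ rises as variables are added.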
Evaluation. We evaluate the regression model using the $R^2$ measure (coefficient of determination) to assess the goodness of fit. Additionally, we perform a null hypothesis test to determine the statistical significance of the partial regression coefficients.
# Results and Discussion.
1) Optimized Number of Variables. In our initial experiment, we determine the optimal number of variables and summarize the corresponding $R^2$ values in Table 2. All regression coefficients are statistically significant at the $5\%$ level, indicating a strong relationship with the predicted variable. Discussion: The overall results indicate that the model fits the data well when all four variables (numbers of task, dataset, metric, and method entities) are used to predict the number of task entities in subsequent years. We also explore whether the number of variables can be reduced while maintaining similar performance. Using only one variable yields an $R^2$ of 0.87, a drop of 0.1 relative to the full model and a poor fit. Conversely, adding variables improves the fit, reaching an $R^2$ of 0.97 with all four, which suggests that all four variables matter for analyzing research trends. We exhaustively explored various combinations of variables, including those presented in the table, and consistently obtained similar results.
2) Influence of the Variables. In the second experiment, we assess the association between the target variable and each independent variable. Table 3 presents the regression coefficients for each entity type; larger coefficients indicate a stronger relationship between the target variable and the respective independent variable. Discussion: Overall, we note that the gradual emergence of newer tasks has been a driving force behind research progress. However, when we analyze the trends within each year interval, we uncover more nuanced patterns. During the Early Years (1979-1989), when NLP was in its nascent stage as an independent research field, the focus was on creating new datasets to fuel research advancements. In the Formative Years (1990-2002), we witnessed the introduction of new methods, particularly data-driven approaches, which played a crucial role in shaping the field. From 2003 to 2017, statistical methods underwent a revolution, and later in the same period neural network methods experienced a resurgence, indicating significant shifts in research trends. In the present Deep Learning Era (2018-2022), we observe a rapid creation of newer datasets in a relatively short span of time, driven by research needs and the data requirements of deep learning models. These patterns highlight the key factors influencing the research trajectory over time.

<table><tr><td rowspan="2">Years</td><td colspan="4">Partial Regression Coefficient</td></tr><tr><td>Tasks</td><td>Datasets</td><td>Methods</td><td>Metrics</td></tr><tr><td>1979–1989</td><td>0.35</td><td>2.24</td><td>0.21</td><td>0.02</td></tr><tr><td>1990–2002</td><td>0.82</td><td>0.89</td><td>2.86</td><td>0.81</td></tr><tr><td>2003–2017</td><td>5.37</td><td>6.26</td><td>7.00</td><td>0.69</td></tr><tr><td>2018–2022</td><td>1.47</td><td>3.38</td><td>1.79</td><td>0.41</td></tr><tr><td>1979–2022</td><td>3.50</td><td>1.07</td><td>2.92</td><td>0.54</td></tr></table>

Table 3: Variables Influencing NLP task entities.
# 5 Causal Methodology for NLP Research Analysis
Drawing on the insights gained from the Regression Analysis (Section 4), we now establish the cornerstone of our study by defining three causal variables that drive the causal analysis in the subsequent sections. Using causal discovery and inference techniques, we analyze the causal relationships among the variables and measure the impact of TDMM entities on target task entities based on these relationships. Figure 2 illustrates the architecture that underpins our framework.
# 5.1 Causal Variables
Task Frequency Shift Value: Distinguishing from previous approaches (Tan et al., 2017; Prabhakaran et al., 2016) that rely on word frequencies, we define task frequency $f(y)_t$ as the number of published papers focusing on a specific task $y$ in a given year $t$, normalized by the total number of papers published in the same year. The task frequency shift value $\Delta freq_{t_1}^{t_2}(y)$ captures the average change in the number of published papers on $y$ between two years $t_1 < t_2$. This value measures the research trend associated with the task during that interval, indicating whether it experienced growth or decline. The frequency shift value is given by: $\Delta freq_{t_1}^{t_2}(y) = \frac{f(y)_{t_2} - f(y)_{t_1}}{t_2 - t_1}$.
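
In code, the frequency shift reduces to a few lines (the counts below are toy numbers for illustration):

```python
def task_frequency(papers_on_task, total_papers):
    """f(y)_t: papers on task y in year t, normalized by all papers that year."""
    return papers_on_task / total_papers

def frequency_shift(counts, totals, task, t1, t2):
    """Average yearly change in the normalized frequency of `task` between t1 < t2."""
    f1 = task_frequency(counts[(task, t1)], totals[t1])
    f2 = task_frequency(counts[(task, t2)], totals[t2])
    return (f2 - f1) / (t2 - t1)

# Toy counts: papers on the task and papers overall, per year.
counts = {("machine translation", 2018): 90, ("machine translation", 2022): 240}
totals = {2018: 1500, 2022: 2000}
shift = frequency_shift(counts, totals, "machine translation", 2018, 2022)
```

A positive shift indicates a growing task; a negative one, a declining task.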
Task Stability Value: We introduce the concept of task stability value to measure the change in the research context of a given task $y$ between two years $t_1 < t_2$. This value quantifies the overlap in neighboring TDMM entities that appear in the same publication as $y$ within the specified time interval. To calculate task stability, we adapt the semantic stability approach of Wendlandt et al. (2018) to our setting and define it specifically for task entities. Initially, we represent each paper in our dataset as a sequence of TDMM entity mentions, removing non-entity tokens. We then employ skip-gram with negative sampling (Mikolov et al., 2013) to obtain embeddings from this representation. Formally, let $e_1, e_2, \ldots, e_n$ be this entity representation of a paper; the objective of skip-gram is to maximize the mean log probability $\frac{1}{n} \sum_{i=1}^{n} \sum_{-c \leq j \leq c} \log p(e_{i+j}|e_i)$, where $c$ is the context window size. Finally, the task stability value $\Delta stability_{t_1}^{t_2}(y)$ of $y$ between $t_1$ and $t_2$ is computed as the percentage overlap between the $l$ nearest neighboring entities of the given task in the two representation spaces: $\Delta stability_{t_1}^{t_2}(y) = \frac{|\mathcal{N}_{t_1}^l(y) \cap \mathcal{N}_{t_2}^l(y)|}{|\mathcal{N}_{t_1}^l(y) \cup \mathcal{N}_{t_2}^l(y)|}$, where $\mathcal{N}_t^l(y)$ is the set of $l$ neighbours of $y$ in the representation space of year $t$. In this study, we consider the context window $c$ to encompass the entire document, and we set $l$ to 5.
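
Assuming the two yearly embedding matrices have already been trained (the paper uses skip-gram; any embeddings over a shared entity vocabulary work for this sketch), the neighbour-overlap computation is:

```python
import numpy as np

def top_l_neighbours(emb, idx, l=5):
    """Indices of the l nearest entities to entity `idx` by cosine similarity."""
    unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = unit @ unit[idx]
    sims[idx] = -np.inf                      # exclude the entity itself
    return set(np.argsort(sims)[-l:].tolist())

def task_stability(emb_t1, emb_t2, idx, l=5):
    """Jaccard overlap of the task's l nearest neighbours in the two spaces."""
    n1 = top_l_neighbours(emb_t1, idx, l)
    n2 = top_l_neighbours(emb_t2, idx, l)
    return len(n1 & n2) / len(n1 | n2)

# Toy embeddings over 20 entities; rows index the same vocabulary in both years.
rng = np.random.default_rng(0)
emb_2010 = rng.normal(size=(20, 8))
emb_2020 = emb_2010 + rng.normal(scale=0.01, size=(20, 8))  # nearly unchanged space
stability = task_stability(emb_2010, emb_2020, 0)
```

A stability of 1 means the task's research context (its neighbouring TDMM entities) is unchanged; values near 0 indicate the context has been replaced.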
Entity Change Value: We use the entity change value to track the emergence and disappearance of specific TDMM entities associated with a task, quantifying these changes and capturing related entity occurrences within a specific time period. Put simply, we measure the difference in the co-occurrence frequency of a TDMM entity $x$ and a task $y$ between two years $t_1$ and $t_2$. A significant change in the co-occurrence frequency of $x$ and $y$ over this period likely signals a shift in the relation between $x$ and $y$ and, in turn, a shift in NLP research trends. We define the entity change value $\delta_y(x)_{t_1}^{t_2}$ of an entity $x$ of type $\tau(x) \in \{\text{task}, \text{dataset}, \text{metric}, \text{method}\}$ with respect to a task $y$ as the absolute difference in frequencies of $x$ co-occurring with $y$ in the same sentence between years $t_1$ and $t_2$, normalized by the total number of entities of the same type as $x$ that co-occur with $y$ in both years. The entity change value is given by: $\delta_y(x)_{t_1}^{t_2} = \frac{|C_{t_1}(x,y) - C_{t_2}(x,y)|}{\sum_{e:\tau(e) = \tau(x)} (C_{t_1}(e,y) + C_{t_2}(e,y))}$, where $C_t(x,y)$ is the frequency of $x$ co-occurring with $y$ in year $t$.
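
A small sketch of this computation (the entity names and counts are invented):

```python
def entity_change(cooc_t1, cooc_t2, x, types):
    """
    delta_y(x): |C_t1(x,y) - C_t2(x,y)| normalized by the summed counts, in
    both years, of all entities sharing x's type. `cooc_t1`/`cooc_t2` map each
    entity to its sentence-level co-occurrence count with the task y.
    """
    same_type = {e for e in set(cooc_t1) | set(cooc_t2) if types[e] == types[x]}
    denom = sum(cooc_t1.get(e, 0) + cooc_t2.get(e, 0) for e in same_type)
    return abs(cooc_t1.get(x, 0) - cooc_t2.get(x, 0)) / denom

# Invented co-occurrence counts of entities with the task "machine translation".
types = {"transformers": "method", "rnn": "method", "bleu": "metric"}
cooc_2017 = {"rnn": 30, "transformers": 2, "bleu": 10}
cooc_2022 = {"rnn": 5, "transformers": 60, "bleu": 12}
change = entity_change(cooc_2017, cooc_2022, "transformers", types)
```

Here the sharp rise of "transformers" relative to all method mentions yields a large change value, the kind of signal the causal analysis consumes.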
In summary, we quantify task trends and research context changes using task frequency change and task stability values. Below we explore the relationship between entity change values and these two variables and estimate the causal impact of TDMM entities on task research landscapes.
# 5.2 Causal Algorithms
Causal Structure Discovery To uncover the causal structure among variables from observational data, we employ DirectLiNGAM (Shimizu et al., 2011), which assumes a non-Gaussian data-generating process. Since the variables in Section 5.1 come from non-Gaussian frequency distributions, DirectLiNGAM is suitable. It uses an entropy-based measure to successively subtract the effect of each independent variable. Unlike PC-Stable (Colombo and Maathuis, 2014), it requires no iterative search or algorithmic parameters. We apply DirectLiNGAM with a $5\%$ significance level for causal discovery (see Appendix C for details).
Causal Inference Once the causal structure between the variables has been established, we leverage it to assess causal effects. Specifically, we measure the causal effect of the entity change value of entity $x$ on the frequency shift and, subsequently, on the stability value associated with a given task $y$. For this purpose, we use probability density functions rather than probability masses, as all our causal variables are continuous. We measure the causal effects in two steps: first, we estimate the probability density of the entity change variable using a linear regression model. Next, we regress the frequency shift and stability values against the entity change value, weighted by the inverse probability densities obtained in the previous step. We model the functional form of this regression using a spline to avoid bias due to misspecification. Finally, we calculate the causal effects following Veitch and Zaveri (2020): $\mu(\Delta freq_{t_1}^{t_2}(y)) = \mathbb{E}[\Delta freq_{t_1}^{t_2}(y) \mid \delta_y(x)_{t_1}^{t_2}]$ and, similarly, $\mu(\Delta stability_{t_1}^{t_2}(y)) = \mathbb{E}[\Delta stability_{t_1}^{t_2}(y) \mid \delta_y(x)_{t_1}^{t_2}]$.
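
A rough numpy-only sketch of this two-step estimator, with a hand-rolled Gaussian KDE for the density step and a cubic polynomial standing in for the spline (all data below is synthetic; the true simulated effect is linear with slope 0.8):

```python
import numpy as np

rng = np.random.default_rng(7)
# Stand-ins for the causal variables: treatment = entity change values,
# outcome = task frequency shift.
treatment = rng.exponential(1.0, 300)
outcome = 0.8 * treatment + rng.normal(0.0, 0.1, 300)

# Step 1: estimate the probability density of the treatment (Gaussian KDE).
def kde(sample, x, bandwidth=0.3):
    z = (x[:, None] - sample[None, :]) / bandwidth
    return np.exp(-0.5 * z ** 2).mean(axis=1) / (bandwidth * np.sqrt(2.0 * np.pi))

density = kde(treatment, treatment)

# Step 2: regress outcome on treatment, weighted by inverse densities.
# np.polyfit's `w` multiplies the unsquared residuals, so pass sqrt(1/density);
# the cubic polynomial is a stand-in for the spline used in the paper.
weights = np.sqrt(1.0 / np.clip(density, 1e-6, None))
coeffs = np.polyfit(treatment, outcome, deg=3, w=weights)

# mu(t) approximates E[frequency shift | entity change = t].
mu_at_1 = np.polyval(coeffs, 1.0)
```

The inverse-density weighting evens out the contribution of rare treatment values, so the fitted curve is not dominated by the dense part of the treatment distribution.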
# 6 Results and Analysis
Correlation-based measures provide a simple way to quantify the association between variables. However, they fall short of explaining complex cause-effect relationships and can yield misleading results. Causality is essential for gaining a deeper understanding of variable relationships, enhancing the robustness and reliability of our findings beyond the limitations of correlation. We discuss the importance of causal methods over correlation-based measures in Section 7. In this section, our focus is on uncovering relationships among causal variables (Section 6.1) and measuring the impact of TDMM entities on target task entities (Section 6.2).

Figure 3: Causal Graph of TDMM Entities (entity change values) and Task Entity Frequency Shift.
# 6.1 Causal Relation between the Variables
Figure 3 shows the discovered causal graph for the frequency shift of task entities. Overall, we observe that the entity change values of associated tasks, datasets, metrics, and methods have a direct causal effect on the frequency shift values of the target tasks. Since the frequency shift value quantifies the trend in NLP research, we infer from the causal graph that the trend of a task is governed primarily by the life cycles of its associated TDMM entities. We observe a similar causal relation for the task stability value (see Figure 4, Appendix A). Evaluation: We perform a sensitivity analysis of the causal graph by adding Gaussian noise with zero mean and unit variance to the entity change values in the data (Cinelli et al., 2019). This estimates the robustness of the graph in the presence of unobserved confounders. We find that the graph is stable to unobserved confounding, with all edge probabilities greater than 0.5.
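
The spirit of this robustness check can be illustrated on synthetic data: perturb the cause variable with standard-normal noise and record how often the estimated edge keeps its sign (a simplified stand-in for the procedure of Cinelli et al., 2019, not the paper's exact implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
# Synthetic data with a genuine positive effect of entity change on frequency shift.
entity_change = rng.exponential(1.0, n)
freq_shift = 1.5 * entity_change + rng.normal(0.0, 0.5, n)

def slope(x, y):
    """OLS slope of y on x."""
    return np.polyfit(x, y, 1)[0]

base_slope = slope(entity_change, freq_shift)

# Add zero-mean, unit-variance Gaussian noise to the cause variable and check
# how often the estimated edge keeps its (positive) sign.
kept = [slope(entity_change + rng.normal(0.0, 1.0, n), freq_shift) > 0
        for _ in range(100)]
edge_probability = np.mean(kept)
```

A genuine edge survives the perturbations (its sign is preserved in well over half the runs), whereas a spurious one flips sign frequently.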
# 6.2 Causal Impact of the Variables
The organizers of ACL 2018 categorize NLP research into 21 areas and provide a set of popular tasks for each area. From these, we curate 16 areas and select one task from each based on its frequency of occurrence in our corpus. We estimate the effect of TDMM entities (entity change value) on the development of these tasks (frequency shift value) (see Section 5.1) and summarize the results in Table 4. Since there are no confounders (Section 6.1), evaluating the causal effect reduces to estimating the conditional expectation of the frequency shift values given the entity change values. We present detailed results in Appendix A.2. We examine the results by addressing the following set of inquiries.
# Q1. What role do the methodologies play in causally driving the shift in NLP tasks?
New methodologies have a significant influence on research in various areas of Natural Language Processing (NLP). In the field of Language Modeling, we observe a shift in influence between different methodologies over time.
Between 2003 and 2017, Recurrent Neural Networks (RNNs) had the most decisive impact on Language Modeling research. However, this trend shifted with the emergence of Transformers, which have since become the dominant influence in research on this task.
Dialogue Systems, which involve automatic response generation, are closely related to Language Modeling. Therefore, research in this area is highly influenced by Generative Models. From 1990 to 2002, Probabilistic Models played a crucial role in shaping Dialogue Systems research, while RNNs took the lead between 2003 and 2017.
Machine Translation, another task related to Language Modeling, requires the generation of the translated text. Naturally, we observe the influence of similar entities in Machine Translation research. Probabilistic Models had the most decisive impact between 1990 and 2002. In recent years (2018-2022), Transformers have emerged as the dominant influence in this research area.
In the field of Speech Recognition, Hidden Markov Models (HMMs) have shown a significant influence, playing a crucial role in shaping Speech Recognition research from 1979 to 2002.
Named Entity Recognition (NER) has also been influenced by Hidden Markov Models, particularly in its early days (1990-2002), as NER is often formulated as a sequence tagging problem. Various parser algorithms were employed to solve the problem in the period between 2003 and 2017.
For Semantic Parsing, parser algorithms have been instrumental and have had a significant impact on research in this area. Between 1979 and 1989, Grammar Induction techniques were used to elicit the underlying semantic parse trees.
From 1990 to 2002, researchers employed various statistical models in Morphological Analysis, which is evident from our results.
In Semantic Role Labeling, Support Vector Machines and Neural Network Models have been widely used to solve this task.
In Co-reference Resolution, Neural Network models have gained prominence starting in 2018. However, from 2003 to 2017, Integer Linear Programming was also utilized to address this problem.

<table><tr><td rowspan="2">Task</td><td colspan="5">Primary Cause</td></tr><tr><td>1979–1989</td><td>1990–2002</td><td>2003–2017</td><td>2018–2022</td><td>1979–2022</td></tr><tr><td>Language Modeling</td><td>-</td><td>-</td><td>Recurrent Neural Networks<sup>M</sup></td><td>Transformers<sup>M</sup></td><td>Transformers<sup>M</sup></td></tr><tr><td>Dialogue System</td><td>-</td><td>Probabilistic Generative Models<sup>M</sup></td><td>Recurrent Neural Networks<sup>M</sup></td><td>MultiWoz<sup>D</sup></td><td>MultiWoz<sup>D</sup></td></tr><tr><td>Machine Translation</td><td>-</td><td>Probabilistic Generative Models<sup>M</sup></td><td>WMT Data<sup>D</sup></td><td>Transformers<sup>M</sup></td><td>Transformers<sup>M</sup></td></tr><tr><td>Speech Recognition</td><td>Hidden Markov Models<sup>M</sup></td><td>Hidden Markov Models<sup>M</sup></td><td>Machine Translation<sup>T</sup></td><td>Machine Translation<sup>T</sup></td><td>Hidden Markov Models<sup>M</sup></td></tr><tr><td>Named Entity Recognition</td><td>-</td><td>Hidden Markov Models<sup>M</sup></td><td>POS Tagging<sup>T</sup></td><td>Relation Extraction<sup>T</sup></td><td>POS Tagging<sup>T</sup></td></tr><tr><td>POS Tagging</td><td>-</td><td>Text Classification<sup>T</sup></td><td>Parser Algorithms<sup>M</sup></td><td>Word Segmentation<sup>T</sup></td><td>Word Segmentation<sup>T</sup></td></tr><tr><td>Semantic Parsing</td><td>Grammar Induction<sup>M</sup></td><td>Parser Algorithms<sup>M</sup></td><td>Parser Algorithms<sup>M</sup></td><td>Dependency Parsing<sup>T</sup></td><td>Parser Algorithms<sup>M</sup></td></tr><tr><td>Morphological Analysis</td><td>-</td><td>Statistical Models<sup>M</sup></td><td>Dependency Parsing<sup>T</sup></td><td>UD Treebank<sup>D</sup></td><td>Statistical Models<sup>M</sup></td></tr><tr><td>Semantic Role Labeling</td><td>-</td><td>-</td><td>Support Vector Machines<sup>M</sup></td><td>Neural Network Models<sup>M</sup></td><td>Support Vector Machines<sup>M</sup></td></tr><tr><td>Co-reference Resolution</td><td>-</td><td>MUC-VI Text Collection<sup>D</sup></td><td>Integer Linear Programming<sup>M</sup></td><td>Neural Network Models<sup>M</sup></td><td>Neural Network Models<sup>M</sup></td></tr><tr><td>Word Sense Disambiguation</td><td>-</td><td>Wordnet<sup>D</sup></td><td>Maximum Entropy Models<sup>M</sup></td><td>Neural Network Models<sup>M</sup></td><td>Wordnet<sup>D</sup></td></tr><tr><td>Sentiment Analysis</td><td>-</td><td>-</td><td>Twitter Dataset<sup>D</sup></td><td>Text Classification<sup>T</sup></td><td>Text Classification<sup>T</sup></td></tr><tr><td>Argument Mining</td><td>-</td><td>-</td><td>Text Classification<sup>T</sup></td><td>Sentiment Analysis<sup>T</sup></td><td>Sentiment Analysis<sup>T</sup></td></tr><tr><td>Question Answering</td><td>Parsing Algorithms<sup>M</sup></td><td>Information Extraction<sup>T</sup></td><td>Information Extraction<sup>T</sup></td><td>Pre-Trained LLMs<sup>M</sup></td><td>Information Extraction<sup>T</sup></td></tr><tr><td>Textual Entailment</td><td>-</td><td>-</td><td>Statistical Models<sup>M</sup></td><td>Pre-Trained LLMs<sup>M</sup></td><td>Pre-Trained LLMs<sup>M</sup></td></tr><tr><td>Summarization</td><td>-</td><td>Wordnet<sup>D</sup></td><td>Sentence Compression<sup>T</sup></td><td>Pre-Trained LLMs<sup>M</sup></td><td>Pre-Trained LLMs<sup>M</sup></td></tr></table>

Table 4: Causal analysis identifies the main drivers (Methods<sup>M</sup>, Tasks<sup>T</sup>, Datasets<sup>D</sup>) of frequency shifts in NLP tasks across four periods, with "-" indicating insufficient data for analysis.
Pre-trained large language models (LLMs) have demonstrated superior performance on several NLP tasks, including Question Answering. Researchers have also explored parsing algorithms to parse questions and align them with potential answers.
Furthermore, Textual Entailment and Summarization have been heavily influenced by pre-trained LLMs between 2018 and 2022, as evident from our results.
# Q2. How have changes in data availability contributed to the NLP Research Tasks?
High-quality datasets play a crucial role in advancing NLP research. While new methodologies are important, they cannot fully propel the field forward without the support of high-quality datasets. Researchers understand the significance of dataset quality and actively curate datasets to drive advancements in the field. Our findings further confirm the prevalence of this trend, highlighting the strong emphasis on dataset quality in NLP research.
In the early stages of deep neural models, such as Recurrent Neural Networks (RNNs), the creation of large datasets became essential for efficient model training. Between 2018 and 2022, several datasets were curated, with MultiWoz being the most widely used dataset for research in Dialogue Systems.
In the domain of Machine Translation, the significance of datasets in shaping research direction cannot be overlooked. The influence of WMT datasets on Machine Translation research is evident from our findings.
For Morphological Analysis, the Universal Dependency Treebank dataset is frequently used as a benchmark, indicating its importance in driving research in this area.
During the period of 1990-2002, the creation of the MUC-VI dataset played a crucial role in advancing research in Co-reference resolution.
In the field of Sentiment Analysis, the Twitter dataset holds significant importance in driving research in this domain.
Overall, our analysis underscores the vital role of datasets in shaping and driving research across various NLP tasks.
# Q3. Do evaluation metrics drive paradigm shifts in NLP research?
Most NLP tasks rely on a standard set of metrics borrowed from other domains, such as machine learning and computer vision, to evaluate system performance. However, there is limited research dedicated to improving these metrics within the field of NLP, as it often requires theoretical knowledge beyond the scope of NLP itself. Despite this, our analysis in Table 5 reveals some noteworthy exceptions. Metrics explicitly designed for evaluating NLP tasks, such as BLEU and METEOR, have demonstrated significant impact in advancing Machine Translation research. Similarly, the metric ROUGE has influenced research in the field of Summarization. While perplexity scores are commonly used to measure the generalization capabilities of probability distributions, they are predominantly utilized for evaluating language models in NLP tasks.
# Q4. What is the causal impact of cross-pollination of ideas between related NLP tasks?
We consistently observe a pattern of related NLP tasks evolving in tandem, borrowing ideas and techniques from one another. This trend is clearly reflected in our findings. For instance, Speech Recognition and Machine Translation are linked as researchers explore end-to-end systems that translate speech, and our results show that Machine Translation has had the greatest influence on Speech Recognition research between 2003 and 2022.
Named Entity Recognition (NER) is commonly approached as a sequence tagging problem, and it is influenced by related tasks such as POS Tagging (2003-2017) and Relation Extraction (2018-2022), as these problems are often solved jointly. Similarly, POS Tagging, initially posed as a text classification problem (1990-2002), is significantly impacted by the word segmentation task, as evident from our results for the period 2018-2022.
In recent years (2018-2022), dependency and semantic parsing have been jointly solved using the same neural model, highlighting the influence of dependency parsing on research in semantic parsing. Sentiment Analysis has garnered considerable research interest and is commonly framed as a text classification problem. Additionally, Argument Mining, which involves understanding the sentiments behind arguments, is influenced by sentiment analysis. Furthermore, the classification of various argument components, such as claims and evidence, is often approached as text classification problems, as evidenced by our results.
# 7 Discussion: Correlation and Causation
correlation does not imply causation
- Pearson (1892)
Causation and correlation, although related, are distinct concepts. While they can coexist, correlation does not imply causation. Causation signifies a direct cause-and-effect relationship, where one action leads to a specific outcome. In contrast, correlation simply indicates that two actions are related in some way, without one necessarily causing the other.
In our work, we focus on causal inference from data. While correlation-based measures provide a straightforward method for quantifying associations between variables, they often fall short when it comes to explaining complex cause-and-effect relationships.
To demonstrate the effectiveness of our framework, we establish a simple baseline using a PMI-based correlation measure (Bouma, 2009). For this analysis, we select Machine Translation as our target task entity due to its prominent presence in our corpus and the NLP research landscape. We calculate the PMI scores of Machine Translation with all other TDMM entities. The PMI score represents the probability of co-occurrence of two entities in sentences from research papers, normalized by their individual occurrence probabilities.
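
For reference, a plain PMI computation over such sentence-level counts looks like this (the counts are toy numbers; Bouma (2009) also discusses normalized variants of PMI):

```python
import math

def pmi(joint, count_x, count_y, total):
    """PMI(x, y) = log( p(x, y) / (p(x) p(y)) ), from sentence-level counts."""
    return math.log((joint / total) / ((count_x / total) * (count_y / total)))

# Toy sentence counts for a task-metric pair, e.g. "machine translation"
# and "accuracy": sentences mentioning each entity, and both together.
score = pmi(joint=120, count_x=400, count_y=900, total=100_000)
```

A positive score means the pair co-occurs more often than chance; as discussed below, that alone says nothing about the direction or existence of influence.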
Interestingly, we find that accuracy, an entity of type metric, has the highest PMI score with Machine Translation among all other entities. However, it is important to note that accuracy is a widely used metric across various NLP tasks, and it is not specifically developed for machine translation, nor has machine translation influenced the concept of accuracy. This observation emphasizes the insufficiency of relying solely on correlation-based metrics to understand and analyze research influence on an entity.
We observe that relying solely on correlations can lead to misleading results and interpretations. Therefore, in order to understand the influence of associated TDMM entities on NLP Task entities, we utilize causal algorithms that enable us to gain insights into the cause-and-effect dynamics among the variables we study.
# 8 Concluding Remarks
In this paper, we retrospectively study NLP research from a causal perspective, quantifying research trends of task entities and proposing a systematic framework that uses causal algorithms to identify the key reasons behind the emergence or disappearance of NLP tasks. Our analysis reveals that tasks and methods are the primary drivers of research in NLP, with datasets following their influence, while metrics have minimal impact. Although we structure the reference time into four distinct intervals (see Table 1), our analysis can be applied to diverse timeframes, from longer periods to brief intervals, including single years. This adaptability, in the context of rapid recent advancements in NLP, allows us to zoom in on local trends and developments that might otherwise go unnoticed (such as the influence of in-context learning on NLP tasks).
We believe our causal analysis enhances understanding of the interplay of research entities in NLP, contributing to the growing body of work on causality and NLP (Feder et al., 2021). We provide additional analysis and insights in Appendix B.
# Limitations
This work is centered on NLP research papers from ACL Anthology, with a focus on papers from the "ACL Events" category. The "ACL Events" category encompasses major conferences, workshops, and journals, including ACL, NAACL, EMNLP, EACL, AACL, CL, and TACL. We also include papers published at COLING from the "non-ACL Events" category. Nevertheless, it is important to acknowledge the presence of NLP papers beyond ACL Anthology in AI journals, regional conferences, and preprint servers. Furthermore, we recognize that certain NLP papers may become available on preprint servers before their official publication in peer-reviewed venues. In this study, we focus on ACL Anthology, which can introduce a time lag when assessing the early impact of influential papers released as preprints (e.g., BERT) or only on preprint servers (e.g., RoBERTa). To address such challenges, we leave the curation and inclusion of NLP research papers from these alternative sources for future works.
Our framework requires research papers tagged with entities as input. Hence, the quality of the tags plays a crucial role in the causal inference of our proposed method. The taggers generate noisy outputs and, thus, might require human intervention to denoise the tags. Moreover, causal algorithms require a large amount of data to produce statistically significant results. Hence, research areas that are less explored or newly emerging may not always be suitable for this framework to be applied. Additionally, we highlight that in this work, we do not consider extra-linguistic factors like author affiliations, funding, gender, etc. We leave them for future research work.
|
| 239 |
+
|
| 240 |
+
# Ethics Statement
|
| 241 |
+
|
| 242 |
+
In this work, we use publicly available data from ACL Anthology and do not involve any personal data. It is important to recognize that, while our framework is data-driven, individual perspectives toward research are inherently subjective. Decisions involving science should consider data as well as ethical, social, and other qualitative factors. Furthermore, we underscore that the low influence of TDMM entities in our analysis should not be the sole reason for devaluing research papers or reducing their investments. Ethical and academic considerations should guide decisions on research evaluation and resource allocation.
|
| 243 |
+
|
| 244 |
+
# Acknowledgements
|
| 245 |
+
|
| 246 |
+
We thank Ilia Kuznetsov for his feedback on the initial version of this work. We appreciate all the anonymous reviewers for their helpful comments and suggestions for further analysis. This work has been funded by the German Research Foundation (DFG) as part of the Research Training Group KRITIS No. GRK 2222.
|
| 247 |
+
|
| 248 |
+
# References
|
| 249 |
+
|
| 250 |
+
2008-2022. Grobid. https://github.com/kermitt2/grobid.
|
| 251 |
+
Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638-1649, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
|
| 252 |
+
John M Barrios and Yael Hochberg. 2020. Risk perception through the lens of politics in the time of the pandemic. Working Paper 27008, National Bureau of Economic Research.
|
| 253 |
+
Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: Commonsense transformers for automatic knowledge graph construction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4762-4779, Florence, Italy. Association for Computational Linguistics.
|
| 254 |
+
Gerlof Bouma. 2009. Normalized (pointwise) mutual information in collocation extraction. Proceedings of GSCL, 30:31-40.
|
| 255 |
+
Kevin W. Boyack and Richard Klavans. 2022. An improved practical approach to forecasting exceptional growth in research. Quantitative Science Studies, 3(3):672-693.
|
| 256 |
+
Carlos Cinelli, Daniel Kumor, Bryant Chen, Judea Pearl, and Elias Bareinboim. 2019. Sensitivity analysis of linear structural causal models. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 1252-1261. PMLR.
|
| 257 |
+
Diego Colombo and Marloes H. Maathuis. 2014. Order-independent constraint-based causal structure learning. Journal of Machine Learning Research, 15(116):3921-3962.
|
| 258 |
+
Quang Do, Yee Seng Chan, and Dan Roth. 2011. Minimally supervised event causality identification. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 294-303, Edinburgh, Scotland, UK. Association for Computational Linguistics.
|
| 259 |
+
|
| 260 |
+
Jesse Dunietz, Lori Levin, and Jaime Carbonell. 2017. The BECauSE corpus 2.0: Annotating causality and overlapping relations. In Proceedings of the 11th Linguistic Annotation Workshop, pages 95-104, Valencia, Spain. Association for Computational Linguistics.
|
| 261 |
+
Amir Feder, Katherine A. Keith, Emaad Manzoor, Reid Pryzant, Dhanya Sridhar, Zach Wood-Doughty, Jacob Eisenstein, Justin Grimmer, Roi Reichart, Margaret E. Roberts, Brandon M. Stewart, Victor Veitch, and Diyi Yang. 2021. Causal inference in natural language processing: Estimation, prediction, interpretation and beyond. CoRR, abs/2109.00725.
|
| 262 |
+
Christian Fong and Justin Grimmer. 2016. Discovery of treatments from text corpora. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1600-1609, Berlin, Germany. Association for Computational Linguistics.
|
| 263 |
+
Jonathan Grudin. 2009. AI and HCI: Two fields divided by a common focus. AI Magazine, 30(4):48-48.
|
| 264 |
+
David Hall, Daniel Jurafsky, and Christopher D. Manning. 2008. Studying the history of ideas using topic models. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 363-371, Honolulu, Hawaii. Association for Computational Linguistics.
|
| 265 |
+
Yufang Hou, Charles Jochim, Martin Gleize, Francesca Bonin, and Debasis Ganguly. 2019. Identification of tasks, datasets, evaluation metrics, and numeric scores for scientific leaderboards construction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5203-5213, Florence, Italy. Association for Computational Linguistics.
|
| 266 |
+
Yufang Hou, Charles Jochim, Martin Gleize, Francesca Bonin, and Debasis Ganguly. 2021. TDMSci: A specialized corpus for scientific literature entity tagging of tasks datasets and metrics. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 707-714, Online. Association for Computational Linguistics.
|
| 267 |
+
Zhijing Jin, Zeyu Peng, Tejas Vaidhya, Bernhard Schoelkopf, and Rada Mihalcea. 2021. Mining the cause of political decision-making from social media: A case study of COVID-19 policies across the US states. In *Findings of the Association for Computational Linguistics: EMNLP* 2021, pages 288–301, Punta Cana, Dominican Republic. Association for Computational Linguistics.
|
| 268 |
+
Katherine Keith, David Jensen, and Brendan O'Connor. 2020. Text and causal inference: A review of using text to remove confounding from causal estimates. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5332-5344, Online. Association for Computational Linguistics.
|
| 269 |
+
|
| 270 |
+
Bernard Koch, Emily Denton, Alex Hanna, and Jacob Gates Foster. 2021. Reduced, reused and recycled: The life of a dataset in machine learning research. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).
|
| 271 |
+
Thomas S. Kuhn. 1970. The structure of scientific revolutions, volume 111. University of Chicago Press, Chicago.
|
| 272 |
+
Shixia Liu, Yang Chen, Hao Wei, J. Yang, Kun Zhou, and Steven Mark Drucker. 2015. Exploring topical lead-lag across corpora. IEEE Transactions on Knowledge and Data Engineering, 27:115-129.
|
| 273 |
+
Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3219-3232, Brussels, Belgium. Association for Computational Linguistics.
|
| 274 |
+
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, volume 26. Curran Associates, Inc.
|
| 275 |
+
Paramita Mirza and Sara Tonelli. 2014. An analysis of causality between events and its relation to temporal information. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2097-2106, Dublin, Ireland. Dublin City University and Association for Computational Linguistics.
|
| 276 |
+
Saif M. Mohammad. 2019. The state of NLP literature: A diachronic analysis of the ACL Anthology. ArXiv, abs/1911.03562.
|
| 277 |
+
Saif M. Mohammad. 2020. Examining citations of natural language processing literature. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5199-5209, Online. Association for Computational Linguistics.
|
| 278 |
+
Matteo Negri, Marco Turchi, José G. C. de Souza, and Daniele Falavigna. 2014. Quality estimation for automatic speech recognition. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1813-1823, Dublin, Ireland. Dublin City University and Association for Computational Linguistics.
|
| 279 |
+
Jong-Hoon Oh, Kentaro Torisawa, Chikara Hashimoto, Ryu Iida, Masahiro Tanaka, and Julien Kloetzer. 2016. A semi-supervised learning approach to why question answering. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16, pages 3022-3029. AAAI Press.
|
| 280 |
+
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the
|
| 281 |
+
|
| 282 |
+
40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.
|
| 283 |
+
Judea Pearl, Madelyn Glymour, and Nicholas P Jewell. 2016. Causal inference in statistics: A primer. John Wiley & Sons.
|
| 284 |
+
Karl Pearson. 1892. The grammar of science. Nature, 46(1185):247-247.
|
| 285 |
+
Vinodkumar Prabhakaran, William L. Hamilton, Dan McFarland, and Dan Jurafsky. 2016. Predicting the rise and fall of scientific topics from trends in their rhetorical framing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1170-1180, Berlin, Germany. Association for Computational Linguistics.
|
| 286 |
+
Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, and Yejin Choi. 2019. ATOMIC: An atlas of machine commonsense for if-then reasoning. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):3027-3035.
|
| 287 |
+
Isabel Segura-Bedmar, Paloma Martínez, and María Herrero-Zazo. 2013. SemEval-2013 task 9: Extraction of drug-drug interactions from biomedical texts (DDIExtraction 2013). In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 341–350, Atlanta, Georgia, USA. Association for Computational Linguistics.
|
| 288 |
+
Naoki Shibata, Yuya Kajikawa, Yoshiyuki Takeda, and Katsumori Matsushima. 2008. Detecting emerging research fronts based on topological measures in citation networks of scientific publications. Technovation, 28(11):758-775.
|
| 289 |
+
Shohei Shimizu, Takanori Inazumi, Yasuhiro Sogawa, Aapo Hyvärinen, Yoshinobu Kawahara, Takashi Washio, Patrik O. Hoyer, and Kenneth Bollen. 2011. DirectLiNGAM: A direct method for learning a linear non-Gaussian structural equation model. Journal of Machine Learning Research, 12:1225-1248.
|
| 290 |
+
Janvijay Singh, Mukund Rungta, Diyi Yang, and Saif Mohammad. 2023. Forgotten knowledge: Examining the citational amnesia in NLP. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6192-6208, Toronto, Canada. Association for Computational Linguistics.
|
| 291 |
+
Henry Small. 2006. Tracking and predicting growth areas in science. Scientometrics, 68(3):595-610.
|
| 292 |
+
Chenhao Tan, Dallas Card, and Noah A. Smith. 2017. Friendships, rivalries, and trysts: Characterizing relations between ideas in texts. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages
|
| 293 |
+
|
| 294 |
+
773-783, Vancouver, Canada. Association for Computational Linguistics.
|
| 295 |
+
Ana Sabina Uban, Cornelia Caragea, and Liviu P. Dinu. 2021. Studying the evolution of scientific topics and their relationships. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP* 2021, pages 1908–1922, Online. Association for Computational Linguistics.
|
| 296 |
+
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
|
| 297 |
+
Victor Veitch, Dhanya Sridhar, and David Blei. 2020. Adapting text embeddings for causal inference. In Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), volume 124 of Proceedings of Machine Learning Research, pages 919-928. PMLR.
|
| 298 |
+
Victor Veitch and Anisha Zaveri. 2020. Sense and sensitivity analysis: Simple post-hoc analysis of bias due to unobserved confounding. In Advances in Neural Information Processing Systems, volume 33, pages 10999-11009. Curran Associates, Inc.
|
| 299 |
+
Jan Philip Wahle, Terry Ruas, Mohamed Abdalla, Bela Gipp, and Saif M. Mohammad. 2023. We are who we cite: Bridges of influence between natural language processing and other academic fields. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Singapore, Singapore. Association for Computational Linguistics.
|
| 300 |
+
Laura Wendlandt, Jonathan K. Kummerfeld, and Rada Mihalcea. 2018. Factors influencing the surprising instability of word embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2092-2102, New Orleans, Louisiana. Association for Computational Linguistics.
|
| 301 |
+
Christopher K. I. Williams and Carl Edward Rasmussen. 2006. Gaussian processes for machine learning, volume 2. MIT Press, Cambridge, MA.
|
| 302 |
+
Dongxiang Zhang, Lei Wang, Nuo Xu, Bing Tian Dai, and Heng Tao Shen. 2018. The gap of semantic parsing: A survey on automatic math word problem solvers. CoRR, abs/1808.07290.
|
| 303 |
+
|
| 304 |
+
# A Appendix: Additional Results
|
| 305 |
+
|
| 306 |
+
# A.1 Causal Relation
|
| 307 |
+
|
| 308 |
+
In Figure 4, we observe that the entity change values of tasks, datasets, metrics, and methods have a direct causal influence on the task stability value.
|
| 309 |
+
|
| 310 |
+

|
| 311 |
+
Figure 4: Causal Graph: The graph shows that the emergence and disappearance of TDMM entities (entity change values) have a direct causal effect on the stability of task entities.
|
| 312 |
+
|
| 313 |
+
# A.2 Causal Effects
|
| 314 |
+
|
| 315 |
+
In Table 5, we list the entities (tasks, datasets, methods, and metrics) that most influence research on a given NLP task.
|
| 316 |
+
|
| 317 |
+
# B Appendix: Supplementary Analysis
|
| 318 |
+
|
| 319 |
+
In addition to the primary results presented in Section 6, this section describes supplementary analyses.
|
| 320 |
+
|
| 321 |
+
# B.1 NLP Tasks and Their Dataset Evolution
|
| 322 |
+
|
| 323 |
+
Frequently Pursued NLP Tasks. From Table 5 in our paper, we observe that overall (1979-2022), "Text Classification" (column 6) holds a remarkable position among all tasks. This prominence stems from the frequent framing of various NLP tasks, such as "Sentiment Analysis" or "Word Sense Disambiguation," as "Text Classification," or from their borrowing of its concepts. Additionally, our framework offers the flexibility to perform a similar analysis between any chosen periods.
|
| 324 |
+
|
| 325 |
+
Evolution of Datasets in NLP Tasks. Referring to Table 5 in our paper, in the context of "Speech Recognition," we observe a shift in influential datasets over different periods. Between 1990-2002, the "WSJ Corpus" took the lead, while in the subsequent period of 2003-2017, the "ATIS Dataset" had more influence. Interestingly, between 2018-2022, the trend shifted once again to the "Switchboard Dataset".
|
| 326 |
+
|
| 327 |
+
A similar trend is reflected in the "Summarization" task: in 1990-2002, "Wordnet" played a significant role, the "Gigaword Dataset" took over in 2003-2017, and in the most recent period of 2018-2022, "Pubmed" emerged as the notable dataset for "Summarization."
|
| 328 |
+
|
| 329 |
+
Common Datasets Across NLP Tasks. We observe from Table 5 (column 6) that across the entire span from 1979 to 2022, the "Penn Treebank" dataset emerged as a pivotal influence, significantly impacting tasks such as "Language Modeling," "POS Tagging," and "Semantic Parsing." Using our framework, a similar analysis could also be done between any chosen periods.
|
| 332 |
+
|
| 333 |
+
# B.2 Entity Influence on Task Frequency and Stability
|
| 334 |
+
|
| 335 |
+
Influence of Research Entities on Task Stability. We measure the causal effect of research entities on Task Stability Value (see Section 5.1). From the resulting causal graph (Figure 4), we observe that the entity change values of associated tasks, datasets, metrics, and methods directly impact the stability value of the target task, similar to the task frequency shift value.
|
| 336 |
+
|
| 337 |
+
Correlations Between Task Frequency Change and Stability. We observe a slightly positive correlation between the frequency change and stability of research tasks, with a Pearson coefficient of 0.08. When a new task emerges, initially only a few researchers work on it, and its frequency of appearance gradually increases. At the same time, researchers experiment with various methods and datasets to solve the newly emerging task, causing high instability (e.g., Math Problem Solving (Zhang et al., 2018)). However, the opposite is not always true: well-defined tasks are often the most researched, yet researchers continually explore new ideas on these tasks, which harms stability.
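The Pearson coefficient above can be computed directly with NumPy. The sketch below uses made-up per-task frequency-change and stability values (illustrative placeholders, not the paper's data) to show the computation:

```python
import numpy as np

# Hypothetical per-task scores (made-up for illustration; not the paper's data):
# frequency-change and stability values for six tasks.
freq_change = np.array([0.12, 0.40, 0.05, 0.33, 0.21, 0.08])
stability = np.array([0.55, 0.30, 0.70, 0.45, 0.60, 0.68])

# Pearson correlation coefficient between the two series
r = np.corrcoef(freq_change, stability)[0, 1]
```

The paper's reported coefficient (0.08) is computed over its full task set; the toy arrays here only demonstrate the mechanics.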
|
| 338 |
+
|
| 339 |
+
Overview and Insights. Our analysis shows that research in NLP is primarily driven by tasks and methods; the influence of datasets follows them, and metrics have minimal impact. Our analysis of frequency shift values reveals a gradual paradigm shift in NLP research. Initially, the focus was on practical problems such as Speech Recognition and Machine Translation. However, over time, researchers ventured into more complex areas like textual entailment and argument mining, necessitating domain knowledge and extensive data reasoning. Examining stability values, we note that pre-trained language models have emerged as versatile solutions, reducing the need for task-specific approaches.
|
| 340 |
+
|
| 341 |
+
<table><tr><td rowspan="2">Task</td><td colspan="5">Primary Cause</td></tr><tr><td>1979-1989</td><td>1990-2002</td><td>2003-2017</td><td>2018-2022</td><td>1979-2022</td></tr><tr><td rowspan="4">Language Modeling</td><td>-</td><td>-</td><td>Recurrent Neural NetworksM</td><td>TransformersM</td><td>TransformersM</td></tr><tr><td>-</td><td>-</td><td>Machine TranslationT</td><td>Text GenerationT</td><td>Text GenerationT</td></tr><tr><td>-</td><td>-</td><td>Penn TreebankD</td><td>Perplexitym</td><td>Perplexitym</td></tr><tr><td>-</td><td>-</td><td>Perplexitym</td><td>SuperGLUED</td><td>Penn TreebankD</td></tr><tr><td rowspan="4">Dialogue System</td><td>-</td><td>-</td><td>Recurrent Neural NetworksM</td><td>MultiWozD</td><td>MultiWozD</td></tr><tr><td>-</td><td>-</td><td>MultiWozD</td><td>TransformersM</td><td>TransformersM</td></tr><tr><td>-</td><td>-</td><td>Language GenerationT</td><td>Response GenerationT</td><td>Response GenerationT</td></tr><tr><td>-</td><td>-</td><td>Perplexitym</td><td>Rougem</td><td>Rougem</td></tr><tr><td rowspan="4">Machine Translation</td><td>-</td><td>Probabilistic Generative ModelsM</td><td>WMT DataD</td><td>TransformersM</td><td>TransformersM</td></tr><tr><td>-</td><td>Speech RecognitionT</td><td>BLEUm</td><td>METEORm</td><td>METEORm</td></tr><tr><td>-</td><td>Perplexitym</td><td>Attention MechanismM</td><td>Language ModelingT</td><td>Language GenerationT</td></tr><tr><td>-</td><td>Penn TreebankD</td><td>Language GenerationT</td><td>WMT DataD</td><td>WMT DataD</td></tr><tr><td rowspan="4">Speech Recognition</td><td>Hidden Markov ModelsM</td><td>Hidden Markov ModelsM</td><td>Machine TranslationT</td><td>Machine TranslationT</td><td>Hidden Markov ModelsM</td></tr><tr><td>Machine TranslationT</td><td>WSJ CorpusD</td><td>Hidden Markov ModelsM</td><td>Acoustic ModelsM</td><td>Language ModelingT</td></tr><tr><td>Perplexitym</td><td>Perplexitym</td><td>ATIS DatasetD</td><td>Switchboard DatasetD</td><td>Perplexitym</td></tr><tr><td>-</td><td>Language ModelingT</td><td>Word Error Ratem</td><td>Word Error Ratem</td><td>ATIS DatasetD</td></tr><tr><td rowspan="4">Named Entity Recognition</td><td>-</td><td>Hidden Markov ModelsM</td><td>POS TaggingT</td><td>Relation ExtractionT</td><td>POS TaggingT</td></tr><tr><td>-</td><td>Information ExtractionT</td><td>Conditional Random FieldsM</td><td>Wikipedia CorpusD</td><td>Conditional Random FieldsM</td></tr><tr><td>-</td><td>Genia CorpusD</td><td>PubmedD</td><td>Pre-Trained LLMsM</td><td>OntonotesD</td></tr><tr><td>-</td><td>F1 Scorem</td><td>F1 Scorem</td><td>F1 Scorem</td><td>F1 Scorem</td></tr><tr><td rowspan="4">POS Tagging</td><td>-</td><td>Text ClassificationT</td><td>Parser AlgorithmsM</td><td>Word SegmentationT</td><td>Word SegmentationT</td></tr><tr><td>-</td><td>Discriminative ModelsM</td><td>Word SegmentationT</td><td>Neural Network ModelsM</td><td>Neural Network ModelsM</td></tr><tr><td>-</td><td>Penn TreebankD</td><td>Penn TreebankD</td><td>Penn TreebankD</td><td>Penn TreebankD</td></tr><tr><td>-</td><td>F1 Scorem</td><td>F1 Scorem</td><td>F1 Scorem</td><td>F1 Scorem</td></tr><tr><td rowspan="4">Word Sense Disambiguation</td><td>-</td><td>WordnetD</td><td>Maximum Entropy ModelsM</td><td>Neural Network ModelsM</td><td>WordnetD</td></tr><tr><td>-</td><td>Semantic TaggingT</td><td>Text ClassificationT</td><td>Text ClassificationT</td><td>Neural Network ModelsM</td></tr><tr><td>-</td><td>Discriminative ModelsM</td><td>WordnetD</td><td>WordnetD</td><td>Text ClassificationT</td></tr><tr><td>-</td><td>Accuracym</td><td>F1 Scorem</td><td>F1 Scorem</td><td>F1 Scorem</td></tr><tr><td rowspan="4">Morphological Analysis</td><td>-</td><td>Statistical ModelsM</td><td>Dependency ParsingT</td><td>UD TreebankD</td><td>Statistical ModelsM</td></tr><tr><td>-</td><td>Word SegmentationT</td><td>Statistical ModelsM</td><td>Pre-Trained LLMsM</td><td>Dependency ParsingT</td></tr><tr><td>-</td><td>UD TreebankD</td><td>UD TreebankD</td><td>LemmatizationT</td><td>UD TreebankD</td></tr><tr><td>-</td><td>Accuracym</td><td>Accuracym</td><td>F1 Scorem</td><td>Accuracym</td></tr><tr><td rowspan="4">Semantic Parsing</td><td>Grammar InductionM</td><td>Parser AlgorithmsM</td><td>Parser AlgorithmsM</td><td>Dependency ParsingT</td><td>Parser AlgorithmsM</td></tr><tr><td>Information RetrievalT</td><td>Information ExtractionT</td><td>Dependency ParsingT</td><td>Parser AlgorithmsM</td><td>Penn TreebankD</td></tr><tr><td>Accuracym</td><td>Penn TreebankD</td><td>Penn TreebankD</td><td>Penn TreebankD</td><td>Dependency ParsingT</td></tr><tr><td>-</td><td>F1 Scorem</td><td>F1 Scorem</td><td>F1 Scorem</td><td>F1 Scorem</td></tr><tr><td rowspan="4">Semantic Role Labeling</td><td>-</td><td>-</td><td>Support Vector MachinesM</td><td>Neural Network ModelsM</td><td>Support Vector MachinesM</td></tr><tr><td>-</td><td>-</td><td>Relation ExtractionT</td><td>Named Entity RecognitionT</td><td>Named Entity RecognitionT</td></tr><tr><td>-</td><td>-</td><td>PropbankD</td><td>PropbankD</td><td>PropbankD</td></tr><tr><td>-</td><td>-</td><td>F1 Scorem</td><td>F1 Scorem</td><td>F1 Scorem</td></tr><tr><td rowspan="4">Co-reference Resolution</td><td>-</td><td>MUC-VI Text CollectionD</td><td>Integer Linear ProgrammingM</td><td>Neural Network ModelsM</td><td>Neural Network ModelsM</td></tr><tr><td>-</td><td>Discriminative ModelsM</td><td>OntonotesD</td><td>OntonotesD</td><td>OntonotesD</td></tr><tr><td>-</td><td>Word Sense DisambiguationT</td><td>Mention DetectionT</td><td>Mention DetectionT</td><td>Mention DetectionT</td></tr><tr><td>-</td><td>F1 Scorem</td><td>F1 Scorem</td><td>F1 Scorem</td><td>F1 Scorem</td></tr><tr><td rowspan="4">Sentiment Analysis</td><td>-</td><td>-</td><td>Twitter DatasetD</td><td>Text ClassificationT</td><td>Text ClassificationT</td></tr><tr><td>-</td><td>-</td><td>Text ClassificationT</td><td>Pre-Trained LLMsM</td><td>Neural Network ModelsM</td></tr><tr><td>-</td><td>-</td><td>Neural Network ModelsM</td><td>Amazon ReviewsD</td><td>Twitter DatasetD</td></tr><tr><td>-</td><td>-</td><td>F1 Scorem</td><td>F1 Scorem</td><td>F1 Scorem</td></tr><tr><td rowspan="4">Argument Mining</td><td>-</td><td>-</td><td>Text ClassificationT</td><td>Sentiment AnalysisT</td><td>Sentiment AnalysisT</td></tr><tr><td>-</td><td>-</td><td>Neural Network ModelsM</td><td>Neural Network ModelsM</td><td>Neural Network ModelsM</td></tr><tr><td>-</td><td>-</td><td>Wikipedia CorpusD</td><td>Wikipedia CorpusD</td><td>Wikipedia CorpusD</td></tr><tr><td>-</td><td>-</td><td>F1 Scorem</td><td>F1 Scorem</td><td>F1 Scorem</td></tr><tr><td rowspan="4">Question Answering</td><td>Parsing AlgorithmsM</td><td>Information ExtractionT</td><td>Information ExtractionT</td><td>Pre-Trained LLMsM</td><td>Information ExtractionT</td></tr><tr><td>Information RetrievalT</td><td>WordnetD</td><td>FreebaseD</td><td>SquadD</td><td>Pre-Trained LLMsM</td></tr><tr><td>Accuracym</td><td>Accuracym</td><td>Parsing AlgorithmsM</td><td>SummarizationT</td><td>SquadD</td></tr><tr><td>-</td><td>Statistical ModelsM</td><td>F1 Scorem</td><td>F1 Scorem</td><td>F1 Scorem</td></tr><tr><td rowspan="4">Textual Entailment</td><td>-</td><td>-</td><td>Statistical ModelsM</td><td>Pre-Trained LLMsM</td><td>Pre-Trained LLMsM</td></tr><tr><td>-</td><td>-</td><td>Information ExtractionT</td><td>SNLI DatasetD</td><td>SNLI DatasetD</td></tr><tr><td>-</td><td>-</td><td>F1 Scorem</td><td>Text ClassificationT</td><td>Text ClassificationT</td></tr><tr><td>-</td><td>-</td><td>-</td><td>F1 Scorem</td><td>F1 Scorem</td></tr><tr><td rowspan="4">Summarization</td><td>-</td><td>WordnetD</td><td>Sentence CompressionT</td><td>Pre-Trained LLMsM</td><td>Pre-Trained LLMsM</td></tr><tr><td>-</td><td>Probabilistic Generative ModelsM</td><td>Recurrent Neural NetworksM</td><td>Rougem</td><td>Rougem</td></tr><tr><td>-</td><td>F1 Scorem</td><td>Rougem</td><td>PubmedD</td><td>Question AnsweringT</td></tr><tr><td>-</td><td>Information RetrievalT</td><td>GigawordD</td><td>Question AnsweringT</td><td>PubmedD</td></tr></table>
|
| 342 |
+
|
| 343 |
+
Table 5: The primary reason behind the frequency shift of the tasks. We analyze the trends in four different reference periods. The most influential Task (T), Dataset (D), Method (M), and Metric (m) are given in decreasing order of their influence. "-" means there are not enough data instances for the causal analysis.
|
| 346 |
+
|
| 347 |
+
# C Appendix: Algorithms
|
| 348 |
+
|
| 349 |
+
# C.1 DirectLinGAM
|
| 350 |
+
|
| 351 |
+
In Algorithm 1, we describe the DirectLinGAM algorithm (oracle version) at a high level, as described by Shimizu et al. (2011).
|
| 352 |
+
|
| 353 |
+
# D Appendix: Multiple Linear Regression
|
| 354 |
+
|
| 355 |
+
We use multiple linear regression to regress a variable on several variables (Pearl et al., 2016). For instance, if we want to predict the value of a variable $Y$ using the values of variables $X_{1}, X_{2}, \ldots, X_{k-1}, X_{k}$ , we perform multiple linear regression of $Y$ on $\{X_{1}, X_{2}, \ldots, X_{k-1}, X_{k}\}$ , and estimate a regression relationship (Eqn. 1), which represents an inclined plane through the $(k+1)$ -dimensional coordinate system.
|
| 356 |
+
|
| 357 |
+
# Algorithm 1: Causal Graph Discovery: DirectLinGAM-Algorithm
|
| 358 |
+
|
| 359 |
+
1 Given a $p$-dimensional random vector $x$, a set of its variable subscripts $U$, and a $p \times n$ data matrix of the random vector as $X$, initialize an ordered list of variables $K := \emptyset$ and $m := 1$;
|
| 360 |
+
2 Repeat until $p - 1$ subscripts are appended to $K$: Perform least squares regression of $x_{i}$ on $x_{j}$, $\forall i \in U - K$ $(i \neq j)$, and compute the residual vectors $r^{(j)}$ and the residual data matrix $R^{(j)}$ from the matrix $X$, $\forall j \in U - K$. Find a variable $x_{m}$ independent of its residuals and append $m$ to the end of $K$;
|
| 361 |
+
3 Append the remaining variable to the end of $K$ ;
|
| 362 |
+
4 Construct a strictly lower triangular matrix $B$ by following the order in $K$ , and estimate the connection strengths $b_{ij}$ by using some conventional covariance-based regression such as least squares and maximum likelihood approaches on the original random vector $x$ and the original data matrix $X$ ;
|
| 363 |
+
|
| 364 |
+
$$
|
| 365 |
+
Y = r_{0} + \sum_{i=1}^{k} r_{i} X_{i} \tag{1}
|
| 366 |
+
$$
|
| 367 |
+
|
| 368 |
+
The Gauss-Markov theorem (Williams and Rasmussen, 2006) simplifies the computation of the partial regression coefficients ($r_1, \dots, r_k$ in Eqn. 1). It states that if we write $Y$ as a linear combination of $X_{1}, X_{2}, \ldots, X_{k-1}, X_{k}$ and a noise term $\epsilon$,
|
| 369 |
+
|
| 370 |
+
$$
|
| 371 |
+
Y = r_{0} + \sum_{i=1}^{k} r_{i} X_{i} + \epsilon \tag{2}
|
| 372 |
+
$$
|
| 373 |
+
|
| 374 |
+
then, regardless of the distributions of the variables $Y$, $X_{1}$, $X_{2}$, ..., $X_{k}$, the best least-squares coefficients are obtained when $\epsilon$ is uncorrelated with each regressor, i.e.,
|
| 375 |
+
|
| 376 |
+
$$
|
| 377 |
+
\mathrm{Cov}(\epsilon, X_{i}) = 0, \quad \forall i = 1, 2, \dots, k \tag{3}
|
| 378 |
+
$$
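The least-squares fit of Eqn. 2 and the orthogonality condition of Eqn. 3 can be checked numerically. The sketch below uses synthetic data with arbitrary, made-up coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 1000, 3

# synthetic regressors and outcome with arbitrary (made-up) coefficients
X = rng.normal(size=(n, k))
Y = 1.0 + X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=n)

# multiple linear regression of Y on X_1..X_k: fit r_0..r_k by least squares
A = np.column_stack([np.ones(n), X])   # prepend intercept column for r_0
r, *_ = np.linalg.lstsq(A, Y, rcond=None)

# Gauss-Markov condition (Eqn. 3): at the least-squares solution, the
# residual noise term is uncorrelated with every regressor
eps = Y - A @ r
cov = np.array([np.cov(eps, X[:, i])[0, 1] for i in range(k)])
```

Here `r` recovers the generating coefficients up to sampling noise, and every entry of `cov` is zero to numerical precision, matching Eqn. 3.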
|
adiachronicanalysisofparadigmshiftsinnlpresearchwhenhowandwhy/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:10b4d290532d0c4418c79c961a0d2b758de7cda444d727f1cffa52331544468f
|
| 3 |
+
size 637812
|
adiachronicanalysisofparadigmshiftsinnlpresearchwhenhowandwhy/layout.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:098d4a2bc3756c5b9b9a3bc41a8cb2830fa30e01e21711ae3acdc3af2a84d4ac
|
| 3 |
+
size 461303
|
adiachronicperspectiveonusertrustinaiunderuncertainty/9e11161d-a922-4d24-b432-d5ebe718bf88_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:092c6b8501582c56dcb56abad057f8a53bfda2622b4bda38c3363667edc9a320
|
| 3 |
+
size 86933
|
adiachronicperspectiveonusertrustinaiunderuncertainty/9e11161d-a922-4d24-b432-d5ebe718bf88_model.json
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:8e97b3bb739bfb36d5576243ca2efe269c7952aaf1c66ac8d6c3188b2cbf235d
|
| 3 |
+
size 110471
|