Re-upload MinerU batch e0533878-c6c3-46a8-8b07-dcee15e7ce39
This view is limited to 50 files because it contains too many changes.
See raw diff
- data/2020/2101_07xxx/2101.07206/83062a60-8a64-4191-a22e-65a49e9db2ee_content_list.json +0 -0
- data/2020/2101_07xxx/2101.07206/83062a60-8a64-4191-a22e-65a49e9db2ee_model.json +0 -0
- data/2020/2101_07xxx/2101.07206/full.md +576 -3
- data/2020/2101_07xxx/2101.07206/layout.json +0 -0
- data/2021/2101_00xxx/2101.00117/b1510fde-b32e-4443-bd7a-7c6aed392142_content_list.json +1495 -3
- data/2021/2101_00xxx/2101.00117/b1510fde-b32e-4443-bd7a-7c6aed392142_model.json +1938 -3
- data/2021/2101_00xxx/2101.00117/full.md +284 -3
- data/2021/2101_00xxx/2101.00117/layout.json +0 -0
- data/2021/2101_00xxx/2101.00121/10de5b4b-1a6e-40ba-bf75-79fb29a7975d_content_list.json +1679 -3
- data/2021/2101_00xxx/2101.00121/10de5b4b-1a6e-40ba-bf75-79fb29a7975d_model.json +0 -0
- data/2021/2101_00xxx/2101.00121/full.md +322 -3
- data/2021/2101_00xxx/2101.00121/layout.json +0 -0
- data/2021/2101_00xxx/2101.00133/a7973c66-170a-4f81-81e1-4c4a0eed068c_content_list.json +0 -0
- data/2021/2101_00xxx/2101.00133/a7973c66-170a-4f81-81e1-4c4a0eed068c_model.json +0 -0
- data/2021/2101_00xxx/2101.00133/full.md +412 -3
- data/2021/2101_00xxx/2101.00133/layout.json +0 -0
- data/2021/2101_00xxx/2101.00178/431507cf-6cad-41d1-9637-1521a5ac3935_content_list.json +1479 -3
- data/2021/2101_00xxx/2101.00178/431507cf-6cad-41d1-9637-1521a5ac3935_model.json +1740 -3
- data/2021/2101_00xxx/2101.00178/full.md +298 -3
- data/2021/2101_00xxx/2101.00178/layout.json +0 -0
- data/2021/2101_00xxx/2101.00180/e6c1938e-fd7a-49b4-ac0c-6863dca02ce5_content_list.json +1280 -3
- data/2021/2101_00xxx/2101.00180/e6c1938e-fd7a-49b4-ac0c-6863dca02ce5_model.json +1720 -3
- data/2021/2101_00xxx/2101.00180/full.md +208 -3
- data/2021/2101_00xxx/2101.00180/layout.json +0 -0
- data/2021/2101_00xxx/2101.00190/d5499c11-780e-4ee3-a8b8-c44413fe322b_content_list.json +0 -0
- data/2021/2101_00xxx/2101.00190/d5499c11-780e-4ee3-a8b8-c44413fe322b_model.json +0 -0
- data/2021/2101_00xxx/2101.00190/full.md +385 -3
- data/2021/2101_00xxx/2101.00190/layout.json +0 -0
- data/2021/2101_00xxx/2101.00204/2a8c286c-88d1-4769-975d-642826d5ce5d_content_list.json +1374 -3
- data/2021/2101_00xxx/2101.00204/2a8c286c-88d1-4769-975d-642826d5ce5d_model.json +0 -0
- data/2021/2101_00xxx/2101.00204/full.md +335 -3
- data/2021/2101_00xxx/2101.00204/layout.json +0 -0
- data/2021/2101_00xxx/2101.00216/1322f2fe-d1ab-4e6d-b8cd-999150e9e3a0_content_list.json +1993 -3
- data/2021/2101_00xxx/2101.00216/1322f2fe-d1ab-4e6d-b8cd-999150e9e3a0_model.json +0 -0
- data/2021/2101_00xxx/2101.00216/full.md +440 -3
- data/2021/2101_00xxx/2101.00216/layout.json +0 -0
- data/2021/2101_00xxx/2101.00217/b21cf84d-19f0-4755-9648-42b72523bbcc_content_list.json +0 -0
- data/2021/2101_00xxx/2101.00217/b21cf84d-19f0-4755-9648-42b72523bbcc_model.json +0 -0
- data/2021/2101_00xxx/2101.00217/full.md +0 -0
- data/2021/2101_00xxx/2101.00217/layout.json +0 -0
- data/2021/2101_00xxx/2101.00240/a32da6b4-14ab-4952-b4f7-fdc1a1a5e4f1_content_list.json +0 -0
- data/2021/2101_00xxx/2101.00240/a32da6b4-14ab-4952-b4f7-fdc1a1a5e4f1_model.json +0 -0
- data/2021/2101_00xxx/2101.00240/full.md +0 -0
- data/2021/2101_00xxx/2101.00240/layout.json +0 -0
- data/2021/2101_00xxx/2101.00288/0dc1456f-b5e5-426e-a776-7cd26db1c614_content_list.json +0 -0
- data/2021/2101_00xxx/2101.00288/0dc1456f-b5e5-426e-a776-7cd26db1c614_model.json +0 -0
- data/2021/2101_00xxx/2101.00288/full.md +433 -3
- data/2021/2101_00xxx/2101.00288/layout.json +0 -0
- data/2021/2101_00xxx/2101.00292/93f9ad16-4637-4a8a-8e75-517130c9c482_content_list.json +0 -0
- data/2021/2101_00xxx/2101.00292/93f9ad16-4637-4a8a-8e75-517130c9c482_model.json +0 -0
data/2020/2101_07xxx/2101.07206/83062a60-8a64-4191-a22e-65a49e9db2ee_content_list.json
CHANGED

The diff for this file is too large to render. See raw diff.

data/2020/2101_07xxx/2101.07206/83062a60-8a64-4191-a22e-65a49e9db2ee_model.json
CHANGED

The diff for this file is too large to render. See raw diff.
data/2020/2101_07xxx/2101.07206/full.md
CHANGED

@@ -1,3 +1,576 @@
# DeepGreen: Deep Learning of Green's Functions for Nonlinear Boundary Value Problems

Craig R. Gin$^{1*}$, Daniel E. Shea$^{2*}$, Steven L. Brunton$^{3}$, and J. Nathan Kutz$^{4}$

$^{1}$ Department of Population Health and Pathobiology, North Carolina State University, Raleigh, NC 27695, United States

$^{2}$ Department of Materials Science and Engineering, University of Washington, Seattle, WA 98195, United States

$^{3}$ Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, United States

$^{4}$ Department of Applied Mathematics, University of Washington, Seattle, WA 98195, United States

*Co-first authors and corresponding authors. Email: crgin@ncsu.edu and sheadan@uw.edu.
# Abstract

Boundary value problems (BVPs) play a central role in the mathematical analysis of constrained physical systems subjected to external forces. Consequently, BVPs frequently emerge in nearly every engineering discipline and span problem domains including fluid mechanics, electromagnetics, quantum mechanics, and elasticity. The fundamental solution, or Green's function, is a leading method for solving linear BVPs that enables facile computation of new solutions to systems under any external forcing. However, fundamental Green's function solutions for nonlinear BVPs are not feasible since linear superposition no longer holds. In this work, we propose a flexible deep learning approach to solve nonlinear BVPs using a dual-autoencoder architecture. The autoencoders discover an invertible coordinate transform that linearizes the nonlinear BVP and identifies both a linear operator $L$ and Green's function $G$ which can be used to solve new nonlinear BVPs. We find that the method succeeds on a variety of nonlinear systems including nonlinear Helmholtz and Sturm-Liouville problems, nonlinear elasticity, and a 2D nonlinear Poisson equation. The method merges the strengths of the universal approximation capabilities of deep learning with the physics knowledge of Green's functions to yield a flexible tool for identifying fundamental solutions to a variety of nonlinear systems.

Keywords: Deep learning, Green's function, Nonlinearity, Koopman operator theory
# 1 Introduction

Boundary value problems (BVPs) are ubiquitous in the sciences (1). From elasticity to quantum electronics, BVPs have been fundamental in the development and engineering design of numerous transformative technologies of the 20th century. Historically, the formulation of many canonical problems in physics and engineering results in linear BVPs: from Fourier formulating the heat equation in 1822 (2) to more modern applications such as designing chip architectures in the semiconductor industry (3, 4). Much of our theoretical understanding of BVPs comes from the construction of the fundamental solution of the BVP, commonly known as the Green's function (5). The Green's function solution relies on a common property of many BVPs: linearity. Specifically, general solutions rely on linear superposition to hold, thus limiting their usefulness in many modern applications where BVPs are often heterogeneous and nonlinear. By leveraging modern deep learning, we are able to learn linearizing transformations of BVPs that render nonlinear BVPs linear so that we can construct the Green's function solution. Our deep learning of Green's functions, DeepGreen, provides a transformative architecture for modern solutions of nonlinear BVPs.

DeepGreen is inspired by recent works which use deep neural networks (DNNs) to discover advantageous coordinate transformations for dynamical systems (6-15). The universal approximation properties of DNNs (16, 17) are ideal for learning coordinate transformations that linearize nonlinear BVPs, ODEs, and PDEs. Specifically, such linearizing transforms fall broadly under the umbrella of Koopman operator theory (18), which has a modern interpretation in terms of dynamical systems theory (19-22). There are only limited cases in which Koopman operators can be constructed explicitly (23). However, Dynamic Mode Decomposition (DMD) (24) provides a numerical algorithm for approximating the Koopman operator (25), with many recent extensions that improve on the DMD approximation (26). More recently, neural networks have been used to construct Koopman embeddings (6, 8-13, 15). This is an alternative to enriching the observables of DMD (27-33). Thus, neural networks have emerged as a highly effective mathematical tool for approximating complex data (34, 35) with a linear model. DNNs have been used in this context to discover time-stepping algorithms for complex systems (36-40). Moreover, DNNs have been used to approximate constitutive models of BVPs (41).

DeepGreen leverages the success of DNNs for dynamical systems to discover coordinate transformations that linearize nonlinear BVPs so that the Green's function solution can be recovered. This allows for the discovery of the fundamental solutions for nonlinear BVPs, opening many opportunities for the engineering and physical sciences. DeepGreen exploits physics-informed learning by using autoencoders (AEs) to take data from the original high-dimensional input space to new coordinates at the intrinsic rank of the underlying physics (6, 7, 42). The architecture also leverages the success of Deep Residual Networks (DRNs) (43), which enables our approach to efficiently handle near-identity coordinate transformations (15).

The Green's function constructs the solution to a BVP for any given forcing by superposition. Specifically, consider the classical linear BVP (5)

$$
L[v(\mathbf{x})] = f(\mathbf{x}) \tag{1}
$$

where $L$ is a linear differential operator, $f$ is a forcing, $\mathbf{x} \in \Omega$ is the spatial coordinate, and $\Omega$ is an open set. The boundary conditions $Bv(\mathbf{x}) = 0$ are imposed on $\partial \Omega$ with a linear operator $B$. The fundamental solution is constructed by considering the adjoint equation

$$
L^{\dagger}[G(\mathbf{x}, \xi)] = \delta(\mathbf{x} - \xi) \tag{2}
$$

where $L^{\dagger}$ is the adjoint operator (along with its associated boundary conditions) and $\delta(\mathbf{x} - \xi)$ is the Dirac delta function. Taking the inner product of (1) with respect to the Green's function gives the fundamental solution

$$
v(\mathbf{x}) = (f(\xi), G(\xi, \mathbf{x})) = \int_{\Omega} G(\xi, \mathbf{x}) f(\xi) \, d\xi, \tag{3}
$$

which is valid for any forcing $f(\mathbf{x})$. Thus, once the Green's function is computed, the solution for arbitrary forcing functions can be easily extracted by integration. This integration represents a superposition of a continuum of delta function forcings that are used to represent $f(\mathbf{x})$.

In many modern applications, nonlinearity plays a fundamental role so that the BVP is of the form

$$
N[u(\mathbf{x})] = F(\mathbf{x}) \tag{4}
$$

where $N[\cdot]$ is a nonlinear differential operator. For this case, the principle of linear superposition no longer holds and the notion of a fundamental solution is lost. However, modern deep learning algorithms allow us the flexibility of learning coordinate transformations (and their inverses) of the form

$$
v = \psi(u), \tag{5a}
$$

$$
f = \phi(F), \tag{5b}
$$

such that $v$ and $f$ satisfy the linear BVP (1) for which we generated the fundamental solution (3). This gives a nonlinear fundamental solution through use of this deep learning transformation.

DeepGreen is a supervised learning algorithm, which is ultimately a high-dimensional interpolation problem (44), for learning the coordinate transformations $\psi(u)$ and $\phi(F)$. DeepGreen is enabled by a physics-informed deep autoencoder coordinate transformation which establishes superposition for nonlinear BVPs, thus enabling a Koopman BVP framework. The learned Green's function enables accurate construction of solutions with new forcing functions in the same way as a linear BVP. We demonstrate the DeepGreen method on a variety of nonlinear boundary value problems, including a nonlinear 2D Poisson problem, showing that such an architecture can be used in many modern and diverse applications in aerospace, electromagnetics, elasticity, materials, and chemical reactors.
# 2 Deep Autoencoders for Linearizing BVPs

Deep AEs have been used to linearize dynamical systems, which are initial value problems. We extend this idea to BVPs. To be precise, we consider BVPs of the form

$$
N[u(\mathbf{x})] = F(\mathbf{x}), \quad \mathbf{x} \in \Omega, \tag{6a}
$$

$$
B[u(\mathbf{x})] = 0, \quad \mathbf{x} \in \partial \Omega, \tag{6b}
$$

where $\Omega$ is a simply connected open set in $\mathbb{R}^n$ with boundary $\partial \Omega$, $N$ is a nonlinear differential operator, $F(\mathbf{x})$ is the nonhomogeneous forcing function, $B$ is a boundary condition, and $u(\mathbf{x})$ is the solution to the BVP. We wish to find a pair of coordinate transformations of the form (5) such that $v$ and $f$ satisfy a linear BVP

$$
L[v(\widehat{\mathbf{x}})] = f(\widehat{\mathbf{x}}), \quad \widehat{\mathbf{x}} \in \widehat{\Omega}, \tag{7a}
$$

$$
\widehat{B}[v(\widehat{\mathbf{x}})] = 0, \quad \widehat{\mathbf{x}} \in \partial \widehat{\Omega}, \tag{7b}
$$
Figure 1: DeepGreen solves nonlinear BVPs by identifying the Green's function of the nonlinear problem using a deep learning approach with a dual autoencoder architecture. A nonhomogeneous linear BVP can be solved using the Green's function approach, but a nonlinear BVP cannot. DeepGreen transforms a nonlinear BVP to a linear BVP, solves the linearized BVP, and then inverse transforms the linear solution to solve the nonlinear BVP.

where $L$ is a linear differential operator and $\widehat{\mathbf{x}}$ is the spatial coordinate in the transformed domain $\widehat{\Omega}$ with boundary $\partial \widehat{\Omega}$. Because $L$ is linear, there is a Green's function $G(\widehat{\mathbf{x}}, \xi)$ such that the solution $v$ to the BVP (7) can be obtained through convolution of the Green's function and the transformed forcing function

$$
v(\widehat{\mathbf{x}}) = \int_{\widehat{\Omega}} G(\xi, \widehat{\mathbf{x}}) f(\xi) \, d\xi. \tag{8}
$$

The coordinate transformations along with the Green's function of the linearized BVP provide the analog of a Green's function for the nonlinear BVP (6). In particular, for a forcing function $F(\mathbf{x})$, the transformed forcing function is $f = \phi(F)$. The solution to the linearized BVP can be obtained using the Green's function, $v = \int G(\xi, \widehat{\mathbf{x}}) f(\xi) \, d\xi$. Then the solution $u(\mathbf{x})$ to the nonlinear BVP (6) is obtained by inverting the coordinate transformation, $u = \psi^{-1}(v)$.

The question that remains is how to discover the appropriate coordinate transformations $\psi$ and $\phi$. We leverage the universal approximation properties of neural networks in order to learn these transformations. In order to use neural networks, we first need to discretize the BVP. Let $\mathbf{u}$ be a spatial discretization of $u(\mathbf{x})$ and $\mathbf{F}$ be a discretization of $F(\mathbf{x})$. Then the discretized version of the BVP (6) is

$$
\mathbf{N}[\mathbf{u}] = \mathbf{F}, \tag{9a}
$$

$$
\mathbf{B}[\mathbf{u}] = \mathbf{0}. \tag{9b}
$$
Neural networks $\psi_{u}$ and $\phi_F$ are used to transform $\mathbf{u}$ and $\mathbf{F}$ to the latent space vectors $\mathbf{v}$ and $\mathbf{f}$,

$$
\mathbf{v} = \psi_{u}(\mathbf{u}), \tag{10a}
$$

$$
\mathbf{f} = \phi_{F}(\mathbf{F}), \tag{10b}
$$

where $\mathbf{v}$ and $\mathbf{f}$ satisfy the linear equation

$$
\mathbf{L}\mathbf{v} = \mathbf{f}, \tag{11}
$$

for some matrix $\mathbf{L}$, which is also learned. In order to learn invertible transforms $\psi_{u}$ and $\phi_F$, we construct the problem as a pair of autoencoder networks.
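In the paper, $\mathbf{L}$ is learned jointly with the encoders by gradient descent. As a simplified illustration of the latent-space relation (11) in isolation, a symmetric $\mathbf{L}$ can be recovered from latent pairs $(\mathbf{v}_k, \mathbf{f}_k)$ by least squares; the synthetic data below stands in for real encoder outputs, and the sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Suppose the encoders produced latent pairs (v_k, f_k) that exactly satisfy
# f_k = L_true v_k for some symmetric L_true (synthetic stand-in data).
d, N = 20, 500
A = rng.normal(size=(d, d))
L_true = (A + A.T) / 2                  # symmetric, like the paper's constraint on L
V = rng.normal(size=(d, N))             # columns are latent vectors v_k
F = L_true @ V                          # corresponding latent forcings f_k

# Least-squares fit of L from the data: minimize ||F - L V||_F,
# then symmetrize to respect the self-adjointness constraint.
L_fit = F @ np.linalg.pinv(V)
L_fit = (L_fit + L_fit.T) / 2

assert np.allclose(L_fit, L_true)       # exact recovery when V has full row rank
```

In DeepGreen the same relation is instead enforced softly, through the linearity losses described next, because $\mathbf{v}$ and $\mathbf{f}$ are themselves being learned at the same time.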
In this construction, the transforms $\psi_{u}$ and $\phi_F$ are the encoders and the transform inverses are the decoders. The network architecture and loss functions are shown in Figure 2. The neural network is trained using numerous and diverse solutions to the nonlinear BVP (9), which can be obtained with many different forcings $\mathbf{F}_k$. Consider a dataset comprised of pairs of discretized solutions and forcing functions $\{\mathbf{u}_k, \mathbf{F}_k\}_{k=1}^N$. The loss function for training the network is the sum of six losses, each of which enforces a desired condition.

Figure 2: DeepGreen architecture. Two autoencoders learn invertible coordinate transformations that linearize a nonlinear boundary value problem. The latent space is constrained to exhibit properties of a linear system, including linear superposition, which enables discovery of a Green's function for nonlinear boundary value problems.

Loss Functions

| Category | Loss | |
| --- | --- | --- |
| Autoencoder | $\mathbf{u} = \psi_u^{-1} \circ \psi_u(\mathbf{u})$ | (L1) |
| | $\mathbf{F} = \phi_F^{-1} \circ \phi_F(\mathbf{F})$ | (L2) |
| Linearity | $\mathbf{L}\mathbf{v} = \mathbf{f}$ | (L3) |
| | $\mathbf{L}\mathbf{v}_i + \mathbf{L}\mathbf{v}_j = \mathbf{f}_i + \mathbf{f}_j$ | (L4) |
| Cross-mapping | $\mathbf{F} = \phi_F^{-1} \circ \mathbf{L} \circ \psi_u(\mathbf{u})$ | (L5) |
| | $\mathbf{u} = \psi_u^{-1} \circ \mathbf{G} \circ \phi_F(\mathbf{F})$ | (L6) |

The loss functions can be split into three categories:
1. Autoencoder losses: We wish to learn invertible coordinate transformations given by equations (10a) and (10b). In order to do so, we use two autoencoders. The autoencoder for $\mathbf{u}$ consists of an encoder $\psi_{u}$ which performs the transformation (10a) and a decoder $\psi_{u}^{-1}$ which inverts the transformation. In order to enforce that the encoder and decoder are inverses, we use the autoencoder loss

$$
\mathcal{L}_1 = \frac{1}{N} \sum_{k=1}^{N} \frac{\left\| \mathbf{u}_k - \psi_u^{-1} \circ \psi_u(\mathbf{u}_k) \right\|_2^2}{\left\| \mathbf{u}_k \right\|_2^2}. \tag{12}
$$

Similarly, there is an autoencoder for $\mathbf{F}$ where the encoder $\phi_F$ performs the transformation (10b). This transformation also has an inverse enforced by the associated autoencoder loss function

$$
\mathcal{L}_2 = \frac{1}{N} \sum_{k=1}^{N} \frac{\left\| \mathbf{F}_k - \phi_F^{-1} \circ \phi_F(\mathbf{F}_k) \right\|_2^2}{\left\| \mathbf{F}_k \right\|_2^2}. \tag{13}
$$

2. Linearity losses: In the transformed coordinate system, we wish for the BVP to be linear so that the operator can be represented by a matrix $\mathbf{L}$. The matrix $\mathbf{L}$ and the encoded vectors $\mathbf{v}$ and $\mathbf{f}$ should satisfy equation (11). This is enforced with the linear operator loss

$$
\mathcal{L}_3 = \frac{1}{N} \sum_{k=1}^{N} \frac{\left\| \mathbf{f}_k - \mathbf{L}\mathbf{v}_k \right\|_2^2}{\left\| \mathbf{f}_k \right\|_2^2}. \tag{14}
$$

The major advantage of working with a linear operator is that linear superposition holds. We use a linear superposition loss in order to further enforce the linearity of the operator in the latent space,

$$
\mathcal{L}_4 = \frac{1}{N^2} \sum_{j=1}^{N} \sum_{i=1}^{N} \frac{\left\| \left(\mathbf{f}_i + \mathbf{f}_j\right) - \mathbf{L}\left(\mathbf{v}_i + \mathbf{v}_j\right) \right\|_2^2}{\left\| \mathbf{f}_i + \mathbf{f}_j \right\|_2^2}. \tag{15}
$$
3. Cross-mapping losses: The losses described above are theoretically sufficient to find coordinate transformations for $\mathbf{u}$ and $\mathbf{F}$ as well as a linear operator $\mathbf{L}$. However, in practice the two autoencoders were not capable of generating the Green's function solution. To rectify this, we add two "cross-mapping" loss functions that incorporate parts of both autoencoders. The first cross-mapping loss enforces the following mapping from $\mathbf{u}$ to $\mathbf{F}$. First, one of the solutions from the dataset, $\mathbf{u}_k$, is encoded with $\psi_u$. This is an approximation for $\mathbf{v}_k$. This is then multiplied by the matrix $\mathbf{L}$, giving an approximation of $\mathbf{f}_k$. Then the result is decoded with $\phi_F^{-1}$. This gives an approximation of $\mathbf{F}_k$. The $\mathbf{u}$ to $\mathbf{F}$ cross-mapping loss is given by the formula

$$
\mathcal{L}_5 = \frac{1}{N} \sum_{k=1}^{N} \frac{\left\| \mathbf{F}_k - \phi_F^{-1} \circ \mathbf{L} \circ \psi_u(\mathbf{u}_k) \right\|_2^2}{\left\| \mathbf{F}_k \right\|_2^2}. \tag{16}
$$

We can similarly define a cross-mapping from $\mathbf{F}$ to $\mathbf{u}$. A forcing function $\mathbf{F}_k$ from the dataset is encoded with $\phi_F$, multiplied by the Green's function $(\mathbf{G} = \mathbf{L}^{-1})$, and then decoded with $\psi_u^{-1}$ to give an approximation of $\mathbf{u}_k$. The $\mathbf{F}$ to $\mathbf{u}$ cross-mapping loss is

$$
\mathcal{L}_6 = \frac{1}{N} \sum_{k=1}^{N} \frac{\left\| \mathbf{u}_k - \psi_u^{-1} \circ \mathbf{L}^{-1} \circ \phi_F(\mathbf{F}_k) \right\|_2^2}{\left\| \mathbf{u}_k \right\|_2^2}. \tag{17}
$$

Note that this final loss function gives the best indication of the performance of the network in solving the nonlinear BVP (9) using the Green's function. The strategy for solving (9) for a given discrete forcing function $\mathbf{F}$ is to encode the forcing function to obtain $\mathbf{f} = \phi_F(\mathbf{F})$, apply the Green's function as in equation (8) to obtain $\mathbf{v}$, and then decode this vector to get the solution $\mathbf{u} = \psi_u^{-1}(\mathbf{v})$. The discrete version of the convolution with the Green's function given in equation (8) is multiplication by the matrix $\mathbf{L}^{-1}$.

For the encoders $\psi_u$ and $\phi_F$ and decoders $\psi_u^{-1}$ and $\phi_F^{-1}$, we use a residual neural network (ResNet) architecture (43). The ResNet architecture has been successful in learning coordinate transformations for physical systems (15) and is motivated by near-identity transformations in physics. The linear operator $\mathbf{L}$ is constrained to be a real symmetric matrix and is therefore self-adjoint. Additionally, $\mathbf{L}$ is initialized as the identity matrix. Therefore, $\mathbf{L}$ is strictly diagonally dominant for at least the early parts of training, which guarantees that $\mathbf{L}$ is invertible and well-conditioned. For more information on the network architecture and training procedure, see Appendix B.
# 3 Results

The DeepGreen architecture, which is highlighted in Fig. 2 and whose detailed loss functions are discussed in the previous section, is demonstrated on a number of canonical nonlinear BVPs. The first three BVPs are one-dimensional systems and the final one is a two-dimensional system. The nonlinearities in these problems do not allow for a fundamental solution, so recourse is typically made to numerical computations to achieve a solution. DeepGreen, however, can produce a fundamental solution which can then be used for any new forcing of the BVP.

Figure 3: Learning curve. This is a typical learning curve for the DeepGreen architecture. The vertical dashed line indicates where the training procedure transitions from autoencoders-only (only $\mathcal{L}_1$ and $\mathcal{L}_2$) to a full-network training procedure (all losses).
# 3.1 Cubic Helmholtz

The architecture and methodology are best illustrated using a basic example problem. The example uses a nonhomogeneous second-order nonlinear Sturm-Liouville model with constant coefficients and a cubic nonlinearity, making it a cubic Helmholtz equation. The differential equation is given by

$$
u'' + \alpha u + \varepsilon u^3 = F(x), \tag{18a}
$$

$$
u(0) = u(2\pi) = 0, \tag{18b}
$$

where $u = u(x)$ is the solution when the system is forced with $F(x)$, with $x \in (0, 2\pi)$, $\alpha = -1$, and $\varepsilon = -0.3$. The notation $u''$ denotes $\frac{d^2}{dx^2} u(x)$. The dataset contains discretized solutions and forcings, $\{\mathbf{u}_k, \mathbf{F}_k\}_{k=1}^N$. The forcing functions used for training are cosine and Gaussian functions; details of data generation and the forcing functions are provided in Appendix A. The data is divided into three groups: training, validation, and test. The training and validation sets are used for training the model. The test set is used to evaluate the results. The training set contains $N_{train} = 8906$ vector pairs $\mathbf{u}_k$ and $\mathbf{F}_k$. The validation set contains $N_{validation} = 2227$ pairs and the test set contains $N_{test} = 1238$ pairs.
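Solution/forcing pairs for a problem like (18) can be generated by Newton's method on a finite-difference discretization. A minimal sketch of this kind of data generation (the grid size and the Gaussian forcing parameters are illustrative choices, not the settings of the paper's Appendix A):

```python
import numpy as np

# Solve u'' + alpha*u + eps*u^3 = F(x), u(0) = u(2*pi) = 0, by Newton's method
alpha, eps = -1.0, -0.3
n = 256
x = np.linspace(0, 2 * np.pi, n + 2)[1:-1]    # interior grid points
h = x[1] - x[0]

# Second-derivative matrix with homogeneous Dirichlet boundary conditions
D2 = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
      + np.diag(np.full(n - 1, 1.0), 1)) / h**2

F = np.exp(-((x - np.pi) ** 2) / 0.5)         # a Gaussian forcing (illustrative)

u = np.zeros(n)                               # initial guess
for _ in range(50):
    residual = D2 @ u + alpha * u + eps * u**3 - F
    if np.linalg.norm(residual) < 1e-9:
        break
    # Jacobian of the residual with respect to u
    jacobian = D2 + alpha * np.eye(n) + np.diag(3 * eps * u**2)
    u -= np.linalg.solve(jacobian, residual)

# The pair (u, F) is one training sample; check it satisfies the discrete BVP
assert np.linalg.norm(D2 @ u + alpha * u + eps * u**3 - F) < 1e-6
```

Repeating this over many cosine and Gaussian forcings yields a dataset of $(\mathbf{u}_k, \mathbf{F}_k)$ pairs of the kind described above.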
|
| 216 |
+
# 3.1.1 Training the Model
|
| 217 |
+
|
| 218 |
+
The autoencoders used in this example are constructed with fully connected layers. In both autoencoders, a ResNet-like identity skip connection connects the input layer to the layer before dimension reduction in the encoder, and the first full-dimension layer in the decoder with the final output layer (see Figure 14).
|
| 219 |
+
|
| 220 |
+

|
| 221 |
+
Figure 4: Latent space representations $\mathbf{v}_k$ and $\mathbf{f}_k$ . The autoencoder transformation $\psi_u$ encodes $\mathbf{u}_k$ to the latent space, producing the vector $\mathbf{v}_k$ (orange). The forcing vector $\mathbf{F}_k$ is transformed by $\psi_F$ to the encoded vector $\mathbf{f}_v$ (blue).
|
| 222 |
+
|
| 223 |
+
The model is trained in a two-step procedure. First, the autoencoders are trained, without connection in the latent space, to condition the networks as autoencoders. In this first phase, only the autoencoder loss functions listed in Figure 2 are active ( $\mathcal{L}_1$ and $\mathcal{L}_2$ ). After a set number of epochs, the latent spaces are connected by an invertible matrix operator, $\mathbf{L}$ , and the remaining 4 loss functions in Figure 2 become active ( $\mathcal{L}_3 - \mathcal{L}_6$ ). In the final phase of training, the autoencoder learns to encode a latent space representation of the system where properties associated with linear systems hold true, such as linear superposition.
Figure 3 shows a typical training loss curve. The vertical dashed line indicates the transition between the two training phases. The models in this work are trained for 75 epochs in the first, autoencoder-only phase and 2750 epochs in the final phase. The first-phase epoch count was tuned empirically based on final model performance. The final-phase epoch count was selected for practical reasons: the training curve tended to flatten around 2750 epochs in all of our tested systems. The autoencoder latent spaces are critically important: the latent space is the transformed vector space in which linear properties (e.g., superposition) are enforced, which enables the solution of nonlinear problems. In the one-dimensional problems, the latent space vectors $\mathbf{v}$ and $\mathbf{f}$ are in $\mathbb{R}^{20}$.
The latent spaces did not have any obvious physical interpretation, and qualitatively appeared similar to the representations shown in Figure 4. We trained 100 models to check the consistency of the learned models and latent space representations, but discovered the latent spaces varied considerably (see Appendix C). This implies the existence of infinitely many solutions to the coordinate transform problem, which indicates further constraints could be placed on the model.
Despite lacking obvious physical interpretations, the latent space enables discovery of an invertible operator $\mathbf{L}$ that describes the linear system $\mathbf{L}[\mathbf{v}_k] = \mathbf{f}_k$. The operator matrix $\mathbf{L}$ can be inverted to yield the Green's function matrix $\mathbf{G}$, which allows computation of solutions to the linearized system, $\mathbf{v}_k = \mathbf{G}[\mathbf{f}_k]$. An example of the operator $\mathbf{L}$ and its inverse $\mathbf{G}$ is shown in Figure 5. The operator and Green's function shown in Figure 5 display a prominent feature seen in all of our results: a diagonally-dominant structure. We initialize the operator as an identity matrix, but the initialization had little impact on the diagonally-dominant form of the learned operator and Green's function matrices (see Appendix C). The diagonally-dominant operators indicate that the deep learning network tends to discover a coordinate transform yielding a nearly-orthonormal basis, which mirrors the common approach of diagonalization in spectral theory for Hermitian operators. Furthermore, diagonally-dominant matrices guarantee favorable properties for this application, such as being well-conditioned and nonsingular.
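The invertibility claim can be checked directly: a strictly diagonally dominant matrix is nonsingular by the Levy-Desplanques theorem. A small numpy sketch, where the matrix below is a synthetic stand-in for a learned operator rather than an actual trained result:

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """Check |A[i,i]| > sum_{j != i} |A[i,j]| for every row i."""
    A = np.asarray(A, dtype=float)
    off_diag = np.abs(A).sum(axis=1) - np.abs(np.diag(A))
    return bool(np.all(np.abs(np.diag(A)) > off_diag))

# Synthetic stand-in for a learned operator: identity plus a small perturbation.
rng = np.random.default_rng(1)
L_op = np.eye(20) + 0.01 * rng.standard_normal((20, 20))
# For such a mildly perturbed identity, strict dominance holds and the
# inverse (the Green's function matrix) exists.
G = np.linalg.inv(L_op)
```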
We emphasize that the training parameters and model construction choices used in this work were not extensively optimized. We expect the model performance can be improved in a myriad of ways, including extending training times, optimizing the model architecture, modifying the size of the latent spaces, restricting the form of the operator, and applying additional constraints to the model. However, these topics are beyond the scope of the present work; our focus is to illustrate the use of autoencoders as a coordinate transform for finding solutions to nonlinear BVPs.
# 3.1.2 Evaluating the Model
The goal for this model is to find a Green's function $\mathbf{G}$ for computing solutions $\mathbf{u}_k$ to a nonlinear BVP governed by (6) for a given forcing function $\mathbf{F}_k$ . Similarly, we can estimate the forcing term, $\mathbf{F}_k$ , given the solution $\mathbf{u}_k$ . The model is consequently evaluated by its ability to use the learned Green's function and operator for predicting solutions and forcings, respectively, for new problems from a withheld test data set.
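Both prediction directions can be sketched with linear stand-ins for the trained networks. Here `W_enc`, `W_dec`, and the diagonal `L` are illustrative placeholders, not the paper's trained components; they only serve to show the flow encode, apply $\mathbf{G}$ (or $\mathbf{L}$), decode:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 128, 20                                     # grid size, latent dimension
W_enc = rng.standard_normal((m, n)) / np.sqrt(n)   # stand-in for the encoders
W_dec = np.linalg.pinv(W_enc)                      # stand-in for the decoders
L = np.diag(rng.uniform(1.0, 2.0, m))              # stand-in learned operator
G = np.linalg.inv(L)                               # Green's function matrix

def predict_solution(F_vec):
    """u from F: encode the forcing, apply G in the latent space, decode."""
    return W_dec @ (G @ (W_enc @ F_vec))

def predict_forcing(u_vec):
    """F from u: encode the solution, apply L in the latent space, decode."""
    return W_dec @ (L @ (W_enc @ u_vec))
```

With these linear stand-ins, predicting a forcing from a predicted solution returns the original forcing's latent representation, mirroring the consistency the loss functions enforce.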
Recall the original model is trained on data where the forcing function is a cosine or Gaussian function. As shown in Figure 6, the model performs well on withheld test data where the forcing functions are cosine or Gaussian functions, producing a cumulative loss around $10^{-4}$. The solutions $\mathbf{u}_k$ and forcings $\mathbf{F}_k$ are depicted for the best, mean, and worst samples as scored by cumulative loss.
It is important to note that the test data used in Figure 6 is similar to the training and validation data. Because ML models typically perform extremely well on interpolation problems, it is reasonable to expect the model to perform well on this test data set.
As an interesting test to demonstrate the ability of the model to extrapolate, we prepared a separate set of test data

Figure 5: Visualized operator and Green's function. Discovered Green's function $\mathbf{G} = \mathbf{L}^{-1}$ and corresponding linear operator $\mathbf{L}$ .




Figure 6: Model predictions on test data. The top row shows the true solution $\mathbf{u}_k(x)$ and the solution predicted by the network given the forcing $\mathbf{F}_k(x)$ using the Green's function $\mathbf{G}$ . The bottom row shows the true forcing function $\mathbf{F}_k(x)$ compared to the forcing computed by applying the operator $\mathbf{L}$ to the solution $\mathbf{u}_k$ . Three columns show the best, mean, and worst case samples as evaluated by the sum of normalized $\ell_2$ reconstruction errors.
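The per-sample ranking above relies on a normalized $\ell_2$ reconstruction error. A minimal sketch of one plausible form of that metric (the paper's exact normalization is not spelled out here, so this is an assumption):

```python
import numpy as np

# One plausible normalized l2 reconstruction error for ranking samples;
# the paper's exact normalization may differ.
def normalized_l2_error(true, pred):
    true = np.asarray(true, dtype=float)
    pred = np.asarray(pred, dtype=float)
    return float(np.linalg.norm(true - pred) / np.linalg.norm(true))
```

The best/mean/worst columns would then correspond to samples at the extremes and center of this score summed over $\mathbf{u}$ and $\mathbf{F}$.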



$\{\mathbf{u}_k,\mathbf{F}_k\}_{k = 1}^N$ containing solutions where $\mathbf{F}_k$ are cubic polynomial forcing functions. This type of data was not present in training, and it provides some insight into the generality of the learned linear operator and Green's function matrices. Figure 7 shows examples of how the model performs on these cubic polynomial forcing functions. As in Figure 6, the best, mean, and worst samples are shown, as graded by overall loss. Figures 6 and 7 provide qualitative insight into the model's performance on specific instances selected from the pool of evaluated data. A quantitative perspective of the model's performance is presented in Figure 8. This box plot shows statistics (median value, $Q_{1}$, $Q_{3}$, and range) for four of the loss functions evaluated on the similar (cosine and Gaussian) test data. Note the superposition loss function is not scored in this plot because it can only be evaluated within a single batch, and the loss depends on batch size and composition.
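The box-plot statistics referenced above (median, $Q_1$, $Q_3$, and range) can be computed directly from the per-sample losses; a minimal sketch:

```python
import numpy as np

# Box-plot summary statistics (median, Q1, Q3, and range) of per-sample
# loss values, as visualized in Figure 8.
def box_stats(losses):
    q1, median, q3 = np.percentile(losses, [25, 50, 75])
    return {"median": float(median), "Q1": float(q1), "Q3": float(q3),
            "range": (float(np.min(losses)), float(np.max(losses)))}
```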
In conclusion, the DeepGreen architecture enables discovery of invertible, linearizing transformations that facilitate identification of a linear operator and Green's function to solve nonlinear BVPs. The architecture is tested on data both similar and dissimilar to the training data, and evaluated on the loss functions that guide the training procedure. The discovered operator and Green's function take on a surprisingly diagonally-dominant structure, which hints at the model's preference for learning a near-optimal basis. The model also extrapolates beyond the training data, suggesting that the learned operator captures something general about the system.








Figure 7: Model predictions on cubic Helmholtz forced system. The top row shows the true solution $\mathbf{u}_k(x)$ and the solution predicted by the network given the forcing $\mathbf{F}_k(x)$ using the Green's function $\mathbf{G}$ . The bottom row shows the true forcing function $\mathbf{F}_k(x)$ compared to the forcing computed by applying the operator $\mathbf{L}$ to the solution $\mathbf{u}_k$ . Three columns show the best, mean, and worst case samples as evaluated by the sum of normalized $\ell_2$ reconstruction errors.

Figure 8: Model performance summary. Distributions of loss values are shown for every sample in the test data set. Model loss functions are minimized during training, making them a natural metric for summarizing performance.
# 3.2 Nonlinear Sturm-Liouville and Biharmonic Operators
In addition to the example system described above, the approach was applied to two other one-dimensional systems. We used the same training procedure and forcing functions that were described in Section 3.1. The first is a system governed by the nonlinear Sturm-Liouville equation
$$
[-p(x)u']' + q(x)\left(u + \varepsilon u^{3}\right) = F(x),
$$

$$
u(0) = u(2\pi) = 0,
$$
where $\varepsilon = 0.4$ controls the extent of nonlinearity, and $p(x)$ and $q(x)$ are spatially-varying coefficients
$$
p(x) = 0.5\sin(x) - 3,
$$

$$
q(x) = 0.6\sin(x) - 2,
$$
with $x \in [0, 2\pi]$ . The final one-dimensional system is a biharmonic operator with an added cubic nonlinearity
$$
\left[-pu''\right]'' + q\left(u + \varepsilon u^{3}\right) = F(x),
$$

$$
u(0) = u(2\pi) = u'(0) = u'(2\pi) = 0,
$$
where $p = -4$ and $q = 2$ are the coefficients and $\varepsilon = 0.4$ controls the nonlinearity. As in the prior example, the forcing functions in the training data are cosine and Gaussian functions, which are described further in Appendix A.
Results for all the one-dimensional models, including the cubic Helmholtz example from Section 3.1, are presented in Table 1. Model performance is quantitatively summarized by box plots and the Green's function matrix is shown for each model.
Importantly, the learned Green's function matrices consistently exhibit a diagonally-dominant structure.

<table><tr><td>Nonlinear cubic Helmholtz (constant coefficients): $u'' + \alpha u + \varepsilon u^3 = F$, $u(0) = u(L) = 0$</td><td>Nonlinear Sturm-Liouville (spatially-varying $p(x)$, $q(x)$): $[-pu']' + q(u + \varepsilon u^3) = F$, $u(0) = u(L) = 0$</td><td>Nonlinear biharmonic operator (constant coefficients): $[-pu'']'' + q(u + \varepsilon u^3) = F$, $u(0) = u(L) = u'(0) = u'(L) = 0$</td></tr><tr><td>$G(x,\xi)$ heatmap</td><td>$G(x,\xi)$ heatmap</td><td>$G(x,\xi)$ heatmap</td></tr><tr><td colspan="3">Normalized MSE box plots for the losses $\mathcal{L}_1$-$\mathcal{L}_6$</td></tr></table>

Table 1: Summary of results for the three one-dimensional models. Each column shows the Green's function $G(x,\xi)$ learned by DeepGreen for one model. A summary box plot shows the relative losses $\mathcal{L}_1$ , $\mathcal{L}_2$ , $\mathcal{L}_3$ , $\mathcal{L}_5$ , and $\mathcal{L}_6$ for all three model systems.

The losses for the nonlinear cubic Helmholtz equation and the nonlinear Sturm-Liouville equation are similar, which indicates that spatially-varying coefficients do not make the problem significantly more difficult for the DeepGreen architecture. In contrast, the losses for the nonlinear biharmonic equation are about an order of magnitude higher than those of the other two systems, which implies the fourth-order problem is more difficult than the second-order problems. Also of note, the linear operator loss $\mathcal{L}_3$ is consistently the highest loss across all models. It is therefore easier for DeepGreen to find invertible transformations for the solutions and forcing functions than to find a linear operator that connects the two latent spaces.
# 3.3 Nonlinear Poisson Equation
We also tested our method on a two-dimensional system. The two-dimensional model is a nonlinear version of the Poisson equation with Dirichlet boundary conditions
$$
- \nabla \cdot \left[ \left(1 + u^{2}\right) \nabla u \right] = F(\mathbf{x}), \quad \mathbf{x} \in \Omega, \tag{19a}
$$

$$
u = 0, \quad \mathbf{x} \in \partial\Omega, \tag{19b}
$$
where $\Omega := (0,2\pi) \times (0,2\pi)$ . Similar to the one-dimensional models, the forcing functions used to train the model are cosine and Gaussian functions, the details of which are provided in Appendix A. The sizes of the data sets are also similar to the one-dimensional data sets. The training data contains $N_{train} = 9806$ vector pairs $\mathbf{u}_k$ and $\mathbf{F}_k$ , the validation data contains $N_{validation} = 2452$ , and the test data contains $N_{test} = 1363$ .
The network architecture of the encoders and decoders for the two-dimensional example differs from the one-dimensional examples. Instead of fully connected layers, convolutional layers are used in the encoders and decoders, again with a ResNet-like skip connection. Additionally, the latent space vectors are in $\mathbb{R}^{200}$. Full details on the network architecture can be found in Appendix B. Note that the proposed method for discovering Green's functions allows any network architecture to be used for the encoders and decoders. For the one-dimensional examples, similar results were obtained using fully connected and convolutional layers. However, the convolutional architecture performed better in the two-dimensional case and kept the number of parameters manageable for the wider network that resulted from discretizing the two-dimensional space.



(a)







(b)















Figure 9: Model predictions for the (a) best and (b) worst examples from test data with Gaussian and cosine forcings. In both (a) and (b), the top row shows the true solution $\mathbf{u}(\mathbf{x})$ , the predicted solution using the Green's function, and the difference between the true and predicted solution. The bottom row shows the true forcing function $\mathbf{F}(\mathbf{x})$ , the predicted forcing function, and the difference between the true and predicted forcings. To account for the difference in scale between $\mathbf{u}(\mathbf{x})$ and $\mathbf{F}(\mathbf{x})$ , the differences are scaled by the infinity norm of the true solution or forcing function $(\text{Difference} = (\text{True} - \text{Predicted}) / ||\text{True}||_{\infty})$.




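The parameter-count pressure from a two-dimensional discretization can be made concrete. The grid sizes below are illustrative assumptions for this sketch, not the paper's exact discretization (which is given in Appendix B):

```python
# A single fully connected layer on a flattened 2D grid scales with the
# square of the grid size, while a conv layer's weight count does not
# depend on the grid at all. Grid sizes are illustrative assumptions.
n_1d = 128                      # 1D grid points
n_2d = 64 * 64                  # a hypothetical 2D grid, flattened to 4096
dense_1d = n_1d * n_1d          # weights in one n_1d -> n_1d dense layer
dense_2d = n_2d * n_2d          # weights in one n_2d -> n_2d dense layer
conv_2d = 3 * 3 * 1 * 16        # one 3x3 conv layer, 1 input channel, 16 filters
```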
The operator and Green's function for the two-dimensional model are similar to those shown in Figure 5, and the diagonal dominance is even more prevalent in this case than in the one-dimensional examples. The model was evaluated on test data containing cosine and Gaussian forcing functions. Figure 9a shows the true solution $\mathbf{u}(\mathbf{x})$ and forcing function $\mathbf{F}(\mathbf{x})$ as well as the network predictions for the example from the test data on which the model performed best (i.e., the smallest value of the loss). The difference between the true and predicted functions is shown in the right column of Figure 9a and is scaled by the infinity norm of the true solution or forcing function. Figure 9b shows similar results but for the worst example from the test data. In both cases, the model gives a qualitatively correct prediction of both $\mathbf{u}(\mathbf{x})$ and $\mathbf{F}(\mathbf{x})$. Unsurprisingly, the network struggles most on highly localized forcing functions and has the highest error in the region where the forcing occurs.
The model was also evaluated on test data with cubic polynomial forcing functions, a type of forcing function not found in the training data. The best and worst examples are shown in Figure 10. Although the model does not perform as well on test data which is dissimilar to the training data, the qualitative features of the predicted solutions are still consistent with the true solutions. Figure 11 shows a box plot of the model's performance on the similar (cosine and Gaussian) forcing test data. The results are comparable to the one-dimensional results and, in fact, better than those of the biharmonic operator model.



(a)

Figure 10: Model predictions for the (a) best and (b) worst examples from test data with cubic polynomial forcings. In both (a) and (b), the top row shows the true solution $\mathbf{u}(\mathbf{x})$ , the predicted solution using the Green's function, and the difference between the true and predicted solution. The bottom row shows the true forcing function $\mathbf{F}(\mathbf{x})$ , the predicted forcing function, and the difference between the true and predicted forcings. To account for the difference in scale between $\mathbf{u}(\mathbf{x})$ and $\mathbf{F}(\mathbf{x})$ , the differences are scaled by the infinity norm of the true solution or forcing function $(\text{Difference} = (\text{True} - \text{Predicted}) / ||\text{True}||_{\infty})$.
# 4 Conclusion
We have leveraged the expressive capabilities of deep learning to discover linearizing coordinates for nonlinear BVPs, thus allowing for the construction of the fundamental solution, or nonlinear Green's function. Much like the Koopman operator for time-dependent problems, the linearizing transformation provides a framework whereby the fundamental solution of the linear operator can be constructed and used for any arbitrary forcing. This provides a broadly applicable mathematical architecture for constructing solutions to nonlinear BVPs, which typically rely on numerical methods. Our DeepGreen architecture can produce solutions for arbitrary forcings by simply computing the convolution of the forcing with the Green's function in the linearized coordinates.
Given the critical role that BVPs play in the mathematical analysis of constrained physical systems subjected to external forces, the DeepGreen architecture can be broadly applied in nearly every engineering discipline, since BVPs are prevalent in diverse problem domains including fluid mechanics, electromagnetics, quantum mechanics, and elasticity. Importantly, DeepGreen provides a bridge to a classic and widely used solution technique for nonlinear BVPs, which generically lack principled solution techniques aside from brute-force computation. DeepGreen establishes this bridge by providing a transformation in which linear superposition holds. DeepGreen is a flexible, data-driven, deep learning approach to solving nonlinear boundary value problems using a dual-autoencoder architecture. The autoencoders discover an invertible coordinate transform that linearizes the nonlinear BVP and identify both a linear operator $\mathbf{L}$ and a Green's function $\mathbf{G}$ which can be used to solve new nonlinear BVPs. We demonstrated that the method succeeds on a variety of nonlinear systems, including nonlinear Helmholtz and Sturm-Liouville problems, nonlinear elasticity, and a two-dimensional nonlinear Poisson equation. The method merges the universal approximation capabilities of deep learning with the physics knowledge embodied in Green's functions to yield a flexible tool for identifying fundamental solutions to a variety of nonlinear systems.



Figure 11: Two-dimensional Poisson model performance summary. Distributions of loss values are shown for every sample in the test data set. Model loss functions are minimized during training, making them a natural metric for summarizing performance.
# Acknowledgments
SLB is grateful for funding support from the Army Research Office (ARO W911NF-17-1-0306). JNK acknowledges support from the Air Force Office of Scientific Research (FA9550-19-1-0011).
# References
1. I. Stakgold, Boundary Value Problems of Mathematical Physics: 2-Volume Set, vol. 29 (SIAM, 2000).
2. J.-B. J. Fourier, Théorie analytique de la chaleur (Chez Firmin Didot, 1822).
3. J. D. Jackson, Classical Electrodynamics (John Wiley & Sons, 2007).
4. A. Yariv, Quantum Electronics (John Wiley & Sons, 1989).
5. I. Stakgold, M. J. Holst, Green's Functions and Boundary Value Problems, vol. 99 (John Wiley & Sons, 2011).
6. B. Lusch, J. N. Kutz, S. L. Brunton, Deep learning for universal linear embeddings of nonlinear dynamics. Nature Communications 9, 4950 (2018).
7. K. Champion, B. Lusch, J. N. Kutz, S. L. Brunton, Data-driven discovery of coordinates and governing equations. Proceedings of the National Academy of Sciences 116, 22445-22451 (2019).
8. C. Wehmeyer, F. Noé, Time-lagged autoencoders: Deep learning of slow collective variables for molecular kinetics. The Journal of Chemical Physics 148, 241703 (2017).
9. A. Mardt, L. Pasquali, H. Wu, F. Noé, VAMPnets: Deep learning of molecular kinetics. Nature Communications 9 (2018).
10. N. Takeishi, Y. Kawahara, T. Yairi, Advances in Neural Information Processing Systems (2017), pp. 1130-1140.
11. E. Yeung, S. Kundu, N. Hodas, 2019 American Control Conference (ACC) (2019), pp. 4832-4839.
12. S. E. Otto, C. W. Rowley, Linearly-recurrent autoencoder networks for learning dynamics. SIAM Journal on Applied Dynamical Systems 18, 558-593 (2019).
13. Q. Li, F. Dietrich, E. M. Bollt, I. G. Kevrekidis, Extended dynamic mode decomposition with dictionary learning: A data-driven adaptive spectral decomposition of the Koopman operator. Chaos: An Interdisciplinary Journal of Nonlinear Science 27, 103111 (2017).
14. C. J. Dsilva, R. Talmon, R. R. Coifman, I. G. Kevrekidis, Parsimonious representation of nonlinear dynamical systems through manifold learning: A chemotaxis case study. Applied and Computational Harmonic Analysis 44, 759-773 (2018).
15. C. Gin, B. Lusch, J. N. Kutz, S. L. Brunton, Deep learning models for global coordinate transformations that linearize PDEs. To appear in the European Journal of Applied Mathematics (2020). Preprint available: arXiv:1911.02710.
16. G. Cybenko, Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems (MCSS) 2, 303-314 (1989).
17. K. Hornik, M. Stinchcombe, H. White, Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks. Neural Networks 3, 551-560 (1990).
18. B. O. Koopman, Hamiltonian systems and transformation in Hilbert space. Proceedings of the National Academy of Sciences 17, 315-318 (1931).
19. I. Mezić, A. Banaszuk, Comparison of systems with complex behavior. Physica D: Nonlinear Phenomena 197, 101-133 (2004).
20. I. Mezić, Spectral properties of dynamical systems, model reduction and decompositions. Nonlinear Dynamics 41, 309-325 (2005).
21. M. Budišić, I. Mezić, Geometry of the ergodic quotient reveals coherent structures in flows. Physica D: Nonlinear Phenomena 241, 1255-1269 (2012).
22. I. Mezić, Analysis of fluid flows via spectral properties of the Koopman operator. Annual Review of Fluid Mechanics 45, 357-378 (2013).
23. S. L. Brunton, B. W. Brunton, J. L. Proctor, J. N. Kutz, Koopman invariant subspaces and finite linear representations of nonlinear dynamical systems for control. PLOS ONE 11, 1-19 (2016).
24. P. J. Schmid, Dynamic mode decomposition of numerical and experimental data. Journal of Fluid Mechanics 656, 5-28 (2010).
25. C. W. Rowley, I. Mezić, S. Bagheri, P. Schlatter, D. Henningson, Spectral analysis of nonlinear flows. Journal of Fluid Mechanics 645, 115-127 (2009).
26. J. N. Kutz, S. L. Brunton, B. W. Brunton, J. L. Proctor, Dynamic Mode Decomposition: Data-Driven Modeling of Complex Systems (SIAM, 2016).
27. F. Noé, F. Nüske, A variational approach to modeling slow processes in stochastic dynamical systems. Multiscale Modeling & Simulation 11, 635-655 (2013).
28. F. Nüske, B. G. Keller, G. Pérez-Hernández, A. S. Mey, F. Noé, Variational approach to molecular kinetics. Journal of Chemical Theory and Computation 10, 1739-1752 (2014).
29. M. O. Williams, I. G. Kevrekidis, C. W. Rowley, A data-driven approximation of the Koopman operator: Extending dynamic mode decomposition. Journal of Nonlinear Science 25, 1307-1346 (2015).
30. M. O. Williams, C. W. Rowley, I. G. Kevrekidis, A kernel-based method for data-driven Koopman spectral analysis. Journal of Computational Dynamics 2, 247-265 (2015).
31. S. Klus, F. Nüske, P. Koltai, H. Wu, I. Kevrekidis, C. Schütte, F. Noé, Data-driven model reduction and transfer operator approximation. Journal of Nonlinear Science 28, 985-1010 (2018).
32. J. N. Kutz, J. L. Proctor, S. L. Brunton, Applied Koopman theory for partial differential equations and data-driven modeling of spatio-temporal systems. Complexity 2018, 1-16 (2018).
33. J. Page, R. R. Kerswell, Koopman analysis of Burgers equation. Physical Review Fluids 3, 071901 (2018).
34. C. Bishop, Pattern Recognition and Machine Learning (Springer, 2006).
35. I. Goodfellow, Y. Bengio, A. Courville, Deep Learning (MIT Press, 2016).
36. R. Rico-Martinez, I. Kevrekidis, K. Krischer, Nonlinear system identification using neural networks: dynamics and instabilities. Neural Networks for Chemical Engineers pp. 409-442 (1995).
37. R. Gonzalez-Garcia, R. Rico-Martinez, I. Kevrekidis, Identification of distributed parameter systems: A neural net based approach. Computers & Chemical Engineering 22, S965-S968 (1998).
38. S. H. Rudy, J. N. Kutz, S. L. Brunton, Deep learning of dynamics and signal-noise decomposition with time-stepping constraints. Journal of Computational Physics 396, 483-506 (2019).
39. H. Lange, S. L. Brunton, J. N. Kutz, From Fourier to Koopman: Spectral methods for long-term time series prediction. arXiv preprint arXiv:2004.00574 (2020).
40. Y. Liu, J. N. Kutz, S. L. Brunton, Hierarchical deep learning of multiscale differential equation time-steppers. arXiv preprint arXiv:2008.09768 (2020).
41. D. Z. Huang, K. Xu, C. Farhat, E. Darve, Learning constitutive relations from indirect observations using deep neural networks. Journal of Computational Physics 416, 109491 (2020).
42. S. Pan, K. Duraisamy, Physics-informed probabilistic learning of linear embeddings of nonlinear dynamics with guaranteed stability. SIAM Journal on Applied Dynamical Systems 19, 480-509 (2020).
43. K. He, X. Zhang, S. Ren, J. Sun, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 770-778.
44. S. Mallat, Understanding deep convolutional networks. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 374, 20150203 (2016).
45. A. Logg, G. N. Wells, DOLFIN: Automated finite element computing. ACM Transactions on Mathematical Software 37 (2010).
46. M. S. Alnæs, J. Blechta, J. Hake, A. Johansson, B. Kehlet, A. Logg, C. Richardson, J. Ring, M. E. Rognes, G. N. Wells, The FEniCS Project version 1.5. Archive of Numerical Software 3 (2015).
47. A. Logg, K.-A. Mardal, G. N. Wells, et al., Automated Solution of Differential Equations by the Finite Element Method (Springer, 2012).
# Appendix A Data Generation
# A.1 1D Problems
The data for all of the one-dimensional systems are created using the same method and forcing functions. Each solution is computed on an evenly-spaced 128-point grid using MATLAB's bvp5c solver with a relative error tolerance of $10^{-8}$ and an absolute error tolerance of $10^{-10}$ . The forcing functions $\mathbf{F}_k(x)$ are designed to yield a variety of solutions $\mathbf{u}_k$ such that $\| \mathbf{u}_k\|_2\simeq 1$ .
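The paper generates data with MATLAB's `bvp5c`; as an open-source stand-in (a sketch, not the authors' code), the cubic Helmholtz BVP can be solved by Newton iteration on a second-order finite-difference discretization:

```python
import numpy as np

def solve_cubic_helmholtz(F, n=128, alpha=-1.0, eps=-0.3, tol=1e-10, max_iter=50):
    """Newton iteration for u'' + alpha*u + eps*u^3 = F(x), u(0) = u(2*pi) = 0."""
    x = np.linspace(0.0, 2.0 * np.pi, n)
    h = x[1] - x[0]
    f = F(x[1:-1])                               # forcing at interior nodes
    # Second-derivative matrix with homogeneous Dirichlet boundary conditions.
    D2 = (np.diag(np.ones(n - 3), -1) - 2.0 * np.eye(n - 2)
          + np.diag(np.ones(n - 3), 1)) / h**2
    u = np.zeros(n - 2)                          # interior unknowns
    for _ in range(max_iter):
        r = D2 @ u + alpha * u + eps * u**3 - f          # nonlinear residual
        J = D2 + np.diag(alpha + 3.0 * eps * u**2)       # Jacobian
        du = np.linalg.solve(J, -r)
        u += du
        if np.linalg.norm(du) < tol:
            break
    return x, np.concatenate(([0.0], u, [0.0]))
```

Convergence from the zero initial guess is typically fast for this mildly nonlinear problem.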
The training data consists of two types of systems: Gaussian-forced and cosine-forced systems. The Gaussian-forced systems have forcing functions of the form
$$
F_{k}(x) = a \exp\left(\frac{-(x - b)^{2}}{2 c^{2}}\right),
$$
where $a \in \{-25, -20, -15, -10, -5, 5, 10, 15, 20, 25\}$ , $b \in \{0, 2\pi/19, 4\pi/19, \ldots, 2\pi\}$ , and $c \in \{0.1, 0.3, 0.5, \ldots, 4.9\}$ . The cosine forcing functions are of the form
$$
F_{k}(x) = \alpha \cos(\beta x),
$$
where $\alpha \in \{1, 1.1, 1.2, \ldots, 10\}$ and $\beta \in \{1, 1.05, 1.10, \ldots, 5\}$ . This gives a total of 5000 Gaussian-forced solutions and 7371 cosine-forced solutions. For the cubic Helmholtz equation and the nonlinear Sturm-Liouville equation with spatially-varying coefficients, all of the 12371 solutions were within the error tolerance. However, there were 97 solutions of the nonlinear biharmonic equation that did not meet the error tolerance and were therefore discarded. Of the remaining data, $10\%$ are randomly chosen and withheld as test data, $80\%$ are used as training data, and $20\%$ are used as validation data.
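The stated totals can be reproduced by counting the parameter grids listed above (a sketch; the exact floating-point step construction is an assumption, but the grid sizes match):

```python
import numpy as np

# Reconstruct the forcing-parameter grids and check the stated totals.
a = np.array([-25, -20, -15, -10, -5, 5, 10, 15, 20, 25])   # 10 amplitudes
b = np.linspace(0.0, 2.0 * np.pi, 20)                       # 20 centers, step 2*pi/19
c = np.arange(0.1, 5.0, 0.2)                                # 25 widths: 0.1, 0.3, ..., 4.9
n_gaussian = a.size * b.size * c.size                       # Gaussian-forced solutions

alpha = np.arange(1.0, 10.05, 0.1)                          # 91 amplitudes
beta = np.arange(1.0, 5.025, 0.05)                          # 81 frequencies
n_cosine = alpha.size * beta.size                           # cosine-forced solutions
```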
In order to test the ability of the network to generalize, we also have another test data set that consists of solutions with cubic forcing functions of the form
$$
F_{i}(x) = \gamma (x - \pi)^{3},
$$
where $\gamma \in \{0.01, 0.03, 0.05, \ldots, 0.29\}$ , and cubic forcing functions of the form
$$
F_{i}(x) = \gamma (x - \pi)^{3} + \zeta (x - \pi)^{2} + \psi,
$$
where $\gamma \in \{0.01, 0.03, 0.05, \ldots, 0.29\}, \zeta \in \{0.01, 0.03, 0.05, \ldots, 0.49\}$ , and $\psi \in \{-5, -4, -3, \ldots, 5\}$ . There are a total of 4140 solutions with cubic forcing functions.
# A.2 2D Problem
The two-dimensional data satisfies the nonlinear Poisson equation (19). The solutions are computed with a finite element method using the DOLFIN library (45) of the FEniCS Project (46, 47). The forcing functions are similar to the one-dimensional data in that there are Gaussian and cosine forcing functions, along with a separate data set of cubic polynomial forcing functions used to test the ability of the network to generalize. The Gaussian forcing functions are of the form
$$
F_{k}(x, y) = a \exp\left(\frac{-(x - b_{x})^{2} - (y - b_{y})^{2}}{2 c^{2}}\right),
$$
where $a \in \{-25, -20, -15, -10, -5, 5, 10, 15, 20, 25\}$ , $b_x, b_y \in \{\pi/3, 2\pi/3, \pi, 4\pi/3, 5\pi/3\}$ , and $c \in \{0.1, 0.3, 0.5, \ldots, 4.9\}$ . The cosine forcing functions are of the form
|
| 506 |
+
|
| 507 |
+
$$
|
| 508 |
+
F _ {k} (x, y) = \alpha \cos (\beta_ {x} x) \cos (\beta_ {y} y),
|
| 509 |
+
$$
|
| 510 |
+
|
| 511 |
+
where $\alpha \in \{1,1.1,1.2,\dots ,10\}$ and $\beta_{x},\beta_{y}\in$ $\{1,1.5,2,\ldots ,5\}$ . The cubic forcing functions are of the form
|
| 512 |
+
|
| 513 |
+
$$
|
| 514 |
+
F _ {i} (x, y) = \gamma_ {x} (x - \pi) ^ {3} + \gamma_ {y} (y - \pi) ^ {3},
|
| 515 |
+
$$
|
| 516 |
+
|
| 517 |
+
where $\gamma_x, \gamma_y \in \{0.01 + 0.28k/3 | k = 0, 1, 2, 3\}$ , and cubic forcing functions of the form
|
| 518 |
+
|
| 519 |
+
$$
|
| 520 |
+
\begin{array}{l} F _ {i} (x, y) = \gamma_ {x} (x - \pi) ^ {3} + \gamma_ {y} (y - \pi) ^ {3} \\ + \zeta_ {x} (x - \pi) ^ {2} + \zeta_ {y} (y - \pi) ^ {2} + \psi , \\ \end{array}
|
| 521 |
+
$$
|
| 522 |
+
|
| 523 |
+
where $\gamma_x, \gamma_y \in \{0.01 + 0.28k/3 | k = 0,1,2,3\}$ , $\zeta_x, \zeta_y \in \{0.01,0.07,0.13,0.19,0.25\}$ , and $\psi \in \{-5,-4,-3,\ldots,5\}$ . There are 6250 solutions with Gaussian forcing functions, 7371 solutions with cosine forcing functions, and 4416 solutions with cubic forcing functions.
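The three 2D totals follow from the grid sizes ($c$ runs from 0.1 to 4.9 in steps of 0.2, giving 25 widths; $\beta_x$ and $\beta_y$ each take 9 values):

```python
# Gaussian: 10 amplitudes a, 5 centres for each of b_x and b_y, 25 widths c
n_gaussian = 10 * 5 * 5 * 25            # 6250

# Cosine: 91 amplitudes alpha, 9 frequencies for each of beta_x and beta_y
n_cosine = 91 * 9 * 9                   # 7371

# Cubic: 4 values each for gamma_x, gamma_y; 5 each for zeta_x, zeta_y; 11 for psi
n_cubic = 4 * 4 + 4 * 4 * 5 * 5 * 11    # 16 + 4400 = 4416
```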
# Appendix B Neural Network Implementation Details

The model training procedure is kept constant for all of the examples in this work. The networks are optimized with an Adam optimizer $(\beta_{1} = 0.9, \beta_{2} = 0.999)$. Every numerical experiment starts by training a set of 20 models for a 'small' number of epochs. Each of the 20 models has a randomly selected learning rate for the Adam optimizer, uniformly selected between $10^{-5}$ and $10^{-2}$. The initial training period consists of two phases: autoencoder-only (75 epochs) and full model (250 epochs). The autoencoder-only phase only enforces the autoencoder losses $\mathcal{L}_1$ and $\mathcal{L}_2$ during backpropagation. A checkpoint algorithm keeps track of the model with the lowest overall loss throughout the training procedure. At the end of the initial period, the best model is selected and the others are discarded. The best model is then trained for an additional 2500 epochs.
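The initial training period can be sketched as a small search loop. Here `run_phase` is a hypothetical stand-in for the real training routine, and the learning rate is drawn uniformly in log space, which is one plausible reading of "uniformly selected between $10^{-5}$ and $10^{-2}$":

```python
import math
import random

def sample_learning_rate(rng):
    # Log-uniform draw between 1e-5 and 1e-2 (an assumption; the paper does
    # not specify whether the draw is uniform in linear or log space).
    return 10.0 ** rng.uniform(-5.0, -2.0)

def initial_training_period(run_phase, n_models=20, ae_epochs=75, full_epochs=250):
    """Sketch of the initial training period described above.

    run_phase(lr, epochs, autoencoder_only) is a hypothetical helper that
    trains for the given number of epochs and returns the lowest overall
    loss seen; the checkpointing below keeps the best of the 20 models.
    """
    rng = random.Random(0)
    best_lr, best_loss = None, math.inf
    for _ in range(n_models):
        lr = sample_learning_rate(rng)
        run_phase(lr, ae_epochs, autoencoder_only=True)    # L1 and L2 only
        loss = run_phase(lr, full_epochs, autoencoder_only=False)
        if loss < best_loss:                               # checkpoint the best
            best_lr, best_loss = lr, loss
    return best_lr, best_loss
    # The selected model is then trained for an additional 2500 epochs.
```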
Figure 12: Encoder network architecture for the two-dimensional data. All convolutional layers use $4 \times 4$ kernels with stride size 1, zero-padding, and ReLU activation functions. All pooling layers are average pooling layers with pool size 2 and stride size 2.

Figure 13: Decoder network architecture for the two-dimensional data. All transposed convolutional layers use $4 \times 4$ kernels with stride size 2, zero-padding, and ReLU activation functions, except for the last layer, which has stride size 1.

Figure 14: Layer-by-layer autoencoder architecture for 1D problems.

There are two network architectures in this work. The architectures depicted in Figures 12 and 13 are applied to the two-dimensional nonlinear Poisson BVP. The architecture depicted in Figure 14 is applied to one-dimensional problems.

The two architectures have a few training variables in common. Both models use variance scaling initialization, $\ell_2$ regularization $(\lambda = 10^{-6})$, and ReLU activation functions for fully connected (1D architecture) and convolutional (2D architecture) layers. Notably, the two layers immediately before and after the latent space do not have activation functions. A normalized mean squared error loss function is used for all of the loss functions, as described in Section 2. The models are trained in batches of 64 samples.
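A minimal sketch of a normalized mean squared error loss. The exact normalization used in the paper is an assumption here; this version scales the MSE by the mean squared magnitude of the target:

```python
import numpy as np

def normalized_mse(pred, target, eps=1e-12):
    """Normalized MSE: mean squared error divided by the mean squared
    magnitude of the target (one common definition; an assumption)."""
    return float(np.mean((pred - target) ** 2) / (np.mean(target ** 2) + eps))
```

With this definition, a prediction equal to the target gives zero loss, and the loss is invariant to the overall scale of the target signal.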
The 2D architecture utilizes convolutional layers and pooling layers, as shown in Figures 12 and 13. All convolutional layers use a kernel size of $4 \times 4$. The convolutional layers differ between the encoder and the decoder: the encoder convolutional layers use a stride size of $1 \times 1$ and an increasing number of filters (8, 16, 32, 64), while the deconvolutional layers use a stride size of $2 \times 2$ with a decreasing number of filters (32, 16, 8). Pooling layers are similar for both the encoder and decoder, with a stride size of $2 \times 2$ and a pool size of $2 \times 2$.
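The stated kernel, stride, and pool sizes determine how the spatial resolution shrinks through the encoder; a pure shape calculation makes this concrete (the $128 \times 128$ input size is an assumption for illustration):

```python
def conv_same(n, stride=1):
    # A zero-padded ("same") convolution with stride 1 preserves the size.
    return -(-n // stride)

def avg_pool(n, pool=2, stride=2):
    # Average pooling with pool size 2 and stride 2 halves the resolution.
    return (n - pool) // stride + 1

size = 128   # assumed input resolution, for illustration only
for n_filters in (8, 16, 32, 64):   # the encoder's four conv + pool blocks
    size = avg_pool(conv_same(size))
# after four blocks: 128 -> 64 -> 32 -> 16 -> 8
```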
# Appendix C Additional Results

The repeatability of the results and models learned by the DeepGreen architecture is interesting to study from the perspective of operator convergence and latent space representations. In both cases, we aim to investigate the convergence of the model parameters to determine if the learned latent spaces and operators are unique or non-unique.

Figure 15: Initial vs. learned operators for an operator matrix $L$ for different initial conditions. The top row shows identity matrix initialization, the middle row shows random initialization (He normal), and the bottom row shows a Toeplitz gradient initialization.

# C.1 Operator Initialization

We repeat the training procedure for DeepGreen with three different initialization approaches for the operator $L$. Again, we train with data from the example nonlinear cubic Helmholtz model. This experiment compares the initial values of the operator $L$ with its final values at the end of training to determine if the DeepGreen approach tends to converge to a specific operator construction. The results in Figure 15 show the initial and final operator for identity-initialized, randomly initialized, and Toeplitz-initialized operator matrices. Impressively, the network tends to learn operators with diagonal dominance for all of the tested initialization strategies. This behavior, which DeepGreen appears to prefer, draws strong parallels to the coordinate diagonalization approach commonly used in physics.
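The three initialization schemes can be sketched as follows. The "Toeplitz gradient" is read here as a centered first-difference stencil, which is one plausible interpretation, and the He-normal standard deviation assumes fan-in scaling; both are assumptions, not the paper's stated construction:

```python
import numpy as np

def init_operator(n, scheme, seed=0):
    """Initial n x n latent operator L under the three schemes compared above."""
    rng = np.random.default_rng(seed)
    if scheme == "identity":
        return np.eye(n)
    if scheme == "he_normal":
        # He normal: zero-mean Gaussian with variance 2 / fan_in (assumption).
        return rng.normal(0.0, np.sqrt(2.0 / n), size=(n, n))
    if scheme == "toeplitz":
        # Centered first-difference (gradient) stencil on the off-diagonals.
        L = np.zeros((n, n))
        idx = np.arange(n - 1)
        L[idx, idx + 1] = 0.5
        L[idx + 1, idx] = -0.5
        return L
    raise ValueError(scheme)
```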
# C.2 Latent Space Analysis

We repeat the training procedure for the example system, the nonlinear cubic Helmholtz model, a total of one hundred times. A single sample is selected from the training data, and the latent space representations, $\mathbf{v}_i$ and $\mathbf{f}_i$, of the input vectors $\mathbf{u}_i$ and $\mathbf{F}_i$ are computed. Statistics for the latent space representations are presented in Figure 16. It is evident that the latent space vectors are not identical between runs, and that the values in the vector do not follow any particular statistical distribution. This implies that the learned weights in the model, and the learned latent space representations, vary for each training instance and do not appear to converge to a single representation.

# C.3 Residual network architecture

All of the autoencoders used in this work use a residual network (ResNet) architecture. In order to demonstrate the advantage of the ResNet architecture, we trained six models using the DeepGreen architecture for each of the four systems: three with the ResNet skip connections and three without.
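The skip connection at the heart of a ResNet block can be sketched in a few lines. This is a minimal fully connected version for illustration; the weight shapes and two-layer layout are illustrative, not the paper's exact architecture:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """Minimal fully connected residual block: the input is added back to
    the transformed signal, so the layers only need to learn a correction
    to the identity map (illustrative sketch)."""
    h = relu(x @ W1)          # inner transformation
    return relu(x + h @ W2)   # skip connection: x is added back
```

Because the block reduces to the identity when its weights are zero, gradients can flow through deep stacks of such blocks, which is the usual motivation for the skip connections.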
For the two simplest systems, the nonlinear cubic Helmholtz equation and the nonlinear Sturm-Liouville equation, the difference between the models with and without the ResNet skip connections was negligible. For the nonlinear cubic Helmholtz equation, the mean validation loss for the non-ResNet models was $2.7 \times 10^{-3}$ and the median validation loss was $2.4 \times 10^{-3}$. Using the ResNet architecture resulted in a mean validation loss of $3.5 \times 10^{-3}$ and a median validation loss of $8.8 \times 10^{-4}$. The ResNet architecture resulted in a lower median validation loss but a higher mean because one of the three models performed much more poorly than the other two. The results for the nonlinear Sturm-Liouville system are analogous. With a non-ResNet architecture, the mean validation loss was $4.5 \times 10^{-3}$ and the median validation loss was $4.0 \times 10^{-3}$. With a ResNet architecture, the mean validation loss was $5.7 \times 10^{-3}$ and the median validation loss was $3.1 \times 10^{-3}$. Therefore, the ResNet architecture produced results similar to a non-ResNet architecture for these two simple systems.

For the two systems that had larger losses, the nonlinear biharmonic equation in 1D and the 2D nonlinear Poisson equation, the ResNet architecture was clearly superior to a non-ResNet architecture. For the nonlinear biharmonic equation, the ResNet architecture yields a mean validation loss of $2.5 \times 10^{-2}$ and a median validation loss of $2.8 \times 10^{-2}$ for the three models, compared with $3.8 \times 10^{-2}$ and $4.0 \times 10^{-2}$, respectively, for the non-ResNet architecture. The ResNet architecture therefore performed better in terms of both the mean and the median loss. The ResNet architecture is absolutely vital for the nonlinear Poisson system: without it, the model essentially did not converge, with both the mean and median validation losses at $1.9 \times 10^{0}$. In contrast, the ResNet architecture had a mean validation loss of $1.8 \times 10^{-2}$ and a median of $1.9 \times 10^{-3}$.

Figure 16: Statistics of latent space values of a single sample over 100 experimental runs.
data/2020/2101_07xxx/2101.07206/layout.json
CHANGED
The diff for this file is too large to render.
See raw diff
data/2021/2101_00xxx/2101.00117/b1510fde-b32e-4443-bd7a-7c6aed392142_content_list.json
CHANGED
@@ -1,3 +1,1495 @@
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "Multi-task Retrieval for Knowledge-Intensive Tasks",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
231,
|
| 8 |
+
80,
|
| 9 |
+
771,
|
| 10 |
+
101
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Jean Maillard* Vladimir Karpukhin* Fabio Petroni \nWen-tau Yih Barlas Oğuz Veselin Stoyanov Gargi Ghosh \nFacebook AI",
|
| 17 |
+
"bbox": [
|
| 18 |
+
228,
|
| 19 |
+
134,
|
| 20 |
+
779,
|
| 21 |
+
181
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "{jeanm,vladk, fabiopetroni, scottyih, barlaso, ves, gghosh}@fb.com",
|
| 28 |
+
"bbox": [
|
| 29 |
+
136,
|
| 30 |
+
184,
|
| 31 |
+
870,
|
| 32 |
+
200
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "Abstract",
|
| 39 |
+
"text_level": 1,
|
| 40 |
+
"bbox": [
|
| 41 |
+
262,
|
| 42 |
+
263,
|
| 43 |
+
342,
|
| 44 |
+
279
|
| 45 |
+
],
|
| 46 |
+
"page_idx": 0
|
| 47 |
+
},
|
| 48 |
+
{
|
| 49 |
+
"type": "text",
|
| 50 |
+
"text": "Retrieving relevant contexts from a large corpus is a crucial step for tasks such as open-domain question answering and fact checking. Although neural retrieval outperforms traditional methods like tfidf and BM25, its performance degrades considerably when applied to out-of-domain data. Driven by the question of whether a neural retrieval model can be universal and perform robustly on a wide variety of problems, we propose a multi-task trained model. Our approach not only outperforms previous methods in the few-shot setting, but also rivals specialised neural retrievers, even when in-domain training data is abundant. With the help of our retriever, we improve existing models for downstream tasks and closely match or improve the state of the art on multiple benchmarks.",
|
| 51 |
+
"bbox": [
|
| 52 |
+
142,
|
| 53 |
+
294,
|
| 54 |
+
463,
|
| 55 |
+
551
|
| 56 |
+
],
|
| 57 |
+
"page_idx": 0
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"type": "text",
|
| 61 |
+
"text": "1 Introduction",
|
| 62 |
+
"text_level": 1,
|
| 63 |
+
"bbox": [
|
| 64 |
+
115,
|
| 65 |
+
567,
|
| 66 |
+
260,
|
| 67 |
+
583
|
| 68 |
+
],
|
| 69 |
+
"page_idx": 0
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"type": "text",
|
| 73 |
+
"text": "Knowledge-intensive tasks is the common designation for a class of real-world NLP problems which, because of their nature, require large amounts of knowledge about the world (Petroni et al., 2020). For example, open-domain question answering requires producing answers to general factoid questions; fact checking involves determining the veracity of claims based on a database of trusted evidence. Practical solutions to these tasks usually involve an efficient retrieval component that, given an input query, selects a limited subset of relevant information from a large knowledge source. Sophisticated downstream models then consider the input only in the context of the retrieved information, and perform the final task.",
|
| 74 |
+
"bbox": [
|
| 75 |
+
114,
|
| 76 |
+
595,
|
| 77 |
+
492,
|
| 78 |
+
835
|
| 79 |
+
],
|
| 80 |
+
"page_idx": 0
|
| 81 |
+
},
|
| 82 |
+
{
|
| 83 |
+
"type": "text",
|
| 84 |
+
"text": "The standard retrieval component in many systems (e.g., Thorne et al., 2018; Wang et al., 2018; Chen et al., 2017) has long relied on term-matching methods, such as tfidf or BM25 (Robertson and Zaragoza, 2009). These methods rely on efficient algorithms and usually perform reasonably well regardless of the problem. In contrast, recent neural retrieval models, such as ICT (Lee et al., 2019), DPR (Karpukhin et al., 2020) and RAG (Lewis et al., 2020b) achieve better results by learning directly from task-specific training data and going beyond simple keyword matching. While task specialisation results in improved task performance, researchers have observed that a retriever trained for one specific domain will typically achieve low out-of-domain performance, and even lower performance on entirely different tasks (Petroni et al., 2020). This has two implications. First, unlike tfidf or BM25, neural retrieval models are unsuitable for low data regimes such as few- and zero-shot settings. Second, task-specific retrievers complicate practical applications where multiple knowledge-intensive tasks may need to be performed using the same supporting database or over the same input text. It may not be practical to deploy multiple separate specialised models due to computational performance or memory concerns.",
|
| 85 |
+
"bbox": [
|
| 86 |
+
509,
|
| 87 |
+
263,
|
| 88 |
+
887,
|
| 89 |
+
697
|
| 90 |
+
],
|
| 91 |
+
"page_idx": 0
|
| 92 |
+
},
|
| 93 |
+
{
|
| 94 |
+
"type": "text",
|
| 95 |
+
"text": "We ask the following question in this work: can we develop a universal neural retriever? Namely, we target a retriever that can perform well on a wide variety of problems, without task-specific fine-tuning, but, if additional in-domain labelled data is available, it can be further fine-tuned to improve the performance. We perform a large experimental study to attempt to build such a universal retrieval model. We find that, by jointly training on an extensive selection of retrieval tasks, we obtain a model which is not only more robust than previous approaches, but also can lead to better performance on the downstream knowledge-intensive tasks when",
|
| 96 |
+
"bbox": [
|
| 97 |
+
509,
|
| 98 |
+
700,
|
| 99 |
+
887,
|
| 100 |
+
910
|
| 101 |
+
],
|
| 102 |
+
"page_idx": 0
|
| 103 |
+
},
|
| 104 |
+
{
|
| 105 |
+
"type": "aside_text",
|
| 106 |
+
"text": "arXiv:2101.00117v1 [cs.CL] 1 Jan 2021",
|
| 107 |
+
"bbox": [
|
| 108 |
+
21,
|
| 109 |
+
322,
|
| 110 |
+
60,
|
| 111 |
+
717
|
| 112 |
+
],
|
| 113 |
+
"page_idx": 0
|
| 114 |
+
},
|
| 115 |
+
{
|
| 116 |
+
"type": "page_footnote",
|
| 117 |
+
"text": "* Equal Contribution.",
|
| 118 |
+
"bbox": [
|
| 119 |
+
139,
|
| 120 |
+
846,
|
| 121 |
+
275,
|
| 122 |
+
859
|
| 123 |
+
],
|
| 124 |
+
"page_idx": 0
|
| 125 |
+
},
|
| 126 |
+
{
|
| 127 |
+
"type": "page_footnote",
|
| 128 |
+
"text": "<sup>1</sup>While large pre-trained neural models have been shown to incorporate real-world knowledge in their parameters and thus may skip retrieval (Petroni et al., 2019), they still have limited capacity and suffer from a lack of explainability.",
|
| 129 |
+
"bbox": [
|
| 130 |
+
115,
|
| 131 |
+
859,
|
| 132 |
+
489,
|
| 133 |
+
909
|
| 134 |
+
],
|
| 135 |
+
"page_idx": 0
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"type": "text",
|
| 139 |
+
"text": "plugged into an existing system. Our approach combines the benefits from IR-based models with those of task-specific neural retrievers – namely, good performance when no (or not enough) training data is available and high task performance due to its ability to learn highly specialised representations.",
|
| 140 |
+
"bbox": [
|
| 141 |
+
114,
|
| 142 |
+
74,
|
| 143 |
+
490,
|
| 144 |
+
185
|
| 145 |
+
],
|
| 146 |
+
"page_idx": 1
|
| 147 |
+
},
|
| 148 |
+
{
|
| 149 |
+
"type": "text",
|
| 150 |
+
"text": "Our contributions can be summarised as follows.",
|
| 151 |
+
"bbox": [
|
| 152 |
+
134,
|
| 153 |
+
187,
|
| 154 |
+
490,
|
| 155 |
+
203
|
| 156 |
+
],
|
| 157 |
+
"page_idx": 1
|
| 158 |
+
},
|
| 159 |
+
{
|
| 160 |
+
"type": "list",
|
| 161 |
+
"sub_type": "text",
|
| 162 |
+
"list_items": [
|
| 163 |
+
"- We propose a single general-purpose \"universal\" retrieval model, able to perform comparably or better than specialised retriever approaches in both zero-shot (leave-one-out) and few-shot retrieval. We investigate several model variants, shedding light on what are the aspects of the architecture that affect its performance.",
|
| 164 |
+
"- We show that our model's gains in terms of retrieval directly translate into performance gains for a variety of downstream knowledge-intensive tasks.",
|
| 165 |
+
"- We will share the implementation as well as our best model. This is in the form of a readily available BERT checkpoint which, as we will show, can be used by NLP practitioners as a strong out-of-the-box retrieval system, but which can also undergo further in-domain training for even higher performance."
|
| 166 |
+
],
|
| 167 |
+
"bbox": [
|
| 168 |
+
137,
|
| 169 |
+
212,
|
| 170 |
+
490,
|
| 171 |
+
537
|
| 172 |
+
],
|
| 173 |
+
"page_idx": 1
|
| 174 |
+
},
|
| 175 |
+
{
|
| 176 |
+
"type": "text",
|
| 177 |
+
"text": "2 Background",
|
| 178 |
+
"text_level": 1,
|
| 179 |
+
"bbox": [
|
| 180 |
+
115,
|
| 181 |
+
546,
|
| 182 |
+
257,
|
| 183 |
+
564
|
| 184 |
+
],
|
| 185 |
+
"page_idx": 1
|
| 186 |
+
},
|
| 187 |
+
{
|
| 188 |
+
"type": "text",
|
| 189 |
+
"text": "In this section, we first give an overview of retrieval methods based on sparse and dense representations. We then discuss a wide range of knowledge-intensive NLP tasks, where retrieval plays a crucial role in solving the problems.",
|
| 190 |
+
"bbox": [
|
| 191 |
+
114,
|
| 192 |
+
573,
|
| 193 |
+
490,
|
| 194 |
+
653
|
| 195 |
+
],
|
| 196 |
+
"page_idx": 1
|
| 197 |
+
},
|
| 198 |
+
{
|
| 199 |
+
"type": "text",
|
| 200 |
+
"text": "2.1 Retrieval methods",
|
| 201 |
+
"text_level": 1,
|
| 202 |
+
"bbox": [
|
| 203 |
+
115,
|
| 204 |
+
663,
|
| 205 |
+
305,
|
| 206 |
+
678
|
| 207 |
+
],
|
| 208 |
+
"page_idx": 1
|
| 209 |
+
},
|
| 210 |
+
{
|
| 211 |
+
"type": "text",
|
| 212 |
+
"text": "Given a large collection of unstructured text passages, information retrieval (IR) can be broadly defined as finding a small set of passages that satisfies an information need, often presented in the form of a short-text query (Manning et al., 2008). Traditional IR methods, such as tfidf and BM25 (Robertson and Zaragoza, 2009), match keywords efficiently with an inverted index. Such methods can be seen as representing queries and passages in high-dimensional, sparse vectors, where each dimension corresponds to a term in the vocabulary and the weight indicates its importance.",
|
| 213 |
+
"bbox": [
|
| 214 |
+
114,
|
| 215 |
+
684,
|
| 216 |
+
490,
|
| 217 |
+
876
|
| 218 |
+
],
|
| 219 |
+
"page_idx": 1
|
| 220 |
+
},
|
| 221 |
+
{
|
| 222 |
+
"type": "text",
|
| 223 |
+
"text": "In contrast to tfidf and BM25, dense retrieval methods encode text as a latent semantic vector of",
|
| 224 |
+
"bbox": [
|
| 225 |
+
115,
|
| 226 |
+
877,
|
| 227 |
+
490,
|
| 228 |
+
909
|
| 229 |
+
],
|
| 230 |
+
"page_idx": 1
|
| 231 |
+
},
|
| 232 |
+
{
|
| 233 |
+
"type": "text",
|
| 234 |
+
"text": "a fixed, much smaller dimensionality. Whether a passage is relevant to a given query is determined by the distance of their vectors (Deerwester et al., 1990). Although dense representations do not encode tokens explicitly and can potentially map paraphrases of completely different tokens to close vectors, performance of early dense retrieval methods was often inferior to term-matching approaches, except when large labelled data is available (Yih et al., 2011; Gao et al., 2011; Huang et al., 2013). Thanks to success of large pre-trained models (Devlin et al., 2019; Liu et al., 2019b), however, recent dense retrieval methods have shown to outperform the sparse counterparts, when fine-tuned on a small set of in-domain labelled data (Karpukhin et al., 2020; Lewis et al., 2020b; Xiong et al., 2020). Efficient index and search of dense vectors are made possible by maximum inner product search (MIPS) algorithms (e.g., Shrivastava and Li, 2014; Guo et al., 2016), as well as tools like FAISS (Johnson et al., 2019).",
|
| 235 |
+
"bbox": [
|
| 236 |
+
507,
|
| 237 |
+
74,
|
| 238 |
+
885,
|
| 239 |
+
412
|
| 240 |
+
],
|
| 241 |
+
"page_idx": 1
|
| 242 |
+
},
|
| 243 |
+
{
|
| 244 |
+
"type": "text",
|
| 245 |
+
"text": "Our work is built upon the Dense Passage Retriever (DPR) architecture of Karpukhin et al. (2020), which was initially proposed for the task of open-domain question answering. DPR is a neural bi-encoder model which embeds queries with an encoder $\\pmb{f}(\\cdot)$ and passages with a separate encoder $\\pmb{g}(\\cdot)$ . Given an input query $x$ and a target passage $y$ , we have",
|
| 246 |
+
"bbox": [
|
| 247 |
+
509,
|
| 248 |
+
413,
|
| 249 |
+
885,
|
| 250 |
+
542
|
| 251 |
+
],
|
| 252 |
+
"page_idx": 1
|
| 253 |
+
},
|
| 254 |
+
{
|
| 255 |
+
"type": "equation",
|
| 256 |
+
"text": "\n$$\n\\mathrm {p} (x \\mid y) \\propto \\operatorname {s i m} (x, y),\n$$\n",
|
| 257 |
+
"text_format": "latex",
|
| 258 |
+
"bbox": [
|
| 259 |
+
611,
|
| 260 |
+
555,
|
| 261 |
+
781,
|
| 262 |
+
571
|
| 263 |
+
],
|
| 264 |
+
"page_idx": 1
|
| 265 |
+
},
|
| 266 |
+
{
|
| 267 |
+
"type": "text",
|
| 268 |
+
"text": "where the similarity score $\\operatorname{sim}(x, y)$ is defined as the inner product of the embeddings of its arguments, $f(x) \\cdot g(y)$ . Given a query at inference time, calculating its similarity with every possible passage would be prohibitive for large knowledge sources. Therefore, DPR makes use of the FAISS library (Johnson et al., 2019) to perform fast approximate nearest neighbour search in sub-linear time.",
|
| 269 |
+
"bbox": [
|
| 270 |
+
509,
|
| 271 |
+
585,
|
| 272 |
+
885,
|
| 273 |
+
727
|
| 274 |
+
],
|
| 275 |
+
"page_idx": 1
|
| 276 |
+
},
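The inner-product scoring and top-k retrieval described in the text above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the 3-dimensional vectors are made up, and the exhaustive loop stands in for the approximate MIPS index (e.g. FAISS) that a real system would use.

```python
# Sketch of bi-encoder retrieval: sim(x, y) = f(x) . g(y), and retrieval
# returns the passages with the largest inner product. Real systems replace
# the exhaustive scan below with an approximate MIPS index such as FAISS.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def retrieve(query_vec, passage_vecs, k=2):
    """Return indices of the top-k passages by inner-product similarity."""
    scores = [(dot(query_vec, p), i) for i, p in enumerate(passage_vecs)]
    scores.sort(reverse=True)
    return [i for _, i in scores[:k]]

# Hypothetical 3-dimensional embeddings, for illustration only.
q = [1.0, 0.0, 1.0]
passages = [[0.9, 0.1, 0.8], [-1.0, 0.2, 0.0], [1.0, 0.0, 1.2]]
print(retrieve(q, passages))  # → [2, 0]
```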
|
| 277 |
+
{
"type": "text",
"text": "Training is based on a contrastive loss. Given a query $x$, a relevant passage $y$, and a set of $n$ irrelevant passages $y_{i}^{-}$, we train the model by optimising the following negative log likelihood:",
"bbox": [
509,
727,
885,
783
],
"page_idx": 1
},
{
"type": "equation",
"text": "\n$$\n\\mathcal{L} = -\\log \\frac{\\exp(\\operatorname{sim}(x, y))}{\\exp(\\operatorname{sim}(x, y)) + \\sum_{i=1}^{n} \\exp(\\operatorname{sim}(x, y_{i}^{-}))}.\n$$\n",
"text_format": "latex",
"bbox": [
524,
793,
870,
825
],
"page_idx": 1
},
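The loss above is the negative log-softmax of the positive passage's similarity score. A minimal sketch, assuming scalar similarity scores are already computed (the function name and values are illustrative, not from the paper's codebase):

```python
import math

# Contrastive NLL: given the positive passage's score and the scores of the
# n irrelevant passages, take -log softmax of the positive score.
def contrastive_nll(pos_score, neg_scores):
    denom = math.exp(pos_score) + sum(math.exp(s) for s in neg_scores)
    return -math.log(math.exp(pos_score) / denom)

# With in-batch negatives, each query's negatives are the other queries'
# relevant passages, plus one BM25-mined "hard" confounder.
loss = contrastive_nll(5.0, [1.0, 0.5, 4.2])
print(round(loss, 4))
```

With no negatives the loss is exactly zero, and with one negative of equal score it is log 2, as expected from the softmax form.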
|
| 300 |
+
{
"type": "text",
"text": "As the set of irrelevant passages, we use the relevant passages for other queries within the same batch, as well as a specially selected \"hard\" confounder. This is a passage which has high lexical",
"bbox": [
509,
845,
885,
910
],
"page_idx": 1
},
{
"type": "image",
"img_path": "images/621902d35d1bf24b48bdcdbf60ce2d59b0143827fe860dd1388b9ae45571b499.jpg",
"image_caption": [
"Figure 1: Training of DPR (Karpukhin et al., 2020), a bi-encoder model for open-domain question answering. Queries and passages are encoded as vectors, and retrieval is performed as a maximum inner product search."
],
"image_footnote": [],
"bbox": [
117,
74,
497,
209
],
"page_idx": 2
},
{
"type": "text",
"text": "overlap with the query (high BM25 score), but is not among the set of relevant passages for the given data point. Karpukhin et al. (2020) have shown that the inclusion of such \"hard\" confounders leads to substantially improved training results. This training process is illustrated in Figure 1.",
"bbox": [
114,
322,
489,
418
],
"page_idx": 2
},
|
| 337 |
+
{
"type": "text",
"text": "2.2 Knowledge-intensive Tasks",
"text_level": 1,
"bbox": [
115,
443,
374,
458
],
"page_idx": 2
},
{
"type": "text",
"text": "For the training and evaluation of all models in the paper we make use of KILT, a benchmark and library of datasets (Petroni et al., 2020). KILT consists of a selection of datasets spanning five varied classes of knowledge-intensive tasks (i.e., question answering, slot filling, fact checking, dialogue, entity linking), with the aim of covering many different ways of seeking knowledge. Input queries can vary wildly from one task to the other, and include classic examples of open-domain retrieval tasks such as natural language questions and claims to be verified, as well as more unusual examples like conversation fragments and long chunks of annotated text. Crucially, all datasets distributed in KILT have been re-aligned such that they are all grounded in the same snapshot of Wikipedia, which the authors distribute. The knowledge required to answer any of the queries in the library of tasks can thus be found within the same unified knowledge source.",
"bbox": [
114,
470,
489,
776
],
"page_idx": 2
},
{
"type": "text",
"text": "To illustrate the variety of ways in which the input queries for different tasks can be formulated, we provide a few simple examples in Table 1. In spite of the differences between query formulations, all these tasks share one crucial aspect: they all require a retriever to fetch the relevant passages from the knowledge source, in order to support the final downstream task.",
"bbox": [
114,
781,
489,
908
],
"page_idx": 2
},
|
| 371 |
+
{
"type": "text",
"text": "3 Methods",
"text_level": 1,
"bbox": [
510,
74,
621,
87
],
"page_idx": 2
},
{
"type": "text",
"text": "3.1 Universal retrieval",
"text_level": 1,
"bbox": [
510,
99,
705,
112
],
"page_idx": 2
},
{
"type": "text",
"text": "Using task-specific models to tackle our collection of retrieval tasks would involve completely separate models, one per dataset. As illustrated in Figure 2, this would lead to a proliferation of models and data, down to separate indexed copies of the knowledge source itself (Wikipedia). This setup will form one of our baselines.",
"bbox": [
509,
120,
885,
231
],
"page_idx": 2
},
|
| 406 |
+
{
"type": "image",
"img_path": "images/41bc26ece836270c36eb8528cdf2d4e8a134c890b3217324417fb9514d3c77c6.jpg",
"image_caption": [
"Figure 2: Two retrieval tasks performed by two fully-specialised models."
],
"image_footnote": [],
"bbox": [
510,
243,
899,
351
],
"page_idx": 2
},
{
"type": "text",
"text": "Multi-task training has been successfully used to allow models to leverage cross-task data, as well as to provide a regularisation effect leading to better generalisation ability (Liu et al., 2019a). We apply this concept to neural retrievers, with the aim of improving performance by jointly leveraging multiple different retrieval datasets.",
"bbox": [
509,
405,
885,
516
],
"page_idx": 2
},
|
| 432 |
+
{
"type": "image",
"img_path": "images/623eb7950ef004cb9855c8a3421984cd9d064b466b281ef895f1503a37d4a238.jpg",
"image_caption": [
"(a) Separate query encoders.",
"Figure 3: Parameter sharing between neural retrievers."
],
"image_footnote": [],
"bbox": [
519,
526,
690,
653
],
"page_idx": 2
},
{
"type": "image",
"img_path": "images/eba6035b726d121e906989332cbda4da7e5b94070803d58fad586d8cd3a1e147.jpg",
"image_caption": [
"(b) A single retrieval model."
],
"image_footnote": [],
"bbox": [
705,
544,
875,
653
],
"page_idx": 2
},
{
"type": "text",
"text": "Our base setup is illustrated in Figure 3b and involves using a shared passage encoder — so that a single index of encoded passages can be used — as well as a query encoder that is shared across all tasks. In essence, in this setup a single DPR model is used to perform all retrieval tasks.",
"bbox": [
509,
715,
885,
810
],
"page_idx": 2
},
|
| 474 |
+
{
"type": "text",
"text": "Due to the complexity of training and evaluating retrieval models (which involves training the retriever, embedding all of Wikipedia, and building an index), our main set of experiments is all based on this configuration, which was found to work well in preliminary experiments. However, in order",
"bbox": [
509,
813,
885,
909
],
"page_idx": 2
},
{
"type": "table",
"img_path": "images/e70ef89c384f0a7c0ebee7364135e6d5c3faba0d7cf3d995089cf2baf402a9e2.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Task</td><td>Example query</td><td>Answer</td><td>Relevant doc.</td></tr><tr><td>Question Answering</td><td>Who is playing the Halftime Show at Super Bowl 2016?</td><td>Coldplay</td><td>The Super Bowl 50 Halftime Show took place on February 7, 2016 ... It was headlined by the British rock group Coldplay.</td></tr><tr><td>Fact Checking</td><td>Bermuda Triangle is in the western part of the Himalayas</td><td>REFUTES</td><td>The Bermuda Triangle ... is a loosely defined region in the western part of the North Atlantic Ocean</td></tr><tr><td>Slot Filling</td><td>Piner Creek [sep] mouth of the watercourse</td><td>Santa Rosa Creek</td><td>Piner Creek discharges to Santa Rosa Creek which in turn ...</td></tr><tr><td>Entity Linking</td><td>Leicestershire take over at top after innings victory. London. [start_ent]West Indian [end_ent] all-rounder Phil Simmons ...</td><td>West Indies cricket team</td><td>The West Indies cricket team is a multi-national men's cricket team representing the Anglophone Caribbean region</td></tr><tr><td>Dialogue</td><td>I am a big fan of Star Trek [sep] I don't know much about it. When did the first episode air? [sep] It debuted in .. [sep] What is the plot of the show?</td><td>William Shatner plays the role of Captain Kirk</td><td>It followed the interstellar adventures of Captain James T. Kirk (William Shatner) and his crew ...</td></tr></table>",
"bbox": [
126,
72,
878,
326
],
"page_idx": 3
},
{
"type": "text",
"text": "Table 1: Illustrative examples of some of the tasks within KILT, and how varied their query formulations can be.",
"bbox": [
119,
336,
875,
351
],
"page_idx": 3
},
|
| 510 |
+
{
"type": "text",
"text": "to report on the performance of alternative architectures, we also investigate the following additional variants in a restricted experimental setting, limited to a few tasks:",
"bbox": [
114,
376,
490,
441
],
"page_idx": 3
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- Task-specific query encoder. A different query encoder is used for each family of tasks, e.g. all question answering tasks use the same query encoder, but fact checking uses a different one. This is meant to allow for potentially different needs in processing queries, given the fundamentally diverse nature of the tasks at hand. This setup configuration is illustrated in Figure 3a.",
"- Task markers. This approach is similar to our base setup, where a single model performs all tasks. Additionally, we introduce specialised tokens which are inserted at the beginning of each query. Their aim is to help the model distinguish between the different tasks, by marking them. We use one task marker for each of the five task classes of KILT, such that all question answering tasks share the same marker."
],
"bbox": [
137,
450,
490,
765
],
"page_idx": 3
},
|
| 536 |
+
{
"type": "text",
"text": "3.2 Adversarial confounder selection",
"text_level": 1,
"bbox": [
115,
776,
421,
791
],
"page_idx": 3
},
{
"type": "text",
"text": "We saw in § 2.1 how \"hard\" confounder passages are collected using a BM25 baseline, following the standard approach in DPR. However, any other retriever can be used to select such confounders, including the very retriever being trained, leading to an iterative, self-adversarial training. Concretely, this amounts to the following steps: (1) a first version",
"bbox": [
114,
797,
490,
910
],
"page_idx": 3
},
{
"type": "text",
"text": "of the retriever is trained with BM25 confounders; (2) new confounders are selected with the trained model, by retrieving high-ranking passages which are not among the set of relevant ones; (3) a second version of the model is trained using the additional new confounders.",
"bbox": [
509,
376,
884,
472
],
"page_idx": 3
},
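The three-step adversarial mining loop described above can be sketched as follows. The function names and the toy retrieval results are illustrative stand-ins, not the paper's implementation; the point is the train / mine / retrain control flow.

```python
# Sketch of iterative, self-adversarial confounder selection.

def train(dataset, confounders):
    """Stand-in for DPR training; returns a trivial 'model' record."""
    return {"trained_on": len(dataset), "confounders": confounders}

def mine_confounders(model, dataset, relevant):
    """Stand-in for retrieving high-ranking passages that are NOT relevant."""
    ranked = ["p1", "p2", "p3"]  # pretend top-ranked retrieval results
    return [p for p in ranked if p not in relevant]

dataset = ["q1", "q2"]
relevant = {"p2"}  # known relevant passages, to be excluded from negatives

model_v1 = train(dataset, confounders="bm25")         # step (1)
hard = mine_confounders(model_v1, dataset, relevant)  # step (2)
model_v2 = train(dataset, confounders=hard)           # step (3)
print(hard)  # → ['p1', 'p3']  (p2 is excluded: it is a relevant passage)
```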
|
| 570 |
+
{
"type": "text",
"text": "Intuitively, it is expected that this approach should lead to higher-quality confounders compared to those selected by BM25 based on simple keyword matching. Based on our own experience as well as the relevant literature (Khattab et al., 2020), this adversarial approach has been shown to work well for question answering.",
"bbox": [
509,
485,
885,
599
],
"page_idx": 3
},
{
"type": "text",
"text": "As a way of further pushing the performance of the model, we experiment with this adversarial confounder selection on two datasets, Natural Questions (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017). We selected these two datasets since, out of all of the tasks we are considering, they have an easy way of checking whether a certain passage is relevant or not for a given query – namely, by checking whether the answer is present in the passage. This enabled us to automatically build sets of confounders, ensuring relevant passages would be excluded.",
"bbox": [
509,
609,
885,
802
],
"page_idx": 3
},
{
"type": "page_footnote",
"text": "${}^{2}$ Strictly speaking, assuming a passage to be irrelevant because of the absence of the answer span is not formally correct. However, experiments show a good correlation between this simple check and the overall model quality.",
"bbox": [
509,
859,
885,
910
],
"page_idx": 3
},
|
| 603 |
+
{
"type": "table",
"img_path": "images/11f6def3eb26c414f526585b1d5b413e9b5ed21b606e437e383ab67e829f8904.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Dataset</td><td>Task class</td><td>#Train</td></tr><tr><td>FEVER</td><td>Fact Checking</td><td>71 k</td></tr><tr><td>AIDA-YAGO 2</td><td>Entity Linking</td><td>18 k</td></tr><tr><td>T-REx</td><td>Slot Filling</td><td>2,284 k</td></tr><tr><td>Zero Shot RE</td><td>Slot Filling</td><td>132 k</td></tr><tr><td>Natural Questions</td><td>QA</td><td>77 k</td></tr><tr><td>HotpotQA</td><td>QA</td><td>69 k</td></tr><tr><td>TriviaQA</td><td>QA</td><td>53 k</td></tr><tr><td>Wizard of Wikipedia</td><td>Dialogue</td><td>80 k</td></tr></table>",
"bbox": [
117,
72,
497,
234
],
"page_idx": 4
},
{
"type": "text",
"text": "Table 2: KILT datasets used in this work, and the size of our converted training sets for each.",
"bbox": [
114,
243,
489,
273
],
"page_idx": 4
},
{
"type": "text",
"text": "4 Experiments",
"text_level": 1,
"bbox": [
115,
298,
262,
313
],
"page_idx": 4
},
|
| 640 |
+
{
"type": "text",
"text": "4.1 Experimental settings",
"text_level": 1,
"bbox": [
115,
325,
332,
340
],
"page_idx": 4
},
{
"type": "text",
"text": "Dataset selection For our experiments we select the eight KILT datasets listed in Table 2, which cover all five task classes and include a training split, a validation split, and a held-out test split.",
"bbox": [
114,
347,
489,
411
],
"page_idx": 4
},
{
"type": "text",
"text": "Preprocessing Starting from the raw KILT data, we split each Wikipedia article into disjoint 100-token chunks which form our basic retrieval units, following the approach of Wang et al. (2019) and Karpukhin et al. (2020). To maintain the same language introduced in §3, we will simply call these chunks passages.",
"bbox": [
114,
420,
490,
533
],
"page_idx": 4
},
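The chunking step above is simple to sketch. This is an illustrative approximation only: it uses whitespace tokenisation, whereas the actual preprocessing presumably tokenises with the encoder's own tokenizer.

```python
# Split an article's tokens into disjoint 100-token passages.
def to_passages(article_text, chunk_size=100):
    tokens = article_text.split()  # whitespace tokens, for illustration
    return [" ".join(tokens[i:i + chunk_size])
            for i in range(0, len(tokens), chunk_size)]

# A synthetic 250-token "article" yields two full chunks and one remainder.
doc = " ".join(f"tok{i}" for i in range(250))
print([len(p.split()) for p in to_passages(doc)])  # → [100, 100, 50]
```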
|
| 674 |
+
{
"type": "text",
"text": "This preprocessing results in a knowledge source of 36 million passages. In order to harmonise all datasets to the same knowledge source, KILT used a mapping strategy based on the BLEU metric to map relevant passages in the original versions of its datasets to passages in its own shared knowledge source (Petroni et al., 2020). Entries included in the KILT training sets which have a mapping BLEU score below 0.5 are likely to be noise, and we exclude them from training.",
"bbox": [
114,
535,
489,
695
],
"page_idx": 4
},
{
"type": "text",
"text": "Multi-tasking Training is performed on the union of all training sets. Since two of the training sets are of different orders of magnitude, we use a simple downsampling strategy to bring them to the same order of magnitude as the others. Preliminary experiments with more complex sampling methods, like resampling all datasets so that each epoch would see an equal number of samples from each, found that they had no measurable effect compared to this simpler approach.",
"bbox": [
114,
706,
489,
866
],
"page_idx": 4
},
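The simple downsampling strategy can be sketched as capping each oversized training set at a fixed budget. The cap value and the toy data below are assumptions for illustration; the paper does not specify the exact threshold.

```python
import random

# Cap each training set at a budget comparable to the other datasets.
def downsample(datasets, cap=80_000, seed=0):
    rng = random.Random(seed)
    out = {}
    for name, examples in datasets.items():
        if len(examples) > cap:
            out[name] = rng.sample(examples, cap)  # random subset, no replacement
        else:
            out[name] = list(examples)             # small sets pass through
    return out

# Toy data scaled down 1000x from the Table 2 sizes, with a matching cap.
toy = {"T-REx": list(range(2284)), "NQ": list(range(77))}
capped = downsample(toy, cap=80)
print({k: len(v) for k, v in capped.items()})  # → {'T-REx': 80, 'NQ': 77}
```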
|
| 696 |
+
{
"type": "text",
"text": "Encoders Our query and passage encoders are initialised as two distinct BERT base uncased en",
"bbox": [
115,
877,
489,
908
],
"page_idx": 4
},
{
"type": "text",
"text": "coders (Devlin et al., 2019), trained separately. As a pooling mechanism, we find it effective to simply take the [CLS] token representation at the topmost layer.",
"bbox": [
509,
74,
884,
139
],
"page_idx": 4
},
{
"type": "text",
"text": "Training We train our models for up to 80 epochs. To select the best checkpoint, we perform full evaluations of the validation set retrieval performance at regular intervals. We use the Adam optimiser (Kingma and Ba, 2015) with a learning rate of $2 \\cdot 10^{-5}$ with warmup and a linear decay schedule, and a dropout rate of 0.1. The batch size is set to 128 samples, and in preliminary experiments we found no benefit in increasing this further. We use an additional \"hard\" confounder per batch, selected based on BM25 score as in (Karpukhin et al., 2020).",
"bbox": [
509,
147,
885,
338
],
"page_idx": 4
},
{
"type": "text",
"text": "Downstream evaluation When evaluating our retriever within a larger architecture to perform a knowledge-intensive task, we replicate the DPR + BART setup of Petroni et al. (2020). This uses DPR to retrieve the top 3 passages for the query, which are then processed by a task-specific fine-tuned BART model to generate the final answer for the end task.",
"bbox": [
509,
348,
885,
475
],
"page_idx": 4
},
|
| 740 |
+
{
"type": "text",
"text": "4.2 Universal retrieval",
"text_level": 1,
"bbox": [
510,
486,
705,
500
],
"page_idx": 4
},
{
"type": "text",
"text": "The results of the evaluations reported in (Petroni et al., 2020) show that retrievers trained for question answering have poor performance outside of their domain. We would like to understand if it is possible to design a single model which can accurately satisfy the information needs of a wide variety of knowledge-intensive tasks. In short: Can a neural retriever be universal?",
"bbox": [
509,
507,
885,
634
],
"page_idx": 4
},
{
"type": "text",
"text": "We perform a comprehensive evaluation of several models on the eight tasks of Table 2. The setups we evaluate include eight task-specific models (one trained on each of the eight datasets), for which we measure both in-domain and out-of-domain performance, and a BM25 baseline. Additionally, we include a multi-task trained model - as described in §3.1 - with the hope that it can learn to perform all tasks satisfactorily. This amounts to 10 models evaluated on eight tasks each, for a total of 80 evaluations.",
"bbox": [
509,
636,
885,
810
],
"page_idx": 4
},
{
"type": "text",
"text": "To measure retrieval performance, we adopt the main metric used for the KILT benchmark, $R$-precision. This is calculated as $r / R$, where $R$ is the total number of relevant passages for a given query, and $r$ is the number of relevant passages returned among the top-$R$ retrieval results. For the",
"bbox": [
509,
813,
884,
909
],
"page_idx": 4
},
|
| 785 |
+
{
"type": "table",
"img_path": "images/1bda402b6767ab51523423567418aef5d4a1c8b797946c7dcdaeaafef015b582.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"2\">model</td><td rowspan=\"2\">Fact Check. FEV</td><td rowspan=\"2\">Ent. L. AY2</td><td colspan=\"2\">Slot Filling</td><td colspan=\"3\">Open Domain QA</td><td rowspan=\"2\">Dial. WoW</td></tr><tr><td>T-REx</td><td>zsRE</td><td>NQ</td><td>HoPo</td><td>TQA</td></tr><tr><td>Multi-task</td><td>74.72/46.96</td><td>83.78</td><td>69.18/53.54</td><td>77.23/41.70</td><td>61.51/28.80</td><td>44.21/38.42</td><td>61.95/24.56</td><td>39.70/24.07</td></tr><tr><td>BM25</td><td>50.13/40.06</td><td>3.47</td><td>58.60/51.64</td><td>66.43/52.98</td><td>25.83/14.20</td><td>43.95/38.38</td><td>29.44/16.16</td><td>27.50/18.41</td></tr><tr><td colspan=\"9\">Task-specific models</td></tr><tr><td>FEVER</td><td>73.60/43.92</td><td>5.62</td><td>19.50/10.02</td><td>42.88/19.98</td><td>36.69/18.05</td><td>23.18/17.59</td><td>45.08/22.24</td><td>41.27/19.85</td></tr><tr><td>AY2</td><td>47.36/37.58</td><td>81.77</td><td>5.52/4.08</td><td>8.94/5.50</td><td>10.22/6.77</td><td>11.69/10.71</td><td>15.11/8.47</td><td>17.59/13.08</td></tr><tr><td>T-REx</td><td>45.63/25.22</td><td>1.05</td><td>69.08/58.54</td><td>71.64/40.95</td><td>17.10/8.71</td><td>22.31/15.63</td><td>18.10/8.06</td><td>4.02/1.83</td></tr><tr><td>zsRE</td><td>70.10/33.12</td><td>0.42</td><td>68.34/57.40</td><td>97.74/78.81</td><td>25.98/13.81</td><td>22.23/18.35</td><td>28.68/14.44</td><td>10.40/2.09</td></tr><tr><td>NQ</td><td>68.16/14.81</td><td>1.44</td><td>31.78/7.20</td><td>61.12/12.92</td><td>63.24/28.13</td><td>29.39/11.33</td><td>48.39/14.42</td><td>30.77/11.81</td></tr><tr><td>HoPo</td><td>56.18/40.03</td><td>2.07</td><td>35.76/27.62</td><td>44.44/31.15</td><td>35.60/23.26</td><td>46.63/43.47</td><td>41.18/29.37</td><td>23.51/16.02</td></tr><tr><td>TQA</td><td>70.06/10.68</td><td>4.95</td><td>32.22/12.52</td><td>60.37/17.43</td><td>45.01/12.97</td><td>32.62/13.05</td><td>65.12/23.79</td><td>41.17/8.11</td></tr><tr><td>WoW</td><td>59.16/42.79</td><td>3.11</td><td>20.92/18.52</td><td>41.14/35.26</td><td>33.27/22.52</td><td>20.36/17.66</td><td>39.37/23.15</td><td>40.32/20.73</td></tr></table>",
"bbox": [
124,
72,
878,
258
],
"page_idx": 5
},
{
"type": "text",
"text": "Table 3: Page- and passage-level $R$-precision on KILT validation data. For the AIDA-YAGO 2 dataset, due to the nature of the task, only page-level retrieval is defined.",
"bbox": [
114,
268,
884,
297
],
"page_idx": 5
},
{
"type": "text",
"text": "case of $R = 1$ this is therefore equivalent to precision@1. Table 3 shows retrieval performance on the validation data, with the best performance on a given dataset marked in bold, and the second best performance underlined.",
"bbox": [
114,
323,
490,
403
],
"page_idx": 5
},
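The R-precision metric defined above (r hits among the top-R results, divided by R) is a one-liner to compute. A minimal sketch with made-up passage identifiers:

```python
# R-precision: with R relevant passages for a query, look at the top-R
# retrieved results and count the hits r; the score is r / R.
def r_precision(retrieved, relevant):
    R = len(relevant)           # assumes at least one relevant passage
    top_R = retrieved[:R]
    r = sum(1 for p in top_R if p in relevant)
    return r / R

# Example: 2 relevant passages, one of which appears in the top 2 results.
print(r_precision(["p9", "p2", "p7"], {"p2", "p4"}))  # → 0.5
```

When R = 1 this reduces to precision@1, matching the remark in the text.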
|
| 821 |
+
{
"type": "text",
"text": "While the KILT evaluation focuses on retrieval at the level of Wikipedia pages (thereby marking as \"hits\" any results that lie within the correct page), we are also interested in performing an evaluation at a more fine-grained level. We therefore also evaluate our models at the passage level, using a modified version of the official KILT evaluation scripts. These are shown as the second number in each column.",
"bbox": [
112,
405,
490,
548
],
"page_idx": 5
},
{
"type": "text",
"text": "We straight away notice that the task-specific models tend to achieve high performance on their respective tasks, often taking one of the top two spots. Interestingly, we also note that these neural retrievers consistently outperform the BM25 baseline, showing that the result which Karpukhin et al. (2020) achieved for open-domain question answering also holds for other knowledge-intensive tasks.",
"bbox": [
114,
552,
489,
695
],
"page_idx": 5
},
{
"type": "text",
"text": "The results reveal a strong performance for the multi-task model, confirming the hypothesis that a single model can be successfully trained to perform a wide variety of retrieval tasks. With the exception of one dataset, the shared model achieves the best retrieval performance or is within a few percentage points of the top score. We note that the one exception is the Zero-shot RE task (Levy et al., 2017), a trivial task in which the query will always contain the title of the page to be retrieved. Indeed, the model specific to this task manages to achieve a near-perfect score.",
"bbox": [
114,
699,
489,
890
],
"page_idx": 5
},
|
| 854 |
+
{
"type": "text",
"text": "Another task which stands out for being",
"bbox": [
132,
894,
489,
910
],
"page_idx": 5
},
{
"type": "text",
"text": "markedly different in formulation is AIDA-YAGO 2 (Hoffart et al., 2011). As shown in Table 3, models that were not trained on this specific task perform it very poorly. Entity linking is a task that is normally better performed by models which are explicitly designed for it (Cao et al., 2020). We nevertheless include it to showcase the ability of neural retrievers to adapt to it, and note how well the multi-task retriever performs on it in spite of its unusual nature.",
"bbox": [
509,
323,
885,
482
],
"page_idx": 5
},
|
| 876 |
+
{
"type": "text",
"text": "4.3 Downstream performance",
"text_level": 1,
"bbox": [
510,
498,
764,
514
],
"page_idx": 5
},
{
"type": "text",
"text": "We saw that our proposed approach achieves strong performance across a variety of retrieval tasks. However, our interest in neural retrievers stems from their use as components within larger systems, to perform tasks such as question answering. Our next experimental question is therefore: Can a universal retriever lead to better downstream performance in knowledge-intensive tasks?",
"bbox": [
509,
521,
885,
649
],
"page_idx": 5
},
{
"type": "text",
"text": "We perform a downstream evaluation of our approach used in conjunction with BART (Lewis et al., 2020a) as the generative component or classifier, adopting the same setup as Petroni et al. (2020). Results are reported in Table 4, with bold and underline marking the best and second best scores respectively.",
"bbox": [
509,
651,
885,
762
],
"page_idx": 5
},
{
"type": "text",
"text": "The $DPR + BART$ line refers to a setup similar to our own, but with the simpler retriever of Karpukhin et al. (2020). Therefore, comparing its performance to ours gives us a clear indication of the contribution of multi-task training on the overall performance on knowledge-intensive tasks. Our proposed model achieves significantly better performance than this baseline in AY2, zsRE and HoPo; while for the other tasks, the discrepancy",
"bbox": [
509,
765,
885,
910
],
"page_idx": 5
},
{
"type": "table",
"img_path": "images/860c68c2e852a27a1083ca7af8a8e2581de205656fffee4595064a947dbc4f66.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"2\">Model</td><td rowspan=\"2\">Fact Check. FEV</td><td rowspan=\"2\">Ent. L. AY2</td><td rowspan=\"2\">Slot Fill. zsRE</td><td colspan=\"3\">Open Domain QA</td><td rowspan=\"2\">Dial. WoW</td><td rowspan=\"2\">Avg.</td></tr><tr><td>NQ</td><td>HoPo</td><td>TQA</td></tr><tr><td>Multi-task + BART</td><td>86.32</td><td>82.61</td><td>57.95</td><td>39.75</td><td>31.77</td><td>59.60</td><td>15.33</td><td>53.33</td></tr><tr><td>DPR + BART</td><td>86.74</td><td>75.49</td><td>30.43</td><td>41.27</td><td>25.18</td><td>58.55</td><td>15.55</td><td>47.60</td></tr><tr><td>RAG</td><td>86.31</td><td>72.62</td><td>44.74</td><td>44.39</td><td>26.97</td><td>71.27</td><td>13.22</td><td>51.36</td></tr><tr><td>T5</td><td>76.30</td><td>74.05</td><td>9.02</td><td>19.60</td><td>12.64</td><td>18.11</td><td>13.49</td><td>31.89</td></tr></table>",
"bbox": [
127,
72,
878,
192
],
"page_idx": 6
},
{
"type": "text",
"text": "Table 4: KILT test scores on the downstream evaluation. Results in the bottom section are as reported in Petroni et al. (2020). The score metrics are accuracy for fact checking, entity linking and slot filling; exact match for QA; and F1 score for dialogue. $^3$",
"bbox": [
114,
200,
884,
246
],
"page_idx": 6
},
{
"type": "text",
"text": "is always below two points. This fact is reflected in the last column too, showing that on average multi-task training leads to better downstream performance. The model also compares favourably to RAG (Lewis et al., 2020b), a more advanced system in which the query encoder is fine-tuned on the end task.",
"bbox": [
114,
269,
489,
381
],
"page_idx": 6
},
{
"type": "text",
"text": "4.4 Zero- and few-shot performance",
"text_level": 1,
"bbox": [
115,
394,
415,
409
],
"page_idx": 6
},
{
"type": "text",
"text": "Task-specific neural retrievers can achieve higher performance than IR-based methods, but they are not suitable for cases where no training data (or not enough) is available. In those cases, tfidf and BM25 are the better choice. To evaluate the performance of a multi-task retriever as a suitable replacement for them in this scenario, we run a series of experiments in the low data regimes (few-shot and zero-shot).",
"bbox": [
114,
414,
489,
558
],
"page_idx": 6
},
{
"type": "text",
"text": "We start by training a set of multi-task retrievers (using the base setup) in the leave-one-out setting for each of the datasets, in order to see how a neural retriever will perform when trained on all domains except for the one it is to be evaluated on. The results of these zero-shot experiments are reported in the second line of Table 5 (again, text here is in bold for the best overall performance, and underlined for second best). They show that, even in the zero-shot setting, the multi-task neural retriever achieves performance that is competitive to BM25, with retrieval being 10 points higher at the page level and 5 points lower at the passage level on average.",
"bbox": [
114,
561,
489,
785
],
"page_idx": 6
},
{
"type": "text",
"text": "The advantage of neural retrievers over BM25 lies in their ability to improve with training. We therefore look at few-shot training for each task, and create two smaller copies for each of the orig",
"bbox": [
114,
785,
489,
850
],
"page_idx": 6
},
{
"type": "text",
"text": "inal training sets with a random sample of 128 and 1,024 examples respectively. In order to evaluate the suitability of a multi-task trained retriever as a starting checkpoint for few-shot training, we take the various leave-one-out models and finetune them on our few-shot training sets. To check whether multi-task pre-training is effective, we also compare these to DPR models (which are just initialised with BERT weights) fine-tuned on the same data.",
"bbox": [
509,
269,
884,
430
],
"page_idx": 6
},
{
"type": "text",
"text": "The bottom two sections of Table 5 report the results. The most dramatic gains from fine-tuning are seen for AY2, an \"outlier\" task whose formulation differs from that of the other tasks, and which seems to benefit the most from seeing in-domain data. The zsRE performance does not seem to improve from fine-tuning on the smaller dataset, but sees a very big jump when switching to the larger dataset. As a reminder, in this trivial task the title of the page to be retrieved always appears at the start of the query. It is therefore not surprising that models specifically fine-tuned on it can achieve near-perfect scores, as long as enough training data is provided.",
"bbox": [
509,
434,
884,
658
],
"page_idx": 6
},
{
"type": "text",
"text": "In spite of the fine-tuning, we note that both DPR and the multi-task model fail to improve on their performance for T-REx, suggesting that large amounts of training data are required to learn this task. Nevertheless, the multi-task model proves itself more robust, and achieves the top performance on it.",
"bbox": [
509,
662,
884,
772
],
"page_idx": 6
},
{
"type": "text",
"text": "Finally, we note that for 2 out of 8 tasks, namely zsRE and WoW, DPR achieves lower page-level retrieval scores than the multi-task model, but performs better at the passage level. This shows that fine-grained and coarse-grained retrieval performance are not always perfectly correlated.",
"bbox": [
509,
778,
884,
873
],
"page_idx": 6
},
{
"type": "text",
"text": "Overall, the experiments show strong results for the multi-task model, with the average zero-shot",
"bbox": [
509,
877,
882,
910
],
"page_idx": 6
},
{
"type": "page_footnote",
"text": "3Performing this evaluation required retrieving relevant documents for all training sets. Due to the very large size of T-REx, this particular dataset could not be included in this section.",
"bbox": [
114,
859,
489,
908
],
"page_idx": 6
},
{
"type": "table",
"img_path": "images/d83d4bb26b364066a9d58039acf6e23590cac900387ce8c26904ba91626deb4f.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Model</td><td>FEV</td><td>AY2</td><td>T-REx</td><td>zsRE</td><td>NQ</td><td>HoPo</td><td>TQA</td><td>WoW</td><td>Avg.</td></tr><tr><td>BM25</td><td>50.13/40.06</td><td>3.47</td><td>58.60/51.64</td><td>66.43/52.98</td><td>25.83/14.20</td><td>43.95/38.38</td><td>29.44/16.16</td><td>27.50/18.41</td><td>38.17/33.12</td></tr><tr><td colspan=\"10\">Leave-one-out multi-task models</td></tr><tr><td>Zero-shot</td><td>74.11/37.09</td><td>4.16</td><td>67.54/44.84</td><td>73.42/32.65</td><td>47.23/21.50</td><td>34.72/16.52</td><td>49.08/28.06</td><td>36.92/16.19</td><td>48.40/28.12</td></tr><tr><td>Finetune (128)</td><td>75.95/32.75</td><td>32.38</td><td>67.54/44.84</td><td>73.41/32.65</td><td>47.48/14.98</td><td>34.72/27.82</td><td>54.71/19.82</td><td>48.36/17.46</td><td>54.23/27.19</td></tr><tr><td>Finetune (1k)</td><td>73.08/40.83</td><td>70.40</td><td>67.54/44.84</td><td>93.04/58.67</td><td>51.00/19.90</td><td>39.19/35.43</td><td>59.08/20.22</td><td>47.65/19.75</td><td>62.62/34.23</td></tr><tr><td colspan=\"10\">Vanilla DPR models</td></tr><tr><td>Finetune (128)</td><td>37.99/25.31</td><td>26.23</td><td>0.20/0.02</td><td>0.16/0.00</td><td>20.92/9.52</td><td>14.46/14.08</td><td>26.85/10.54</td><td>30.31/17.20</td><td>19.64/10.95</td></tr><tr><td>Finetune (1k)</td><td>70.87/47.82</td><td>72.49</td><td>0.20/0.02</td><td>90.33/80.20</td><td>43.43/19.81</td><td>30.75/30.50</td><td>52.50/17.33</td><td>44.70/24.92</td><td>50.66/31.51</td></tr></table>",
"bbox": [
122,
71,
884,
211
],
"page_idx": 7
},
{
"type": "text",
"text": "performance being competitive to BM25, and the average few-shot performance being markedly better than the alternatives. The discrepancy in performance between a vanilla DPR model and the leave-one-out multi-task model is especially noticeable when using the smaller of the two datasets, in which case average performance for the latter is more than double that of vanilla DPR.",
"bbox": [
114,
272,
490,
401
],
"page_idx": 7
},
{
"type": "text",
"text": "4.5 Model variants",
"text_level": 1,
"bbox": [
115,
412,
282,
426
],
"page_idx": 7
},
{
"type": "table",
"img_path": "images/aa23591cab1d7e0e1d594419600dccce763206545ef6cb41425d39c2b1b8c6e4.jpg",
"table_caption": [
"Table 5: Page- and passage-level $R$ -Precision in the zero-shot setting and with additional fine-tuning of 128 and 1,024 examples. We also compare to a BM25 retriever and a DPR model initialised with BERT weights."
],
"table_footnote": [],
"table_body": "<table><tr><td>variant</td><td>FEV</td><td>NQ</td><td>TQA</td></tr><tr><td>Base</td><td>76.38/40.76</td><td>60.91/24.50</td><td>64.77/21.75</td></tr><tr><td>Task markers</td><td>75.84/40.79</td><td>62.31/25.10</td><td>64.04/20.86</td></tr><tr><td>Task-spec. enc.</td><td>73.53/40.02</td><td>61.05/25.52</td><td>64.17/21.23</td></tr></table>",
"bbox": [
119,
441,
487,
507
],
"page_idx": 7
},
{
"type": "text",
"text": "Table 6: Multi-task model variants evaluated on a subset of tasks ( $R$ -precision on validation data at page/passage level).",
"bbox": [
114,
516,
489,
558
],
"page_idx": 7
},
{
"type": "text",
"text": "In this set of experiments we compare our base multi-task model with the two variants described in § 3.1. Due to the high memory consumption of the \"task-specific encoders\" variant (requiring one full query encoder per task family, in addition to the passage encoder), it was only possible to perform these evaluations in a restricted setting of three datasets. The results in Table 6 do not reveal a clear winner, suggesting that the base architecture might be the better choice due to its simplicity and generally good performance. $^4$",
"bbox": [
114,
579,
489,
755
],
"page_idx": 7
},
{
"type": "text",
"text": "4.6 Adversarial confounder selection",
"text_level": 1,
"bbox": [
115,
766,
421,
781
],
"page_idx": 7
},
{
"type": "text",
"text": "Finally, we evaluate the adversarial confounder selection method described in § 3.2. This involves augmenting our regular training sets with additional confounders for TriviaQA and Natural Ques",
"bbox": [
114,
787,
490,
851
],
"page_idx": 7
},
{
"type": "text",
"text": "tions, selected using our top multi-task trained model. A new multi-task model is then trained from scratch on this augmented data. Its performance is reported in Table 7, showing an overall improvement across multiple tasks. While this approach is demonstrated here on our multi-task model, it is in fact orthogonal to it, and could be applied to any other neural retrievers trained with a contrastive loss.",
"bbox": [
509,
273,
885,
416
],
"page_idx": 7
},
{
"type": "text",
"text": "5 Related work",
"text_level": 1,
"bbox": [
510,
431,
663,
445
],
"page_idx": 7
},
{
"type": "text",
"text": "The approach most closely related to ours is DPR (Karpukhin et al., 2020), upon which we built all our retrieval systems. This model is covered in detail in § 2.1, in addition to the historical context. Another closely related approach is the Retrieval-Augmented Generation (RAG) model of Lewis et al. (2020b). In its base configuration it augments DPR with a generative reader, and it trains the query encoder end-to-end (differing from traditional retriever-reader architectures which treat the two steps as disjoint). A natural extension of the work we have presented would be to combine RAG with our joint learning approach, to study whether it can lead to further gains in performance or robustness.",
"bbox": [
507,
458,
885,
697
],
"page_idx": 7
},
{
"type": "text",
"text": "A number of promising techniques to boost retrieval performance have been proposed recently. These are orthogonal to our work, and as such they could be combined with it. Amongst these, pretraining methods form one class. Inverse Cloze Task (Lee et al., 2019) and its extensions (Chang et al., 2020) are self-supervised pre-training methods designed for retrieval in open-domain question answering. Whether such specific pre-training is beneficial to tasks other than question answering remains an open question. CERT (Fang et al., 2020) is an alternative pre-training approach, inspired by some recent advances in computer vision. While to",
"bbox": [
507,
700,
885,
910
],
"page_idx": 7
},
{
"type": "page_footnote",
"text": "4Not included in this table, due to very poor performance in preliminary experiments, are two further variants: a base model with a single encoder for both queries and passages, and a base model trained from scratch without BERT pre-training.",
"bbox": [
114,
859,
489,
910
],
"page_idx": 7
},
{
"type": "table",
"img_path": "images/e938ea3401d9fbf01044d7701b4503bc75fabe9e3b5fbb56039936cfd15d5557.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"2\">confounders</td><td rowspan=\"2\">Fact Check. FEV</td><td rowspan=\"2\">Ent. L. AY2</td><td rowspan=\"2\">Slot Filling T-REx</td><td rowspan=\"2\">zsRE</td><td colspan=\"3\">Open Domain QA</td><td rowspan=\"2\">Dial. WoW</td></tr><tr><td>NQ</td><td>HoPo</td><td>TQA</td></tr><tr><td>BM25</td><td>74.72/46.96</td><td>83.78</td><td>69.18/53.54</td><td>77.23/41.70</td><td>61.51/28.80</td><td>44.21/38.42</td><td>61.95/24.56</td><td>39.70/24.07</td></tr><tr><td>BM25 + adv</td><td>74.79/52.12</td><td>84.86</td><td>71.36/61.40</td><td>80.04/54.08</td><td>59.25/40.11</td><td>44.08/41.04</td><td>59.19/34.17</td><td>41.04/24.62</td></tr></table>",
"bbox": [
124,
72,
880,
137
],
"page_idx": 8
},
{
"type": "text",
"text": "Table 7: Comparison of two confounder selection methods for the multi-task model: simple BM25, and BM25 augmented with adversarial confounders ( $R$ -precision on validation data at page/passage level).",
"bbox": [
114,
147,
882,
177
],
"page_idx": 8
},
{
"type": "text",
"text": "our knowledge this has not been applied to retrieval problems, we believe it might be promising due to its focus on sentence-level semantics (as opposed to the more standard masked language modelling pre-training, which focuses on the token-level).",
"bbox": [
114,
202,
489,
282
],
"page_idx": 8
},
{
"type": "text",
"text": "Another class of orthogonal improvements to dense retrieval involves models which embed passages into multiple fixed-size vectors. Of these, ColBERT (Khattab and Zaharia, 2020) and MEBERT (Luan et al., 2020) are two representative examples. One further approach is ColBERT-QA (Khattab et al., 2020), which additionally uses a data augmentation strategy closely related to our own approach described in § 3.2.",
"bbox": [
114,
284,
490,
428
],
"page_idx": 8
},
{
"type": "text",
"text": "Finally, two entity linkers, GENRE (Cao et al., 2020) and BLINK (Wu et al., 2020), are worth mentioning. Being trained specifically for entity linking, these models will generally outperform retrieval-based approaches on that task. While they are not comparable to retrieval models and will not generally be applicable to information retrieval tasks, we mention them here to provide readers with a fuller context of the existing literature.",
"bbox": [
114,
429,
489,
576
],
"page_idx": 8
},
{
"type": "text",
"text": "6 Conclusions",
"text_level": 1,
"bbox": [
115,
589,
257,
605
],
"page_idx": 8
},
{
"type": "text",
"text": "We have conducted a large-scale experimental study on knowledge-intensive tasks, and how retrieval models that tackle them seek the required information from knowledge bases such as Wikipedia.",
"bbox": [
114,
617,
489,
697
],
"page_idx": 8
},
{
"type": "text",
"text": "The study started with the question of whether the way in which information is embedded for retrieval purposes is universal. Section 4.2 provided evidence that to a large extent it is, with a single \"universal\" retriever, trained jointly on 8 datasets, often performing comparably to task-specific models.",
"bbox": [
114,
699,
490,
810
],
"page_idx": 8
},
{
"type": "text",
"text": "Armed with this knowledge, in Section 4.3 we plugged our single model in a larger pipeline, in order to see its contribution to the downstream performance on a wide range of knowledge-intensive tasks. This led to an overall improvement in downstream performance, setting new top results for a",
"bbox": [
114,
813,
490,
910
],
"page_idx": 8
},
{
"type": "text",
"text": "number of tasks in the KILT benchmark.",
"bbox": [
510,
202,
815,
217
],
"page_idx": 8
},
{
"type": "text",
"text": "Next, in Section 4.4, we evaluated the model's performance in the zero-shot and few-shot settings. By evaluating on a wide range of tasks, we were able to show that our proposed approach performs comparably to BM25 in the zero-shot setting, and quickly overtakes it even with minimal in-domain training.",
"bbox": [
509,
219,
885,
332
],
"page_idx": 8
},
{
"type": "text",
"text": "In Section 4.5 we evaluated a number of more complex variants of the model involving task specialisation, but failed to see clear performance improvements. Finally, in Section 4.6 we saw how a simple iterative approach to data augmentation can lead to better performance.",
"bbox": [
509,
332,
885,
429
],
"page_idx": 8
},
{
"type": "text",
"text": "In the coming months we will provide a pretrained snapshot of our best-performing model, in the form of a BERT checkpoint. As shown, this model will be useful in zero-shot and few-shot settings as a better performing alternative to both IR-based approaches such as BM25, as well as task-specific models. The multi-task training approach demonstrated here can also be useful in industry settings where several retrieval operations may need to be performed on the same piece of content, and the deployment of multiple task-specific models might not be possible due to space or computational performance concerns.",
"bbox": [
509,
431,
885,
640
],
"page_idx": 8
},
{
"type": "text",
"text": "References",
"text_level": 1,
"bbox": [
512,
670,
610,
684
],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2020. Autoregressive entity retrieval. ArXiv, abs/2010.00904.",
"Wei-Cheng Chang, Felix X. Yu, Yin-Wen Chang, Yiming Yang, and Sanjiv Kumar. 2020. Pre-training tasks for embedding-based large-scale retrieval. In International Conference on Learning Representations.",
"Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational"
],
"bbox": [
510,
694,
885,
883
],
"page_idx": 8
},
{
"type": "page_footnote",
"text": "E.g. fact checking and hate speech detection.",
"bbox": [
531,
894,
815,
909
],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Linguistics (Volume 1: Long Papers), pages 1870-1879, Vancouver, Canada. Association for Computational Linguistics.",
"Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American society for information science, 41(6):391-407.",
"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"Hongchao Fang, Sicheng Wang, Meng Zhou, Jiayuan Ding, and Pengtao Xie. 2020. CERT: Contrastive self-supervised learning for language understanding. ArXiv, abs/2005.12766.",
"Jianfeng Gao, Kristina Toutanova, and Wen-tau Yih. 2011. Clickthrough-based latent semantic models for web search. In Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval, pages 675-684. ACM.",
"Ruiqi Guo, Sanjiv Kumar, Krzysztof Choromanski, and David Simcha. 2016. Quantization based fast inner product search. In Artificial Intelligence and Statistics, pages 482-490.",
"Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen Fürstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 782-792, Edinburgh, Scotland, UK. Association for Computational Linguistics.",
"Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM International Conference on Information and Knowledge Management, page 2333-2338, New York, NY, USA. Association for Computing Machinery.",
"J. Johnson, M. Douze, and H. Jégou. 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, pages 1-1.",
"Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics"
],
"bbox": [
117,
76,
490,
908
],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"(Volume 1: Long Papers), pages 1601-1611, Vancouver, Canada. Association for Computational Linguistics.",
"Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics.",
"Omar Khattab, Christopher Potts, and Matei Zaharia. 2020. Relevance-guided supervision for OpenQA with ColBERT. ArXiv, abs/2007.00814.",
"Omar Khattab and Matei Zaharia. 2020. ColBERT: Efficient and effective passage search via contextualized late interaction over BERT. In Proceedings of the 43rd international ACM SIGIR conference on Research and development in Information Retrieval.",
"Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR).",
"Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics.",
"Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086-6096, Florence, Italy. Association for Computational Linguistics.",
"Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. 2017. Zero-shot relation extraction via reading comprehension. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 333-342, Vancouver, Canada. Association for Computational Linguistics.",
"Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.",
"Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe"
],
"bbox": [
512,
76,
884,
907
],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Kiela. 2020b. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems (NeurIPS).",
"Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019a. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487-4496, Florence, Italy. Association for Computational Linguistics.",
"Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. RoBERTa: A robustly optimized BERT pretraining approach. ArXiv, abs/1907.11692.",
"Yi Luan, Jacob Eisenstein, Kristina Toutanova, and Michael Collins. 2020. Sparse, dense, and attentional representations for text retrieval. ArXiv, abs/2005.00181.",
"Christopher D Manning, Hinrich Schütze, and Prabhakar Raghavan. 2008. Introduction to information retrieval. Cambridge university press.",
"Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vassilis Plachouras, Tim Rocktäschel, and Sebastian Riedel. 2020. KILT: a benchmark for knowledge intensive language tasks. ArXiv, abs/2009.02252.",
"Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? In EMNLP.",
"Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends in Information Retrieval, 3(4):333-389.",
"Anshumali Shrivastava and Ping Li. 2014. Asymmetric LSH (ALSH) for sublinear time maximum inner product search (MIPS). In Advances in Neural Information Processing Systems (NIPS), pages 2321-2329.",
"James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809-819, New Orleans, Louisiana. Association for Computational Linguistics.",
"Shuohang Wang, Mo Yu, Xiaoxiao Guo, Z. Wang, Tim Klinger, Wei Zhang, S. Chang, G. Tesauro, Bowen Zhou, and Jing Jiang. 2018. R3: Reinforced ranker-reader for open-domain question answering. In AAAI."
],
"bbox": [
117,
76,
490,
907
],
"page_idx": 10
},
|
| 1478 |
+
{
|
| 1479 |
+
"type": "list",
|
| 1480 |
+
"sub_type": "ref_text",
|
| 1481 |
+
"list_items": [
|
| 1482 |
+
"Zhiguo Wang, Patrick Ng, Xiaofei Ma, Ramesh Nallapati, and Bing Xiang. 2019. Multi-passage BERT: A globally normalized BERT model for open-domain question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5878-5882, Hong Kong, China. Association for Computational Linguistics.",
|
| 1483 |
+
"Ledell Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, and Luke Zettlemoyer. 2020. Scalable zero-shot entity linking with dense entity retrieval. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6397-6407, Online. Association for Computational Linguistics.",
|
| 1484 |
+
"Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, and Arnold Overwijk. 2020. Approximate nearest neighbor negative contrastive learning for dense text retrieval. ArXiv, abs/2007.00808.",
|
| 1485 |
+
"Wen-tau Yih, Kristina Toutanova, John C Platt, and Christopher Meek. 2011. Learning discriminative projections for text similarity measures. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning, pages 247-256. Association for Computational Linguistics."
|
| 1486 |
+
],
|
| 1487 |
+
"bbox": [
|
| 1488 |
+
512,
|
| 1489 |
+
76,
|
| 1490 |
+
885,
|
| 1491 |
+
461
|
| 1492 |
+
],
|
| 1493 |
+
"page_idx": 10
|
| 1494 |
+
}
|
| 1495 |
+
]
|
data/2021/2101_00xxx/2101.00117/b1510fde-b32e-4443-bd7a-7c6aed392142_model.json
CHANGED
|
@@ -1,3 +1,1938 @@
|
|
| 1 |
-
|
| 2 |
-
|
| 3 |
-
|
| 1 |
+
[
|
| 2 |
+
[
|
| 3 |
+
{
|
| 4 |
+
"type": "aside_text",
|
| 5 |
+
"bbox": [
|
| 6 |
+
0.023,
|
| 7 |
+
0.323,
|
| 8 |
+
0.061,
|
| 9 |
+
0.718
|
| 10 |
+
],
|
| 11 |
+
"angle": 270,
|
| 12 |
+
"content": "arXiv:2101.00117v1 [cs.CL] 1 Jan 2021"
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "title",
|
| 16 |
+
"bbox": [
|
| 17 |
+
0.233,
|
| 18 |
+
0.081,
|
| 19 |
+
0.773,
|
| 20 |
+
0.102
|
| 21 |
+
],
|
| 22 |
+
"angle": 0,
|
| 23 |
+
"content": "Multi-task Retrieval for Knowledge-Intensive Tasks"
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"bbox": [
|
| 28 |
+
0.23,
|
| 29 |
+
0.135,
|
| 30 |
+
0.78,
|
| 31 |
+
0.183
|
| 32 |
+
],
|
| 33 |
+
"angle": 0,
|
| 34 |
+
"content": "Jean Maillard* Vladimir Karpukhin* Fabio Petroni \nWen-tau Yih Barlas Oğuz Veselin Stoyanov Gargi Ghosh \nFacebook AI"
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"bbox": [
|
| 39 |
+
0.137,
|
| 40 |
+
0.185,
|
| 41 |
+
0.871,
|
| 42 |
+
0.202
|
| 43 |
+
],
|
| 44 |
+
"angle": 0,
|
| 45 |
+
"content": "{jeanm,vladk,fabiopetroni,scottyih,barlaso,ves,gghosh}@fb.com"
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "title",
|
| 49 |
+
"bbox": [
|
| 50 |
+
0.263,
|
| 51 |
+
0.265,
|
| 52 |
+
0.344,
|
| 53 |
+
0.28
|
| 54 |
+
],
|
| 55 |
+
"angle": 0,
|
| 56 |
+
"content": "Abstract"
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"bbox": [
|
| 61 |
+
0.144,
|
| 62 |
+
0.296,
|
| 63 |
+
0.465,
|
| 64 |
+
0.552
|
| 65 |
+
],
|
| 66 |
+
"angle": 0,
|
| 67 |
+
"content": "Retrieving relevant contexts from a large corpus is a crucial step for tasks such as open-domain question answering and fact checking. Although neural retrieval outperforms traditional methods like tfidf and BM25, its performance degrades considerably when applied to out-of-domain data. Driven by the question of whether a neural retrieval model can be universal and perform robustly on a wide variety of problems, we propose a multi-task trained model. Our approach not only outperforms previous methods in the few-shot setting, but also rivals specialised neural retrievers, even when in-domain training data is abundant. With the help of our retriever, we improve existing models for downstream tasks and closely match or improve the state of the art on multiple benchmarks."
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "title",
|
| 71 |
+
"bbox": [
|
| 72 |
+
0.117,
|
| 73 |
+
0.568,
|
| 74 |
+
0.262,
|
| 75 |
+
0.584
|
| 76 |
+
],
|
| 77 |
+
"angle": 0,
|
| 78 |
+
"content": "1 Introduction"
|
| 79 |
+
},
|
| 80 |
+
{
|
| 81 |
+
"type": "text",
|
| 82 |
+
"bbox": [
|
| 83 |
+
0.115,
|
| 84 |
+
0.596,
|
| 85 |
+
0.494,
|
| 86 |
+
0.837
|
| 87 |
+
],
|
| 88 |
+
"angle": 0,
|
| 89 |
+
"content": "Knowledge-intensive tasks is the common designation for a class of real-world NLP problems which, because of their nature, require large amounts of knowledge about the world (Petroni et al., 2020). For example, open-domain question answering requires producing answers to general factoid questions; fact checking involves determining the veracity of claims based on a database of trusted evidence. Practical solutions to these tasks usually involve an efficient retrieval component that, given an input query, selects a limited subset of relevant information from a large knowledge source. Sophisticated downstream models then consider the input only in the context of the retrieved information, and perform the final task."
|
| 90 |
+
},
|
| 91 |
+
{
|
| 92 |
+
"type": "text",
|
| 93 |
+
"bbox": [
|
| 94 |
+
0.51,
|
| 95 |
+
0.265,
|
| 96 |
+
0.889,
|
| 97 |
+
0.699
|
| 98 |
+
],
|
| 99 |
+
"angle": 0,
|
| 100 |
+
"content": "The standard retrieval component in many systems (e.g., Thorne et al., 2018; Wang et al., 2018; Chen et al., 2017) has long relied on term-matching methods, such as tfidf or BM25 (Robertson and Zaragoza, 2009). These methods rely on efficient algorithms and usually perform reasonably well regardless of the problem. In contrast, recent neural retrieval models, such as ICT (Lee et al., 2019), DPR (Karpukhin et al., 2020) and RAG (Lewis et al., 2020b) achieve better results by learning directly from task-specific training data and going beyond simple keyword matching. While task specialisation results in improved task performance, researchers have observed that a retriever trained for one specific domain will typically achieve low out-of-domain performance, and even lower performance on entirely different tasks (Petroni et al., 2020). This has two implications. First, unlike tfidf or BM25, neural retrieval models are unsuitable for low data regimes such as few- and zero-shot settings. Second, task-specific retrievers complicate practical applications where multiple knowledge-intensive tasks may need to be performed using the same supporting database or over the same input text. It may not be practical to deploy multiple separate specialised models due to computational performance or memory concerns."
|
| 101 |
+
},
|
| 102 |
+
{
|
| 103 |
+
"type": "text",
|
| 104 |
+
"bbox": [
|
| 105 |
+
0.51,
|
| 106 |
+
0.701,
|
| 107 |
+
0.889,
|
| 108 |
+
0.911
|
| 109 |
+
],
|
| 110 |
+
"angle": 0,
|
| 111 |
+
"content": "We ask the following question in this work: can we develop a universal neural retriever? Namely, we target a retriever that can perform well on a wide variety of problems, without task-specific fine-tuning, but, if additional in-domain labelled data is available, it can be further fine-tuned to improve the performance. We perform a large experimental study to attempt to build such a universal retrieval model. We find that, by jointly training on an extensive selection of retrieval tasks, we obtain a model which is not only more robust than previous approaches, but also can lead to better performance on the downstream knowledge-intensive tasks when"
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"type": "page_footnote",
|
| 115 |
+
"bbox": [
|
| 116 |
+
0.14,
|
| 117 |
+
0.847,
|
| 118 |
+
0.276,
|
| 119 |
+
0.86
|
| 120 |
+
],
|
| 121 |
+
"angle": 0,
|
| 122 |
+
"content": "* Equal Contribution."
|
| 123 |
+
},
|
| 124 |
+
{
|
| 125 |
+
"type": "page_footnote",
|
| 126 |
+
"bbox": [
|
| 127 |
+
0.117,
|
| 128 |
+
0.86,
|
| 129 |
+
0.49,
|
| 130 |
+
0.91
|
| 131 |
+
],
|
| 132 |
+
"angle": 0,
|
| 133 |
+
"content": "<sup>1</sup>While large pre-trained neural models have been shown to incorporate real-world knowledge in their parameters and thus may skip retrieval (Petroni et al., 2019), they still have limited capacity and suffer from a lack of explainability."
|
| 134 |
+
}
|
| 135 |
+
],
|
| 136 |
+
[
|
| 137 |
+
{
|
| 138 |
+
"type": "text",
|
| 139 |
+
"bbox": [
|
| 140 |
+
0.115,
|
| 141 |
+
0.076,
|
| 142 |
+
0.492,
|
| 143 |
+
0.186
|
| 144 |
+
],
|
| 145 |
+
"angle": 0,
|
| 146 |
+
"content": "plugged into an existing system. Our approach combines the benefits from IR-based models with those of task-specific neural retrievers – namely, good performance when no (or not enough) training data is available and high task performance due to its ability to learn highly specialised representations."
|
| 147 |
+
},
|
| 148 |
+
{
|
| 149 |
+
"type": "text",
|
| 150 |
+
"bbox": [
|
| 151 |
+
0.135,
|
| 152 |
+
0.188,
|
| 153 |
+
0.492,
|
| 154 |
+
0.204
|
| 155 |
+
],
|
| 156 |
+
"angle": 0,
|
| 157 |
+
"content": "Our contributions can be summarised as follows."
|
| 158 |
+
},
|
| 159 |
+
{
|
| 160 |
+
"type": "text",
|
| 161 |
+
"bbox": [
|
| 162 |
+
0.138,
|
| 163 |
+
0.213,
|
| 164 |
+
0.492,
|
| 165 |
+
0.342
|
| 166 |
+
],
|
| 167 |
+
"angle": 0,
|
| 168 |
+
"content": "- We propose a single general-purpose \"universal\" retrieval model, able to perform comparably to or better than specialised retriever approaches in both zero-shot (leave-one-out) and few-shot retrieval. We investigate several model variants, shedding light on which aspects of the architecture affect its performance."
|
| 169 |
+
},
|
| 170 |
+
{
|
| 171 |
+
"type": "text",
|
| 172 |
+
"bbox": [
|
| 173 |
+
0.138,
|
| 174 |
+
0.351,
|
| 175 |
+
0.491,
|
| 176 |
+
0.415
|
| 177 |
+
],
|
| 178 |
+
"angle": 0,
|
| 179 |
+
"content": "- We show that our model's gains in terms of retrieval directly translate into performance gains for a variety of downstream knowledge-intensive tasks."
|
| 180 |
+
},
|
| 181 |
+
{
|
| 182 |
+
"type": "text",
|
| 183 |
+
"bbox": [
|
| 184 |
+
0.138,
|
| 185 |
+
0.425,
|
| 186 |
+
0.492,
|
| 187 |
+
0.538
|
| 188 |
+
],
|
| 189 |
+
"angle": 0,
|
| 190 |
+
"content": "- We will share the implementation as well as our best model. This is in the form of a readily available BERT checkpoint which, as we will show, can be used by NLP practitioners as a strong out-of-the-box retrieval system, but which can also undergo further in-domain training for even higher performance."
|
| 191 |
+
},
|
| 192 |
+
{
|
| 193 |
+
"type": "list",
|
| 194 |
+
"bbox": [
|
| 195 |
+
0.138,
|
| 196 |
+
0.213,
|
| 197 |
+
0.492,
|
| 198 |
+
0.538
|
| 199 |
+
],
|
| 200 |
+
"angle": 0,
|
| 201 |
+
"content": null
|
| 202 |
+
},
|
| 203 |
+
{
|
| 204 |
+
"type": "title",
|
| 205 |
+
"bbox": [
|
| 206 |
+
0.116,
|
| 207 |
+
0.548,
|
| 208 |
+
0.258,
|
| 209 |
+
0.565
|
| 210 |
+
],
|
| 211 |
+
"angle": 0,
|
| 212 |
+
"content": "2 Background"
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"type": "text",
|
| 216 |
+
"bbox": [
|
| 217 |
+
0.115,
|
| 218 |
+
0.574,
|
| 219 |
+
0.491,
|
| 220 |
+
0.655
|
| 221 |
+
],
|
| 222 |
+
"angle": 0,
|
| 223 |
+
"content": "In this section, we first give an overview of retrieval methods based on sparse and dense representations. We then discuss a wide range of knowledge-intensive NLP tasks, where retrieval plays a crucial role in solving the problems."
|
| 224 |
+
},
|
| 225 |
+
{
|
| 226 |
+
"type": "title",
|
| 227 |
+
"bbox": [
|
| 228 |
+
0.116,
|
| 229 |
+
0.664,
|
| 230 |
+
0.307,
|
| 231 |
+
0.679
|
| 232 |
+
],
|
| 233 |
+
"angle": 0,
|
| 234 |
+
"content": "2.1 Retrieval methods"
|
| 235 |
+
},
|
| 236 |
+
{
|
| 237 |
+
"type": "text",
|
| 238 |
+
"bbox": [
|
| 239 |
+
0.115,
|
| 240 |
+
0.685,
|
| 241 |
+
0.491,
|
| 242 |
+
0.877
|
| 243 |
+
],
|
| 244 |
+
"angle": 0,
|
| 245 |
+
"content": "Given a large collection of unstructured text passages, information retrieval (IR) can be broadly defined as finding a small set of passages that satisfies an information need, often presented in the form of a short-text query (Manning et al., 2008). Traditional IR methods, such as tfidf and BM25 (Robertson and Zaragoza, 2009), match keywords efficiently with an inverted index. Such methods can be seen as representing queries and passages in high-dimensional, sparse vectors, where each dimension corresponds to a term in the vocabulary and the weight indicates its importance."
|
| 246 |
+
},
|
| 247 |
+
{
|
| 248 |
+
"type": "text",
|
| 249 |
+
"bbox": [
|
| 250 |
+
0.116,
|
| 251 |
+
0.878,
|
| 252 |
+
0.492,
|
| 253 |
+
0.91
|
| 254 |
+
],
|
| 255 |
+
"angle": 0,
|
| 256 |
+
"content": "In contrast to tfidf and BM25, dense retrieval methods encode text as a latent semantic vector of"
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"type": "text",
|
| 260 |
+
"bbox": [
|
| 261 |
+
0.509,
|
| 262 |
+
0.075,
|
| 263 |
+
0.887,
|
| 264 |
+
0.413
|
| 265 |
+
],
|
| 266 |
+
"angle": 0,
|
| 267 |
+
"content": "a fixed, much smaller dimensionality. Whether a passage is relevant to a given query is determined by the distance of their vectors (Deerwester et al., 1990). Although dense representations do not encode tokens explicitly and can potentially map paraphrases of completely different tokens to close vectors, the performance of early dense retrieval methods was often inferior to term-matching approaches, except when large amounts of labelled data were available (Yih et al., 2011; Gao et al., 2011; Huang et al., 2013). Thanks to the success of large pre-trained models (Devlin et al., 2019; Liu et al., 2019b), however, recent dense retrieval methods have been shown to outperform their sparse counterparts when fine-tuned on a small set of in-domain labelled data (Karpukhin et al., 2020; Lewis et al., 2020b; Xiong et al., 2020). Efficient indexing and search of dense vectors are made possible by maximum inner product search (MIPS) algorithms (e.g., Shrivastava and Li, 2014; Guo et al., 2016), as well as tools like FAISS (Johnson et al., 2019)."
|
| 268 |
+
},
|
| 269 |
+
{
|
| 270 |
+
"type": "text",
|
| 271 |
+
"bbox": [
|
| 272 |
+
0.51,
|
| 273 |
+
0.414,
|
| 274 |
+
0.887,
|
| 275 |
+
0.543
|
| 276 |
+
],
|
| 277 |
+
"angle": 0,
|
| 278 |
+
"content": "Our work is built upon the Dense Passage Retriever (DPR) architecture of Karpukhin et al. (2020), which was initially proposed for the task of open-domain question answering. DPR is a neural bi-encoder model which embeds queries with an encoder \\( \\pmb{f}(\\cdot) \\) and passages with a separate encoder \\( \\pmb{g}(\\cdot) \\). Given an input query \\( x \\) and a target passage \\( y \\), we have"
|
| 279 |
+
},
|
| 280 |
+
{
|
| 281 |
+
"type": "equation",
|
| 282 |
+
"bbox": [
|
| 283 |
+
0.613,
|
| 284 |
+
0.556,
|
| 285 |
+
0.782,
|
| 286 |
+
0.573
|
| 287 |
+
],
|
| 288 |
+
"angle": 0,
|
| 289 |
+
"content": "\\[\n\\mathrm{p}(x \\mid y) \\propto \\operatorname{sim}(x, y),\n\\]"
|
| 290 |
+
},
|
| 291 |
+
{
|
| 292 |
+
"type": "text",
|
| 293 |
+
"bbox": [
|
| 294 |
+
0.51,
|
| 295 |
+
0.586,
|
| 296 |
+
0.886,
|
| 297 |
+
0.728
|
| 298 |
+
],
|
| 299 |
+
"angle": 0,
|
| 300 |
+
"content": "where the similarity score \\(\\operatorname{sim}(x, y)\\) is defined as the inner product of the embeddings of its arguments, \\(f(x) \\cdot g(y)\\). Given a query at inference time, calculating its similarity with every possible passage would be prohibitive for large knowledge sources. Therefore, DPR makes use of the FAISS library (Johnson et al., 2019) to perform fast approximate nearest neighbour search in sub-linear time."
|
| 301 |
+
},
|
| 302 |
+
{
|
| 303 |
+
"type": "text",
|
| 304 |
+
"bbox": [
|
| 305 |
+
0.51,
|
| 306 |
+
0.728,
|
| 307 |
+
0.886,
|
| 308 |
+
0.784
|
| 309 |
+
],
|
| 310 |
+
"angle": 0,
|
| 311 |
+
"content": "Training is based on a contrastive loss. Given a query \\( x \\), a relevant passage \\( y \\), and a set of \\( n \\) irrelevant passages \\( y_{i}^{-} \\), we train the model by optimising the following negative log likelihood:"
|
| 312 |
+
},
|
| 313 |
+
{
|
| 314 |
+
"type": "equation",
|
| 315 |
+
"bbox": [
|
| 316 |
+
0.525,
|
| 317 |
+
0.794,
|
| 318 |
+
0.871,
|
| 319 |
+
0.826
|
| 320 |
+
],
|
| 321 |
+
"angle": 0,
|
| 322 |
+
"content": "\\[\n\\mathcal{L} = -\\log \\frac{\\exp(\\operatorname{sim}(x, y))}{\\exp(\\operatorname{sim}(x, y)) + \\sum_{i=1}^{n} \\exp(\\operatorname{sim}(x, y_{i}^{-}))}.\n\\]"
|
| 323 |
+
},
|
| 324 |
+
{
|
| 325 |
+
"type": "text",
|
| 326 |
+
"bbox": [
|
| 327 |
+
0.51,
|
| 328 |
+
0.846,
|
| 329 |
+
0.886,
|
| 330 |
+
0.911
|
| 331 |
+
],
|
| 332 |
+
"angle": 0,
|
| 333 |
+
"content": "As the set of irrelevant passages, we use the relevant passages for other queries within the same batch, as well as a specially selected \"hard\" confounder. This is a passage which has high lexical"
|
| 334 |
+
}
|
| 335 |
+
],
|
| 336 |
+
[
|
| 337 |
+
{
|
| 338 |
+
"type": "image",
|
| 339 |
+
"bbox": [
|
| 340 |
+
0.118,
|
| 341 |
+
0.075,
|
| 342 |
+
0.498,
|
| 343 |
+
0.21
|
| 344 |
+
],
|
| 345 |
+
"angle": 0,
|
| 346 |
+
"content": null
|
| 347 |
+
},
|
| 348 |
+
{
|
| 349 |
+
"type": "image_caption",
|
| 350 |
+
"bbox": [
|
| 351 |
+
0.115,
|
| 352 |
+
0.219,
|
| 353 |
+
0.49,
|
| 354 |
+
0.288
|
| 355 |
+
],
|
| 356 |
+
"angle": 0,
|
| 357 |
+
"content": "Figure 1: Training of DPR (Karpukhin et al., 2020), a bi-encoder model for open-domain question answering. Queries and passages are encoded as vectors, and retrieval is performed as a maximum inner product search."
|
| 358 |
+
},
|
| 359 |
+
{
|
| 360 |
+
"type": "text",
|
| 361 |
+
"bbox": [
|
| 362 |
+
0.115,
|
| 363 |
+
0.323,
|
| 364 |
+
0.49,
|
| 365 |
+
0.419
|
| 366 |
+
],
|
| 367 |
+
"angle": 0,
|
| 368 |
+
"content": "overlap with the query (high BM25 score), but is not among the set of relevant passages for the given data point. Karpukhin et al. (2020) have shown that the inclusion of such \"hard\" confounders leads to substantially improved training results. This training process is illustrated in Figure 1."
|
| 369 |
+
},
|
| 370 |
+
{
|
| 371 |
+
"type": "title",
|
| 372 |
+
"bbox": [
|
| 373 |
+
0.116,
|
| 374 |
+
0.444,
|
| 375 |
+
0.375,
|
| 376 |
+
0.459
|
| 377 |
+
],
|
| 378 |
+
"angle": 0,
|
| 379 |
+
"content": "2.2 Knowledge-intensive Tasks"
|
| 380 |
+
},
|
| 381 |
+
{
|
| 382 |
+
"type": "text",
|
| 383 |
+
"bbox": [
|
| 384 |
+
0.115,
|
| 385 |
+
0.472,
|
| 386 |
+
0.49,
|
| 387 |
+
0.777
|
| 388 |
+
],
|
| 389 |
+
"angle": 0,
|
| 390 |
+
"content": "For the training and evaluation of all models in the paper we make use of KILT, a benchmark and library of datasets (Petroni et al., 2020). KILT consists of a selection of datasets spanning five varied classes of knowledge-intensive tasks (i.e., question answering, slot filling, fact checking, dialogue, entity linking), with the aim to cover many different ways of seeking knowledge. Input queries can vary wildly from one task to the other, and include classic examples of open-domain retrieval tasks such as natural language questions and claims to be verified, as well as more unusual examples like conversation fragments and long chunks of annotated text. Crucially, all datasets distributed in KILT have been re-aligned such that they are all grounded in the same snapshot of Wikipedia, which the authors distribute. The knowledge required to answer any of the queries in the library of tasks can thus be found within the same unified knowledge source."
|
| 391 |
+
},
|
| 392 |
+
{
|
| 393 |
+
"type": "text",
|
| 394 |
+
"bbox": [
|
| 395 |
+
0.115,
|
| 396 |
+
0.782,
|
| 397 |
+
0.49,
|
| 398 |
+
0.909
|
| 399 |
+
],
|
| 400 |
+
"angle": 0,
|
| 401 |
+
"content": "To illustrate the variety of ways in which the input queries for different tasks can be formulated, we provide a few simple examples in Table 1. In spite of the differences between query formulations, all these tasks share one crucial aspect: they all require a retriever to fetch the relevant passages from the knowledge source, in order to support the final downstream task."
|
| 402 |
+
},
|
| 403 |
+
{
|
| 404 |
+
"type": "title",
|
| 405 |
+
"bbox": [
|
| 406 |
+
0.512,
|
| 407 |
+
0.075,
|
| 408 |
+
0.623,
|
| 409 |
+
0.089
|
| 410 |
+
],
|
| 411 |
+
"angle": 0,
|
| 412 |
+
"content": "3 Methods"
|
| 413 |
+
},
|
| 414 |
+
{
|
| 415 |
+
"type": "title",
|
| 416 |
+
"bbox": [
|
| 417 |
+
0.511,
|
| 418 |
+
0.1,
|
| 419 |
+
0.706,
|
| 420 |
+
0.114
|
| 421 |
+
],
|
| 422 |
+
"angle": 0,
|
| 423 |
+
"content": "3.1 Universal retrieval"
|
| 424 |
+
},
|
| 425 |
+
{
|
| 426 |
+
"type": "text",
|
| 427 |
+
"bbox": [
|
| 428 |
+
0.51,
|
| 429 |
+
0.121,
|
| 430 |
+
0.886,
|
| 431 |
+
0.233
|
| 432 |
+
],
|
| 433 |
+
"angle": 0,
|
| 434 |
+
"content": "Using task-specific models to tackle our collection of retrieval tasks would involve completely separate models, one per dataset. As illustrated in Figure 2, this would lead to a proliferation of models and data, down to separate indexed copies of the knowledge source itself (Wikipedia). This setup will form one of our baselines."
|
| 435 |
+
},
|
| 436 |
+
{
|
| 437 |
+
"type": "image",
|
| 438 |
+
"bbox": [
|
| 439 |
+
0.512,
|
| 440 |
+
0.244,
|
| 441 |
+
0.9,
|
| 442 |
+
0.352
|
| 443 |
+
],
|
| 444 |
+
"angle": 0,
|
| 445 |
+
"content": null
|
| 446 |
+
},
|
| 447 |
+
{
|
| 448 |
+
"type": "image_caption",
|
| 449 |
+
"bbox": [
|
| 450 |
+
0.51,
|
| 451 |
+
0.361,
|
| 452 |
+
0.886,
|
| 453 |
+
0.39
|
| 454 |
+
],
|
| 455 |
+
"angle": 0,
|
| 456 |
+
"content": "Figure 2: Two retrieval tasks performed by two fully-specialised models."
|
| 457 |
+
},
|
| 458 |
+
{
|
| 459 |
+
"type": "text",
|
| 460 |
+
"bbox": [
|
| 461 |
+
0.51,
|
| 462 |
+
0.406,
|
| 463 |
+
0.886,
|
| 464 |
+
0.517
|
| 465 |
+
],
|
| 466 |
+
"angle": 0,
|
| 467 |
+
"content": "Multi-task training has been successfully used to allow models to leverage cross-task data, as well as to provide a regularisation effect leading to better generalisation ability (Liu et al., 2019a). We apply this concept to neural retrievers, with the aim of improving performance by jointly leveraging multiple different retrieval datasets."
|
| 468 |
+
},
|
| 469 |
+
{
|
| 470 |
+
"type": "image",
|
| 471 |
+
"bbox": [
|
| 472 |
+
0.52,
|
| 473 |
+
0.527,
|
| 474 |
+
0.691,
|
| 475 |
+
0.655
|
| 476 |
+
],
|
| 477 |
+
"angle": 0,
|
| 478 |
+
"content": null
|
| 479 |
+
},
|
| 480 |
+
{
|
| 481 |
+
"type": "image_caption",
|
| 482 |
+
"bbox": [
|
| 483 |
+
0.519,
|
| 484 |
+
0.66,
|
| 485 |
+
0.691,
|
| 486 |
+
0.673
|
| 487 |
+
],
|
| 488 |
+
"angle": 0,
|
| 489 |
+
"content": "(a) Separate query encoders."
|
| 490 |
+
},
|
| 491 |
+
{
|
| 492 |
+
"type": "image",
|
| 493 |
+
"bbox": [
|
| 494 |
+
0.706,
|
| 495 |
+
0.545,
|
| 496 |
+
0.876,
|
| 497 |
+
0.655
|
| 498 |
+
],
|
| 499 |
+
"angle": 0,
|
| 500 |
+
"content": null
|
| 501 |
+
},
|
| 502 |
+
{
|
| 503 |
+
"type": "image_caption",
|
| 504 |
+
"bbox": [
|
| 505 |
+
0.706,
|
| 506 |
+
0.66,
|
| 507 |
+
0.876,
|
| 508 |
+
0.673
|
| 509 |
+
],
|
| 510 |
+
"angle": 0,
|
| 511 |
+
"content": "(b) A single retrieval model."
|
| 512 |
+
},
|
| 513 |
+
{
|
| 514 |
+
"type": "image_caption",
|
| 515 |
+
"bbox": [
|
| 516 |
+
0.512,
|
| 517 |
+
0.684,
|
| 518 |
+
0.882,
|
| 519 |
+
0.699
|
| 520 |
+
],
|
| 521 |
+
"angle": 0,
|
| 522 |
+
"content": "Figure 3: Parameter sharing between neural retrievers."
|
| 523 |
+
},
|
| 524 |
+
{
|
| 525 |
+
"type": "text",
|
| 526 |
+
"bbox": [
|
| 527 |
+
0.51,
|
| 528 |
+
0.717,
|
| 529 |
+
0.886,
|
| 530 |
+
0.812
|
| 531 |
+
],
|
| 532 |
+
"angle": 0,
|
| 533 |
+
"content": "Our base setup is illustrated in Figure 3b and involves using a shared passage encoder — so that a single index of encoded passages can be used — as well as a query encoder that is shared across all tasks. In essence, in this setup a single DPR model is used to perform all retrieval tasks."
|
| 534 |
+
},
|
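The shared-encoder setup described above reduces retrieval to scoring every indexed passage embedding against a single query embedding. A minimal sketch of that scoring step, in pure Python with toy vectors (function and variable names are illustrative, not from the paper):

```python
def retrieve(query_vec, passage_vecs, k):
    # Dense retrieval: score each indexed passage embedding by its
    # inner product with the query embedding, then return the indices
    # of the k highest-scoring passages.
    scores = [sum(q * p for q, p in zip(query_vec, vec)) for vec in passage_vecs]
    ranked = sorted(range(len(scores)), key=lambda i: -scores[i])
    return ranked[:k]
```

In the actual system the passage embeddings come from the shared passage encoder and are stored in a single index, so the same index serves every task.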
| 535 |
+
{
|
| 536 |
+
"type": "text",
|
| 537 |
+
"bbox": [
|
| 538 |
+
0.51,
|
| 539 |
+
0.814,
|
| 540 |
+
0.886,
|
| 541 |
+
0.91
|
| 542 |
+
],
|
| 543 |
+
"angle": 0,
|
| 544 |
+
"content": "Due to the complexity of training and evaluating retrieval models (which involves training the retriever, embedding all of Wikipedia, and building an index), our main set of experiments is all based on this configuration, which was found to work well in preliminary experiments. However, in order"
|
| 545 |
+
}
|
| 546 |
+
],
|
| 547 |
+
[
|
| 548 |
+
{
|
| 549 |
+
"type": "table",
|
| 550 |
+
"bbox": [
|
| 551 |
+
0.127,
|
| 552 |
+
0.073,
|
| 553 |
+
0.88,
|
| 554 |
+
0.328
|
| 555 |
+
],
|
| 556 |
+
"angle": 0,
|
| 557 |
+
"content": "<table><tr><td>Task</td><td>Example query</td><td>Answer</td><td>Relevant doc.</td></tr><tr><td>Question Answering</td><td>Who is playing the Halftime Show at Super Bowl 2016?</td><td>Coldplay</td><td>The Super Bowl 50 Halftime Show took place on February 7, 2016 ... It was headlined by the British rock group Coldplay.</td></tr><tr><td>Fact Checking</td><td>Bermuda Triangle is in the western part of the Himalayas</td><td>REFUTES</td><td>The Bermuda Triangle ... is a loosely defined region in the western part of the North Atlantic Ocean</td></tr><tr><td>Slot Filling</td><td>Piner Creek [sep] mouth of the watercourse</td><td>Santa Rosa Creek</td><td>Piner Creek discharges to Santa Rosa Creek which in turn ...</td></tr><tr><td>Entity Linking</td><td>Leicestershire take over at top after innings victory. London. [start_ent]West Indian [end_ent] all-rounder Phil Simmons ...</td><td>West Indies cricket team</td><td>The West Indies cricket team is a multi-national men's cricket team representing the Anglophone Caribbean region</td></tr><tr><td>Dialogue</td><td>I am a big fan of Star Trek [sep] I don't know much about it. When did the first episode air? [sep] It debuted in .. [sep] What is the plot of the show?</td><td>William Shatner plays the role of Captain Kirk</td><td>It followed the interstellar adventures of Captain James T. Kirk (William Shatner) and his crew ...</td></tr></table>"
|
| 558 |
+
},
|
| 559 |
+
{
|
| 560 |
+
"type": "table_caption",
|
| 561 |
+
"bbox": [
|
| 562 |
+
0.121,
|
| 563 |
+
0.337,
|
| 564 |
+
0.877,
|
| 565 |
+
0.352
|
| 566 |
+
],
|
| 567 |
+
"angle": 0,
|
| 568 |
+
"content": "Table 1: Illustrative examples of some of the tasks within KILT, and how varied their query formulations can be."
|
| 569 |
+
},
|
| 570 |
+
{
|
| 571 |
+
"type": "text",
|
| 572 |
+
"bbox": [
|
| 573 |
+
0.115,
|
| 574 |
+
0.378,
|
| 575 |
+
0.491,
|
| 576 |
+
0.442
|
| 577 |
+
],
|
| 578 |
+
"angle": 0,
|
| 579 |
+
"content": "to report on the performance of alternative architectures, we also investigate the following additional variants in a restricted experimental setting, limited to a few tasks:"
|
| 580 |
+
},
|
| 581 |
+
{
|
| 582 |
+
"type": "text",
|
| 583 |
+
"bbox": [
|
| 584 |
+
0.138,
|
| 585 |
+
0.451,
|
| 586 |
+
0.49,
|
| 587 |
+
0.596
|
| 588 |
+
],
|
| 589 |
+
"angle": 0,
|
| 590 |
+
"content": "- Task-specific query encoder. A different query encoder is used for each family of tasks, e.g. all question answering tasks use the same query encoder, but fact checking uses a different one. This is meant to allow for potentially different needs in processing queries, given the fundamentally diverse nature of the tasks at hand. This setup configuration is illustrated in Figure 3a."
|
| 591 |
+
},
|
| 592 |
+
{
|
| 593 |
+
"type": "text",
|
| 594 |
+
"bbox": [
|
| 595 |
+
0.138,
|
| 596 |
+
0.607,
|
| 597 |
+
0.492,
|
| 598 |
+
0.766
|
| 599 |
+
],
|
| 600 |
+
"angle": 0,
|
| 601 |
+
"content": "- Task markers. This approach is similar to our base setup, where a single model performs all tasks. Additionally, we introduce specialised tokens which are inserted at the beginning of each query. Their aim is to help the model distinguish between the different tasks, by marking them. We use one task marker for each of the five task classes of KILT, such that all question answering tasks share the same marker."
|
| 602 |
+
},
|
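The task-marker variant can be sketched as a simple preprocessing step on the query text. The exact marker strings below are assumptions for illustration; the paper only specifies that there is one marker per KILT task class:

```python
# Hypothetical marker tokens -- one per KILT task class (not specified in the paper).
TASK_MARKERS = {
    "question_answering": "[QA]",
    "fact_checking": "[FC]",
    "slot_filling": "[SF]",
    "entity_linking": "[EL]",
    "dialogue": "[DG]",
}

def mark_query(query, task_class):
    # Prepend the task-class marker so the single shared query encoder
    # can tell which kind of task a given query belongs to.
    return TASK_MARKERS[task_class] + " " + query
```

All question answering datasets would share the `[QA]` marker, mirroring the one-marker-per-class scheme described above.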
| 603 |
+
{
|
| 604 |
+
"type": "list",
|
| 605 |
+
"bbox": [
|
| 606 |
+
0.138,
|
| 607 |
+
0.451,
|
| 608 |
+
0.492,
|
| 609 |
+
0.766
|
| 610 |
+
],
|
| 611 |
+
"angle": 0,
|
| 612 |
+
"content": null
|
| 613 |
+
},
|
| 614 |
+
{
|
| 615 |
+
"type": "title",
|
| 616 |
+
"bbox": [
|
| 617 |
+
0.116,
|
| 618 |
+
0.777,
|
| 619 |
+
0.422,
|
| 620 |
+
0.792
|
| 621 |
+
],
|
| 622 |
+
"angle": 0,
|
| 623 |
+
"content": "3.2 Adversarial confounder selection"
|
| 624 |
+
},
|
| 625 |
+
{
|
| 626 |
+
"type": "text",
|
| 627 |
+
"bbox": [
|
| 628 |
+
0.115,
|
| 629 |
+
0.798,
|
| 630 |
+
0.491,
|
| 631 |
+
0.911
|
| 632 |
+
],
|
| 633 |
+
"angle": 0,
|
| 634 |
+
"content": "We saw in § 2.1 how \"hard\" confounder passages are collected using a BM25 baseline, following the standard approach in DPR. However, any other retriever can be used to select such confounders, including the very retriever being trained, leading to an iterative, self-adversarial training. Concretely, this amounts to following steps: (1) a first version"
|
| 635 |
+
},
|
| 636 |
+
{
|
| 637 |
+
"type": "text",
|
| 638 |
+
"bbox": [
|
| 639 |
+
0.51,
|
| 640 |
+
0.378,
|
| 641 |
+
0.885,
|
| 642 |
+
0.473
|
| 643 |
+
],
|
| 644 |
+
"angle": 0,
|
| 645 |
+
"content": "of the retriever is trained with BM25 confounders; (2) new confounders are selected with the trained model, by retrieving high-ranking passages which are not among the set of relevant ones; (3) a second version of the model is trained using the additional new confounders."
|
| 646 |
+
},
|
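Step (2) of the self-adversarial loop, selecting high-ranking but non-relevant passages as new confounders, can be sketched as follows (names are illustrative; the ranked list would come from the round-1 retriever):

```python
def select_confounders(ranked_ids, relevant_ids, k):
    # Step (2): from the retriever's ranked results, keep the highest-ranking
    # passages that are NOT among the relevant ones; these become the hard
    # negatives ("confounders") used to train the second-round model.
    return [pid for pid in ranked_ids if pid not in relevant_ids][:k]
```

Round 1 would train with BM25 confounders, round 2 with the output of this selection.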
| 647 |
+
{
|
| 648 |
+
"type": "text",
|
| 649 |
+
"bbox": [
|
| 650 |
+
0.51,
|
| 651 |
+
0.486,
|
| 652 |
+
0.886,
|
| 653 |
+
0.6
|
| 654 |
+
],
|
| 655 |
+
"angle": 0,
|
| 656 |
+
"content": "Intuitively, it is expected that this approach should lead to higher quality confounders compared to those selected by BM25 based on simple keyword matching. Based on our own experience as well as relevant literature (Khattab et al., 2020), this adversarial approach has been shown to work well for question answering."
|
| 657 |
+
},
|
| 658 |
+
{
|
| 659 |
+
"type": "text",
|
| 660 |
+
"bbox": [
|
| 661 |
+
0.51,
|
| 662 |
+
0.611,
|
| 663 |
+
0.886,
|
| 664 |
+
0.803
|
| 665 |
+
],
|
| 666 |
+
"angle": 0,
|
| 667 |
+
"content": "As a way of further pushing the performance of the model, we experiment with this adversarial confounder selection on two datasets, Natural Questions (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017). We selected these two datasets since, out of all of the tasks we are considering, they have an easy way of checking whether a certain passage is relevant or not for a given query – namely, by checking whether the answer is present in the passage. This enabled us to automatically build sets of confounders, ensuring relevant passages would be excluded."
|
| 668 |
+
},
|
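The relevance check used to filter confounders for Natural Questions and TriviaQA amounts to a substring test for the answer span, as the footnote below acknowledges, an imperfect but well-correlated proxy. A minimal sketch (function name is illustrative):

```python
def usable_as_confounder(passage, answers):
    # A retrieved passage may serve as a confounder only if none of the
    # gold answer strings occur in it -- a proxy for irrelevance that is
    # not formally correct but correlates well with model quality.
    text = passage.lower()
    return not any(answer.lower() in text for answer in answers)
```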
| 669 |
+
{
|
| 670 |
+
"type": "page_footnote",
|
| 671 |
+
"bbox": [
|
| 672 |
+
0.51,
|
| 673 |
+
0.86,
|
| 674 |
+
0.887,
|
| 675 |
+
0.911
|
| 676 |
+
],
|
| 677 |
+
"angle": 0,
|
| 678 |
+
"content": "\\( {}^{2} \\) Strictly speaking,assuming a passage to be irrelevant because of the absence of the answer span is not formally correct. However, experiments show a good correlation between this simple check and the overall model quality."
|
| 679 |
+
}
|
| 680 |
+
],
|
| 681 |
+
[
|
| 682 |
+
{
|
| 683 |
+
"type": "table",
|
| 684 |
+
"bbox": [
|
| 685 |
+
0.118,
|
| 686 |
+
0.073,
|
| 687 |
+
0.498,
|
| 688 |
+
0.235
|
| 689 |
+
],
|
| 690 |
+
"angle": 0,
|
| 691 |
+
"content": "<table><tr><td>Dataset</td><td>Task class</td><td>#Train</td></tr><tr><td>FEVER</td><td>Fact Checking</td><td>71 k</td></tr><tr><td>AIDA-YAGO 2</td><td>Entity Linking</td><td>18 k</td></tr><tr><td>T-REx</td><td>Slot Filling</td><td>2,284 k</td></tr><tr><td>Zero Shot RE</td><td>Slot Filling</td><td>132 k</td></tr><tr><td>Natural Questions</td><td>QA</td><td>77 k</td></tr><tr><td>HotpotQA</td><td>QA</td><td>69 k</td></tr><tr><td>TriviaQA</td><td>QA</td><td>53 k</td></tr><tr><td>Wizard of Wikipedia</td><td>Dialogue</td><td>80 k</td></tr></table>"
|
| 692 |
+
},
|
| 693 |
+
{
|
| 694 |
+
"type": "table_caption",
|
| 695 |
+
"bbox": [
|
| 696 |
+
0.115,
|
| 697 |
+
0.244,
|
| 698 |
+
0.49,
|
| 699 |
+
0.274
|
| 700 |
+
],
|
| 701 |
+
"angle": 0,
|
| 702 |
+
"content": "Table 2: KILT datasets used in this work, and the size of our converted training sets for each."
|
| 703 |
+
},
|
| 704 |
+
{
|
| 705 |
+
"type": "title",
|
| 706 |
+
"bbox": [
|
| 707 |
+
0.116,
|
| 708 |
+
0.299,
|
| 709 |
+
0.263,
|
| 710 |
+
0.315
|
| 711 |
+
],
|
| 712 |
+
"angle": 0,
|
| 713 |
+
"content": "4 Experiments"
|
| 714 |
+
},
|
| 715 |
+
{
|
| 716 |
+
"type": "title",
|
| 717 |
+
"bbox": [
|
| 718 |
+
0.116,
|
| 719 |
+
0.326,
|
| 720 |
+
0.334,
|
| 721 |
+
0.341
|
| 722 |
+
],
|
| 723 |
+
"angle": 0,
|
| 724 |
+
"content": "4.1 Experimental settings"
|
| 725 |
+
},
|
| 726 |
+
{
|
| 727 |
+
"type": "text",
|
| 728 |
+
"bbox": [
|
| 729 |
+
0.115,
|
| 730 |
+
0.348,
|
| 731 |
+
0.49,
|
| 732 |
+
0.412
|
| 733 |
+
],
|
| 734 |
+
"angle": 0,
|
| 735 |
+
"content": "Dataset selection For our experiments we select the eight KILT datasets listed in Table 2, which cover all five task classes and include a training split, a validation split, and a held-out test split."
|
| 736 |
+
},
|
| 737 |
+
{
|
| 738 |
+
"type": "text",
|
| 739 |
+
"bbox": [
|
| 740 |
+
0.115,
|
| 741 |
+
0.422,
|
| 742 |
+
0.492,
|
| 743 |
+
0.534
|
| 744 |
+
],
|
| 745 |
+
"angle": 0,
|
| 746 |
+
"content": "Preprocessing Starting from the raw KILT data, we split each Wikipedia article into disjoint 100-token chunks which form our basic retrieval units, following the approach of Wang et al. (2019) and Karpukhin et al. (2020). To maintain the same language introduced in §3, we will simply call these chunks passages."
|
| 747 |
+
},
|
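The 100-token chunking used to build the retrieval units can be sketched as a simple slicing of each article's token sequence (tokenisation itself is left out; the function name is illustrative):

```python
def split_into_passages(tokens, chunk_size=100):
    # Split one article's token sequence into disjoint, contiguous
    # 100-token chunks; the final chunk may be shorter.
    return [tokens[i:i + chunk_size] for i in range(0, len(tokens), chunk_size)]
```

Applied over the whole KILT Wikipedia snapshot, this kind of chunking yields the 36 million passages mentioned below.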
| 748 |
+
{
|
| 749 |
+
"type": "text",
|
| 750 |
+
"bbox": [
|
| 751 |
+
0.115,
|
| 752 |
+
0.536,
|
| 753 |
+
0.49,
|
| 754 |
+
0.696
|
| 755 |
+
],
|
| 756 |
+
"angle": 0,
|
| 757 |
+
"content": "This preprocessing results in a knowledge source of 36 million passages. In order to harmonise all datasets to the same knowledge source, KILT used a mapping strategy based on the BLEU metric to map relevant passages in the original versions of its datasets to passages in its own shared knowledge source (Petroni et al., 2020). Entries included in the KILT training sets which have a mapping BLEU score below 0.5 are likely to be noise, and we exclude them from training."
|
| 758 |
+
},
|
| 759 |
+
{
|
| 760 |
+
"type": "text",
|
| 761 |
+
"bbox": [
|
| 762 |
+
0.115,
|
| 763 |
+
0.707,
|
| 764 |
+
0.49,
|
| 765 |
+
0.867
|
| 766 |
+
],
|
| 767 |
+
"angle": 0,
|
| 768 |
+
"content": "Multi-tasking Training is performed on the union of all training sets. Since two of the training sets are of different orders of magnitude, we use a simple downsampling strategy to bring them to the same order of magnitude as the others. Preliminary experiments with more complex sampling methods, like resampling all datasets so that each epoch would see an equal number of samples from each, found that they had no measurable effect compared to this simpler approach."
|
| 769 |
+
},
|
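The downsampling strategy can be sketched as capping each oversized training set at a fixed size while leaving the smaller ones untouched (the cap value and function name are illustrative, not from the paper):

```python
import random

def downsample(dataset, cap, seed=0):
    # Randomly reduce an oversized training set to at most `cap` examples,
    # bringing it to the same order of magnitude as the other datasets;
    # datasets already at or below the cap are returned unchanged.
    if len(dataset) <= cap:
        return list(dataset)
    return random.Random(seed).sample(dataset, cap)
```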
| 770 |
+
{
|
| 771 |
+
"type": "text",
|
| 772 |
+
"bbox": [
|
| 773 |
+
0.116,
|
| 774 |
+
0.878,
|
| 775 |
+
0.49,
|
| 776 |
+
0.909
|
| 777 |
+
],
|
| 778 |
+
"angle": 0,
|
| 779 |
+
"content": "Encoders Our query and passage encoders are initialised as two distinct BERT base uncased en"
|
| 780 |
+
},
|
| 781 |
+
{
|
| 782 |
+
"type": "text",
|
| 783 |
+
"bbox": [
|
| 784 |
+
0.51,
|
| 785 |
+
0.076,
|
| 786 |
+
0.885,
|
| 787 |
+
0.14
|
| 788 |
+
],
|
| 789 |
+
"angle": 0,
|
| 790 |
+
"content": "coders (Devlin et al., 2019), trained separately. As pooling mechanism we find it effective to simply take the [CLS] token representation at the topmost layer."
|
| 791 |
+
},
|
| 792 |
+
{
|
| 793 |
+
"type": "text",
|
| 794 |
+
"bbox": [
|
| 795 |
+
0.51,
|
| 796 |
+
0.148,
|
| 797 |
+
0.886,
|
| 798 |
+
0.34
|
| 799 |
+
],
|
| 800 |
+
"angle": 0,
|
| 801 |
+
"content": "Training We train our models for up to 80 epochs. To select the best checkpoint, we perform full evaluations of the validation set retrieval performance at regular intervals. We use the Adam optimiser (Kingma and Ba, 2015) with a learning rate of \\(2 \\cdot 10^{-5}\\) with warmup and a linear decay schedule, and a dropout rate of 0.1. The batch size is set to 128 samples, and in preliminary experiments we found no benefit in increasing this further. We use an additional \"hard\" confounder per batch, selected based on BM25 score as in (Karpukhin et al., 2020)."
|
| 802 |
+
},
|
| 803 |
+
{
|
| 804 |
+
"type": "text",
|
| 805 |
+
"bbox": [
|
| 806 |
+
0.51,
|
| 807 |
+
0.349,
|
| 808 |
+
0.886,
|
| 809 |
+
0.476
|
| 810 |
+
],
|
| 811 |
+
"angle": 0,
|
| 812 |
+
"content": "Downstream evaluation When evaluating our retriever within a larger architecture to perform a knowledge-intensive task, we replicate the DPR + BART setup of Petroni et al. (2020). This uses DPR to retrieve and comprehend the top 3 passages to the query, which is then processed by a task-specific fine-tuned BART model to generate the final answer for the end task."
|
| 813 |
+
},
|
| 814 |
+
{
|
| 815 |
+
"type": "title",
|
| 816 |
+
"bbox": [
|
| 817 |
+
0.511,
|
| 818 |
+
0.487,
|
| 819 |
+
0.706,
|
| 820 |
+
0.501
|
| 821 |
+
],
|
| 822 |
+
"angle": 0,
|
| 823 |
+
"content": "4.2 Universal retrieval"
|
| 824 |
+
},
|
| 825 |
+
{
|
| 826 |
+
"type": "text",
|
| 827 |
+
"bbox": [
|
| 828 |
+
0.51,
|
| 829 |
+
0.508,
|
| 830 |
+
0.886,
|
| 831 |
+
0.635
|
| 832 |
+
],
|
| 833 |
+
"angle": 0,
|
| 834 |
+
"content": "The results of the evaluations reported in (Petroni et al., 2020) show that retrievers trained for question answering have poor performance outside of their domain. We would like to understand if it is possible to design a single model which can accurately satisfy the information needs of a wide variety of knowledge-intensive tasks. In short: Can a neural retriever be universal?"
|
| 835 |
+
},
|
| 836 |
+
{
|
| 837 |
+
"type": "text",
|
| 838 |
+
"bbox": [
|
| 839 |
+
0.51,
|
| 840 |
+
0.637,
|
| 841 |
+
0.886,
|
| 842 |
+
0.812
|
| 843 |
+
],
|
| 844 |
+
"angle": 0,
|
| 845 |
+
"content": "We perform a comprehensive evaluation of several models on the eight tasks of Table 2. The setups we evaluate include eight task-specific models (one trained on each of the eight datasets), for which we measure both in-domain and out-of-domain performance, and a BM25 baseline. Additionally, we include a multi-task trained model - as described in §3.1 - with the hope that it can learn to perform all tasks satisfyingly. This amounts to 10 models evaluated on eight tasks each, for a total of 80 evaluations."
|
| 846 |
+
},
|
| 847 |
+
{
|
| 848 |
+
"type": "text",
|
| 849 |
+
"bbox": [
|
| 850 |
+
0.51,
|
| 851 |
+
0.814,
|
| 852 |
+
0.885,
|
| 853 |
+
0.91
|
| 854 |
+
],
|
| 855 |
+
"angle": 0,
|
| 856 |
+
"content": "To measure retrieval performance, we adopt the main metric used for the KILT benchmark, \\( R \\)-precision. This is calculated as \\( r / R \\), where \\( R \\) is the total number of relevant passages for a given query, and \\( r \\) is the number of relevant passages returned among the top- \\( R \\) retrieval results. For the"
|
| 857 |
+
}
|
| 858 |
+
],
|
| 859 |
+
[
|
| 860 |
+
{
|
| 861 |
+
"type": "table",
|
| 862 |
+
"bbox": [
|
| 863 |
+
0.126,
|
| 864 |
+
0.073,
|
| 865 |
+
0.88,
|
| 866 |
+
0.259
|
| 867 |
+
],
|
| 868 |
+
"angle": 0,
|
| 869 |
+
"content": "<table><tr><td rowspan=\"2\">model</td><td rowspan=\"2\">Fact Check. FEV</td><td rowspan=\"2\">Ent. L. AY2</td><td colspan=\"2\">Slot Filling</td><td colspan=\"3\">Open Domain QA</td><td rowspan=\"2\">Dial. WoW</td></tr><tr><td>T-REx</td><td>zsRE</td><td>NQ</td><td>HoPo</td><td>TQA</td></tr><tr><td>Multi-task</td><td>74.72/46.96</td><td>83.78</td><td>69.18/53.54</td><td>77.23/41.70</td><td>61.51/28.80</td><td>44.21/38.42</td><td>61.95/24.56</td><td>39.70/24.07</td></tr><tr><td>BM25</td><td>50.13/40.06</td><td>3.47</td><td>58.60/51.64</td><td>66.43/52.98</td><td>25.83/14.20</td><td>43.95/38.38</td><td>29.44/16.16</td><td>27.50/18.41</td></tr><tr><td colspan=\"9\">Task-specific models</td></tr><tr><td>FEVER</td><td>73.60/43.92</td><td>5.62</td><td>19.50/10.02</td><td>42.88/19.98</td><td>36.69/18.05</td><td>23.18/17.59</td><td>45.08/22.24</td><td>41.27/19.85</td></tr><tr><td>AY2</td><td>47.36/37.58</td><td>81.77</td><td>5.52/4.08</td><td>8.94/5.50</td><td>10.22/6.77</td><td>11.69/10.71</td><td>15.11/8.47</td><td>17.59/13.08</td></tr><tr><td>T-REx</td><td>45.63/25.22</td><td>1.05</td><td>69.08/58.54</td><td>71.64/40.95</td><td>17.10/8.71</td><td>22.31/15.63</td><td>18.10/8.06</td><td>4.02/1.83</td></tr><tr><td>zsRE</td><td>70.10/33.12</td><td>0.42</td><td>68.34/57.40</td><td>97.74/78.81</td><td>25.98/13.81</td><td>22.23/18.35</td><td>28.68/14.44</td><td>10.40/2.09</td></tr><tr><td>NQ</td><td>68.16/14.81</td><td>1.44</td><td>31.78/7.20</td><td>61.12/12.92</td><td>63.24/28.13</td><td>29.39/11.33</td><td>48.39/14.42</td><td>30.77/11.81</td></tr><tr><td>HoPo</td><td>56.18/40.03</td><td>2.07</td><td>35.76/27.62</td><td>44.44/31.15</td><td>35.60/23.26</td><td>46.63/43.47</td><td>41.18/29.37</td><td>23.51/16.02</td></tr><tr><td>TQA</td><td>70.06/10.68</td><td>4.95</td><td>32.22/12.52</td><td>60.37/17.43</td><td>45.01/12.97</td><td>32.62/13.05</td><td>65.12/23.79</td><td>41.17/8.11</td></tr><tr><td>WoW</td><td>59.16/42.79</td><td>3.11</td><td>20.92/18.52</td><td>41.
14/35.26</td><td>33.27/22.52</td><td>20.36/17.66</td><td>39.37/23.15</td><td>40.32/20.73</td></tr></table>"
|
| 870 |
+
},
|
| 871 |
+
{
|
| 872 |
+
"type": "table_caption",
|
| 873 |
+
"bbox": [
|
| 874 |
+
0.115,
|
| 875 |
+
0.269,
|
| 876 |
+
0.885,
|
| 877 |
+
0.298
|
| 878 |
+
],
|
| 879 |
+
"angle": 0,
|
| 880 |
+
"content": "Table 3: Page- and passage-level \\( R \\) -precision on KILT validation data. For the AIDA-YAGO 2 dataset, due to the nature of the task, only page-level retrieval is defined."
|
| 881 |
+
},
|
| 882 |
+
{
|
| 883 |
+
"type": "text",
|
| 884 |
+
"bbox": [
|
| 885 |
+
0.115,
|
| 886 |
+
0.324,
|
| 887 |
+
0.492,
|
| 888 |
+
0.404
|
| 889 |
+
],
|
| 890 |
+
"angle": 0,
|
| 891 |
+
"content": "case of \\( R = 1 \\) this is therefore equivalent to precision@1. Table 3 shows retrieval performance on the validation data, with the best performance on a given dataset marked in bold, and the second best performance underlined."
|
| 892 |
+
},
|
| 893 |
+
{
|
| 894 |
+
"type": "text",
|
| 895 |
+
"bbox": [
|
| 896 |
+
0.114,
|
| 897 |
+
0.406,
|
| 898 |
+
0.491,
|
| 899 |
+
0.549
|
| 900 |
+
],
|
| 901 |
+
"angle": 0,
|
| 902 |
+
"content": "While the KILT evaluation focuses on retrieval at the level of Wikipedia pages (thereby marking as \"hits\" any results that lie within the correct page), we are also interested in performing an evaluation at a more fine-grained level. We therefore also evaluate our models at the passage level, using a modified version of the official KILT evaluation scripts. These are shown as the second number in each column."
|
| 903 |
+
},
|
| 904 |
+
{
|
| 905 |
+
"type": "text",
|
| 906 |
+
"bbox": [
|
| 907 |
+
0.115,
|
| 908 |
+
0.553,
|
| 909 |
+
0.49,
|
| 910 |
+
0.696
|
| 911 |
+
],
|
| 912 |
+
"angle": 0,
|
| 913 |
+
"content": "We straight away notice that the task-specific models tend to achieve high performance on their respective tasks, often taking one of the top two spots. Interestingly, we also note that these neural retrievers consistently outperform the BM25 baseline, showing that the result which Karpukhin et al. (2020) achieved for open-domain question answering also holds for other knowledge-intensive tasks."
|
| 914 |
+
},
|
| 915 |
+
{
|
| 916 |
+
"type": "text",
|
| 917 |
+
"bbox": [
|
| 918 |
+
0.115,
|
| 919 |
+
0.7,
|
| 920 |
+
0.49,
|
| 921 |
+
0.891
|
| 922 |
+
],
|
| 923 |
+
"angle": 0,
|
| 924 |
+
"content": "The results reveal a strong performance for the multi-task model, confirming the hypothesis that a single model can be successfully trained to perform a wide variety of retrieval tasks. With the exception of one dataset, the shared model achieves the best retrieval performance or is within a few percentage points of the top score. We note that the one exception is the Zero-shot RE task (Levy et al., 2017), a trivial task in which the query will always contain the title of the page to be retrieved. Indeed, the model specific to this task manages to achieve a near-perfect score."
|
| 925 |
+
},
|
| 926 |
+
{
|
| 927 |
+
"type": "text",
|
| 928 |
+
"bbox": [
|
| 929 |
+
0.134,
|
| 930 |
+
0.895,
|
| 931 |
+
0.49,
|
| 932 |
+
0.911
|
| 933 |
+
],
|
| 934 |
+
"angle": 0,
|
| 935 |
+
"content": "Another task which stands out for being"
|
| 936 |
+
},
|
| 937 |
+
{
|
| 938 |
+
"type": "text",
|
| 939 |
+
"bbox": [
|
| 940 |
+
0.51,
|
| 941 |
+
0.324,
|
| 942 |
+
0.886,
|
| 943 |
+
0.483
|
| 944 |
+
],
|
| 945 |
+
"angle": 0,
|
| 946 |
+
"content": "markedly different in formulation is AIDA-YAGO 2 (Hoffart et al., 2011). As shown in Table 2, models that were not trained on this specific task perform it very poorly. Entity linking is a task that is normally better performed by models which are explicitly designed for it (Cao et al., 2020). We nevertheless include it to showcase the ability of neural retrievers to adapt to it, and note how well the multi-task retriever performs on it in spite of its unusual nature."
|
| 947 |
+
},
|
| 948 |
+
{
|
| 949 |
+
"type": "title",
|
| 950 |
+
"bbox": [
|
| 951 |
+
0.511,
|
| 952 |
+
0.499,
|
| 953 |
+
0.765,
|
| 954 |
+
0.515
|
| 955 |
+
],
|
| 956 |
+
"angle": 0,
|
| 957 |
+
"content": "4.3 Downstream performance"
|
| 958 |
+
},
|
| 959 |
+
{
|
| 960 |
+
"type": "text",
|
| 961 |
+
"bbox": [
|
| 962 |
+
0.51,
|
| 963 |
+
0.522,
|
| 964 |
+
0.886,
|
| 965 |
+
0.65
|
| 966 |
+
],
|
| 967 |
+
"angle": 0,
|
| 968 |
+
"content": "We saw that our proposed approach achieves strong performance across a variety of retrieval tasks. However, our interest in neural retrievers stems from their use as components within larger systems, to perform tasks such as question answering. Our next experimental question is therefore: Can a universal retriever lead to better downstream performance in knowledge-intensive tasks?"
|
| 969 |
+
},
|
| 970 |
+
{
|
| 971 |
+
"type": "text",
|
| 972 |
+
"bbox": [
|
| 973 |
+
0.51,
|
| 974 |
+
0.652,
|
| 975 |
+
0.886,
|
| 976 |
+
0.763
|
| 977 |
+
],
|
| 978 |
+
"angle": 0,
|
| 979 |
+
"content": "We perform a downstream evaluation of our approach used in conjunction with BART (Lewis et al., 2020a) as the generative component or classifier, adopting the same setup as Petroni et al. (2020). Results are reported in Table 4, with bold and underline marking the best and second best scores respectively."
|
| 980 |
+
},
|
| 981 |
+
{
|
| 982 |
+
"type": "text",
|
| 983 |
+
"bbox": [
|
| 984 |
+
0.51,
|
| 985 |
+
0.766,
|
| 986 |
+
0.886,
|
| 987 |
+
0.911
|
| 988 |
+
],
|
| 989 |
+
"angle": 0,
|
| 990 |
+
"content": "The \\(DPR + BART\\) line refers to a setup similar to our own, but with the simpler retriever of Karpukhin et al. (2020). Therefore, comparing its performance to ours gives us a clear indication of the contribution of multi-task training on the overall performance on knowledge-intensive tasks. Our proposed model achieves significantly better performance than this baseline in AY2, zsRE and HoPo; while for the other tasks, the discrepancy"
|
| 991 |
+
}
|
| 992 |
+
],
|
| 993 |
+
[
|
| 994 |
+
{
|
| 995 |
+
"type": "table",
|
| 996 |
+
"bbox": [
|
| 997 |
+
0.128,
|
| 998 |
+
0.073,
|
| 999 |
+
0.88,
|
| 1000 |
+
0.193
|
| 1001 |
+
],
|
| 1002 |
+
"angle": 0,
|
| 1003 |
+
"content": "<table><tr><td rowspan=\"2\">Model</td><td rowspan=\"2\">Fact Check. FEV</td><td rowspan=\"2\">Ent. L. AY2</td><td rowspan=\"2\">Slot Fill. zsRE</td><td colspan=\"3\">Open Domain QA</td><td rowspan=\"2\">Dial. WoW</td><td rowspan=\"2\">Avg.</td></tr><tr><td>NQ</td><td>HoPo</td><td>TQA</td></tr><tr><td>Multi-task + BART</td><td>86.32</td><td>82.61</td><td>57.95</td><td>39.75</td><td>31.77</td><td>59.60</td><td>15.33</td><td>53.33</td></tr><tr><td>DPR + BART</td><td>86.74</td><td>75.49</td><td>30.43</td><td>41.27</td><td>25.18</td><td>58.55</td><td>15.55</td><td>47.60</td></tr><tr><td>RAG</td><td>86.31</td><td>72.62</td><td>44.74</td><td>44.39</td><td>26.97</td><td>71.27</td><td>13.22</td><td>51.36</td></tr><tr><td>T5</td><td>76.30</td><td>74.05</td><td>9.02</td><td>19.60</td><td>12.64</td><td>18.11</td><td>13.49</td><td>31.89</td></tr></table>"
|
| 1004 |
+
},
|
| 1005 |
+
{
|
| 1006 |
+
"type": "table_caption",
|
| 1007 |
+
"bbox": [
|
| 1008 |
+
0.115,
|
| 1009 |
+
0.202,
|
| 1010 |
+
0.885,
|
| 1011 |
+
0.247
|
| 1012 |
+
],
|
| 1013 |
+
"angle": 0,
|
| 1014 |
+
"content": "Table 4: KILT test scores on the downstream evaluation. Results in the bottom section are as reported in Petroni et al. (2020). The score metrics are accuracy for fact checking, entirely linking and slot filling; exact match for QA; and F1 score for dialogue.\\(^3\\)"
|
| 1015 |
+
},
|
| 1016 |
+
{
|
| 1017 |
+
"type": "text",
|
| 1018 |
+
"bbox": [
|
| 1019 |
+
0.115,
|
| 1020 |
+
0.271,
|
| 1021 |
+
0.49,
|
| 1022 |
+
0.382
|
| 1023 |
+
],
|
| 1024 |
+
"angle": 0,
|
| 1025 |
+
"content": "is always below two points. This fact is reflected in the last column too, showing that on average multi-task training leads to better downstream performance. The model also compares favourably to RAG (Lewis et al., 2020b), a more advanced system in which the query encoder is fine-tuned on the end task."
|
| 1026 |
+
},
|
| 1027 |
+
{
|
| 1028 |
+
"type": "title",
|
| 1029 |
+
"bbox": [
|
| 1030 |
+
0.116,
|
| 1031 |
+
0.395,
|
| 1032 |
+
0.416,
|
| 1033 |
+
0.41
|
| 1034 |
+
],
|
| 1035 |
+
"angle": 0,
|
| 1036 |
+
"content": "4.4 Zero- and few-shot performance"
|
| 1037 |
+
},
|
| 1038 |
+
{
|
| 1039 |
+
"type": "text",
|
| 1040 |
+
"bbox": [
|
| 1041 |
+
0.115,
|
| 1042 |
+
0.416,
|
| 1043 |
+
0.49,
|
| 1044 |
+
0.56
|
| 1045 |
+
],
|
| 1046 |
+
"angle": 0,
|
| 1047 |
+
"content": "Task-specific neural retrievers can achieve higher performance than IR-based methods, but they are not suitable for cases where no training data (or not enough) is available. In those cases, tfidf and BM25 are the better choice. To evaluate the performance of a multi-task retriever as a suitable replacement for them in this scenario, we run a series of experiments in the low data regimes (few-shot and zero-shot)."
|
| 1048 |
+
},
|
| 1049 |
+
{
|
| 1050 |
+
"type": "text",
|
| 1051 |
+
"bbox": [
|
| 1052 |
+
0.115,
|
| 1053 |
+
0.562,
|
| 1054 |
+
0.49,
|
| 1055 |
+
0.786
|
| 1056 |
+
],
|
| 1057 |
+
"angle": 0,
|
| 1058 |
+
"content": "We start by training a set of multi-task retrievers (using the base setup) in the leave-one-out setting for each of the datasets, in order to see how a neural retriever will perform when trained on all domains except for the one it is to be evaluated on. The results of these zero-shot experiments are reported in the second line of Table 5 (again, text here is in bold for the best overall performance, and underlined for second best). They show that, even in the zero-shot setting, the multi-task neural retriever achieves performance that is competitive to BM25, with retrieval being 10 points higher at the page level and 5 points lower at the passage level on average."
|
| 1059 |
+
},
|
| 1060 |
+
{
|
| 1061 |
+
"type": "text",
|
| 1062 |
+
"bbox": [
|
| 1063 |
+
0.115,
|
| 1064 |
+
0.787,
|
| 1065 |
+
0.49,
|
| 1066 |
+
0.851
|
| 1067 |
+
],
|
| 1068 |
+
"angle": 0,
|
| 1069 |
+
"content": "The advantage of neural retrievers over BM25 lies in their ability to improve with training. We therefore look at few-shot training for each task, and create two smaller copies for each of the orig"
|
| 1070 |
+
},
|
| 1071 |
+
{
|
| 1072 |
+
"type": "text",
|
| 1073 |
+
"bbox": [
|
| 1074 |
+
0.51,
|
| 1075 |
+
0.271,
|
| 1076 |
+
0.885,
|
| 1077 |
+
0.431
|
| 1078 |
+
],
|
| 1079 |
+
"angle": 0,
|
| 1080 |
+
"content": "inal training sets with a random sample of 128 and 1,024 examples respectively. In order to evaluate the suitability of a multi-task trained retriever as a starting checkpoint for few-shot training, we take the various leave-one-out models and finetune them on our few-shot training sets. To check whether multi-task pre-training is effective, we also compare these to DPR models (which are just initialised with BERT weights) fine-tuned on the same data."
|
| 1081 |
+
},
|
| 1082 |
+
{
|
| 1083 |
+
"type": "text",
|
| 1084 |
+
"bbox": [
|
| 1085 |
+
0.51,
|
| 1086 |
+
0.435,
|
| 1087 |
+
0.885,
|
| 1088 |
+
0.659
|
| 1089 |
+
],
|
| 1090 |
+
"angle": 0,
|
| 1091 |
+
"content": "The bottom two sections of Table 5 report the results. The most dramatic gains from fine-tuning are seen for AY2, an \"outlier\" task whose formulation differs from that of the other tasks, and which seems to benefit the most from seeing in-domain data. The zsRE performance does not seem to improve from fine-tuning on the smaller dataset, but sees a very big jump when switching to the larger dataset. As a reminder, in this trivial task the title of the page to be retrieved always appears at the start of the query. It is therefore not surprising that models specifically fine-tuned on it can achieve near-perfect scores, as long as enough training data is provided."
|
| 1092 |
+
},
|
| 1093 |
+
{
|
| 1094 |
+
"type": "text",
|
| 1095 |
+
"bbox": [
|
| 1096 |
+
0.51,
|
| 1097 |
+
0.663,
|
| 1098 |
+
0.885,
|
| 1099 |
+
0.774
|
| 1100 |
+
],
|
| 1101 |
+
"angle": 0,
|
| 1102 |
+
"content": "In spite of the fine-tuning, we note that both DPR and the multi-task model fail to improve on their performance for T-REx, suggesting that large amounts of training data are required to learn this task. Nevertheless, the multi-task model proves itself more robust, and achieves the top performance on it."
|
| 1103 |
+
},
|
| 1104 |
+
{
|
| 1105 |
+
"type": "text",
|
| 1106 |
+
"bbox": [
|
| 1107 |
+
0.51,
|
| 1108 |
+
0.779,
|
| 1109 |
+
0.885,
|
| 1110 |
+
0.875
|
| 1111 |
+
],
|
| 1112 |
+
"angle": 0,
|
| 1113 |
+
"content": "Finally, we note for 2 out of 8 tasks, namely zsRE and WoW, DPR achieves lower page-level retrieval scores than the multi-task model, but performs better at the passage level. This shows that fine-grained and coarse-grained retrieval performance are not always perfectly correlated."
|
| 1114 |
+
},
|
| 1115 |
+
{
|
| 1116 |
+
"type": "text",
|
| 1117 |
+
"bbox": [
|
| 1118 |
+
0.51,
|
| 1119 |
+
0.878,
|
| 1120 |
+
0.884,
|
| 1121 |
+
0.911
|
| 1122 |
+
],
|
| 1123 |
+
"angle": 0,
|
| 1124 |
+
"content": "Overall, the experiments show strong results for the multi-task model, with the average zero-shot"
|
| 1125 |
+
},
|
| 1126 |
+
{
|
| 1127 |
+
"type": "page_footnote",
|
| 1128 |
+
"bbox": [
|
| 1129 |
+
0.115,
|
| 1130 |
+
0.86,
|
| 1131 |
+
0.49,
|
| 1132 |
+
0.909
|
| 1133 |
+
],
|
| 1134 |
+
"angle": 0,
|
| 1135 |
+
"content": "3Performing this evaluation required retrieving relevant documents for all training sets. Due to the very large size of T-REx, this particular dataset could not be included in this section."
|
| 1136 |
+
}
|
| 1137 |
+
],
|
| 1138 |
+
[
|
| 1139 |
+
{
|
| 1140 |
+
"type": "table",
|
| 1141 |
+
"bbox": [
|
| 1142 |
+
0.124,
|
| 1143 |
+
0.072,
|
| 1144 |
+
0.885,
|
| 1145 |
+
0.212
|
| 1146 |
+
],
|
| 1147 |
+
"angle": 0,
|
| 1148 |
+
"content": "<table><tr><td>Model</td><td>FEV</td><td>AY2</td><td>T-REx</td><td>zsRE</td><td>NQ</td><td>HoPo</td><td>TQA</td><td>WoW</td><td>Avg.</td></tr><tr><td>BM25</td><td>50.13/40.06</td><td>3.47</td><td>58.60/51.64</td><td>66.43/52.98</td><td>25.83/14.20</td><td>43.95/38.38</td><td>29.44/16.16</td><td>27.50/18.41</td><td>38.17/33.12</td></tr><tr><td colspan=\"10\">Leave-one-out multi-task models</td></tr><tr><td>Zero-shot</td><td>74.11/37.09</td><td>4.16</td><td>67.54/44.84</td><td>73.42/32.65</td><td>47.23/21.50</td><td>34.72/16.52</td><td>49.08/28.06</td><td>36.92/16.19</td><td>48.40/28.12</td></tr><tr><td>Finetune (128)</td><td>75.95/32.75</td><td>32.38</td><td>67.54/44.84</td><td>73.41/32.65</td><td>47.48/14.98</td><td>34.72/27.82</td><td>54.71/19.82</td><td>48.36/17.46</td><td>54.23/27.19</td></tr><tr><td>Finetune (1k)</td><td>73.08/40.83</td><td>70.40</td><td>67.54/44.84</td><td>93.04/58.67</td><td>51.00/19.90</td><td>39.19/35.43</td><td>59.08/20.22</td><td>47.65/19.75</td><td>62.62/34.23</td></tr><tr><td colspan=\"10\">Vanilla DPR models</td></tr><tr><td>Finetune (128)</td><td>37.99/25.31</td><td>26.23</td><td>0.20/0.02</td><td>0.16/0.00</td><td>20.92/9.52</td><td>14.46/14.08</td><td>26.85/10.54</td><td>30.31/17.20</td><td>19.64/10.95</td></tr><tr><td>Finetune (1k)</td><td>70.87/47.82</td><td>72.49</td><td>0.20/0.02</td><td>90.33/80.20</td><td>43.43/19.81</td><td>30.75/30.50</td><td>52.50/17.33</td><td>44.70/24.92</td><td>50.66/31.51</td></tr></table>"
|
| 1149 |
+
},
|
| 1150 |
+
{
|
| 1151 |
+
"type": "table_caption",
|
| 1152 |
+
"bbox": [
|
| 1153 |
+
0.115,
|
| 1154 |
+
0.219,
|
| 1155 |
+
0.884,
|
| 1156 |
+
0.248
|
| 1157 |
+
],
|
| 1158 |
+
"angle": 0,
|
| 1159 |
+
"content": "Table 5: Page- and passage-level \\( R \\) -Precision in the zero-shot setting and with additional fine-tuning of 128 and 1,024 examples. We also compare to a BM25 retriever and a DPR model initialised with BERT weights."
|
| 1160 |
+
},
|
| 1161 |
+
{
|
| 1162 |
+
"type": "text",
|
| 1163 |
+
"bbox": [
|
| 1164 |
+
0.115,
|
| 1165 |
+
0.273,
|
| 1166 |
+
0.492,
|
| 1167 |
+
0.402
|
| 1168 |
+
],
|
| 1169 |
+
"angle": 0,
|
| 1170 |
+
"content": "performance being competitive to BM25, and the average few-shot performance being markedly better than the alternatives. The discrepancy in performance between a vanilla DPR model and the leave-one-out multi-task model is especially noticeable when using the smaller of the two datasets, in which case average performance for the latter is more than double that of vanilla DPR."
|
| 1171 |
+
},
|
| 1172 |
+
{
|
| 1173 |
+
"type": "title",
|
| 1174 |
+
"bbox": [
|
| 1175 |
+
0.116,
|
| 1176 |
+
0.413,
|
| 1177 |
+
0.283,
|
| 1178 |
+
0.427
|
| 1179 |
+
],
|
| 1180 |
+
"angle": 0,
|
| 1181 |
+
"content": "4.5 Model variants"
|
| 1182 |
+
},
|
| 1183 |
+
{
|
| 1184 |
+
"type": "table",
|
| 1185 |
+
"bbox": [
|
| 1186 |
+
0.12,
|
| 1187 |
+
0.442,
|
| 1188 |
+
0.488,
|
| 1189 |
+
0.508
|
| 1190 |
+
],
|
| 1191 |
+
"angle": 0,
|
| 1192 |
+
"content": "<table><tr><td>variant</td><td>FEV</td><td>NQ</td><td>TQA</td></tr><tr><td>Base</td><td>76.38/40.76</td><td>60.91/24.50</td><td>64.77/21.75</td></tr><tr><td>Task markers</td><td>75.84/40.79</td><td>62.31/25.10</td><td>64.04/20.86</td></tr><tr><td>Task-spec. enc.</td><td>73.53/40.02</td><td>61.05/25.52</td><td>64.17/21.23</td></tr></table>"
|
| 1193 |
+
},
|
| 1194 |
+
{
|
| 1195 |
+
"type": "table_caption",
|
| 1196 |
+
"bbox": [
|
| 1197 |
+
0.115,
|
| 1198 |
+
0.517,
|
| 1199 |
+
0.49,
|
| 1200 |
+
0.56
|
| 1201 |
+
],
|
| 1202 |
+
"angle": 0,
|
| 1203 |
+
"content": "Table 6: Multi-task model variants evaluated on a subset of tasks (\\(R\\)-precision on validation data at page/passage level)."
|
| 1204 |
+
},
|
| 1205 |
+
{
|
| 1206 |
+
"type": "text",
|
| 1207 |
+
"bbox": [
|
| 1208 |
+
0.115,
|
| 1209 |
+
0.58,
|
| 1210 |
+
0.49,
|
| 1211 |
+
0.756
|
| 1212 |
+
],
|
| 1213 |
+
"angle": 0,
|
| 1214 |
+
"content": "In this set of experiments we compare our base multi-task model with the two variants described in § 3.1. Due to the high memory consumption of the \"task-specific encoders\" variant (requiring one full query encoder per task family, in addition to the passage encoder), it was only possible to perform these evaluations in a restricted setting of three datasets. The results in Table 6 do not reveal a clear winner, suggesting that the base architecture might be the better choice due to its simplicity and generally good performance.\\(^4\\)"
|
| 1215 |
+
},
|
| 1216 |
+
{
|
| 1217 |
+
"type": "title",
|
| 1218 |
+
"bbox": [
|
| 1219 |
+
0.116,
|
| 1220 |
+
0.767,
|
| 1221 |
+
0.422,
|
| 1222 |
+
0.782
|
| 1223 |
+
],
|
| 1224 |
+
"angle": 0,
|
| 1225 |
+
"content": "4.6 Adversarial confounder selection"
|
| 1226 |
+
},
|
| 1227 |
+
{
|
| 1228 |
+
"type": "text",
|
| 1229 |
+
"bbox": [
|
| 1230 |
+
0.115,
|
| 1231 |
+
0.788,
|
| 1232 |
+
0.492,
|
| 1233 |
+
0.852
|
| 1234 |
+
],
|
| 1235 |
+
"angle": 0,
|
| 1236 |
+
"content": "Finally, we evaluate the adversarial confounder selection method described in § 3.2. This involves augmenting our regular training sets with additional confounders for TriviaQA and Natural Ques"
|
| 1237 |
+
},
|
| 1238 |
+
{
|
| 1239 |
+
"type": "text",
|
| 1240 |
+
"bbox": [
|
| 1241 |
+
0.51,
|
| 1242 |
+
0.274,
|
| 1243 |
+
0.886,
|
| 1244 |
+
0.417
|
| 1245 |
+
],
|
| 1246 |
+
"angle": 0,
|
| 1247 |
+
"content": "tions, selected using our top multi-task trained model. A new multi-task model is then trained from scratch on this augmented data. Its performance is reported in Table 7, showing an overall improvement across multiple tasks. While this approach is demonstrated here on our multi-task model, it is in fact orthogonal to it, and could be applied to any other neural retrievers trained with a contrastive loss."
|
| 1248 |
+
},
|
| 1249 |
+
{
|
| 1250 |
+
"type": "title",
|
| 1251 |
+
"bbox": [
|
| 1252 |
+
0.511,
|
| 1253 |
+
0.432,
|
| 1254 |
+
0.664,
|
| 1255 |
+
0.447
|
| 1256 |
+
],
|
| 1257 |
+
"angle": 0,
|
| 1258 |
+
"content": "5 Related work"
|
| 1259 |
+
},
|
| 1260 |
+
{
|
| 1261 |
+
"type": "text",
|
| 1262 |
+
"bbox": [
|
| 1263 |
+
0.509,
|
| 1264 |
+
0.459,
|
| 1265 |
+
0.886,
|
| 1266 |
+
0.699
|
| 1267 |
+
],
|
| 1268 |
+
"angle": 0,
|
| 1269 |
+
"content": "The approach most closely related to ours is DPR (Karpukhin et al., 2020), upon which we built all our retrieval systems. This model is covered in detail in § 2.1, in addition to the historical context. Another closely related approach is the Retrieval-Augmented Generation (RAG) model of Lewis et al. (2020b). In its base configuration it augments DPR with a generative reader, and it trains the query encoder end-to-end (differing from traditional retriever-reader architectures which treat the two steps as disjoint). A natural extension of the work we have presented would be to combine RAG with our joint learning approach, to study whether it can lead to further gains in performance or robustness."
|
| 1270 |
+
},
|
| 1271 |
+
{
|
| 1272 |
+
"type": "text",
|
| 1273 |
+
"bbox": [
|
| 1274 |
+
0.509,
|
| 1275 |
+
0.701,
|
| 1276 |
+
0.886,
|
| 1277 |
+
0.911
|
| 1278 |
+
],
|
| 1279 |
+
"angle": 0,
|
| 1280 |
+
"content": "A number of promising techniques to boost retrieval performance have been proposed recently. These are orthogonal to our work, and as such they could be combined with it. Amongst these, pretraining methods form one class. Inverse Cloze Task (Lee et al., 2019) and its extensions (Chang et al., 2020) are self-supervised pre-training methods designed for retrieval in open-domain question answering. Whether such specific pre-training is beneficial to tasks other than question answering remains an open question. CERT (Fang et al., 2020) is an alternative pre-training approach, inspired by some recent advances in computer vision. While to"
|
| 1281 |
+
},
|
| 1282 |
+
{
|
| 1283 |
+
"type": "page_footnote",
|
| 1284 |
+
"bbox": [
|
| 1285 |
+
0.115,
|
| 1286 |
+
0.86,
|
| 1287 |
+
0.49,
|
| 1288 |
+
0.911
|
| 1289 |
+
],
|
| 1290 |
+
"angle": 0,
|
| 1291 |
+
"content": "4Not included in this table, due to very poor performance in preliminary experiments, are two further variants: a base model with a single encoder for both queries and passages, and a base model trained from scratch without BERT pre-training."
|
| 1292 |
+
}
|
| 1293 |
+
],
|
| 1294 |
+
[
|
| 1295 |
+
{
|
| 1296 |
+
"type": "table",
|
| 1297 |
+
"bbox": [
|
| 1298 |
+
0.126,
|
| 1299 |
+
0.073,
|
| 1300 |
+
0.882,
|
| 1301 |
+
0.139
|
| 1302 |
+
],
|
| 1303 |
+
"angle": 0,
|
| 1304 |
+
"content": "<table><tr><td rowspan=\"2\">confounders</td><td rowspan=\"2\">Fact Check. FEV</td><td rowspan=\"2\">Ent. L. AY2</td><td rowspan=\"2\">Slot Filling T-REx</td><td rowspan=\"2\">zsRE</td><td colspan=\"3\">Open Domain QA</td><td rowspan=\"2\">Dial. WoW</td></tr><tr><td>NQ</td><td>HoPo</td><td>TQA</td></tr><tr><td>BM25</td><td>74.72/46.96</td><td>83.78</td><td>69.18/53.54</td><td>77.23/41.70</td><td>61.51/28.80</td><td>44.21/38.42</td><td>61.95/24.56</td><td>39.70/24.07</td></tr><tr><td>BM25 + adv</td><td>74.79/52.12</td><td>84.86</td><td>71.36/61.40</td><td>80.04/54.08</td><td>59.25/40.11</td><td>44.08/41.04</td><td>59.19/34.17</td><td>41.04/24.62</td></tr></table>"
|
| 1305 |
+
},
|
| 1306 |
+
{
|
| 1307 |
+
"type": "table_caption",
|
| 1308 |
+
"bbox": [
|
| 1309 |
+
0.115,
|
| 1310 |
+
0.148,
|
| 1311 |
+
0.884,
|
| 1312 |
+
0.178
|
| 1313 |
+
],
|
| 1314 |
+
"angle": 0,
|
| 1315 |
+
"content": "Table 7: Comparison of two confounder selection methods for the multi-task model: simple BM25, and BM25 augmented with adversarial confounders (\\(R\\)-precision on validation data at page/passage level)."
|
| 1316 |
+
},
|
| 1317 |
+
{
|
| 1318 |
+
"type": "text",
|
| 1319 |
+
"bbox": [
|
| 1320 |
+
0.115,
|
| 1321 |
+
0.203,
|
| 1322 |
+
0.49,
|
| 1323 |
+
0.284
|
| 1324 |
+
],
|
| 1325 |
+
"angle": 0,
|
| 1326 |
+
"content": "our knowledge this has not been applied to retrieval problems, we believe it might be promising due to its focus on sentence-level semantics (as opposed to the more standard masked language modelling pre-training, which focuses on the token-level)."
|
| 1327 |
+
},
|
| 1328 |
+
{
|
| 1329 |
+
"type": "text",
|
| 1330 |
+
"bbox": [
|
| 1331 |
+
0.115,
|
| 1332 |
+
0.285,
|
| 1333 |
+
0.492,
|
| 1334 |
+
0.429
|
| 1335 |
+
],
|
| 1336 |
+
"angle": 0,
|
| 1337 |
+
"content": "Another class of orthogonal improvements to dense retrieval involves models which embed passages into multiple fixed-size vectors. Of these, ColBERT (Khattab and Zaharia, 2020) and MEBERT (Luan et al., 2020) are two representative examples. One further approach is ColBERT-QA (Khattab et al., 2020), which additionally uses a data augmentation strategy closely related to our own approach described in § 3.2."
|
| 1338 |
+
},
|
| 1339 |
+
{
|
| 1340 |
+
"type": "text",
|
| 1341 |
+
"bbox": [
|
| 1342 |
+
0.115,
|
| 1343 |
+
0.43,
|
| 1344 |
+
0.49,
|
| 1345 |
+
0.577
|
| 1346 |
+
],
|
| 1347 |
+
"angle": 0,
|
| 1348 |
+
"content": "Finally two entity linkers, GENRE (Cao et al., 2020) and BLINK (Wu et al., 2020), are worth mentioning. Being trained specifically for entity linking, these models will generally outperform retrieval-based approaches on that task. While they are not comparable to retrieval models and will not generally be applicable to information retrieval tasks, we mention them here to provide readers with a fuller context of the existing literature."
|
| 1349 |
+
},
|
| 1350 |
+
{
|
| 1351 |
+
"type": "title",
|
| 1352 |
+
"bbox": [
|
| 1353 |
+
0.116,
|
| 1354 |
+
0.59,
|
| 1355 |
+
0.258,
|
| 1356 |
+
0.606
|
| 1357 |
+
],
|
| 1358 |
+
"angle": 0,
|
| 1359 |
+
"content": "6 Conclusions"
|
| 1360 |
+
},
|
| 1361 |
+
{
|
| 1362 |
+
"type": "text",
|
| 1363 |
+
"bbox": [
|
| 1364 |
+
0.115,
|
| 1365 |
+
0.618,
|
| 1366 |
+
0.49,
|
| 1367 |
+
0.699
|
| 1368 |
+
],
|
| 1369 |
+
"angle": 0,
|
| 1370 |
+
"content": "We have conducted a large-scale experimental study on knowledge-intensive tasks, and how retrieval models that tackle them seek the required information from knowledge bases such as Wikipedia."
|
| 1371 |
+
},
|
| 1372 |
+
{
|
| 1373 |
+
"type": "text",
|
| 1374 |
+
"bbox": [
|
| 1375 |
+
0.115,
|
| 1376 |
+
0.7,
|
| 1377 |
+
0.492,
|
| 1378 |
+
0.811
|
| 1379 |
+
],
|
| 1380 |
+
"angle": 0,
|
| 1381 |
+
"content": "The study started with the question of whether the way in which information is embedded for retrieval purposes is universal. Section 4.2 provided evidence that to a large extent it is, with a single \"universal\" retriever, trained jointly on 8 datasets, often performing comparably to task-specific models."
|
| 1382 |
+
},
|
| 1383 |
+
{
|
| 1384 |
+
"type": "text",
|
| 1385 |
+
"bbox": [
|
| 1386 |
+
0.115,
|
| 1387 |
+
0.814,
|
| 1388 |
+
0.492,
|
| 1389 |
+
0.911
|
| 1390 |
+
],
|
| 1391 |
+
"angle": 0,
|
| 1392 |
+
"content": "Armed with this knowledge, in Section 4.3 we plugged our single model in a larger pipeline, in order to see its contribution to the downstream performance on a wide range of knowledge-intensive tasks. This led to an overall improvement in downstream performance, setting new top results for a"
|
| 1393 |
+
},
|
| 1394 |
+
{
|
| 1395 |
+
"type": "text",
|
| 1396 |
+
"bbox": [
|
| 1397 |
+
0.511,
|
| 1398 |
+
0.203,
|
| 1399 |
+
0.816,
|
| 1400 |
+
0.218
|
| 1401 |
+
],
|
| 1402 |
+
"angle": 0,
|
| 1403 |
+
"content": "number of tasks in the KILT benchmark."
|
| 1404 |
+
},
|
| 1405 |
+
{
|
| 1406 |
+
"type": "text",
|
| 1407 |
+
"bbox": [
|
| 1408 |
+
0.51,
|
| 1409 |
+
0.22,
|
| 1410 |
+
0.886,
|
| 1411 |
+
0.333
|
| 1412 |
+
],
|
| 1413 |
+
"angle": 0,
|
| 1414 |
+
"content": "Next, in Section 4.4, we evaluated the model's performance in the zero-shot and few-shot settings. By evaluating on a wide range of tasks, we were able to show that our proposed approach performs comparably to BM25 in the zero-shot setting, and quickly overtakes it even with minimal in-domain training."
|
| 1415 |
+
},
|
| 1416 |
+
{
|
| 1417 |
+
"type": "text",
|
| 1418 |
+
"bbox": [
|
| 1419 |
+
0.51,
|
| 1420 |
+
0.334,
|
| 1421 |
+
0.886,
|
| 1422 |
+
0.43
|
| 1423 |
+
],
|
| 1424 |
+
"angle": 0,
|
| 1425 |
+
"content": "In Section 4.5 we evaluated a number of more complex variants of the model involving task specialisation, but failed to see clear performance improvements. Finally, in Section 4.6 we saw how a simple iterative approach to data augmentation can lead to better performance."
|
| 1426 |
+
},
|
| 1427 |
+
{
|
| 1428 |
+
"type": "text",
|
| 1429 |
+
"bbox": [
|
| 1430 |
+
0.51,
|
| 1431 |
+
0.432,
|
| 1432 |
+
0.886,
|
| 1433 |
+
0.641
|
| 1434 |
+
],
|
| 1435 |
+
"angle": 0,
|
| 1436 |
+
"content": "In the coming months we will provide a pretrained snapshot of our best-performing model, in the form of a BERT checkpoint. As shown, this model will be useful in zero-shot and few-shot settings as a better performing alternative to both IR-based approaches such as BM25, as well as task-specific models. The multi-task training approach demonstrated here can also be useful in industry settings where several retrieval operations may need to be performed on the same piece of content, and the deployment of multiple task-specific models might not be possible due to space or computational performance concerns."
|
| 1437 |
+
},
|
| 1438 |
+
{
|
| 1439 |
+
"type": "title",
|
| 1440 |
+
"bbox": [
|
| 1441 |
+
0.513,
|
| 1442 |
+
0.671,
|
| 1443 |
+
0.611,
|
| 1444 |
+
0.685
|
| 1445 |
+
],
|
| 1446 |
+
"angle": 0,
|
| 1447 |
+
"content": "References"
|
| 1448 |
+
},
|
| 1449 |
+
{
|
| 1450 |
+
"type": "ref_text",
|
| 1451 |
+
"bbox": [
|
| 1452 |
+
0.512,
|
| 1453 |
+
0.695,
|
| 1454 |
+
0.886,
|
| 1455 |
+
0.735
|
| 1456 |
+
],
|
| 1457 |
+
"angle": 0,
|
| 1458 |
+
"content": "Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2020. Autoregressive entity retrieval. ArXiv, abs/2010.00904."
|
| 1459 |
+
},
|
| 1460 |
+
{
|
| 1461 |
+
"type": "ref_text",
|
| 1462 |
+
"bbox": [
|
| 1463 |
+
0.512,
|
| 1464 |
+
0.749,
|
| 1465 |
+
0.886,
|
| 1466 |
+
0.815
|
| 1467 |
+
],
|
| 1468 |
+
"angle": 0,
|
| 1469 |
+
"content": "Wei-Cheng Chang, Felix X. Yu, Yin-Wen Chang, Yiming Yang, and Sanjiv Kumar. 2020. Pre-training tasks for embedding-based large-scale retrieval. In International Conference on Learning Representations."
|
| 1470 |
+
},
|
| 1471 |
+
{
|
| 1472 |
+
"type": "ref_text",
|
| 1473 |
+
"bbox": [
|
| 1474 |
+
0.512,
|
| 1475 |
+
0.829,
|
| 1476 |
+
0.886,
|
| 1477 |
+
0.884
|
| 1478 |
+
],
|
| 1479 |
+
"angle": 0,
|
| 1480 |
+
"content": "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational"
|
| 1481 |
+
},
|
| 1482 |
+
{
|
| 1483 |
+
"type": "list",
|
| 1484 |
+
"bbox": [
|
| 1485 |
+
0.512,
|
| 1486 |
+
0.695,
|
| 1487 |
+
0.886,
|
| 1488 |
+
0.884
|
| 1489 |
+
],
|
| 1490 |
+
"angle": 0,
|
| 1491 |
+
"content": null
|
| 1492 |
+
},
|
| 1493 |
+
{
|
| 1494 |
+
"type": "page_footnote",
|
| 1495 |
+
"bbox": [
|
| 1496 |
+
0.532,
|
| 1497 |
+
0.895,
|
| 1498 |
+
0.816,
|
| 1499 |
+
0.91
|
| 1500 |
+
],
|
| 1501 |
+
"angle": 0,
|
| 1502 |
+
"content": "E.g. fact checking and hate speech detection."
|
| 1503 |
+
}
|
| 1504 |
+
],
|
| 1505 |
+
[
|
| 1506 |
+
{
|
| 1507 |
+
"type": "ref_text",
|
| 1508 |
+
"bbox": [
|
| 1509 |
+
0.135,
|
| 1510 |
+
0.077,
|
| 1511 |
+
0.491,
|
| 1512 |
+
0.116
|
| 1513 |
+
],
|
| 1514 |
+
"angle": 0,
|
| 1515 |
+
"content": "Linguistics (Volume 1: Long Papers), pages 1870-1879, Vancouver, Canada. Association for Computational Linguistics."
|
| 1516 |
+
},
|
| 1517 |
+
{
|
| 1518 |
+
"type": "ref_text",
|
| 1519 |
+
"bbox": [
|
| 1520 |
+
0.119,
|
| 1521 |
+
0.128,
|
| 1522 |
+
0.492,
|
| 1523 |
+
0.194
|
| 1524 |
+
],
|
| 1525 |
+
"angle": 0,
|
| 1526 |
+
"content": "Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American society for information science, 41(6):391-407."
|
| 1527 |
+
},
|
| 1528 |
+
{
|
| 1529 |
+
"type": "ref_text",
|
| 1530 |
+
"bbox": [
|
| 1531 |
+
0.119,
|
| 1532 |
+
0.207,
|
| 1533 |
+
0.491,
|
| 1534 |
+
0.324
|
| 1535 |
+
],
|
| 1536 |
+
"angle": 0,
|
| 1537 |
+
"content": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics."
|
| 1538 |
+
},
|
| 1539 |
+
{
|
| 1540 |
+
"type": "ref_text",
|
| 1541 |
+
"bbox": [
|
| 1542 |
+
0.119,
|
| 1543 |
+
0.337,
|
| 1544 |
+
0.49,
|
| 1545 |
+
0.388
|
| 1546 |
+
],
|
| 1547 |
+
"angle": 0,
|
| 1548 |
+
"content": "Hongchao Fang, Sicheng Wang, Meng Zhou, Jiayuan Ding, and Pengtao Xie. 2020. CERT: Contrastive self-supervised learning for language understanding. ArXiv, abs/2005.12766."
|
| 1549 |
+
},
|
| 1550 |
+
{
|
| 1551 |
+
"type": "ref_text",
|
| 1552 |
+
"bbox": [
|
| 1553 |
+
0.119,
|
| 1554 |
+
0.402,
|
| 1555 |
+
0.491,
|
| 1556 |
+
0.479
|
| 1557 |
+
],
|
| 1558 |
+
"angle": 0,
|
| 1559 |
+
"content": "Jianfeng Gao, Kristina Toutanova, and Wen-tau Yih. 2011. Clickthrough-based latent semantic models for web search. In Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval, pages 675-684. ACM."
|
| 1560 |
+
},
|
| 1561 |
+
{
|
| 1562 |
+
"type": "ref_text",
|
| 1563 |
+
"bbox": [
|
| 1564 |
+
0.119,
|
| 1565 |
+
0.492,
|
| 1566 |
+
0.49,
|
| 1567 |
+
0.545
|
| 1568 |
+
],
|
| 1569 |
+
"angle": 0,
|
| 1570 |
+
"content": "Ruiqi Guo, Sanjiv Kumar, Krzysztof Choromanski, and David Simcha. 2016. Quantization based fast inner product search. In Artificial Intelligence and Statistics, pages 482-490."
|
| 1571 |
+
},
|
| 1572 |
+
{
|
| 1573 |
+
"type": "ref_text",
|
| 1574 |
+
"bbox": [
|
| 1575 |
+
0.119,
|
| 1576 |
+
0.557,
|
| 1577 |
+
0.491,
|
| 1578 |
+
0.662
|
| 1579 |
+
],
|
| 1580 |
+
"angle": 0,
|
| 1581 |
+
"content": "Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen Fürstenau, Manfred Pinkal, Marc Spanirol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 782-792, Edinburgh, Scotland, UK. Association for Computational Linguistics."
|
| 1582 |
+
},
|
| 1583 |
+
{
|
| 1584 |
+
"type": "ref_text",
|
| 1585 |
+
"bbox": [
|
| 1586 |
+
0.119,
|
| 1587 |
+
0.675,
|
| 1588 |
+
0.49,
|
| 1589 |
+
0.78
|
| 1590 |
+
],
|
| 1591 |
+
"angle": 0,
|
| 1592 |
+
"content": "Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM International Conference on Information and Knowledge Management, page 2333-2338, New York, NY, USA. Association for Computing Machinery."
|
| 1593 |
+
},
|
| 1594 |
+
{
|
| 1595 |
+
"type": "ref_text",
|
| 1596 |
+
"bbox": [
|
| 1597 |
+
0.119,
|
| 1598 |
+
0.792,
|
| 1599 |
+
0.49,
|
| 1600 |
+
0.831
|
| 1601 |
+
],
|
| 1602 |
+
"angle": 0,
|
| 1603 |
+
"content": "J. Johnson, M. Douze, and H. Jégou. 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, pages 1-1."
|
| 1604 |
+
},
|
| 1605 |
+
{
|
| 1606 |
+
"type": "ref_text",
|
| 1607 |
+
"bbox": [
|
| 1608 |
+
0.119,
|
| 1609 |
+
0.843,
|
| 1610 |
+
0.491,
|
| 1611 |
+
0.909
|
| 1612 |
+
],
|
| 1613 |
+
"angle": 0,
|
| 1614 |
+
"content": "Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics"
|
| 1615 |
+
},
|
| 1616 |
+
{
|
| 1617 |
+
"type": "list",
|
| 1618 |
+
"bbox": [
|
| 1619 |
+
0.119,
|
| 1620 |
+
0.077,
|
| 1621 |
+
0.492,
|
| 1622 |
+
0.909
|
| 1623 |
+
],
|
| 1624 |
+
"angle": 0,
|
| 1625 |
+
"content": null
|
| 1626 |
+
},
|
| 1627 |
+
{
|
| 1628 |
+
"type": "ref_text",
|
| 1629 |
+
"bbox": [
|
| 1630 |
+
0.533,
|
| 1631 |
+
0.077,
|
| 1632 |
+
0.885,
|
| 1633 |
+
0.116
|
| 1634 |
+
],
|
| 1635 |
+
"angle": 0,
|
| 1636 |
+
"content": "(Volume 1: Long Papers), pages 1601-1611, Vancouver, Canada. Association for Computational Linguistics."
|
| 1637 |
+
},
|
| 1638 |
+
{
|
| 1639 |
+
"type": "ref_text",
|
| 1640 |
+
"bbox": [
|
| 1641 |
+
0.514,
|
| 1642 |
+
0.127,
|
| 1643 |
+
0.885,
|
| 1644 |
+
0.219
|
| 1645 |
+
],
|
| 1646 |
+
"angle": 0,
|
| 1647 |
+
"content": "Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics."
|
| 1648 |
+
},
|
| 1649 |
+
{
|
| 1650 |
+
"type": "ref_text",
|
| 1651 |
+
"bbox": [
|
| 1652 |
+
0.514,
|
| 1653 |
+
0.23,
|
| 1654 |
+
0.885,
|
| 1655 |
+
0.269
|
| 1656 |
+
],
|
| 1657 |
+
"angle": 0,
|
| 1658 |
+
"content": "Omar Khattab, Christopher Potts, and Matei Zaharia. 2020. Relevance-guided supervision for OpenQA with ColBERT. ArXiv, abs/2007.00814."
|
| 1659 |
+
},
|
| 1660 |
+
{
|
| 1661 |
+
"type": "ref_text",
|
| 1662 |
+
"bbox": [
|
| 1663 |
+
0.514,
|
| 1664 |
+
0.28,
|
| 1665 |
+
0.885,
|
| 1666 |
+
0.346
|
| 1667 |
+
],
|
| 1668 |
+
"angle": 0,
|
| 1669 |
+
"content": "Omar Khattab and Matei Zaharia. 2020. ColBERT: Efficient and effective passage search via contextualized late interaction over BERT. In Proceedings of the 43rd international ACM SIGIR conference on Research and development in Information Retrieval."
|
| 1670 |
+
},
|
| 1671 |
+
{
|
| 1672 |
+
"type": "ref_text",
|
| 1673 |
+
"bbox": [
|
| 1674 |
+
0.514,
|
| 1675 |
+
0.357,
|
| 1676 |
+
0.884,
|
| 1677 |
+
0.396
|
| 1678 |
+
],
|
| 1679 |
+
"angle": 0,
|
| 1680 |
+
"content": "Diederick P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR)."
|
| 1681 |
+
},
|
| 1682 |
+
{
|
| 1683 |
+
"type": "ref_text",
|
| 1684 |
+
"bbox": [
|
| 1685 |
+
0.514,
|
| 1686 |
+
0.407,
|
| 1687 |
+
0.885,
|
| 1688 |
+
0.525
|
| 1689 |
+
],
|
| 1690 |
+
"angle": 0,
|
| 1691 |
+
"content": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics."
|
| 1692 |
+
},
|
| 1693 |
+
{
|
| 1694 |
+
"type": "ref_text",
|
| 1695 |
+
"bbox": [
|
| 1696 |
+
0.514,
|
| 1697 |
+
0.535,
|
| 1698 |
+
0.885,
|
| 1699 |
+
0.614
|
| 1700 |
+
],
|
| 1701 |
+
"angle": 0,
|
| 1702 |
+
"content": "Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086-6096, Florence, Italy. Association for Computational Linguistics."
|
| 1703 |
+
},
|
| 1704 |
+
{
|
| 1705 |
+
"type": "ref_text",
|
| 1706 |
+
"bbox": [
|
| 1707 |
+
0.514,
|
| 1708 |
+
0.625,
|
| 1709 |
+
0.885,
|
| 1710 |
+
0.716
|
| 1711 |
+
],
|
| 1712 |
+
"angle": 0,
|
| 1713 |
+
"content": "Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. 2017. Zero-shot relation extraction via reading comprehension. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 333-342, Vancouver, Canada. Association for Computational Linguistics."
|
| 1714 |
+
},
|
| 1715 |
+
{
|
| 1716 |
+
"type": "ref_text",
|
| 1717 |
+
"bbox": [
|
| 1718 |
+
0.514,
|
| 1719 |
+
0.728,
|
| 1720 |
+
0.885,
|
| 1721 |
+
0.845
|
| 1722 |
+
],
|
| 1723 |
+
"angle": 0,
|
| 1724 |
+
"content": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics."
|
| 1725 |
+
},
|
| 1726 |
+
{
|
| 1727 |
+
"type": "ref_text",
|
| 1728 |
+
"bbox": [
|
| 1729 |
+
0.514,
|
| 1730 |
+
0.856,
|
| 1731 |
+
0.885,
|
| 1732 |
+
0.908
|
| 1733 |
+
],
|
| 1734 |
+
"angle": 0,
|
| 1735 |
+
"content": "Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen-tau Yih, Tim Roektaschel, Sebastian Riedel, and Douwe"
|
| 1736 |
+
},
|
| 1737 |
+
{
|
| 1738 |
+
"type": "list",
|
| 1739 |
+
"bbox": [
|
| 1740 |
+
0.514,
|
| 1741 |
+
0.077,
|
| 1742 |
+
0.885,
|
| 1743 |
+
0.908
|
| 1744 |
+
],
|
| 1745 |
+
"angle": 0,
|
| 1746 |
+
"content": null
|
| 1747 |
+
}
|
| 1748 |
+
],
|
| 1749 |
+
[
|
| 1750 |
+
{
|
| 1751 |
+
"type": "ref_text",
|
| 1752 |
+
"bbox": [
|
| 1753 |
+
0.135,
|
| 1754 |
+
0.077,
|
| 1755 |
+
0.49,
|
| 1756 |
+
0.119
|
| 1757 |
+
],
|
| 1758 |
+
"angle": 0,
|
| 1759 |
+
"content": "Kiela. 2020b. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems (NeurIPS)."
|
| 1760 |
+
},
|
| 1761 |
+
{
|
| 1762 |
+
"type": "ref_text",
|
| 1763 |
+
"bbox": [
|
| 1764 |
+
0.119,
|
| 1765 |
+
0.127,
|
| 1766 |
+
0.492,
|
| 1767 |
+
0.219
|
| 1768 |
+
],
|
| 1769 |
+
"angle": 0,
|
| 1770 |
+
"content": "Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019a. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487-4496, Florence, Italy. Association for Computational Linguistics."
|
| 1771 |
+
},
|
| 1772 |
+
{
|
| 1773 |
+
"type": "ref_text",
|
| 1774 |
+
"bbox": [
|
| 1775 |
+
0.119,
|
| 1776 |
+
0.23,
|
| 1777 |
+
0.492,
|
| 1778 |
+
0.297
|
| 1779 |
+
],
|
| 1780 |
+
"angle": 0,
|
| 1781 |
+
"content": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining approach. ArXiv, abs/1907.11692."
|
| 1782 |
+
},
|
| 1783 |
+
{
|
| 1784 |
+
"type": "ref_text",
|
| 1785 |
+
"bbox": [
|
| 1786 |
+
0.119,
|
| 1787 |
+
0.307,
|
| 1788 |
+
0.492,
|
| 1789 |
+
0.358
|
| 1790 |
+
],
|
| 1791 |
+
"angle": 0,
|
| 1792 |
+
"content": "Yi Luan, Jacob Eisenstein, Kristina Toutanova, and Michael Collins. 2020. Sparse, dense, and attentional representations for text retrieval. *ArXiv*, abs/2005.00181."
|
| 1793 |
+
},
|
| 1794 |
+
{
|
| 1795 |
+
"type": "ref_text",
|
| 1796 |
+
"bbox": [
|
| 1797 |
+
0.119,
|
| 1798 |
+
0.37,
|
| 1799 |
+
0.492,
|
| 1800 |
+
0.411
|
| 1801 |
+
],
|
| 1802 |
+
"angle": 0,
|
| 1803 |
+
"content": "Christopher D Manning, Hinrich Schütze, and Prabhakar Raghavan. 2008. Introduction to information retrieval. Cambridge university press."
|
| 1804 |
+
},
|
| 1805 |
+
{
|
| 1806 |
+
"type": "ref_text",
|
| 1807 |
+
"bbox": [
|
| 1808 |
+
0.119,
|
| 1809 |
+
0.421,
|
| 1810 |
+
0.492,
|
| 1811 |
+
0.5
|
| 1812 |
+
],
|
| 1813 |
+
"angle": 0,
|
| 1814 |
+
"content": "Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vassilis Plachouras, Tim Roktaschel, and Sebastian Riedel. 2020. KILT: a benchmark for knowledge intensive language tasks. In arXiv:2009.02252."
|
| 1815 |
+
},
|
| 1816 |
+
{
|
| 1817 |
+
"type": "ref_text",
|
| 1818 |
+
"bbox": [
|
| 1819 |
+
0.119,
|
| 1820 |
+
0.511,
|
| 1821 |
+
0.492,
|
| 1822 |
+
0.563
|
| 1823 |
+
],
|
| 1824 |
+
"angle": 0,
|
| 1825 |
+
"content": "Fabio Petroni, Tim Rocttäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? EMNLP."
|
| 1826 |
+
},
|
| 1827 |
+
{
|
| 1828 |
+
"type": "ref_text",
|
| 1829 |
+
"bbox": [
|
| 1830 |
+
0.119,
|
| 1831 |
+
0.574,
|
| 1832 |
+
0.492,
|
| 1833 |
+
0.627
|
| 1834 |
+
],
|
| 1835 |
+
"angle": 0,
|
| 1836 |
+
"content": "Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends in Information Retrieval, 3(4):333-389."
|
| 1837 |
+
},
|
| 1838 |
+
{
|
| 1839 |
+
"type": "ref_text",
|
| 1840 |
+
"bbox": [
|
| 1841 |
+
0.119,
|
| 1842 |
+
0.638,
|
| 1843 |
+
0.492,
|
| 1844 |
+
0.703
|
| 1845 |
+
],
|
| 1846 |
+
"angle": 0,
|
| 1847 |
+
"content": "Anshumali Shrivastava and Ping Li. 2014. Asymmetric LSH (ALSH) for sublinear time maximum inner product search (MIPS). In Advances in Neural Information Processing Systems (NIPS), pages 2321-2329."
|
| 1848 |
+
},
|
| 1849 |
+
{
|
| 1850 |
+
"type": "ref_text",
|
| 1851 |
+
"bbox": [
|
| 1852 |
+
0.119,
|
| 1853 |
+
0.715,
|
| 1854 |
+
0.492,
|
| 1855 |
+
0.833
|
| 1856 |
+
],
|
| 1857 |
+
"angle": 0,
|
| 1858 |
+
"content": "James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERIFICATION. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809-819, New Orleans, Louisiana. Association for Computational Linguistics."
|
| 1859 |
+
},
|
| 1860 |
+
{
|
| 1861 |
+
"type": "ref_text",
|
| 1862 |
+
"bbox": [
|
| 1863 |
+
0.119,
|
| 1864 |
+
0.843,
|
| 1865 |
+
0.492,
|
| 1866 |
+
0.908
|
| 1867 |
+
],
|
| 1868 |
+
"angle": 0,
|
| 1869 |
+
"content": "Shuohang Wang, Mo Yu, Xiaoxiao Guo, Z. Wang, Tim Klinger, Wei Zhang, S. Chang, G. Tesauro, Bowen Zhou, and Jing Jiang. 2018. R3: Reinforced ranker-reader for open-domain question answering. In AAAI."
|
| 1870 |
+
},
|
| 1871 |
+
{
|
| 1872 |
+
"type": "list",
|
| 1873 |
+
"bbox": [
|
| 1874 |
+
0.119,
|
| 1875 |
+
0.077,
|
| 1876 |
+
0.492,
|
| 1877 |
+
0.908
|
| 1878 |
+
],
|
| 1879 |
+
"angle": 0,
|
| 1880 |
+
"content": null
|
| 1881 |
+
},
|
| 1882 |
+
{
|
| 1883 |
+
"type": "ref_text",
|
| 1884 |
+
"bbox": [
|
| 1885 |
+
0.513,
|
| 1886 |
+
0.077,
|
| 1887 |
+
0.886,
|
| 1888 |
+
0.196
|
| 1889 |
+
],
|
| 1890 |
+
"angle": 0,
|
| 1891 |
+
"content": "Zhiguo Wang, Patrick Ng, Xiaofei Ma, Ramesh Nallapati, and Bing Xiang. 2019. Multi-passage BERT: A globally normalized BERT model for open-domain question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5878-5882, Hong Kong, China. Association for Computational Linguistics."
|
| 1892 |
+
},
|
| 1893 |
+
{
|
| 1894 |
+
"type": "ref_text",
|
| 1895 |
+
"bbox": [
|
| 1896 |
+
0.513,
|
| 1897 |
+
0.205,
|
| 1898 |
+
0.886,
|
| 1899 |
+
0.297
|
| 1900 |
+
],
|
| 1901 |
+
"angle": 0,
|
| 1902 |
+
"content": "Ledell Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, and Luke Zettlemoyer. 2020. Scalable zero-shot entity linking with dense entity retrieval. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6397-6407, Online. Association for Computational Linguistics."
|
| 1903 |
+
},
|
| 1904 |
+
{
|
| 1905 |
+
"type": "ref_text",
|
| 1906 |
+
"bbox": [
|
| 1907 |
+
0.513,
|
| 1908 |
+
0.307,
|
| 1909 |
+
0.886,
|
| 1910 |
+
0.372
|
| 1911 |
+
],
|
| 1912 |
+
"angle": 0,
|
| 1913 |
+
"content": "Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, and Arnold Overwijk. 2020. Approximate nearest neighbor negative contrastive learning for dense text retrieval. ArXiv, abs/2007.00808."
|
| 1914 |
+
},
|
| 1915 |
+
{
|
| 1916 |
+
"type": "ref_text",
|
| 1917 |
+
"bbox": [
|
| 1918 |
+
0.513,
|
| 1919 |
+
0.382,
|
| 1920 |
+
0.886,
|
| 1921 |
+
0.462
|
| 1922 |
+
],
|
| 1923 |
+
"angle": 0,
|
| 1924 |
+
"content": "Wen-tau Yih, Kristina Toutanova, John C Platt, and Christopher Meek. 2011. Learning discriminative projections for text similarity measures. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning, pages 247-256. Association for Computational Linguistics."
|
| 1925 |
+
},
|
| 1926 |
+
{
|
| 1927 |
+
"type": "list",
|
| 1928 |
+
"bbox": [
|
| 1929 |
+
0.513,
|
| 1930 |
+
0.077,
|
| 1931 |
+
0.886,
|
| 1932 |
+
0.462
|
| 1933 |
+
],
|
| 1934 |
+
"angle": 0,
|
| 1935 |
+
"content": null
|
| 1936 |
+
}
|
| 1937 |
+
]
|
| 1938 |
+
]
data/2021/2101_00xxx/2101.00117/full.md
CHANGED
@@ -1,3 +1,284 @@
# Multi-task Retrieval for Knowledge-Intensive Tasks

Jean Maillard* Vladimir Karpukhin* Fabio Petroni
Wen-tau Yih Barlas Oğuz Veselin Stoyanov Gargi Ghosh
Facebook AI

{jeanm,vladk,fabiopetroni,scottyih,barlaso,ves,gghosh}@fb.com

# Abstract

Retrieving relevant contexts from a large corpus is a crucial step for tasks such as open-domain question answering and fact checking. Although neural retrieval outperforms traditional methods like tfidf and BM25, its performance degrades considerably when applied to out-of-domain data. Driven by the question of whether a neural retrieval model can be universal and perform robustly on a wide variety of problems, we propose a multi-task trained model. Our approach not only outperforms previous methods in the few-shot setting, but also rivals specialised neural retrievers, even when in-domain training data is abundant. With the help of our retriever, we improve existing models for downstream tasks and closely match or improve the state of the art on multiple benchmarks.

# 1 Introduction

Knowledge-intensive tasks is the common designation for a class of real-world NLP problems which, because of their nature, require large amounts of knowledge about the world (Petroni et al., 2020). For example, open-domain question answering requires producing answers to general factoid questions; fact checking involves determining the veracity of claims based on a database of trusted evidence. Practical solutions to these tasks usually involve an efficient retrieval component that, given an input query, selects a limited subset of relevant information from a large knowledge source. Sophisticated downstream models then consider the input only in the context of the retrieved information, and perform the final task.

The standard retrieval component in many systems (e.g., Thorne et al., 2018; Wang et al., 2018; Chen et al., 2017) has long relied on term-matching methods, such as tfidf or BM25 (Robertson and Zaragoza, 2009). These methods rely on efficient algorithms and usually perform reasonably well regardless of the problem. In contrast, recent neural retrieval models, such as ICT (Lee et al., 2019), DPR (Karpukhin et al., 2020) and RAG (Lewis et al., 2020b) achieve better results by learning directly from task-specific training data and going beyond simple keyword matching. While task specialisation results in improved task performance, researchers have observed that a retriever trained for one specific domain will typically achieve low out-of-domain performance, and even lower performance on entirely different tasks (Petroni et al., 2020). This has two implications. First, unlike tfidf or BM25, neural retrieval models are unsuitable for low data regimes such as few- and zero-shot settings. Second, task-specific retrievers complicate practical applications where multiple knowledge-intensive tasks may need to be performed using the same supporting database or over the same input text. It may not be practical to deploy multiple separate specialised models due to computational performance or memory concerns.

We ask the following question in this work: can we develop a universal neural retriever? Namely, we target a retriever that can perform well on a wide variety of problems, without task-specific fine-tuning, but, if additional in-domain labelled data is available, it can be further fine-tuned to improve the performance. We perform a large experimental study to attempt to build such a universal retrieval model. We find that, by jointly training on an extensive selection of retrieval tasks, we obtain a model which is not only more robust than previous approaches, but also can lead to better performance on the downstream knowledge-intensive tasks when plugged into an existing system. Our approach combines the benefits from IR-based models with those of task-specific neural retrievers – namely, good performance when no (or not enough) training data is available and high task performance due to its ability to learn highly specialised representations.

Our contributions can be summarised as follows.

- We propose a single general-purpose "universal" retrieval model, able to perform comparably or better than specialised retriever approaches in both zero-shot (leave-one-out) and few-shot retrieval. We investigate several model variants, shedding light on which aspects of the architecture affect its performance.
- We show that our model's gains in terms of retrieval directly translate into performance gains for a variety of downstream knowledge-intensive tasks.
- We will share the implementation as well as our best model. This is in the form of a readily available BERT checkpoint which, as we will show, can be used by NLP practitioners as a strong out-of-the-box retrieval system, but which can also undergo further in-domain training for even higher performance.

# 2 Background

In this section, we first give an overview of retrieval methods based on sparse and dense representations. We then discuss a wide range of knowledge-intensive NLP tasks, where retrieval plays a crucial role in solving the problems.

# 2.1 Retrieval methods

Given a large collection of unstructured text passages, information retrieval (IR) can be broadly defined as finding a small set of passages that satisfies an information need, often presented in the form of a short-text query (Manning et al., 2008). Traditional IR methods, such as tfidf and BM25 (Robertson and Zaragoza, 2009), match keywords efficiently with an inverted index. Such methods can be seen as representing queries and passages in high-dimensional, sparse vectors, where each dimension corresponds to a term in the vocabulary and the weight indicates its importance.

In contrast to tfidf and BM25, dense retrieval methods encode text as a latent semantic vector of a fixed, much smaller dimensionality. Whether a passage is relevant to a given query is determined by the distance of their vectors (Deerwester et al., 1990). Although dense representations do not encode tokens explicitly and can potentially map paraphrases of completely different tokens to close vectors, the performance of early dense retrieval methods was often inferior to term-matching approaches, except when large labelled data is available (Yih et al., 2011; Gao et al., 2011; Huang et al., 2013). Thanks to the success of large pre-trained models (Devlin et al., 2019; Liu et al., 2019b), however, recent dense retrieval methods have been shown to outperform their sparse counterparts when fine-tuned on a small set of in-domain labelled data (Karpukhin et al., 2020; Lewis et al., 2020b; Xiong et al., 2020). Efficient indexing and search of dense vectors are made possible by maximum inner product search (MIPS) algorithms (e.g., Shrivastava and Li, 2014; Guo et al., 2016), as well as tools like FAISS (Johnson et al., 2019).

Our work is built upon the Dense Passage Retriever (DPR) architecture of Karpukhin et al. (2020), which was initially proposed for the task of open-domain question answering. DPR is a neural bi-encoder model which embeds queries with an encoder $f(\cdot)$ and passages with a separate encoder $g(\cdot)$. Given an input query $x$ and a target passage $y$, we have

$$
\mathrm{p}(x \mid y) \propto \operatorname{sim}(x, y),
$$

where the similarity score $\operatorname{sim}(x, y)$ is defined as the inner product of the embeddings of its arguments, $f(x) \cdot g(y)$. Given a query at inference time, calculating its similarity with every possible passage would be prohibitive for large knowledge sources. Therefore, DPR makes use of the FAISS library (Johnson et al., 2019) to perform fast approximate nearest neighbour search in sub-linear time.
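As a concrete illustration of this scoring scheme, the sketch below scores one query against an indexed matrix of passage embeddings using plain numpy; the random vectors stand in for the outputs of $f$ and $g$, and the exhaustive top-$k$ search is the exact operation that FAISS approximates in sub-linear time.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                # embedding size (768 for BERT-base in DPR)
index = rng.normal(size=(1000, d))   # g(y) for every passage in the knowledge source
query = rng.normal(size=d)           # f(x) for a single input query

# sim(x, y) = f(x) . g(y): one inner product per indexed passage
scores = index @ query

# exhaustive top-k by inner product; FAISS replaces this with approximate search
k = 3
top_k = np.argsort(-scores)[:k]
```

In practice the index matrix is embedded and stored once, while queries are encoded on the fly.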

Training is based on a contrastive loss. Given a query $x$, a relevant passage $y$, and a set of $n$ irrelevant passages $y_{i}^{-}$, we train the model by optimising the following negative log likelihood:

$$
\mathcal{L} = - \log \frac{\exp(\operatorname{sim}(x, y))}{\exp(\operatorname{sim}(x, y)) + \sum_{i=1}^{n} \exp(\operatorname{sim}(x, y_{i}^{-}))}.
$$

As the set of irrelevant passages, we use the relevant passages for other queries within the same batch, as well as a specially selected "hard" confounder. This is a passage which has high lexical overlap with the query (high BM25 score), but is not among the set of relevant passages for the given data point. Karpukhin et al. (2020) have shown that the inclusion of such "hard" confounders leads to substantially improved training results. This training process is illustrated in Figure 1.

![](images/2bfe7958bba648ab6d9e7421bc0a2b6c06f0aea81a513d7e0c7e0ad5ed6f5bf2.jpg)

Figure 1: Training of DPR (Karpukhin et al., 2020), a bi-encoder model for open-domain question answering. Queries and passages are encoded as vectors, and retrieval is performed as a maximum inner product search.
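A minimal numpy sketch of this objective for one batch, using in-batch negatives only (a hard confounder would simply contribute one extra passage row per query); the embeddings here are random stand-ins for encoder outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, d = 4, 8
q = rng.normal(size=(batch, d))  # query embeddings f(x), one row per query
p = rng.normal(size=(batch, d))  # embeddings g(y) of each query's relevant passage

# sim[i, j] scores query i against passage j: the diagonal holds the positive
# pairs, and every off-diagonal entry acts as an in-batch negative
sim = q @ p.T

# per-query log-softmax over all passages in the batch, then NLL of the positive
log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_probs))
```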

# 2.2 Knowledge-intensive Tasks

For the training and evaluation of all models in the paper we make use of KILT, a benchmark and library of datasets (Petroni et al., 2020). KILT consists of a selection of datasets spanning five varied classes of knowledge-intensive tasks (i.e., question answering, slot filling, fact checking, dialogue, entity linking), with the aim to cover many different ways of seeking knowledge. Input queries can vary wildly from one task to the other, and include classic examples of open-domain retrieval tasks such as natural language questions and claims to be verified, as well as more unusual examples like conversation fragments and long chunks of annotated text. Crucially, all datasets distributed in KILT have been re-aligned such that they are all grounded in the same snapshot of Wikipedia, which the authors distribute. The knowledge required to answer any of the queries in the library of tasks can thus be found within the same unified knowledge source.

To illustrate the variety of ways in which the input queries for different tasks can be formulated, we provide a few simple examples in Table 1. In spite of the differences between query formulations, all these tasks share one crucial aspect: they all require a retriever to fetch the relevant passages from the knowledge source, in order to support the final downstream task.

# 3 Methods

# 3.1 Universal retrieval

Using task-specific models to tackle our collection of retrieval tasks would involve completely separate models, one per dataset. As illustrated in Figure 2, this would lead to a proliferation of models and data, down to separate indexed copies of the knowledge source itself (Wikipedia). This setup will form one of our baselines.

![](images/3a424fdf04c3d95e37d5e37621b9aebee5fa2c503966eec6c94b27e1d1b06c48.jpg)

Figure 2: Two retrieval tasks performed by two fully-specialised models.

Multi-task training has been successfully used to allow models to leverage cross-task data, as well as to provide a regularisation effect leading to better generalisation ability (Liu et al., 2019a). We apply this concept to neural retrievers, with the aim of improving performance by jointly leveraging multiple different retrieval datasets.

![](images/0142dd3d3d7246f6e0f1565ba31b33cca1041619d35160bbb17dbb9340e4e4e5.jpg)

(a) Separate query encoders.

![](images/3b2248b54d4b8e331eeb62b4ab9ab53a5f1d74a60a859c97b1d38c44f425e321.jpg)

(b) A single retrieval model.

Figure 3: Parameter sharing between neural retrievers.

Our base setup is illustrated in Figure 3b and involves using a shared passage encoder — so that a single index of encoded passages can be used — as well as a query encoder that is shared across all tasks. In essence, in this setup a single DPR model is used to perform all retrieval tasks.

<table><tr><td>Task</td><td>Example query</td><td>Answer</td><td>Relevant doc.</td></tr><tr><td>Question Answering</td><td>Who is playing the Halftime Show at Super Bowl 2016?</td><td>Coldplay</td><td>The Super Bowl 50 Halftime Show took place on February 7, 2016 ... It was headlined by the British rock group Coldplay.</td></tr><tr><td>Fact Checking</td><td>Bermuda Triangle is in the western part of the Himalayas</td><td>REFUTES</td><td>The Bermuda Triangle ... is a loosely defined region in the western part of the North Atlantic Ocean</td></tr><tr><td>Slot Filling</td><td>Piner Creek [sep] mouth of the watercourse</td><td>Santa Rosa Creek</td><td>Piner Creek discharges to Santa Rosa Creek which in turn ...</td></tr><tr><td>Entity Linking</td><td>Leicestershire take over at top after innings victory. London. [start_ent]West Indian [end_ent] all-rounder Phil Simmons ...</td><td>West Indies cricket team</td><td>The West Indies cricket team is a multi-national men's cricket team representing the Anglophone Caribbean region</td></tr><tr><td>Dialogue</td><td>I am a big fan of Star Trek [sep] I don't know much about it. When did the first episode air? [sep] It debuted in .. [sep] What is the plot of the show?</td><td>William Shatner plays the role of Captain Kirk</td><td>It followed the interstellar adventures of Captain James T. Kirk (William Shatner) and his crew ...</td></tr></table>

Table 1: Illustrative examples of some of the tasks within KILT, and how varied their query formulations can be.

Due to the complexity of training and evaluating retrieval models (which involves training the retriever, embedding all of Wikipedia, and building an index), our main set of experiments is based on this configuration, which was found to work well in preliminary experiments. However, in order to report on the performance of alternative architectures, we also investigate the following additional variants in a restricted experimental setting, limited to a few tasks:

- Task-specific query encoder. A different query encoder is used for each family of tasks, e.g. all question answering tasks use the same query encoder, but fact checking uses a different one. This is meant to allow for potentially different needs in processing queries, given the fundamentally diverse nature of the tasks at hand. This configuration is illustrated in Figure 3a.
- Task markers. This approach is similar to our base setup, where a single model performs all tasks. Additionally, we introduce specialised tokens which are inserted at the beginning of each query. Their aim is to help the model distinguish between the different tasks, by marking them. We use one task marker for each of the five task classes of KILT, such that all question answering tasks share the same marker.
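The task-marker variant only changes how queries are presented to the shared encoder. A sketch, where the marker strings themselves are made up for illustration:

```python
# One marker per KILT task class (the marker strings here are illustrative)
TASK_MARKERS = {
    "question_answering": "[QA]",
    "fact_checking": "[FC]",
    "slot_filling": "[SF]",
    "entity_linking": "[EL]",
    "dialogue": "[DIA]",
}

def mark_query(query: str, task_class: str) -> str:
    """Prepend the class marker so one shared encoder can tell tasks apart."""
    return f"{TASK_MARKERS[task_class]} {query}"

marked = mark_query("Bermuda Triangle is in the western part of the Himalayas",
                    "fact_checking")
```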

# 3.2 Adversarial confounder selection

We saw in § 2.1 how "hard" confounder passages are collected using a BM25 baseline, following the standard approach in DPR. However, any other retriever can be used to select such confounders, including the very retriever being trained, leading to an iterative, self-adversarial training. Concretely, this amounts to the following steps: (1) a first version of the retriever is trained with BM25 confounders; (2) new confounders are selected with the trained model, by retrieving high-ranking passages which are not among the set of relevant ones; (3) a second version of the model is trained using the additional new confounders.

Intuitively, it is expected that this approach should lead to higher quality confounders compared to those selected by BM25 based on simple keyword matching. Based on our own experience as well as relevant literature (Khattab et al., 2020), this adversarial approach has been shown to work well for question answering.

As a way of further pushing the performance of the model, we experiment with this adversarial confounder selection on two datasets, Natural Questions (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017). We selected these two datasets since, out of all of the tasks we are considering, they have an easy way of checking whether a certain passage is relevant or not for a given query – namely, by checking whether the answer is present in the passage. This enabled us to automatically build sets of confounders, ensuring relevant passages would be excluded.
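The selection step (2) can be sketched as follows; the scores are stand-ins for the first-round model's retrieval scores, and the answer-in-passage check mirrors how relevance was verified for Natural Questions and TriviaQA.

```python
def mine_confounders(scores, passages, relevant_ids, answer, k=2):
    """Step 2: top-scoring passages that are neither gold nor contain the answer."""
    ranked = sorted(range(len(passages)), key=lambda i: -scores[i])
    hard = [i for i in ranked
            if i not in relevant_ids and answer not in passages[i]]
    return hard[:k]

passages = [
    "coldplay headlined the super bowl 50 halftime show",  # gold passage
    "the halftime show is an annual exhibition concert",   # plausible but wrong
    "coldplay is a british rock band formed in london",    # contains the answer
    "the super bowl is the annual league championship",    # plausible but wrong
]
scores = [0.9, 0.8, 0.7, 0.6]  # stand-in for the step-1 retriever's scores
hard_negatives = mine_confounders(scores, passages, relevant_ids={0},
                                  answer="coldplay")
# step 3 would then retrain the retriever with these extra confounders
```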

<table><tr><td>Dataset</td><td>Task class</td><td>#Train</td></tr><tr><td>FEVER</td><td>Fact Checking</td><td>71 k</td></tr><tr><td>AIDA-YAGO 2</td><td>Entity Linking</td><td>18 k</td></tr><tr><td>T-REx</td><td>Slot Filling</td><td>2,284 k</td></tr><tr><td>Zero Shot RE</td><td>Slot Filling</td><td>132 k</td></tr><tr><td>Natural Questions</td><td>QA</td><td>77 k</td></tr><tr><td>HotpotQA</td><td>QA</td><td>69 k</td></tr><tr><td>TriviaQA</td><td>QA</td><td>53 k</td></tr><tr><td>Wizard of Wikipedia</td><td>Dialogue</td><td>80 k</td></tr></table>

Table 2: KILT datasets used in this work, and the size of our converted training sets for each.

# 4 Experiments

# 4.1 Experimental settings

**Dataset selection** For our experiments we select the eight KILT datasets listed in Table 2, which cover all five task classes and include a training split, a validation split, and a held-out test split.

**Preprocessing** Starting from the raw KILT data, we split each Wikipedia article into disjoint 100-token chunks which form our basic retrieval units, following the approach of Wang et al. (2019) and Karpukhin et al. (2020). To maintain the same language introduced in § 3, we will simply call these chunks passages.

This preprocessing results in a knowledge source of 36 million passages. In order to harmonise all datasets to the same knowledge source, KILT used a mapping strategy based on the BLEU metric to map relevant passages in the original versions of its datasets to passages in its own shared knowledge source (Petroni et al., 2020). Entries included in the KILT training sets which have a mapping BLEU score below 0.5 are likely to be noise, and we exclude them from training.
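The chunking step can be sketched as below; whitespace tokenisation stands in for the actual tokeniser used on the KILT Wikipedia dump, and the chunk size is the 100 tokens stated above.

```python
def split_into_passages(article: str, size: int = 100):
    """Split an article into disjoint chunks of at most `size` tokens."""
    tokens = article.split()  # whitespace split stands in for a real tokeniser
    return [" ".join(tokens[i:i + size]) for i in range(0, len(tokens), size)]

article = " ".join(f"tok{i}" for i in range(250))
passages = split_into_passages(article)
# 250 tokens -> two full 100-token passages plus a 50-token remainder
```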

**Multi-tasking** Training is performed on the union of all training sets. Since two of the training sets are of different orders of magnitude, we use a simple downsampling strategy to bring them to the same order of magnitude as the others. Preliminary experiments with more complex sampling methods, like resampling all datasets so that each epoch would see an equal number of samples from each, found that they had no measurable effect compared to this simpler approach.

**Encoders** Our query and passage encoders are initialised as two distinct BERT base uncased encoders (Devlin et al., 2019), trained separately. As pooling mechanism we find it effective to simply take the [CLS] token representation at the topmost layer.
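The downsampling step can be sketched as follows; the cap value and uniform random subsampling are assumptions, chosen only to show the mechanics of bringing an oversized training set in line with the others.

```python
import random

def downsample(dataset, max_size, seed=0):
    """Randomly subsample an oversized training set down to max_size examples."""
    if len(dataset) <= max_size:
        return list(dataset)
    return random.Random(seed).sample(list(dataset), max_size)

# Toy stand-ins: one oversized set (like T-REx) and one already-small set
big = [("big", i) for i in range(5000)]
small = [("small", i) for i in range(80)]
train = downsample(big, 100) + downsample(small, 100)  # union of capped sets
```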

**Training** We train our models for up to 80 epochs. To select the best checkpoint, we perform full evaluations of the validation set retrieval performance at regular intervals. We use the Adam optimiser (Kingma and Ba, 2015) with a learning rate of $2 \cdot 10^{-5}$ with warmup and a linear decay schedule, and a dropout rate of 0.1. The batch size is set to 128 samples, and in preliminary experiments we found no benefit in increasing this further. We use an additional "hard" confounder per batch, selected based on BM25 score as in Karpukhin et al. (2020).

**Downstream evaluation** When evaluating our retriever within a larger architecture to perform a knowledge-intensive task, we replicate the DPR + BART setup of Petroni et al. (2020). This uses DPR to retrieve the top 3 passages for the query, which are then processed by a task-specific fine-tuned BART model to generate the final answer for the end task.
| 132 |
+
|
| 133 |
+
# 4.2 Universal retrieval
|
| 134 |
+
|
| 135 |
+
The results of the evaluations reported in (Petroni et al., 2020) show that retrievers trained for question answering have poor performance outside of their domain. We would like to understand if it is possible to design a single model which can accurately satisfy the information needs of a wide variety of knowledge-intensive tasks. In short: Can a neural retriever be universal?
|
| 136 |
+
|
| 137 |
+
We perform a comprehensive evaluation of several models on the eight tasks of Table 2. The setups we evaluate include eight task-specific models (one trained on each of the eight datasets), for which we measure both in-domain and out-of-domain performance, and a BM25 baseline. Additionally, we include a multi-task trained model - as described in §3.1 - with the hope that it can learn to perform all tasks satisfyingly. This amounts to 10 models evaluated on eight tasks each, for a total of 80 evaluations.
|
| 138 |
+
|
| 139 |
+
To measure retrieval performance, we adopt the main metric used for the KILT benchmark, $R$ -precision. This is calculated as $r / R$ , where $R$ is the total number of relevant passages for a given query, and $r$ is the number of relevant passages returned among the top- $R$ retrieval results. For the
|
| 140 |
+
|
| 141 |
+
<table><tr><td rowspan="2">model</td><td rowspan="2">Fact Check. FEV</td><td rowspan="2">Ent. L. AY2</td><td colspan="2">Slot Filling</td><td colspan="3">Open Domain QA</td><td rowspan="2">Dial. WoW</td></tr><tr><td>T-REx</td><td>zsRE</td><td>NQ</td><td>HoPo</td><td>TQA</td></tr><tr><td>Multi-task</td><td>74.72/46.96</td><td>83.78</td><td>69.18/53.54</td><td>77.23/41.70</td><td>61.51/28.80</td><td>44.21/38.42</td><td>61.95/24.56</td><td>39.70/24.07</td></tr><tr><td>BM25</td><td>50.13/40.06</td><td>3.47</td><td>58.60/51.64</td><td>66.43/52.98</td><td>25.83/14.20</td><td>43.95/38.38</td><td>29.44/16.16</td><td>27.50/18.41</td></tr><tr><td colspan="9">Task-specific models</td></tr><tr><td>FEVER</td><td>73.60/43.92</td><td>5.62</td><td>19.50/10.02</td><td>42.88/19.98</td><td>36.69/18.05</td><td>23.18/17.59</td><td>45.08/22.24</td><td>41.27/19.85</td></tr><tr><td>AY2</td><td>47.36/37.58</td><td>81.77</td><td>5.52/4.08</td><td>8.94/5.50</td><td>10.22/6.77</td><td>11.69/10.71</td><td>15.11/8.47</td><td>17.59/13.08</td></tr><tr><td>T-REx</td><td>45.63/25.22</td><td>1.05</td><td>69.08/58.54</td><td>71.64/40.95</td><td>17.10/8.71</td><td>22.31/15.63</td><td>18.10/8.06</td><td>4.02/1.83</td></tr><tr><td>zsRE</td><td>70.10/33.12</td><td>0.42</td><td>68.34/57.40</td><td>97.74/78.81</td><td>25.98/13.81</td><td>22.23/18.35</td><td>28.68/14.44</td><td>10.40/2.09</td></tr><tr><td>NQ</td><td>68.16/14.81</td><td>1.44</td><td>31.78/7.20</td><td>61.12/12.92</td><td>63.24/28.13</td><td>29.39/11.33</td><td>48.39/14.42</td><td>30.77/11.81</td></tr><tr><td>HoPo</td><td>56.18/40.03</td><td>2.07</td><td>35.76/27.62</td><td>44.44/31.15</td><td>35.60/23.26</td><td>46.63/43.47</td><td>41.18/29.37</td><td>23.51/16.02</td></tr><tr><td>TQA</td><td>70.06/10.68</td><td>4.95</td><td>32.22/12.52</td><td>60.37/17.43</td><td>45.01/12.97</td><td>32.62/13.05</td><td>65.12/23.79</td><td>41.17/8.11</td></tr><tr><td>WoW</td><td>59.16/42.79</td><td>3.11</td><td>20.92/18.52</td><td>41.14/35.26</td><td>33.27/22.52</td><td>20.36/17.66</td><td>39.37/23.15</td><td>40.32/20.73</td></tr></table>
|
| 142 |
+
|
| 143 |
+
Table 3: Page- and passage-level $R$ -precision on KILT validation data. For the AIDA-YAGO 2 dataset, due to the nature of the task, only page-level retrieval is defined.
|
| 144 |
+
|
| 145 |
+
case of $R = 1$ this is therefore equivalent to precision@1. Table 3 shows retrieval performance on the validation data, with the best performance on a given dataset marked in bold, and the second best performance underlined.
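For concreteness, page-level $R$-precision can be sketched as follows. This is a minimal illustration of the metric as described in the text, not the official KILT evaluation script:

```python
def r_precision(retrieved_pages, relevant_pages):
    """R-precision: fraction of the R relevant pages found among the
    top-R retrieved results, where R is the number of relevant pages."""
    r = len(relevant_pages)
    if r == 0:
        return 0.0
    top_r = retrieved_pages[:r]
    return len(set(top_r) & set(relevant_pages)) / r

# With a single relevant page (R = 1), this reduces to precision@1:
print(r_precision(["Page_A", "Page_B"], ["Page_A"]))  # 1.0
print(r_precision(["Page_B", "Page_A"], ["Page_A"]))  # 0.0
```

The same computation applies at the passage level, with passage identifiers in place of page titles.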
|
| 146 |
+
|
| 147 |
+
While the KILT evaluation focuses on retrieval at the level of Wikipedia pages (thereby marking as "hits" any results that lie within the correct page), we are also interested in performing an evaluation at a more fine-grained level. We therefore also evaluate our models at the passage level, using a modified version of the official KILT evaluation scripts. These are shown as the second number in each column.
|
| 148 |
+
|
| 149 |
+
We straight away notice that the task-specific models tend to achieve high performance on their respective tasks, often taking one of the top two spots. Interestingly, we also note that these neural retrievers consistently outperform the BM25 baseline, showing that the result which Karpukhin et al. (2020) achieved for open-domain question answering also holds for other knowledge-intensive tasks.
|
| 150 |
+
|
| 151 |
+
The results reveal a strong performance for the multi-task model, confirming the hypothesis that a single model can be successfully trained to perform a wide variety of retrieval tasks. With the exception of one dataset, the shared model achieves the best retrieval performance or is within a few percentage points of the top score. We note that the one exception is the Zero-shot RE task (Levy et al., 2017), a trivial task in which the query will always contain the title of the page to be retrieved. Indeed, the model specific to this task manages to achieve a near-perfect score.
|
| 152 |
+
|
| 153 |
+
Another task which stands out for being markedly different in formulation is AIDA-YAGO 2 (Hoffart et al., 2011). As shown in Table 3, models that were not trained on this specific task perform very poorly on it. Entity linking is a task that is normally better performed by models which are explicitly designed for it (Cao et al., 2020). We nevertheless include it to showcase the ability of neural retrievers to adapt to it, and note how well the multi-task retriever performs on it in spite of its unusual nature.
|
| 156 |
+
|
| 157 |
+
# 4.3 Downstream performance
|
| 158 |
+
|
| 159 |
+
We saw that our proposed approach achieves strong performance across a variety of retrieval tasks. However, our interest in neural retrievers stems from their use as components within larger systems, to perform tasks such as question answering. Our next experimental question is therefore: Can a universal retriever lead to better downstream performance in knowledge-intensive tasks?
|
| 160 |
+
|
| 161 |
+
We perform a downstream evaluation of our approach used in conjunction with BART (Lewis et al., 2020a) as the generative component or classifier, adopting the same setup as Petroni et al. (2020). Results are reported in Table 4, with bold and underline marking the best and second best scores respectively.
|
| 162 |
+
|
| 163 |
+
The $DPR + BART$ line refers to a setup similar to our own, but with the simpler retriever of Karpukhin et al. (2020). Therefore, comparing its performance to ours gives us a clear indication of the contribution of multi-task training on the overall performance on knowledge-intensive tasks. Our proposed model achieves significantly better performance than this baseline in AY2, zsRE and HoPo; while for the other tasks, the discrepancy
|
| 164 |
+
|
| 165 |
+
<table><tr><td rowspan="2">Model</td><td rowspan="2">Fact Check. FEV</td><td rowspan="2">Ent. L. AY2</td><td rowspan="2">Slot Fill. zsRE</td><td colspan="3">Open Domain QA</td><td rowspan="2">Dial. WoW</td><td rowspan="2">Avg.</td></tr><tr><td>NQ</td><td>HoPo</td><td>TQA</td></tr><tr><td>Multi-task + BART</td><td>86.32</td><td>82.61</td><td>57.95</td><td>39.75</td><td>31.77</td><td>59.60</td><td>15.33</td><td>53.33</td></tr><tr><td>DPR + BART</td><td>86.74</td><td>75.49</td><td>30.43</td><td>41.27</td><td>25.18</td><td>58.55</td><td>15.55</td><td>47.60</td></tr><tr><td>RAG</td><td>86.31</td><td>72.62</td><td>44.74</td><td>44.39</td><td>26.97</td><td>71.27</td><td>13.22</td><td>51.36</td></tr><tr><td>T5</td><td>76.30</td><td>74.05</td><td>9.02</td><td>19.60</td><td>12.64</td><td>18.11</td><td>13.49</td><td>31.89</td></tr></table>
|
| 166 |
+
|
| 167 |
+
Table 4: KILT test scores on the downstream evaluation. Results in the bottom section are as reported in Petroni et al. (2020). The score metrics are accuracy for fact checking, entity linking and slot filling; exact match for QA; and F1 score for dialogue. $^3$
|
| 168 |
+
|
| 169 |
+
is always below two points. This fact is reflected in the last column too, showing that on average multi-task training leads to better downstream performance. The model also compares favourably to RAG (Lewis et al., 2020b), a more advanced system in which the query encoder is fine-tuned on the end task.
|
| 170 |
+
|
| 171 |
+
# 4.4 Zero- and few-shot performance
|
| 172 |
+
|
| 173 |
+
Task-specific neural retrievers can achieve higher performance than IR-based methods, but they are not suitable for cases where no training data (or not enough of it) is available. In those cases, tf-idf and BM25 are the better choice. To evaluate the performance of a multi-task retriever as a replacement for them in this scenario, we run a series of experiments in low-data regimes (few-shot and zero-shot).
|
| 174 |
+
|
| 175 |
+
We start by training a set of multi-task retrievers (using the base setup) in the leave-one-out setting for each of the datasets, in order to see how a neural retriever will perform when trained on all domains except for the one it is to be evaluated on. The results of these zero-shot experiments are reported in the second line of Table 5 (again, text here is in bold for the best overall performance, and underlined for second best). They show that, even in the zero-shot setting, the multi-task neural retriever achieves performance that is competitive to BM25, with retrieval being 10 points higher at the page level and 5 points lower at the passage level on average.
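The leave-one-out mixtures can be sketched as follows. The dataset abbreviations come from Table 5; the merging logic itself is our own illustrative assumption:

```python
DATASETS = ["FEV", "AY2", "T-REx", "zsRE", "NQ", "HoPo", "TQA", "WoW"]

def leave_one_out_mixtures(train_sets):
    """One multi-task training mixture per held-out dataset: the model is
    trained on all domains except the one it will be evaluated on."""
    return {
        held_out: [ex for name, data in train_sets.items()
                   if name != held_out for ex in data]
        for held_out in train_sets
    }

# Toy example with one-element "datasets":
mixtures = leave_one_out_mixtures({name: [name] for name in DATASETS})
print(len(mixtures["NQ"]))  # 7 (all domains except NQ)
```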
|
| 176 |
+
|
| 177 |
+
The advantage of neural retrievers over BM25 lies in their ability to improve with training. We therefore look at few-shot training for each task, and create two smaller copies of each of the original training sets, with a random sample of 128 and 1,024 examples respectively. In order to evaluate the suitability of a multi-task trained retriever as a starting checkpoint for few-shot training, we take the various leave-one-out models and fine-tune them on our few-shot training sets. To check whether multi-task pre-training is effective, we also compare these to DPR models (which are just initialised with BERT weights) fine-tuned on the same data.
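The construction of these few-shot splits can be sketched as follows; this is a hypothetical helper, where only the sizes 128 and 1,024 come from the text:

```python
import random

def make_few_shot_splits(train_set, sizes=(128, 1024), seed=0):
    """Create smaller copies of a training set by random sampling
    without replacement, one copy per requested size."""
    rng = random.Random(seed)
    return {n: rng.sample(train_set, n) for n in sizes}

full = [{"query": f"q{i}", "positive": f"p{i}"} for i in range(10_000)]
splits = make_few_shot_splits(full)
print(len(splits[128]), len(splits[1024]))  # 128 1024
```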
|
| 180 |
+
|
| 181 |
+
The bottom two sections of Table 5 report the results. The most dramatic gains from fine-tuning are seen for AY2, an "outlier" task whose formulation differs from that of the other tasks, and which seems to benefit the most from seeing in-domain data. The zsRE performance does not seem to improve from fine-tuning on the smaller dataset, but sees a very big jump when switching to the larger dataset. As a reminder, in this trivial task the title of the page to be retrieved always appears at the start of the query. It is therefore not surprising that models specifically fine-tuned on it can achieve near-perfect scores, as long as enough training data is provided.
|
| 182 |
+
|
| 183 |
+
In spite of the fine-tuning, we note that both DPR and the multi-task model fail to improve on their performance for T-REx, suggesting that large amounts of training data are required to learn this task. Nevertheless, the multi-task model proves itself more robust, and achieves the top performance on it.
|
| 184 |
+
|
| 185 |
+
Finally, we note that for 2 out of 8 tasks, namely zsRE and WoW, DPR achieves lower page-level retrieval scores than the multi-task model, but performs better at the passage level. This shows that fine-grained and coarse-grained retrieval performance are not always perfectly correlated.
|
| 186 |
+
|
| 187 |
+
Overall, the experiments show strong results for the multi-task model, with the average zero-shot
|
| 188 |
+
|
| 189 |
+
<table><tr><td>Model</td><td>FEV</td><td>AY2</td><td>T-REx</td><td>zsRE</td><td>NQ</td><td>HoPo</td><td>TQA</td><td>WoW</td><td>Avg.</td></tr><tr><td>BM25</td><td>50.13/40.06</td><td>3.47</td><td>58.60/51.64</td><td>66.43/52.98</td><td>25.83/14.20</td><td>43.95/38.38</td><td>29.44/16.16</td><td>27.50/18.41</td><td>38.17/33.12</td></tr><tr><td colspan="10">Leave-one-out multi-task models</td></tr><tr><td>Zero-shot</td><td>74.11/37.09</td><td>4.16</td><td>67.54/44.84</td><td>73.42/32.65</td><td>47.23/21.50</td><td>34.72/16.52</td><td>49.08/28.06</td><td>36.92/16.19</td><td>48.40/28.12</td></tr><tr><td>Finetune (128)</td><td>75.95/32.75</td><td>32.38</td><td>67.54/44.84</td><td>73.41/32.65</td><td>47.48/14.98</td><td>34.72/27.82</td><td>54.71/19.82</td><td>48.36/17.46</td><td>54.23/27.19</td></tr><tr><td>Finetune (1k)</td><td>73.08/40.83</td><td>70.40</td><td>67.54/44.84</td><td>93.04/58.67</td><td>51.00/19.90</td><td>39.19/35.43</td><td>59.08/20.22</td><td>47.65/19.75</td><td>62.62/34.23</td></tr><tr><td colspan="10">Vanilla DPR models</td></tr><tr><td>Finetune (128)</td><td>37.99/25.31</td><td>26.23</td><td>0.20/0.02</td><td>0.16/0.00</td><td>20.92/9.52</td><td>14.46/14.08</td><td>26.85/10.54</td><td>30.31/17.20</td><td>19.64/10.95</td></tr><tr><td>Finetune (1k)</td><td>70.87/47.82</td><td>72.49</td><td>0.20/0.02</td><td>90.33/80.20</td><td>43.43/19.81</td><td>30.75/30.50</td><td>52.50/17.33</td><td>44.70/24.92</td><td>50.66/31.51</td></tr></table>
|
| 190 |
+
|
| 191 |
+
performance being competitive to BM25, and the average few-shot performance being markedly better than the alternatives. The discrepancy in performance between a vanilla DPR model and the leave-one-out multi-task model is especially noticeable when using the smaller of the two datasets, in which case average performance for the latter is more than double that of vanilla DPR.
|
| 192 |
+
|
| 193 |
+
Table 5: Page- and passage-level $R$-precision in the zero-shot setting and with additional fine-tuning on 128 and 1,024 examples. We also compare to a BM25 retriever and a DPR model initialised with BERT weights.

# 4.5 Model variants
|
| 196 |
+
|
| 197 |
+
<table><tr><td>variant</td><td>FEV</td><td>NQ</td><td>TQA</td></tr><tr><td>Base</td><td>76.38/40.76</td><td>60.91/24.50</td><td>64.77/21.75</td></tr><tr><td>Task markers</td><td>75.84/40.79</td><td>62.31/25.10</td><td>64.04/20.86</td></tr><tr><td>Task-spec. enc.</td><td>73.53/40.02</td><td>61.05/25.52</td><td>64.17/21.23</td></tr></table>
|
| 198 |
+
|
| 199 |
+
Table 6: Multi-task model variants evaluated on a subset of tasks ( $R$ -precision on validation data at page/passage level).
|
| 200 |
+
|
| 201 |
+
In this set of experiments we compare our base multi-task model with the two variants described in § 3.1. Due to the high memory consumption of the "task-specific encoders" variant (requiring one full query encoder per task family, in addition to the passage encoder), it was only possible to perform these evaluations in a restricted setting of three datasets. The results in Table 6 do not reveal a clear winner, suggesting that the base architecture might be the better choice due to its simplicity and generally good performance. $^4$
|
| 202 |
+
|
| 203 |
+
# 4.6 Adversarial confounder selection
|
| 204 |
+
|
| 205 |
+
Finally, we evaluate the adversarial confounder selection method described in § 3.2. This involves augmenting our regular training sets with additional confounders for TriviaQA and Natural Questions, selected using our top multi-task trained model. A new multi-task model is then trained from scratch on this augmented data. Its performance is reported in Table 7, showing an overall improvement across multiple tasks. While this approach is demonstrated here on our multi-task model, it is in fact orthogonal to it, and could be applied to any other neural retriever trained with a contrastive loss.
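The selection step can be sketched as a mining loop in which the trained retriever's own top-ranked non-gold passages become extra negatives. The `retrieve` callable, the field names, and the stub below are all illustrative assumptions, not the paper's actual interface:

```python
def mine_adversarial_confounders(retrieve, examples, k=10, n_confounders=1):
    """For each example, retrieve top-k passage ids with the current model
    and keep the highest-ranked non-gold results as extra hard negatives."""
    augmented = []
    for ex in examples:
        candidates = retrieve(ex["query"], k)
        gold = set(ex["positive_ids"])
        hard = [pid for pid in candidates if pid not in gold][:n_confounders]
        augmented.append({**ex, "hard_negatives": ex.get("hard_negatives", []) + hard})
    return augmented

# Stub retriever standing in for the trained multi-task model:
stub = lambda query, k: ["gold_passage", "near_miss_1", "near_miss_2"][:k]
out = mine_adversarial_confounders(
    stub, [{"query": "who wrote Hamlet?", "positive_ids": ["gold_passage"]}], k=3)
print(out[0]["hard_negatives"])  # ['near_miss_1']
```

A new retriever is then trained from scratch on the augmented examples, as the text describes.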
|
| 208 |
+
|
| 209 |
+
# 5 Related work
|
| 210 |
+
|
| 211 |
+
The approach most closely related to ours is DPR (Karpukhin et al., 2020), upon which we built all our retrieval systems. This model is covered in detail in § 2.1, in addition to the historical context. Another closely related approach is the Retrieval-Augmented Generation (RAG) model of Lewis et al. (2020b). In its base configuration it augments DPR with a generative reader, and it trains the query encoder end-to-end (differing from traditional retriever-reader architectures which treat the two steps as disjoint). A natural extension of the work we have presented would be to combine RAG with our joint learning approach, to study whether it can lead to further gains in performance or robustness.
|
| 212 |
+
|
| 213 |
+
A number of promising techniques to boost retrieval performance have been proposed recently. These are orthogonal to our work, and as such they could be combined with it. Amongst these, pretraining methods form one class. Inverse Cloze Task (Lee et al., 2019) and its extensions (Chang et al., 2020) are self-supervised pre-training methods designed for retrieval in open-domain question answering. Whether such specific pre-training is beneficial to tasks other than question answering remains an open question. CERT (Fang et al., 2020) is an alternative pre-training approach, inspired by some recent advances in computer vision. While to
|
| 214 |
+
|
| 215 |
+
<table><tr><td rowspan="2">confounders</td><td rowspan="2">Fact Check. FEV</td><td rowspan="2">Ent. L. AY2</td><td rowspan="2">Slot Filling T-REx</td><td rowspan="2">zsRE</td><td colspan="3">Open Domain QA</td><td rowspan="2">Dial. WoW</td></tr><tr><td>NQ</td><td>HoPo</td><td>TQA</td></tr><tr><td>BM25</td><td>74.72/46.96</td><td>83.78</td><td>69.18/53.54</td><td>77.23/41.70</td><td>61.51/28.80</td><td>44.21/38.42</td><td>61.95/24.56</td><td>39.70/24.07</td></tr><tr><td>BM25 + adv</td><td>74.79/52.12</td><td>84.86</td><td>71.36/61.40</td><td>80.04/54.08</td><td>59.25/40.11</td><td>44.08/41.04</td><td>59.19/34.17</td><td>41.04/24.62</td></tr></table>
|
| 216 |
+
|
| 217 |
+
Table 7: Comparison of two confounder selection methods for the multi-task model: simple BM25, and BM25 augmented with adversarial confounders ( $R$ -precision on validation data at page/passage level).
|
| 218 |
+
|
| 219 |
+
our knowledge this has not been applied to retrieval problems, we believe it might be promising due to its focus on sentence-level semantics (as opposed to the more standard masked language modelling pre-training, which focuses on the token-level).
|
| 220 |
+
|
| 221 |
+
Another class of orthogonal improvements to dense retrieval involves models which embed passages into multiple fixed-size vectors. Of these, ColBERT (Khattab and Zaharia, 2020) and MEBERT (Luan et al., 2020) are two representative examples. One further approach is ColBERT-QA (Khattab et al., 2020), which additionally uses a data augmentation strategy closely related to our own approach described in § 3.2.
|
| 222 |
+
|
| 223 |
+
Finally two entity linkers, GENRE (Cao et al., 2020) and BLINK (Wu et al., 2020), are worth mentioning. Being trained specifically for entity linking, these models will generally outperform retrieval-based approaches on that task. While they are not comparable to retrieval models and will not generally be applicable to information retrieval tasks, we mention them here to provide readers with a fuller context of the existing literature.
|
| 224 |
+
|
| 225 |
+
# 6 Conclusions
|
| 226 |
+
|
| 227 |
+
We have conducted a large-scale experimental study on knowledge-intensive tasks, and how retrieval models that tackle them seek the required information from knowledge bases such as Wikipedia.
|
| 228 |
+
|
| 229 |
+
The study started with the question of whether the way in which information is embedded for retrieval purposes is universal. Section 4.2 provided evidence that to a large extent it is, with a single "universal" retriever, trained jointly on 8 datasets, often performing comparably to task-specific models.
|
| 230 |
+
|
| 231 |
+
Armed with this knowledge, in Section 4.3 we plugged our single model into a larger pipeline, in order to see its contribution to the downstream performance on a wide range of knowledge-intensive tasks. This led to an overall improvement in downstream performance, setting new top results for a number of tasks in the KILT benchmark.
|
| 234 |
+
|
| 235 |
+
Next, in Section 4.4, we evaluated the model's performance in the zero-shot and few-shot settings. By evaluating on a wide range of tasks, we were able to show that our proposed approach performs comparably to BM25 in the zero-shot setting, and quickly overtakes it even with minimal in-domain training.
|
| 236 |
+
|
| 237 |
+
In Section 4.5 we evaluated a number of more complex variants of the model involving task specialisation, but failed to see clear performance improvements. Finally, in Section 4.6 we saw how a simple iterative approach to data augmentation can lead to better performance.
|
| 238 |
+
|
| 239 |
+
In the coming months we will provide a pretrained snapshot of our best-performing model, in the form of a BERT checkpoint. As shown, this model will be useful in zero-shot and few-shot settings as a better-performing alternative both to IR-based approaches such as BM25 and to task-specific models. The multi-task training approach demonstrated here can also be useful in industry settings where several retrieval operations may need to be performed on the same piece of content, and the deployment of multiple task-specific models might not be possible due to space or computational performance concerns.
|
| 240 |
+
|
| 241 |
+
# References
|
| 242 |
+
|
| 243 |
+
Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2020. Autoregressive entity retrieval. ArXiv, abs/2010.00904.
|
| 244 |
+
Wei-Cheng Chang, Felix X. Yu, Yin-Wen Chang, Yiming Yang, and Sanjiv Kumar. 2020. Pre-training tasks for embedding-based large-scale retrieval. In International Conference on Learning Representations.
|
| 245 |
+
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870-1879, Vancouver, Canada. Association for Computational Linguistics.
|
| 248 |
+
Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American society for information science, 41(6):391-407.
|
| 249 |
+
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
|
| 250 |
+
Hongchao Fang, Sicheng Wang, Meng Zhou, Jiayuan Ding, and Pengtao Xie. 2020. CERT: Contrastive self-supervised learning for language understanding. ArXiv, abs/2005.12766.
|
| 251 |
+
Jianfeng Gao, Kristina Toutanova, and Wen-tau Yih. 2011. Clickthrough-based latent semantic models for web search. In Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval, pages 675-684. ACM.
|
| 252 |
+
Ruiqi Guo, Sanjiv Kumar, Krzysztof Choromanski, and David Simcha. 2016. Quantization based fast inner product search. In Artificial Intelligence and Statistics, pages 482-490.
|
| 253 |
+
Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen Fürstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 782-792, Edinburgh, Scotland, UK. Association for Computational Linguistics.
|
| 254 |
+
Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM International Conference on Information and Knowledge Management, page 2333-2338, New York, NY, USA. Association for Computing Machinery.
|
| 255 |
+
J. Johnson, M. Douze, and H. Jégou. 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, pages 1-1.
|
| 256 |
+
Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601-1611, Vancouver, Canada. Association for Computational Linguistics.
|
| 259 |
+
Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics.
|
| 260 |
+
Omar Khattab, Christopher Potts, and Matei Zaharia. 2020. Relevance-guided supervision for OpenQA with ColBERT. ArXiv, abs/2007.00814.
|
| 261 |
+
Omar Khattab and Matei Zaharia. 2020. ColBERT: Efficient and effective passage search via contextualized late interaction over BERT. In Proceedings of the 43rd international ACM SIGIR conference on Research and development in Information Retrieval.
|
| 262 |
+
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR).
|
| 263 |
+
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics.
|
| 264 |
+
Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086-6096, Florence, Italy. Association for Computational Linguistics.
|
| 265 |
+
Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. 2017. Zero-shot relation extraction via reading comprehension. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 333-342, Vancouver, Canada. Association for Computational Linguistics.
|
| 266 |
+
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020a. BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.
|
| 267 |
+
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020b. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems (NeurIPS).
|
| 270 |
+
Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019a. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487-4496, Florence, Italy. Association for Computational Linguistics.
|
| 271 |
+
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. RoBERTa: A robustly optimized BERT pretraining approach. ArXiv, abs/1907.11692.
|
| 272 |
+
Yi Luan, Jacob Eisenstein, Kristina Toutanova, and Michael Collins. 2020. Sparse, dense, and attentional representations for text retrieval. ArXiv, abs/2005.00181.
|
| 273 |
+
Christopher D Manning, Hinrich Schütze, and Prabhakar Raghavan. 2008. Introduction to information retrieval. Cambridge university press.
|
| 274 |
+
Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vassilis Plachouras, Tim Rocktäschel, and Sebastian Riedel. 2020. KILT: a benchmark for knowledge intensive language tasks. ArXiv, abs/2009.02252.
|
| 275 |
+
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H. Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? In EMNLP.
|
| 276 |
+
Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends in Information Retrieval, 3(4):333-389.
|
| 277 |
+
Anshumali Shrivastava and Ping Li. 2014. Asymmetric LSH (ALSH) for sublinear time maximum inner product search (MIPS). In Advances in Neural Information Processing Systems (NIPS), pages 2321-2329.
|
| 278 |
+
James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809-819, New Orleans, Louisiana. Association for Computational Linguistics.
|
| 279 |
+
Shuohang Wang, Mo Yu, Xiaoxiao Guo, Z. Wang, Tim Klinger, Wei Zhang, S. Chang, G. Tesauro, Bowen Zhou, and Jing Jiang. 2018. R3: Reinforced ranker-reader for open-domain question answering. In AAAI.
|
| 280 |
+
|
| 281 |
+
Zhiguo Wang, Patrick Ng, Xiaofei Ma, Ramesh Nallapati, and Bing Xiang. 2019. Multi-passage BERT: A globally normalized BERT model for open-domain question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5878-5882, Hong Kong, China. Association for Computational Linguistics.
|
| 282 |
+
Ledell Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, and Luke Zettlemoyer. 2020. Scalable zero-shot entity linking with dense entity retrieval. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6397-6407, Online. Association for Computational Linguistics.
|
| 283 |
+
Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul Bennett, Junaid Ahmed, and Arnold Overwijk. 2020. Approximate nearest neighbor negative contrastive learning for dense text retrieval. ArXiv, abs/2007.00808.
|
| 284 |
+
Wen-tau Yih, Kristina Toutanova, John C Platt, and Christopher Meek. 2011. Learning discriminative projections for text similarity measures. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning, pages 247-256. Association for Computational Linguistics.
|
data/2021/2101_00xxx/2101.00117/layout.json
CHANGED
|
The diff for this file is too large to render.
See raw diff
|
|
|
data/2021/2101_00xxx/2101.00121/10de5b4b-1a6e-40ba-bf75-79fb29a7975d_content_list.json
CHANGED
|
@@ -1,3 +1,1679 @@
|
|
| 1 |
-
|
| 2 |
-
|
| 3 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "WARP: Word-level Adversarial ReProgramming",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
245,
|
| 8 |
+
80,
|
| 9 |
+
754,
|
| 10 |
+
99
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Karen Hambardzumyan<sup>1</sup>, Hrant Khachatrian<sup>1,2</sup>, Jonathan May<sup>3</sup>",
|
| 17 |
+
"bbox": [
|
| 18 |
+
213,
|
| 19 |
+
133,
|
| 20 |
+
794,
|
| 21 |
+
151
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "$^{1}$ YerevaNN, $^{2}$ Yerevan State University,",
|
| 28 |
+
"bbox": [
|
| 29 |
+
344,
|
| 30 |
+
151,
|
| 31 |
+
660,
|
| 32 |
+
167
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "<sup>3</sup>Information Sciences Institute, University of Southern California",
|
| 39 |
+
"bbox": [
|
| 40 |
+
233,
|
| 41 |
+
167,
|
| 42 |
+
769,
|
| 43 |
+
183
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "mahnerak@yerevann.com, hrant@yerevann.com, jonmay@isi.edu",
|
| 50 |
+
"bbox": [
|
| 51 |
+
149,
|
| 52 |
+
185,
|
| 53 |
+
840,
|
| 54 |
+
200
|
| 55 |
+
],
|
| 56 |
+
"page_idx": 0
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"text": "Abstract",
|
| 61 |
+
"text_level": 1,
|
| 62 |
+
"bbox": [
|
| 63 |
+
262,
|
| 64 |
+
263,
|
| 65 |
+
342,
|
| 66 |
+
279
|
| 67 |
+
],
|
| 68 |
+
"page_idx": 0
|
| 69 |
+
},
|
| 70 |
+
{
|
| 71 |
+
"type": "text",
|
| 72 |
+
"text": "Transfer learning from pretrained language models recently became the dominant approach for solving many NLP tasks. A common approach to transfer learning for multiple tasks that maximize parameter sharing trains one or more task-specific layers on top of the language model. In this paper, we present an alternative approach based on adversarial reprogramming, which extends earlier work on automatic prompt generation. Adversarial reprogramming attempts to learn task-specific word embeddings that, when concatenated to the input text, instruct the language model to solve the specified task. Using up to 25K trainable parameters per task, this approach outperforms all existing methods with up to 25M trainable parameters on the public leaderboard of the GLUE benchmark. Our method, initialized with task-specific human-readable prompts, also works in a few-shot setting, outperforming GPT-3 on two SuperGLUE tasks with just 32 training samples.",
|
| 73 |
+
"bbox": [
|
| 74 |
+
142,
|
| 75 |
+
294,
|
| 76 |
+
460,
|
| 77 |
+
608
|
| 78 |
+
],
|
| 79 |
+
"page_idx": 0
|
| 80 |
+
},
|
| 81 |
+
{
|
| 82 |
+
"type": "text",
|
| 83 |
+
"text": "1 Introduction",
|
| 84 |
+
"text_level": 1,
|
| 85 |
+
"bbox": [
|
| 86 |
+
115,
|
| 87 |
+
624,
|
| 88 |
+
260,
|
| 89 |
+
639
|
| 90 |
+
],
|
| 91 |
+
"page_idx": 0
|
| 92 |
+
},
|
| 93 |
+
{
|
| 94 |
+
"type": "text",
|
| 95 |
+
"text": "Language model pretraining has had a tremendous impact on solving many natural language processing tasks (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019; Liu et al., 2019). The most popular two approaches take a pretrained model and use a straightforward supervised learning objective. In the first approach, the parameters of the language model are frozen and a task-specific head is trained on top of them (Peters et al., 2018). The second approach fine-tunes all model parameters (Radford et al., 2018). The latter can sometimes yield better results (Peters et al., 2019), while the first one usually offers better stability for smaller datasets. The approach based on frozen features does not require storing task-specific language models.",
|
| 96 |
+
"bbox": [
|
| 97 |
+
114,
|
| 98 |
+
652,
|
| 99 |
+
489,
|
| 100 |
+
910
|
| 101 |
+
],
|
| 102 |
+
"page_idx": 0
|
| 103 |
+
},
|
| 104 |
+
{
|
| 105 |
+
"type": "text",
|
| 106 |
+
"text": "A recent alternative is based on so called adapters (Houlsby et al., 2019; Pfeiffer et al., 2021), a technique that adds new weights at every layer of the pretrained language model while the original parameters are kept frozen. This enables a smaller set of task-specific parameters while achieving results comparable to the fine-tuning approach.",
|
| 107 |
+
"bbox": [
|
| 108 |
+
509,
|
| 109 |
+
263,
|
| 110 |
+
884,
|
| 111 |
+
391
|
| 112 |
+
],
|
| 113 |
+
"page_idx": 0
|
| 114 |
+
},
|
| 115 |
+
{
|
| 116 |
+
"type": "text",
|
| 117 |
+
"text": "Another approach of leveraging pretrained language models for downstream tasks, introduced by Radford et al. (2019), provides \"task descriptions\" without using any labeled examples. GPT-3 (Brown et al., 2020) demonstrates impressive few-shot learning performance with priming: by providing the language model a few inputs and outputs (\"analogies\") as a context. The language model contextually \"learns\" from these examples and outputs the answer with a single forward pass without any trainable parameters. These methods, however, require huge language models (1.5B and 175B parameters, respectively).",
|
| 118 |
+
"bbox": [
|
| 119 |
+
509,
|
| 120 |
+
393,
|
| 121 |
+
884,
|
| 122 |
+
601
|
| 123 |
+
],
|
| 124 |
+
"page_idx": 0
|
| 125 |
+
},
|
| 126 |
+
{
|
| 127 |
+
"type": "text",
|
| 128 |
+
"text": "The success of task reformulation-based approaches suggest that language models are capable of solving various natural language processing tasks given a well-crafted prompt. We hypothesize that it is possible to find such prompts. In other words, we can discover extra tokens that, when added to the input, can exploit language model capabilities better than the manually-designed ones.",
|
| 129 |
+
"bbox": [
|
| 130 |
+
509,
|
| 131 |
+
602,
|
| 132 |
+
882,
|
| 133 |
+
731
|
| 134 |
+
],
|
| 135 |
+
"page_idx": 0
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"type": "text",
|
| 139 |
+
"text": "In this paper, we introduce a novel technique to find optimal prompts. We call our method WARP: Word-level Adversarial ReProgramming<sup>1</sup>. The method is inspired by adversarial reprogramming (Elsayed et al., 2019) — a method of adding adversarial perturbations to an input image that reprograms a pretrained neural network to perform classification on a task other than the one it was originally trained for.",
|
| 140 |
+
"bbox": [
|
| 141 |
+
509,
|
| 142 |
+
732,
|
| 143 |
+
882,
|
| 144 |
+
876
|
| 145 |
+
],
|
| 146 |
+
"page_idx": 0
|
| 147 |
+
},
|
| 148 |
+
{
|
| 149 |
+
"type": "page_footnote",
|
| 150 |
+
"text": "1Our implementation is publicly available at: https://github.com/YerevaNN/WARP",
|
| 151 |
+
"bbox": [
|
| 152 |
+
510,
|
| 153 |
+
883,
|
| 154 |
+
880,
|
| 155 |
+
908
|
| 156 |
+
],
|
| 157 |
+
"page_idx": 0
|
| 158 |
+
},
|
| 159 |
+
{
|
| 160 |
+
"type": "aside_text",
|
| 161 |
+
"text": "arXiv:2101.00121v2 [cs.CL] 2 Jun 2021",
|
| 162 |
+
"bbox": [
|
| 163 |
+
21,
|
| 164 |
+
319,
|
| 165 |
+
60,
|
| 166 |
+
717
|
| 167 |
+
],
|
| 168 |
+
"page_idx": 0
|
| 169 |
+
},
|
| 170 |
+
{
|
| 171 |
+
"type": "image",
|
| 172 |
+
"img_path": "images/bf66c6df047a9c7ed30c88e96ed74d3210ac6981e0c447d1635d011b02e78431.jpg",
|
| 173 |
+
"image_caption": [
|
| 174 |
+
"Figure 1: An example of an adversarial program that causes Inception V3 ImageNet model to function as an MNIST classifier, from Elsayed et al. (2019)"
|
| 175 |
+
],
|
| 176 |
+
"image_footnote": [],
|
| 177 |
+
"bbox": [
|
| 178 |
+
189,
|
| 179 |
+
72,
|
| 180 |
+
416,
|
| 181 |
+
231
|
| 182 |
+
],
|
| 183 |
+
"page_idx": 1
|
| 184 |
+
},
|
| 185 |
+
{
|
| 186 |
+
"type": "text",
|
| 187 |
+
"text": "We show that our method, using up to 25K trainable parameters per task, achieves 81.6 test score on the GLUE Leaderboard, outperforming all the other submissions that use up to three orders of magnitude more trainable parameters. We show that it is possible to inject knowledge into WARP models using manually designed initialization of the prompt, which is especially useful on tasks with a small number of examples. Moreover, WARP shows impressive few-shot performance on two tasks from the SuperGLUE benchmark with just 32 examples, outperforming GPT-3 results. Finally, we discuss the advantages of our method in real-life applications.",
|
| 188 |
+
"bbox": [
|
| 189 |
+
114,
|
| 190 |
+
318,
|
| 191 |
+
489,
|
| 192 |
+
544
|
| 193 |
+
],
|
| 194 |
+
"page_idx": 1
|
| 195 |
+
},
|
| 196 |
+
{
|
| 197 |
+
"type": "text",
|
| 198 |
+
"text": "2 Related Work",
|
| 199 |
+
"text_level": 1,
|
| 200 |
+
"bbox": [
|
| 201 |
+
115,
|
| 202 |
+
568,
|
| 203 |
+
272,
|
| 204 |
+
583
|
| 205 |
+
],
|
| 206 |
+
"page_idx": 1
|
| 207 |
+
},
|
| 208 |
+
{
|
| 209 |
+
"type": "text",
|
| 210 |
+
"text": "2.1 Towards Fewer Trainable Parameters",
|
| 211 |
+
"text_level": 1,
|
| 212 |
+
"bbox": [
|
| 213 |
+
115,
|
| 214 |
+
604,
|
| 215 |
+
458,
|
| 216 |
+
619
|
| 217 |
+
],
|
| 218 |
+
"page_idx": 1
|
| 219 |
+
},
|
| 220 |
+
{
|
| 221 |
+
"type": "text",
|
| 222 |
+
"text": "Jiao et al. (2020) show that knowledge distillation may help reduce the size of their model 7.5 times while almost preserving the performance, but finetuning such models still requires storage of separate task-specific models. As seen in Section 6, this approach does not scale when we want to apply it to many tasks at once.",
|
| 223 |
+
"bbox": [
|
| 224 |
+
114,
|
| 225 |
+
632,
|
| 226 |
+
489,
|
| 227 |
+
743
|
| 228 |
+
],
|
| 229 |
+
"page_idx": 1
|
| 230 |
+
},
|
| 231 |
+
{
|
| 232 |
+
"type": "text",
|
| 233 |
+
"text": "Another approach, called Adapters (Houlsby et al., 2019; Pfeiffer et al., 2021), introduces new task-specific parameters that are added at every layer of the Transformer network. Only these newly initialized weights are trained, which allows separation of general and task-specific knowledge. In contrast, our method does not inject task-specific knowledge inside the body of the pretrained language model. Instead, it focuses on learning task-specific input-level prompts.",
|
| 234 |
+
"bbox": [
|
| 235 |
+
114,
|
| 236 |
+
749,
|
| 237 |
+
489,
|
| 238 |
+
910
|
| 239 |
+
],
|
| 240 |
+
"page_idx": 1
|
| 241 |
+
},
|
| 242 |
+
{
|
| 243 |
+
"type": "image",
|
| 244 |
+
"img_path": "images/59ba3c48bea62432583f5692e2aeec03ff6d3e356e04f655e6a4eea8d6fa0e18.jpg",
|
| 245 |
+
"image_caption": [
|
| 246 |
+
"Figure 2: WARP adds a few trainable embeddings around the input, which causes the masked language model to predict the sentiment of the sentence."
|
| 247 |
+
],
|
| 248 |
+
"image_footnote": [],
|
| 249 |
+
"bbox": [
|
| 250 |
+
512,
|
| 251 |
+
71,
|
| 252 |
+
884,
|
| 253 |
+
231
|
| 254 |
+
],
|
| 255 |
+
"page_idx": 1
|
| 256 |
+
},
|
| 257 |
+
{
|
| 258 |
+
"type": "text",
|
| 259 |
+
"text": "2.2 Task Reformulation",
|
| 260 |
+
"text_level": 1,
|
| 261 |
+
"bbox": [
|
| 262 |
+
510,
|
| 263 |
+
310,
|
| 264 |
+
714,
|
| 265 |
+
324
|
| 266 |
+
],
|
| 267 |
+
"page_idx": 1
|
| 268 |
+
},
|
| 269 |
+
{
|
| 270 |
+
"type": "text",
|
| 271 |
+
"text": "In GPT-2, Radford et al. (2019) introduce a completely unsupervised way for transferring knowledge to downstream tasks by reformulating various natural language understanding tasks into language modeling problems. This approach does not make use of the available training examples. Brown et al. (2020) demonstrate an effective few-shot transfer by reformulating downstream tasks into input-output analogies in the context without a need for further fine-tuning. Nonetheless, the number of training examples is limited to the context size and is not scalable to a traditional supervised learning scenario.",
|
| 272 |
+
"bbox": [
|
| 273 |
+
507,
|
| 274 |
+
331,
|
| 275 |
+
884,
|
| 276 |
+
539
|
| 277 |
+
],
|
| 278 |
+
"page_idx": 1
|
| 279 |
+
},
|
| 280 |
+
{
|
| 281 |
+
"type": "text",
|
| 282 |
+
"text": "Schick and Schütze (2021b) show the effectiveness of reformulating a number of tasks into Cloze-style tasks by fine-tuning masked language models (Devlin et al., 2019). The method, called Pattern Exploited Training (PET), additionally uses training samples and performs few-shot learning even without huge models such as GPT-3.",
|
| 283 |
+
"bbox": [
|
| 284 |
+
509,
|
| 285 |
+
539,
|
| 286 |
+
882,
|
| 287 |
+
651
|
| 288 |
+
],
|
| 289 |
+
"page_idx": 1
|
| 290 |
+
},
|
| 291 |
+
{
|
| 292 |
+
"type": "text",
|
| 293 |
+
"text": "Our method is also based on masked language models, but unlike PET, we focus on finding the best prompt using the training examples. This eliminates the need for manually-designed prompts, however, our method can also benefit from similar prior knowledge about the task by careful initialization of the prompts.",
|
| 294 |
+
"bbox": [
|
| 295 |
+
509,
|
| 296 |
+
653,
|
| 297 |
+
882,
|
| 298 |
+
765
|
| 299 |
+
],
|
| 300 |
+
"page_idx": 1
|
| 301 |
+
},
|
| 302 |
+
{
|
| 303 |
+
"type": "text",
|
| 304 |
+
"text": "2.3 Adversarial Reprogramming",
|
| 305 |
+
"text_level": 1,
|
| 306 |
+
"bbox": [
|
| 307 |
+
510,
|
| 308 |
+
776,
|
| 309 |
+
786,
|
| 310 |
+
791
|
| 311 |
+
],
|
| 312 |
+
"page_idx": 1
|
| 313 |
+
},
|
| 314 |
+
{
|
| 315 |
+
"type": "text",
|
| 316 |
+
"text": "Adversarial Reprogramming (Elsayed et al., 2019) demonstrates the reprogramming of pretrained ImageNet classifiers by adding input-level adversarial perturbations to make them perform well on MNIST and CIFAR-10 image classification tasks. The adversarial perturbation is designed to be image padding added to the original input, as illus",
|
| 317 |
+
"bbox": [
|
| 318 |
+
509,
|
| 319 |
+
797,
|
| 320 |
+
882,
|
| 321 |
+
910
|
| 322 |
+
],
|
| 323 |
+
"page_idx": 1
|
| 324 |
+
},
|
| 325 |
+
{
|
| 326 |
+
"type": "image",
|
| 327 |
+
"img_path": "images/44d6b26302cfdab70e7425e45ec40185e6c9dcfad10695e177396540e518eb96.jpg",
|
| 328 |
+
"image_caption": [
|
| 329 |
+
"Figure 3: Illustration of WARP. The prompt tokens [P_1], [P_2], ..., [P_N] are inserted before, between, and after the sentences. Only the prompt and class embeddings are trainable (colored in green). The masked language modeling Head is applied without the decoder; instead, the matrix of [V_1], [V_2], ..., [V_N] is applied as a linear layer. Finally, a regular task-specific loss is computed on the resulting logits."
|
| 330 |
+
],
|
| 331 |
+
"image_footnote": [],
|
| 332 |
+
"bbox": [
|
| 333 |
+
163,
|
| 334 |
+
71,
|
| 335 |
+
838,
|
| 336 |
+
331
|
| 337 |
+
],
|
| 338 |
+
"page_idx": 2
|
| 339 |
+
},
|
| 340 |
+
{
|
| 341 |
+
"type": "text",
|
| 342 |
+
"text": "trated in Figure 1. Then the perturbation parameter is trained to optimize the target classification task objective using the annotated image data.",
|
| 343 |
+
"bbox": [
|
| 344 |
+
114,
|
| 345 |
+
424,
|
| 346 |
+
489,
|
| 347 |
+
470
|
| 348 |
+
],
|
| 349 |
+
"page_idx": 2
|
| 350 |
+
},
|
| 351 |
+
{
|
| 352 |
+
"type": "text",
|
| 353 |
+
"text": "While in the case of image classification it is not obvious why adversarial reprogramming should ever work, e.g. why a network trained on ImageNet should have the capacity to solve MNIST when surrounded with a particular bitmap, for NLP tasks, there is more intuition. Many NLP tasks can be reformulated as language models, a shared space for both program and data.",
|
| 354 |
+
"bbox": [
|
| 355 |
+
114,
|
| 356 |
+
473,
|
| 357 |
+
489,
|
| 358 |
+
601
|
| 359 |
+
],
|
| 360 |
+
"page_idx": 2
|
| 361 |
+
},
|
| 362 |
+
{
|
| 363 |
+
"type": "text",
|
| 364 |
+
"text": "Adversarial reprogramming has been adapted to text classification tasks with LSTM networks in (Neekhara et al., 2019). They operate in the vocabulary space and reprogram a model trained for one task to perform another task. More recently, AutoPrompt (Shin et al., 2020a) attempts to find prompts for large language models automatically without adding any parameters to the model. Unlike AutoPrompt, we perform gradient-based optimization in the space of word embeddings which gives our model more degrees of freedom and eventually better performance on the downstream tasks (Section 6.2).",
|
| 365 |
+
"bbox": [
|
| 366 |
+
114,
|
| 367 |
+
602,
|
| 368 |
+
489,
|
| 369 |
+
810
|
| 370 |
+
],
|
| 371 |
+
"page_idx": 2
|
| 372 |
+
},
|
| 373 |
+
{
|
| 374 |
+
"type": "text",
|
| 375 |
+
"text": "In a more general sense, guiding an NLP model with special tokens appended to the input is an even older idea. In particular, multilingual neural machine translation models use special tokens in the input to control the target language (Ha et al., 2016; Johnson et al., 2017) or politeness",
|
| 376 |
+
"bbox": [
|
| 377 |
+
114,
|
| 378 |
+
813,
|
| 379 |
+
489,
|
| 380 |
+
910
|
| 381 |
+
],
|
| 382 |
+
"page_idx": 2
|
| 383 |
+
},
|
| 384 |
+
{
|
| 385 |
+
"type": "text",
|
| 386 |
+
"text": "of the translation (Sennrich et al., 2016). Another method to reprogram a BERT-based model is proposed by Artetxe et al. (2020), where a model tuned on an English version of a particular task is transformed to work in another language by changing only the embedding matrices.",
|
| 387 |
+
"bbox": [
|
| 388 |
+
509,
|
| 389 |
+
424,
|
| 390 |
+
884,
|
| 391 |
+
520
|
| 392 |
+
],
|
| 393 |
+
"page_idx": 2
|
| 394 |
+
},
|
| 395 |
+
{
|
| 396 |
+
"type": "text",
|
| 397 |
+
"text": "In parallel work, Li and Liang (2021) propose a similar method and successfully apply it on two text generation tasks. Apart from the different types of tasks and our characterization of the task as a form of Adversarial Reprogramming, the main difference between their approach and ours is that they use an additional parameterization trick to stabilize the training.",
|
| 398 |
+
"bbox": [
|
| 399 |
+
509,
|
| 400 |
+
521,
|
| 401 |
+
885,
|
| 402 |
+
651
|
| 403 |
+
],
|
| 404 |
+
"page_idx": 2
|
| 405 |
+
},
|
| 406 |
+
{
|
| 407 |
+
"type": "text",
|
| 408 |
+
"text": "3 WARP",
|
| 409 |
+
"text_level": 1,
|
| 410 |
+
"bbox": [
|
| 411 |
+
510,
|
| 412 |
+
670,
|
| 413 |
+
606,
|
| 414 |
+
686
|
| 415 |
+
],
|
| 416 |
+
"page_idx": 2
|
| 417 |
+
},
|
| 418 |
+
{
|
| 419 |
+
"type": "text",
|
| 420 |
+
"text": "We follow a setup similar to Elsayed et al. (2019) with some NLP-specific modifications depicted in Figure 2.",
|
| 421 |
+
"bbox": [
|
| 422 |
+
509,
|
| 423 |
+
702,
|
| 424 |
+
884,
|
| 425 |
+
749
|
| 426 |
+
],
|
| 427 |
+
"page_idx": 2
|
| 428 |
+
},
|
| 429 |
+
{
|
| 430 |
+
"type": "text",
|
| 431 |
+
"text": "Our goal is to find the best prompt that will make a pretrained masked language model predict the desired answer (verbalizer token) for a training example's masked token $^2$ . We search for such prompts in the (continuous) embedding space. In other words, we want to find parameters $\\Theta = \\{\\Theta^P,\\Theta^V\\}$ for prompt and verbalizer embed",
|
| 432 |
+
"bbox": [
|
| 433 |
+
509,
|
| 434 |
+
752,
|
| 435 |
+
885,
|
| 436 |
+
866
|
| 437 |
+
],
|
| 438 |
+
"page_idx": 2
|
| 439 |
+
},
|
| 440 |
+
{
|
| 441 |
+
"type": "page_footnote",
|
| 442 |
+
"text": "2This approach can be easily extended to autoregressive language modeling.",
|
| 443 |
+
"bbox": [
|
| 444 |
+
510,
|
| 445 |
+
883,
|
| 446 |
+
885,
|
| 447 |
+
910
|
| 448 |
+
],
|
| 449 |
+
"page_idx": 2
|
| 450 |
+
},
|
| 451 |
+
{
|
| 452 |
+
"type": "text",
|
| 453 |
+
"text": "dings, respectively, such that:",
|
| 454 |
+
"bbox": [
|
| 455 |
+
115,
|
| 456 |
+
74,
|
| 457 |
+
337,
|
| 458 |
+
90
|
| 459 |
+
],
|
| 460 |
+
"page_idx": 3
|
| 461 |
+
},
|
| 462 |
+
{
|
| 463 |
+
"type": "equation",
|
| 464 |
+
"text": "\n$$\n\\Theta^ {*} = \\arg \\max _ {\\Theta} (- \\log P _ {\\Theta} (y | x))\n$$\n",
|
| 465 |
+
"text_format": "latex",
|
| 466 |
+
"bbox": [
|
| 467 |
+
181,
|
| 468 |
+
102,
|
| 469 |
+
421,
|
| 470 |
+
124
|
| 471 |
+
],
|
| 472 |
+
"page_idx": 3
|
| 473 |
+
},
|
| 474 |
+
{
|
| 475 |
+
"type": "text",
|
| 476 |
+
"text": "and the probabilities are given by:",
|
| 477 |
+
"bbox": [
|
| 478 |
+
115,
|
| 479 |
+
135,
|
| 480 |
+
371,
|
| 481 |
+
151
|
| 482 |
+
],
|
| 483 |
+
"page_idx": 3
|
| 484 |
+
},
|
| 485 |
+
{
|
| 486 |
+
"type": "equation",
|
| 487 |
+
"text": "\n$$\nP _ {\\Theta} (y | x) = \\frac {\\exp \\Theta_ {y} ^ {V} f \\left(T _ {\\Theta^ {P}} (x)\\right)}{\\sum_ {i \\in C} \\exp \\Theta_ {i} ^ {V} f \\left(T _ {\\Theta^ {P}} (x)\\right)}\n$$\n",
|
| 488 |
+
"text_format": "latex",
|
| 489 |
+
"bbox": [
|
| 490 |
+
168,
|
| 491 |
+
159,
|
| 492 |
+
435,
|
| 493 |
+
206
|
| 494 |
+
],
|
| 495 |
+
"page_idx": 3
|
| 496 |
+
},
|
| 497 |
+
{
|
| 498 |
+
"type": "text",
|
| 499 |
+
"text": "where $T_{\\Theta^P}(x)$ is the template that inserts the prompt embeddings $\\Theta^P$ into predefined positions, $C$ is the set of classes, and $f(x)$ is the masked language model output (without the last decoder layer, which is simply the transposed word embedding matrix). Both $\\Theta^P$ and $\\Theta^V$ are vectors in the same embeddings space as the word embeddings.",
|
| 500 |
+
"bbox": [
|
| 501 |
+
114,
|
| 502 |
+
218,
|
| 503 |
+
487,
|
| 504 |
+
330
|
| 505 |
+
],
|
| 506 |
+
"page_idx": 3
|
| 507 |
+
},
|
| 508 |
+
{
|
| 509 |
+
"type": "text",
|
| 510 |
+
"text": "In Figure 2, the template $T_{\\Theta^P}(x)$ prepends $\\Theta_1^P$ and appends $\\Theta_2^P, \\Theta_3^P, \\Theta_4^P$ parameters to the word embeddings and uses $\\Theta_+^V$ and $\\Theta_-^V$ to calculate the probabilities on the masked token position for positive and negative classes.",
|
| 511 |
+
"bbox": [
|
| 512 |
+
114,
|
| 513 |
+
331,
|
| 514 |
+
489,
|
| 515 |
+
411
|
| 516 |
+
],
|
| 517 |
+
"page_idx": 3
|
| 518 |
+
},
|
| 519 |
+
{
|
| 520 |
+
"type": "text",
|
| 521 |
+
"text": "3.1 Method",
|
| 522 |
+
"text_level": 1,
|
| 523 |
+
"bbox": [
|
| 524 |
+
115,
|
| 525 |
+
422,
|
| 526 |
+
225,
|
| 527 |
+
435
|
| 528 |
+
],
|
| 529 |
+
"page_idx": 3
|
| 530 |
+
},
|
| 531 |
+
{
|
| 532 |
+
"type": "text",
|
| 533 |
+
"text": "Similar to Elsayed et al. (2019), we employ stochastic gradient descent to find the best adversarial perturbation on the text that will minimize the task objective. First, we insert special prompt tokens [P_1], [P_2], ... [P_K] and an additional [MASK] token into the input sequence. These tokens might be placed before or after the sentences, depending on the prompt template.",
|
| 534 |
+
"bbox": [
|
| 535 |
+
114,
|
| 536 |
+
442,
|
| 537 |
+
489,
|
| 538 |
+
571
|
| 539 |
+
],
|
| 540 |
+
"page_idx": 3
|
| 541 |
+
},
|
| 542 |
+
{
|
| 543 |
+
"type": "text",
|
| 544 |
+
"text": "We set the optimization objective to a cross-entropy loss between the head output of the masked language model and the verbalizer tokens [V_1], [V_2], ..., [V_C] for classes 1...C accordingly.",
|
| 545 |
+
"bbox": [
|
| 546 |
+
115,
|
| 547 |
+
571,
|
| 548 |
+
489,
|
| 549 |
+
651
|
| 550 |
+
],
|
| 551 |
+
"page_idx": 3
|
| 552 |
+
},
|
| 553 |
+
{
|
| 554 |
+
"type": "text",
|
| 555 |
+
"text": "The only trainable parameters are the word embeddings for [P_1], ..., [P_K] and [V_1], ..., [V_C]. In case we want to train models for multiple tasks, these are the only task-specific parameters we need to store. The entire \"body\" of the large language model (all attention layers, feedforward layers, and all other word embeddings) remains untouched.",
|
| 556 |
+
"bbox": [
|
| 557 |
+
115,
|
| 558 |
+
652,
|
| 559 |
+
489,
|
| 560 |
+
778
|
| 561 |
+
],
|
| 562 |
+
"page_idx": 3
|
| 563 |
+
},
|
| 564 |
+
{
|
| 565 |
+
"type": "text",
|
| 566 |
+
"text": "Note that, unlike most adversarial attacks, we do not update the embeddings of the original tokens of the input. This follows the intuition from Elsayed et al. (2019), when the pixels of MNIST or CIFAR images are left untouched, and only padding pixels are updated.",
|
| 567 |
+
"bbox": [
|
| 568 |
+
114,
|
| 569 |
+
780,
|
| 570 |
+
489,
|
| 571 |
+
876
|
| 572 |
+
],
|
| 573 |
+
"page_idx": 3
|
| 574 |
+
},
|
| 575 |
+
{
|
| 576 |
+
"type": "text",
|
| 577 |
+
"text": "We train these parameters by minimizing the loss on the training set of the downstream task.",
|
| 578 |
+
"bbox": [
|
| 579 |
+
115,
|
| 580 |
+
877,
|
| 581 |
+
489,
|
| 582 |
+
909
|
| 583 |
+
],
|
| 584 |
+
"page_idx": 3
|
| 585 |
+
},
|
| 586 |
+
{
|
| 587 |
+
"type": "text",
|
| 588 |
+
"text": "3.2 Implementation Details",
|
| 589 |
+
"text_level": 1,
|
| 590 |
+
"bbox": [
|
| 591 |
+
510,
|
| 592 |
+
74,
|
| 593 |
+
742,
|
| 594 |
+
90
|
| 595 |
+
],
|
| 596 |
+
"page_idx": 3
|
| 597 |
+
},
|
| 598 |
+
{
|
| 599 |
+
"type": "text",
|
| 600 |
+
"text": "WARP is implemented in the AllenNLP framework. For all the GLUE benchmark tasks we use the roberta-large (Liu et al., 2019) model from the PyTorch implementation of huggingface transformers (Wolf et al., 2020) library. For the few-shot experiments, we use albert-xxlarge-v2 in order to directly compare to iPET (Schick and Schütze, 2021b). For the GLUE and SuperGLUE tasks we use dataset loaders and metrics implementations from the huggingface datasets library.",
|
| 601 |
+
"bbox": [
|
| 602 |
+
509,
|
| 603 |
+
96,
|
| 604 |
+
885,
|
| 605 |
+
272
|
| 606 |
+
],
|
| 607 |
+
"page_idx": 3
|
| 608 |
+
},
|
| 609 |
+
{
|
| 610 |
+
"type": "text",
|
| 611 |
+
"text": "The prompt tokens are initialized either with word embeddings of [MASK] or similar to the vectors from the word embedding layer. For the answer prompts, we use the masked language model head, which usually consists of a feedforward network and a decoder on top of it, where the weights of the decoder are shared with the word embeddings used for the input. We calculate the softmax over the verbalizer tokens [V_1], ... [V_C].",
|
| 612 |
+
"bbox": [
|
| 613 |
+
509,
|
| 614 |
+
273,
|
| 615 |
+
885,
|
| 616 |
+
434
|
| 617 |
+
],
|
| 618 |
+
"page_idx": 3
|
| 619 |
+
},
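The class-scoring step described in the text block above (a softmax over verbalizer-token logits produced by the MLM head at the [MASK] position) can be sketched in plain Python; `class_probs` and its argument names are illustrative, not from the paper:

```python
import math

def class_probs(mask_hidden, verbalizer_embeddings):
    """Softmax over verbalizer-token logits at the [MASK] position.

    mask_hidden: length-E vector, the MLM-head output at [MASK].
    verbalizer_embeddings: C rows of length E, one trainable vector per
    class token [V_1] ... [V_C] (tied to the decoder like word embeddings).
    """
    logits = [sum(w * h for w, h in zip(row, mask_hidden))
              for row in verbalizer_embeddings]
    m = max(logits)                          # subtract max for stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]
```

Only the C verbalizer rows (and the K prompt vectors) would receive gradients in this setup; the rest of the network stays frozen.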
|
| 620 |
+
{
|
| 621 |
+
"type": "text",
|
| 622 |
+
"text": "We choose the Adam optimizer with a slanted triangular schedule for the learning rate with $6\\%$ warm-up steps and train for 10-20 epochs on each task. Each batch consists of examples containing at most 1024 tokens and 8 examples.",
|
| 623 |
+
"bbox": [
|
| 624 |
+
509,
|
| 625 |
+
435,
|
| 626 |
+
882,
|
| 627 |
+
514
|
| 628 |
+
],
|
| 629 |
+
"page_idx": 3
|
| 630 |
+
},
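The slanted triangular schedule with 6% warm-up mentioned above can be approximated as follows (a simplified sketch; AllenNLP's own `SlantedTriangular` scheduler has additional parameters such as the decay ratio):

```python
def slanted_triangular_lr(step, total_steps, max_lr, warmup_frac=0.06):
    """Linear warm-up for the first warmup_frac of steps, then linear
    decay back to zero (simplified; the real schedule decays toward
    max_lr / ratio rather than zero)."""
    warmup_steps = max(1, int(total_steps * warmup_frac))
    if step < warmup_steps:
        return max_lr * step / warmup_steps
    return max_lr * (total_steps - step) / (total_steps - warmup_steps)
```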
|
| 631 |
+
{
|
| 632 |
+
"type": "text",
|
| 633 |
+
"text": "In order to speed up the training, we disable the dropout of the pretrained language model. All the experiments are performed on two Titan Vs and two RTX 3080 GPUs, with mixed precision training. In practice, WARP is 2.5-3 times faster than regular fine-tuning and 2 times slower than frozen-features experiments in terms of epoch duration with the same batch sizes.",
|
| 634 |
+
"bbox": [
|
| 635 |
+
509,
|
| 636 |
+
514,
|
| 637 |
+
885,
|
| 638 |
+
643
|
| 639 |
+
],
|
| 640 |
+
"page_idx": 3
|
| 641 |
+
},
|
| 642 |
+
{
|
| 643 |
+
"type": "text",
|
| 644 |
+
"text": "Details about the hyperparameters can be found in the Supplementary material.",
|
| 645 |
+
"bbox": [
|
| 646 |
+
509,
|
| 647 |
+
644,
|
| 648 |
+
882,
|
| 649 |
+
676
|
| 650 |
+
],
|
| 651 |
+
"page_idx": 3
|
| 652 |
+
},
|
| 653 |
+
{
|
| 654 |
+
"type": "text",
|
| 655 |
+
"text": "4 Experiments on GLUE",
|
| 656 |
+
"text_level": 1,
|
| 657 |
+
"bbox": [
|
| 658 |
+
510,
|
| 659 |
+
689,
|
| 660 |
+
744,
|
| 661 |
+
706
|
| 662 |
+
],
|
| 663 |
+
"page_idx": 3
|
| 664 |
+
},
|
| 665 |
+
{
|
| 666 |
+
"type": "text",
|
| 667 |
+
"text": "Following prior work, we evaluate our method on the GLUE Benchmark (Wang et al., 2019b), which consists of 9 natural language understanding tasks. Generally, we perform single-task WARP training, with early stopping and model selection using the original validation sets, if not stated otherwise.",
|
| 668 |
+
"bbox": [
|
| 669 |
+
509,
|
| 670 |
+
715,
|
| 671 |
+
882,
|
| 672 |
+
810
|
| 673 |
+
],
|
| 674 |
+
"page_idx": 3
|
| 675 |
+
},
|
| 676 |
+
{
|
| 677 |
+
"type": "text",
|
| 678 |
+
"text": "4.1 Tasks",
|
| 679 |
+
"text_level": 1,
|
| 680 |
+
"bbox": [
|
| 681 |
+
510,
|
| 682 |
+
822,
|
| 683 |
+
603,
|
| 684 |
+
837
|
| 685 |
+
],
|
| 686 |
+
"page_idx": 3
|
| 687 |
+
},
|
| 688 |
+
{
|
| 689 |
+
"type": "text",
|
| 690 |
+
"text": "Almost all the tasks from the GLUE Benchmark are either sentence classification or sentence pair classification tasks, so WARP requires very few modifications to adapt to each of the tasks.",
|
| 691 |
+
"bbox": [
|
| 692 |
+
509,
|
| 693 |
+
845,
|
| 694 |
+
882,
|
| 695 |
+
909
|
| 696 |
+
],
|
| 697 |
+
"page_idx": 3
|
| 698 |
+
},
|
| 699 |
+
{
|
| 700 |
+
"type": "table",
|
| 701 |
+
"img_path": "images/695e02cb81240a9dba7d40bb95d2772afa576e7f8cb11052dad917868f04d448.jpg",
|
| 702 |
+
"table_caption": [],
|
| 703 |
+
"table_footnote": [
|
| 704 |
+
"Table 1: Test set results on GLUE Benchmark. The results are obtained from the GLUE Evaluation server. The subscript next to TinyBERT corresponds to the number of layers in the model. WARP for RTE, STS-B and MRPC are initialized from the MNLI parameters. Results for WNLI are not shown, although they are counted in the averaged GLUE score (AVG column). The last column # shows the number of trainable parameters. WARP's average performance is higher than all models with up to three orders of magnitude more trainable parameters. Fully fine-tuned RoBERTa and the current state-of-the-art method (DeBERT) score higher by 6.5 and 9.2 points, respectively."
|
| 705 |
+
],
|
| 706 |
+
"table_body": "<table><tr><td></td><td>MNLI</td><td>QNLI</td><td>QQP</td><td>RTE</td><td>SST</td><td>MRPC</td><td>CoLA</td><td>STS-B</td><td>AVG</td><td>#</td></tr><tr><td>Human Baselines</td><td>92.0 / 92.8</td><td>91.2</td><td>59.5 / 80.4</td><td>93.6</td><td>97.8</td><td>86.3 / 80.8</td><td>66.4</td><td>92.7 / 92.6</td><td>87.1</td><td></td></tr><tr><td>DeBERT3</td><td>91.9 / 91.6</td><td>99.2</td><td>76.2 / 90.8</td><td>93.2</td><td>97.5</td><td>94.0 / 92.0</td><td>71.5</td><td>92.9 / 92.6</td><td>90.8</td><td>3·109</td></tr><tr><td>RoBERTa</td><td>90.8 / 90.2</td><td>95.4</td><td>74.3 / 90.2</td><td>88.2</td><td>96.7</td><td>92.3 / 89.8</td><td>67.8</td><td>92.2 / 91.9</td><td>88.1</td><td>355·106</td></tr><tr><td>BERTlarge</td><td>86.7 / 85.9</td><td>92.7</td><td>72.1 / 89.3</td><td>70.1</td><td>94.9</td><td>89.3 / 85.4</td><td>60.5</td><td>87.6 / 86.5</td><td>80.5</td><td>355·106</td></tr><tr><td>BERTbase</td><td>84.6 / 83.4</td><td>90.5</td><td>71.2 / 89.2</td><td>66.4</td><td>93.5</td><td>88.9 / 84.8</td><td>52.1</td><td>87.1 / 85.8</td><td>78.3</td><td>110·106</td></tr><tr><td>TinyBERT6</td><td>84.6 / 83.2</td><td>90.4</td><td>71.6 / 89.1</td><td>70.0</td><td>93.1</td><td>87.3 / 82.6</td><td>51.1</td><td>85.0 / 83.7</td><td>78.1</td><td>67·106</td></tr><tr><td>TinyBERT4</td><td>82.5 / 81.8</td><td>87.7</td><td>71.3 / 89.2</td><td>66.6</td><td>92.6</td><td>86.4 / 81.2</td><td>44.1</td><td>81.9 / 80.4</td><td>75.9</td><td>15·106</td></tr><tr><td>ELECTRAsmall</td><td>81.6 / 81.2</td><td>88.3</td><td>70.4 / 88.0</td><td>63.6</td><td>91.1</td><td>89.0 / 84.9</td><td>55.6</td><td>85.6 / 84.6</td><td>77.4</td><td>14·106</td></tr><tr><td>Adapters (BERT)</td><td>85.4 / 85.0</td><td>92.4</td><td>71.5 / 89.4</td><td>71.6</td><td>94.3</td><td>88.7 / 84.3</td><td>59.2</td><td>87.3 / 86.1</td><td>80.2</td><td>1.2·106</td></tr><tr><td>WARP (RoBERTa)</td><td>88.0 / 88.2</td><td>93.5</td><td>68.6 / 87.7</td><td>84.3</td><td>96.3</td><td>88.2 / 
83.9</td><td>53.9</td><td>89.5 / 88.8</td><td>81.6</td><td><25K</td></tr></table>",
|
| 707 |
+
"bbox": [
|
| 708 |
+
115,
|
| 709 |
+
71,
|
| 710 |
+
885,
|
| 711 |
+
236
|
| 712 |
+
],
|
| 713 |
+
"page_idx": 4
|
| 714 |
+
},
|
| 715 |
+
{
|
| 716 |
+
"type": "text",
|
| 717 |
+
"text": "SST-2 (Sentence Sentiment Treebank, Socher et al., 2013) is a single sentence binary classification task. For the prompt, we put a [MASK] token after the sentence, and the trainable prompt tokens are both appended and prepended to the sentence.",
|
| 718 |
+
"bbox": [
|
| 719 |
+
114,
|
| 720 |
+
370,
|
| 721 |
+
487,
|
| 722 |
+
450
|
| 723 |
+
],
|
| 724 |
+
"page_idx": 4
|
| 725 |
+
},
|
| 726 |
+
{
|
| 727 |
+
"type": "text",
|
| 728 |
+
"text": "CoLA (Corpus of Linguistic Acceptability, Warstadt et al., 2019) is a single sentence classification task as well, so we treat both the same way with the only difference that as a validation metric we use accuracy for SST-2, and Matthew's correlation for CoLA.",
|
| 729 |
+
"bbox": [
|
| 730 |
+
114,
|
| 731 |
+
451,
|
| 732 |
+
489,
|
| 733 |
+
546
|
| 734 |
+
],
|
| 735 |
+
"page_idx": 4
|
| 736 |
+
},
|
| 737 |
+
{
|
| 738 |
+
"type": "text",
|
| 739 |
+
"text": "MNLI (MultiNLI, Multi-Genre Natural Language Inference, Williams et al., 2018), QNLI (Question Natural Language Inference, Rajpurkar et al., 2016) and RTE (Recognizing Textual Entailment, Dagan et al., 2006; Bar Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009) are sentence pair classification tasks. Similar to Schick and Schütze (2021a), we may have prompt tokens before, after and between the two sentences, but the [MASK] token is always put between the sentences. For MNLI, we use matched accuracy as a validation metric and use the same model for the mismatched version. In our few-shot attempt for the RTE task, we use a different training and evaluation setup discussed in Section 5.2. QQP (Quora Question Pairs $^4$ ) and MRPC (Microsoft Research Paraphrase Corpus, Dolan and Brockett, 2005) follow the same prompt pattern as NLI tasks. As a validation metric $F_1$ score is used,",
|
| 740 |
+
"bbox": [
|
| 741 |
+
114,
|
| 742 |
+
548,
|
| 743 |
+
489,
|
| 744 |
+
852
|
| 745 |
+
],
|
| 746 |
+
"page_idx": 4
|
| 747 |
+
},
|
| 748 |
+
{
|
| 749 |
+
"type": "text",
|
| 750 |
+
"text": "STS-B (Semantic Textual Similarity Bench",
|
| 751 |
+
"bbox": [
|
| 752 |
+
132,
|
| 753 |
+
854,
|
| 754 |
+
487,
|
| 755 |
+
872
|
| 756 |
+
],
|
| 757 |
+
"page_idx": 4
|
| 758 |
+
},
|
| 759 |
+
{
|
| 760 |
+
"type": "text",
|
| 761 |
+
"text": "mark, Cer et al., 2017), unlike the other tasks in the benchmark, is formulated as a regression task. The prompt pattern is the same, but instead of introducing new embeddings for $[V\\_ 1]$ , $[V\\_ 2]$ , ..., $[V\\_ C]$ verbalizer tokens, we add a regression head to the last hidden state of MLM head and use Mean Squares Error optimization objective, similar to (Liu et al., 2019). Pearson Correlation is used as the validation metric. During inference, we clip the scores within [1, 5].",
|
| 762 |
+
"bbox": [
|
| 763 |
+
509,
|
| 764 |
+
370,
|
| 765 |
+
884,
|
| 766 |
+
530
|
| 767 |
+
],
|
| 768 |
+
"page_idx": 4
|
| 769 |
+
},
|
| 770 |
+
{
|
| 771 |
+
"type": "text",
|
| 772 |
+
"text": "We follow Liu et al. and train models for MRPC, STS-B, and RTE tasks initialized with the parameters from the best MNLI model but do not apply any task-specific tricks to WNLI (Winograd Schema Challenge NLI, Levesque et al., 2011) and always predict the majority label.",
|
| 773 |
+
"bbox": [
|
| 774 |
+
509,
|
| 775 |
+
532,
|
| 776 |
+
885,
|
| 777 |
+
629
|
| 778 |
+
],
|
| 779 |
+
"page_idx": 4
|
| 780 |
+
},
|
| 781 |
+
{
|
| 782 |
+
"type": "text",
|
| 783 |
+
"text": "4.2 Results",
|
| 784 |
+
"text_level": 1,
|
| 785 |
+
"bbox": [
|
| 786 |
+
510,
|
| 787 |
+
644,
|
| 788 |
+
616,
|
| 789 |
+
657
|
| 790 |
+
],
|
| 791 |
+
"page_idx": 4
|
| 792 |
+
},
|
| 793 |
+
{
|
| 794 |
+
"type": "text",
|
| 795 |
+
"text": "Table 1 presents the results on the test set obtained from the GLUE evaluation server. Besides our best WARP models, we also include the human baselines, current state-of-the-art model (He et al., 2020), the regular fine-tuned pretrained model we use, and also include relatively small language models, including (Jiao et al., 2020), (Clark et al., 2020), (Houlsby et al., 2019).",
|
| 796 |
+
"bbox": [
|
| 797 |
+
509,
|
| 798 |
+
667,
|
| 799 |
+
884,
|
| 800 |
+
794
|
| 801 |
+
],
|
| 802 |
+
"page_idx": 4
|
| 803 |
+
},
|
| 804 |
+
{
|
| 805 |
+
"type": "text",
|
| 806 |
+
"text": "With the GLUE Score, WARP outperforms all the models that train less than 25 million parameters on the leaderboard. We explain the relatively strong WARP results on textual entailment tasks by the easier reformulation of such tasks. Likewise, we explain the relatively weak performance on CoLA by the difficulties of reformulating the",
|
| 807 |
+
"bbox": [
|
| 808 |
+
509,
|
| 809 |
+
797,
|
| 810 |
+
885,
|
| 811 |
+
910
|
| 812 |
+
],
|
| 813 |
+
"page_idx": 4
|
| 814 |
+
},
|
| 815 |
+
{
|
| 816 |
+
"type": "page_footnote",
|
| 817 |
+
"text": "4https://www.quora.com/q/quoradata/First-QuoraDataset-Release-Question-Pairs",
|
| 818 |
+
"bbox": [
|
| 819 |
+
115,
|
| 820 |
+
882,
|
| 821 |
+
445,
|
| 822 |
+
909
|
| 823 |
+
],
|
| 824 |
+
"page_idx": 4
|
| 825 |
+
},
|
| 826 |
+
{
|
| 827 |
+
"type": "table",
|
| 828 |
+
"img_path": "images/da0365f5b939deefe423b55f356ae63a439c47415b162fadcbcc0c6cee111b40.jpg",
|
| 829 |
+
"table_caption": [],
|
| 830 |
+
"table_footnote": [
|
| 831 |
+
"Table 2: Dev set results on GLUE tasks. The last column shows the number of trainable parameters only. $\\mathsf{WARP}_i$ corresponds to WARP training with prompt consisting of $i$ prompt tokens. $\\mathsf{WARP}_{\\mathsf{MNLI}}$ corresponds to WARP training initialized with the best MNLI parameters. All the models are based on pretrained roberta-large, and for Adapters and WARP-based approaches require to store $355 \\cdot 10^{6}$ frozen parameters shared across all the GLUE tasks. We show the primary validation metric for each task, described at Subsection 4.1. The AVG column shows the average of shown metrics and is not comparable to the Test server GLUE Score. The number of parameters for WARP methods may vary because of a difference in the number of classes. Underlined numbers correspond to our GLUE submission."
|
| 832 |
+
],
|
| 833 |
+
"table_body": "<table><tr><td>train size</td><td>MNLI 392702</td><td>QNLI 104743</td><td>QQP 363846</td><td>RTE 2490</td><td>SST 67349</td><td>MRPC 3668</td><td>CoLA 8551</td><td>STS-B 5749</td><td>AVG</td><td>#</td></tr><tr><td>Fine-Tuning</td><td>90.2</td><td>94.7</td><td>92.2</td><td>86.6</td><td>96.4</td><td>90.9</td><td>68.0</td><td>92.4</td><td>88.9</td><td>\\( {355} \\cdot {10}^{6} \\)</td></tr><tr><td>Adapters</td><td>90.4</td><td>94.7</td><td>88.5</td><td>83.4</td><td>96.3</td><td>92.9</td><td>67.4</td><td>92.5</td><td>88.3</td><td>\\( 3 \\cdot {10}^{6} \\)</td></tr><tr><td>Linear Classifier</td><td>64.2</td><td>78.1</td><td>74.9</td><td>59.2</td><td>88.4</td><td>82.5</td><td>48.9</td><td>71.8</td><td>71.0</td><td>≤ 3072</td></tr><tr><td>\\( {\\mathrm{{WARP}}}_{0} \\)</td><td>70.9</td><td>78.8</td><td>77.1</td><td>72.2</td><td>89.8</td><td>83.8</td><td>32.8</td><td>73.8</td><td>72.4</td><td>≤ 3072</td></tr><tr><td>\\( {\\mathrm{{WARP}}}_{1} \\)</td><td>83.9</td><td>87.6</td><td>81.6</td><td>72.6</td><td>93.8</td><td>84.7</td><td>46.1</td><td>80.4</td><td>78.8</td><td>≤ 4096</td></tr><tr><td>\\( {\\mathrm{{WARP}}}_{2} \\)</td><td>85.4</td><td>88.0</td><td>81.5</td><td>69.7</td><td>94.3</td><td>85.3</td><td>54.4</td><td>80.8</td><td>79.9</td><td>≤ 5120</td></tr><tr><td>\\( {\\mathrm{{WARP}}}_{4} \\)</td><td>86.9</td><td>92.4</td><td>83.1</td><td>68.2</td><td>95.9</td><td>85.0</td><td>56.0</td><td>75.5</td><td>80.4</td><td>≤ 7168</td></tr><tr><td>\\( {\\mathrm{{WARP}}}_{8} \\)</td><td>87.6</td><td>93.0</td><td>83.8</td><td>72.9</td><td>95.4</td><td>85.6</td><td>57.4</td><td>81.0</td><td>82.1</td><td>< 11K</td></tr><tr><td>\\( {\\mathrm{{WARP}}}_{\\text{init }} \\)</td><td>86.8</td><td>90.4</td><td>83.6</td><td>80.1</td><td>96.0</td><td>86.0</td><td>51.7</td><td>86.9</td><td>82.7</td><td>< 11K</td></tr><tr><td>\\( {\\mathrm{{WARP}}}_{20} \\)</td><td>\\( \\underline{88.2} \\)</td><td>\\( \\underline{93.5} \\)</td><td>\\( \\underline{84.5} 
\\)</td><td>75.8</td><td>\\( \\underline{96.0} \\)</td><td>90.8</td><td>\\( \\underline{60.6} \\)</td><td>88.6</td><td>84.8</td><td>< 25K</td></tr><tr><td>\\( {\\mathrm{{WARP}}}_{\\mathrm{{MNLI}}} \\)</td><td></td><td></td><td></td><td>\\( \\underline{86.3} \\)</td><td></td><td>\\( \\underline{91.2} \\)</td><td></td><td>\\( \\underline{91.0} \\)</td><td>86.4</td><td>< 25K</td></tr></table>",
|
| 834 |
+
"bbox": [
|
| 835 |
+
117,
|
| 836 |
+
71,
|
| 837 |
+
884,
|
| 838 |
+
263
|
| 839 |
+
],
|
| 840 |
+
"page_idx": 5
|
| 841 |
+
},
|
| 842 |
+
{
|
| 843 |
+
"type": "text",
|
| 844 |
+
"text": "task into a Cloze task.",
|
| 845 |
+
"bbox": [
|
| 846 |
+
114,
|
| 847 |
+
413,
|
| 848 |
+
282,
|
| 849 |
+
426
|
| 850 |
+
],
|
| 851 |
+
"page_idx": 5
|
| 852 |
+
},
|
| 853 |
+
{
|
| 854 |
+
"type": "text",
|
| 855 |
+
"text": "To further analyze WARP, we conduct several experiments and focus on dev set results. In order to directly compare WARP with existing methods, we report in Table 2 different methods that use RoBERTa, including fine-tuning, linear classifiers on top, AutoPrompt, and Adapters. For WARP experiments, we compare performance with different numbers of prompt tokens.",
|
| 856 |
+
"bbox": [
|
| 857 |
+
114,
|
| 858 |
+
430,
|
| 859 |
+
487,
|
| 860 |
+
558
|
| 861 |
+
],
|
| 862 |
+
"page_idx": 5
|
| 863 |
+
},
|
| 864 |
+
{
|
| 865 |
+
"type": "text",
|
| 866 |
+
"text": "The $\\mathrm{WARP_0}$ model does not introduce any prompt parameters. The only difference between $\\mathrm{WARP_0}$ and Linear Classifier is that for $\\mathrm{WARP_0}$ , [MASK] is added to the input of each sample, and we get sentence representations from the MLM head at the masked position. By contrast, in the case of the Linear Classifier, we use the average of non-special token embeddings as sentence representations. As we can see, pooling with MLM is significantly better.",
|
| 867 |
+
"bbox": [
|
| 868 |
+
114,
|
| 869 |
+
560,
|
| 870 |
+
489,
|
| 871 |
+
720
|
| 872 |
+
],
|
| 873 |
+
"page_idx": 5
|
| 874 |
+
},
|
| 875 |
+
{
|
| 876 |
+
"type": "text",
|
| 877 |
+
"text": "Table 2 shows that, as we decrease the number of trainable prompt parameters, the performance decreases, but the model still works. Similar behavior was observed by Elsayed et al. (2019) in experiments with different padding parameter sizes. However, in contrast to WARP, the number of trainable parameters in that work are much greater than the size of the input.",
|
| 878 |
+
"bbox": [
|
| 879 |
+
114,
|
| 880 |
+
722,
|
| 881 |
+
489,
|
| 882 |
+
851
|
| 883 |
+
],
|
| 884 |
+
"page_idx": 5
|
| 885 |
+
},
|
| 886 |
+
{
|
| 887 |
+
"type": "text",
|
| 888 |
+
"text": "An important benefit of using WARP is that",
|
| 889 |
+
"bbox": [
|
| 890 |
+
134,
|
| 891 |
+
852,
|
| 892 |
+
489,
|
| 893 |
+
869
|
| 894 |
+
],
|
| 895 |
+
"page_idx": 5
|
| 896 |
+
},
|
| 897 |
+
{
|
| 898 |
+
"type": "text",
|
| 899 |
+
"text": "it can be initialized with manual prompts. In addition to the regular models where we initialize with [MASK] tokens, we performed a run on the GLUE datasets with the same prompt [CLS] \"S1\"? [MASK]. \"S2\"! [SEP] for all the tasks (without S2 for single-sentence tasks). We denote these results as WARPinit in Table 2. WARPinit outperforms WARP8 on tasks with relatively few training examples — RTE, MRPC and STSB, which indicates its potential in the low-data regime.",
|
| 900 |
+
"bbox": [
|
| 901 |
+
509,
|
| 902 |
+
412,
|
| 903 |
+
884,
|
| 904 |
+
594
|
| 905 |
+
],
|
| 906 |
+
"page_idx": 5
|
| 907 |
+
},
|
| 908 |
+
{
|
| 909 |
+
"type": "text",
|
| 910 |
+
"text": "5 Few-Shot Experiments",
|
| 911 |
+
"text_level": 1,
|
| 912 |
+
"bbox": [
|
| 913 |
+
510,
|
| 914 |
+
607,
|
| 915 |
+
742,
|
| 916 |
+
625
|
| 917 |
+
],
|
| 918 |
+
"page_idx": 5
|
| 919 |
+
},
|
| 920 |
+
{
|
| 921 |
+
"type": "text",
|
| 922 |
+
"text": "The fact that WARP can be initialized using manually designed natural prompts suggests that we can similarly benefit from such human attribution similar to iPET (Schick and Schütze, 2021b), especially in scenarios with limited training data.",
|
| 923 |
+
"bbox": [
|
| 924 |
+
509,
|
| 925 |
+
634,
|
| 926 |
+
882,
|
| 927 |
+
715
|
| 928 |
+
],
|
| 929 |
+
"page_idx": 5
|
| 930 |
+
},
|
| 931 |
+
{
|
| 932 |
+
"type": "text",
|
| 933 |
+
"text": "5.1 Setup",
|
| 934 |
+
"text_level": 1,
|
| 935 |
+
"bbox": [
|
| 936 |
+
510,
|
| 937 |
+
727,
|
| 938 |
+
603,
|
| 939 |
+
741
|
| 940 |
+
],
|
| 941 |
+
"page_idx": 5
|
| 942 |
+
},
|
| 943 |
+
{
|
| 944 |
+
"type": "text",
|
| 945 |
+
"text": "For our few-shot experiments we build WARP on top of ALBERT (Lan et al., 2020), the same pretrained model used by PET and iPET. To initialize WARP prompts, we use the same Prompt-Verbalizer Patterns (PVP) from Schick and Schütze (2021b): the embeddings for [P_1], [P_2]... [P_N] are initialized with PVP's prompt token embeddings, and embeddings for [V_1], [V_2]... [V_C] are initialized with verbalizer token embeddings for their corre",
|
| 946 |
+
"bbox": [
|
| 947 |
+
509,
|
| 948 |
+
747,
|
| 949 |
+
884,
|
| 950 |
+
910
|
| 951 |
+
],
|
| 952 |
+
"page_idx": 5
|
| 953 |
+
},
|
| 954 |
+
{
|
| 955 |
+
"type": "page_footnote",
|
| 956 |
+
"text": "5Unlike in Table 2,Adapters in Table 1 are built on bert-large-uncased model.",
|
| 957 |
+
"bbox": [
|
| 958 |
+
114,
|
| 959 |
+
882,
|
| 960 |
+
489,
|
| 961 |
+
909
|
| 962 |
+
],
|
| 963 |
+
"page_idx": 5
|
| 964 |
+
},
|
| 965 |
+
{
|
| 966 |
+
"type": "text",
|
| 967 |
+
"text": "sponding classes. Unlike roberta-large, the alberta-xxlarge-v2 uses word embeddings of size 128 (8 times smaller than RoBERTa).",
|
| 968 |
+
"bbox": [
|
| 969 |
+
115,
|
| 970 |
+
74,
|
| 971 |
+
489,
|
| 972 |
+
122
|
| 973 |
+
],
|
| 974 |
+
"page_idx": 6
|
| 975 |
+
},
|
| 976 |
+
{
|
| 977 |
+
"type": "text",
|
| 978 |
+
"text": "5.2 Tasks",
|
| 979 |
+
"text_level": 1,
|
| 980 |
+
"bbox": [
|
| 981 |
+
115,
|
| 982 |
+
139,
|
| 983 |
+
206,
|
| 984 |
+
153
|
| 985 |
+
],
|
| 986 |
+
"page_idx": 6
|
| 987 |
+
},
|
| 988 |
+
{
|
| 989 |
+
"type": "text",
|
| 990 |
+
"text": "In order to compare with GPT-3, PET, and iPET, we use two tasks from FewGLUE (Schick and Schütze, 2021b), which is a few-shot subset of the SuperGLUE benchmark (Wang et al., 2019a) consisting of 32 examples for each task. The dataset also provides 20000 additional unlabeled examples, however, we do not make use of them and work in a purely supervised setup.",
|
| 991 |
+
"bbox": [
|
| 992 |
+
114,
|
| 993 |
+
162,
|
| 994 |
+
489,
|
| 995 |
+
291
|
| 996 |
+
],
|
| 997 |
+
"page_idx": 6
|
| 998 |
+
},
|
| 999 |
+
{
|
| 1000 |
+
"type": "text",
|
| 1001 |
+
"text": "CB: CommitmentBank (de Marneffe et al., 2019) is a textual entailment task which we treat like the other sentence pair classification tasks. To initialize the prompt we use the template [CLS] \"h\"? [MASK]. \"p\" [SEP]. We also initialize [V-1], [V-2], [V-3] token embeddings with _yes, _no and _maybe (respectively for entailment, contradiction and neutral).",
|
| 1002 |
+
"bbox": [
|
| 1003 |
+
114,
|
| 1004 |
+
294,
|
| 1005 |
+
489,
|
| 1006 |
+
443
|
| 1007 |
+
],
|
| 1008 |
+
"page_idx": 6
|
| 1009 |
+
},
|
| 1010 |
+
{
|
| 1011 |
+
"type": "text",
|
| 1012 |
+
"text": "RTE: Unlike experiments on the RTE task for the full-sized training in the GLUE benchmark, we do not initialize the model with vectors from MNLI. Instead, the prompt is initialized exactly the same way as in the CB task. The only difference is that we have only the two tokens [V_1] and [V_2] initialized with _yes and _instead (for entailment and not_ entailment, respectively).",
|
| 1013 |
+
"bbox": [
|
| 1014 |
+
114,
|
| 1015 |
+
445,
|
| 1016 |
+
489,
|
| 1017 |
+
589
|
| 1018 |
+
],
|
| 1019 |
+
"page_idx": 6
|
| 1020 |
+
},
|
| 1021 |
+
{
|
| 1022 |
+
"type": "text",
|
| 1023 |
+
"text": "5.3 Model Selection",
|
| 1024 |
+
"text_level": 1,
|
| 1025 |
+
"bbox": [
|
| 1026 |
+
115,
|
| 1027 |
+
606,
|
| 1028 |
+
287,
|
| 1029 |
+
621
|
| 1030 |
+
],
|
| 1031 |
+
"page_idx": 6
|
| 1032 |
+
},
|
| 1033 |
+
{
|
| 1034 |
+
"type": "text",
|
| 1035 |
+
"text": "Although all trainable parameters are manually initialized in this setup, different random seeds can yield different results because of the order the training examples appear during an epoch.",
|
| 1036 |
+
"bbox": [
|
| 1037 |
+
114,
|
| 1038 |
+
631,
|
| 1039 |
+
487,
|
| 1040 |
+
695
|
| 1041 |
+
],
|
| 1042 |
+
"page_idx": 6
|
| 1043 |
+
},
|
| 1044 |
+
{
|
| 1045 |
+
"type": "text",
|
| 1046 |
+
"text": "In the few-shot setup we cannot access the original validation set. Thus, we disable early stopping and simply pick the last checkpoint.",
|
| 1047 |
+
"bbox": [
|
| 1048 |
+
114,
|
| 1049 |
+
697,
|
| 1050 |
+
487,
|
| 1051 |
+
745
|
| 1052 |
+
],
|
| 1053 |
+
"page_idx": 6
|
| 1054 |
+
},
|
| 1055 |
+
{
|
| 1056 |
+
"type": "text",
|
| 1057 |
+
"text": "In order to find the best initial learning rate, we conduct 20 runs of WARP with the same learning rate each time by randomly choosing 16 training examples and taking the rest for a development set. We repeat this for all candidate learning rates and choose the one with the best average validation performance across all the random seeds.",
|
| 1058 |
+
"bbox": [
|
| 1059 |
+
114,
|
| 1060 |
+
747,
|
| 1061 |
+
487,
|
| 1062 |
+
859
|
| 1063 |
+
],
|
| 1064 |
+
"page_idx": 6
|
| 1065 |
+
},
|
| 1066 |
+
{
|
| 1067 |
+
"type": "text",
|
| 1068 |
+
"text": "Finally, in order to eliminate the effect of different random seeds, we build an ensemble model from 20 WARP runs using simple majority vote.",
|
| 1069 |
+
"bbox": [
|
| 1070 |
+
114,
|
| 1071 |
+
860,
|
| 1072 |
+
487,
|
| 1073 |
+
910
|
| 1074 |
+
],
|
| 1075 |
+
"page_idx": 6
|
| 1076 |
+
},
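The majority-vote ensembling of the 20 runs described above can be sketched as follows (an illustrative helper, not the paper's code):

```python
from collections import Counter

def majority_vote(runs):
    """Combine per-run label predictions by simple majority vote.

    runs: one list of predicted labels per WARP run (all the same length).
    Ties break toward the label encountered first, an arbitrary choice.
    """
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*runs)]
```

For example, `majority_vote([[0, 1, 1], [0, 0, 1], [1, 1, 1]])` returns `[0, 1, 1]`.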
|
| 1077 |
+
{
|
| 1078 |
+
"type": "table",
|
| 1079 |
+
"img_path": "images/f93f634ede20138b3d197860da84b1da9ba45c1f474b223b4b8b540f02b43405.jpg",
|
| 1080 |
+
"table_caption": [],
|
| 1081 |
+
"table_footnote": [],
|
| 1082 |
+
"table_body": "<table><tr><td></td><td>Model</td><td>CB\nF1 / Acc.</td><td>RTE\nAcc.</td></tr><tr><td rowspan=\"6\">dev</td><td>GPT-3 Small</td><td>26.1 / 42.9</td><td>52.3</td></tr><tr><td>GPT-3 Med</td><td>40.4 / 58.9</td><td>48.4</td></tr><tr><td>GPT-3</td><td>57.2 / 82.1</td><td>72.9</td></tr><tr><td>PET (ALBERT)</td><td>59.4 / 85.1</td><td>69.8</td></tr><tr><td>iPET (ALBERT)</td><td>92.4 / 92.9</td><td>74.0</td></tr><tr><td>WARPinit (ALBERT)</td><td>84.0 / 87.5</td><td>71.8</td></tr><tr><td rowspan=\"4\">test</td><td>GPT-3</td><td>52.0 / 75.6</td><td>69.0</td></tr><tr><td>PET (ALBERT)</td><td>60.2 / 87.2</td><td>67.2</td></tr><tr><td>iPET (ALBERT)</td><td>79.9 / 88.8</td><td>70.8</td></tr><tr><td>WARPinit (ALBERT)</td><td>70.2 / 82.4</td><td>69.1</td></tr></table>",
|
| 1083 |
+
"bbox": [
|
| 1084 |
+
512,
|
| 1085 |
+
72,
|
| 1086 |
+
884,
|
| 1087 |
+
271
|
| 1088 |
+
],
|
| 1089 |
+
"page_idx": 6
|
| 1090 |
+
},
|
| 1091 |
+
{
|
| 1092 |
+
"type": "text",
|
| 1093 |
+
"text": "Table 3: Results on SuperGLUE benchmark. The results for the test set are obtained from SuperGLUE evaluation server. We only show systems performing in a similar few-shot training setup using 32 examples.",
|
| 1094 |
+
"bbox": [
|
| 1095 |
+
509,
|
| 1096 |
+
279,
|
| 1097 |
+
884,
|
| 1098 |
+
337
|
| 1099 |
+
],
|
| 1100 |
+
"page_idx": 6
|
| 1101 |
+
},
|
| 1102 |
+
{
|
| 1103 |
+
"type": "text",
|
| 1104 |
+
"text": "5.4 Results",
|
| 1105 |
+
"text_level": 1,
|
| 1106 |
+
"bbox": [
|
| 1107 |
+
510,
|
| 1108 |
+
362,
|
| 1109 |
+
615,
|
| 1110 |
+
376
|
| 1111 |
+
],
|
| 1112 |
+
"page_idx": 6
|
| 1113 |
+
},
|
| 1114 |
+
{
|
| 1115 |
+
"type": "text",
|
| 1116 |
+
"text": "As seen in Table 3, WARP outperforms PET and GPT-3 baselines, but stays behind iPET on both tasks. GPT-3 has 170B parameters, but none of them is being trained for the given tasks. PET and iPET have 255M parameters, and all of them are trained for these tasks. Additionally, they leverage unlabeled examples using distillation. WARP has roughly the same 255M parameters, but only 1024 of them are trained for any single model. An ensemble of 20 WARP models has slightly more than 20K trainable parameters.",
|
| 1117 |
+
"bbox": [
|
| 1118 |
+
509,
|
| 1119 |
+
382,
|
| 1120 |
+
884,
|
| 1121 |
+
561
|
| 1122 |
+
],
|
| 1123 |
+
"page_idx": 6
|
| 1124 |
+
},
|
| 1125 |
+
{
|
| 1126 |
+
"type": "text",
|
| 1127 |
+
"text": "6 Discussion",
|
| 1128 |
+
"text_level": 1,
|
| 1129 |
+
"bbox": [
|
| 1130 |
+
510,
|
| 1131 |
+
573,
|
| 1132 |
+
636,
|
| 1133 |
+
587
|
| 1134 |
+
],
|
| 1135 |
+
"page_idx": 6
|
| 1136 |
+
},
|
| 1137 |
+
{
|
| 1138 |
+
"type": "text",
|
| 1139 |
+
"text": "6.1 Interpreting tokens learned by WARP",
|
| 1140 |
+
"text_level": 1,
|
| 1141 |
+
"bbox": [
|
| 1142 |
+
509,
|
| 1143 |
+
599,
|
| 1144 |
+
855,
|
| 1145 |
+
614
|
| 1146 |
+
],
|
| 1147 |
+
"page_idx": 6
|
| 1148 |
+
},
|
| 1149 |
+
{
|
| 1150 |
+
"type": "text",
|
| 1151 |
+
"text": "WARP learns prompt embeddings in a continuous space. In this section, we explore those embeddings by looking at the nearby token vectors. Table 6 in the Supplementary material lists the closest tokens (in terms of cosine similarity) to the learned embeddings. All GLUE tasks are initialized with [MASK] token, except for RTE, MRPC, and STS-B, which are initialized from the pretrained MNLI model. The prompt tokens of the solutions for those three tasks are quite close to the ones from the MNLI solution. We have seen similar behavior on SuperGLUE experiments with manual initializations. The solution for CoLA (which is one of the worst-performing tasks) is close to the initialized point.",
|
| 1152 |
+
"bbox": [
|
| 1153 |
+
509,
|
| 1154 |
+
619,
|
| 1155 |
+
884,
|
| 1156 |
+
860
|
| 1157 |
+
],
|
| 1158 |
+
"page_idx": 6
|
| 1159 |
+
},
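The nearest-token inspection described above (ranking vocabulary embeddings by cosine similarity to a learned prompt vector) can be sketched as follows; the function and its inputs are illustrative names, not from the paper:

```python
import math

def nearest_tokens(learned_vec, embeddings, vocab, k=5):
    """Return the k vocabulary tokens whose embedding rows have the
    highest cosine similarity to a learned prompt embedding."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) *
                      math.sqrt(sum(y * y for y in b)))
    scored = sorted(((cosine(learned_vec, row), tok)
                     for row, tok in zip(embeddings, vocab)), reverse=True)
    return [(tok, sim) for sim, tok in scored[:k]]
```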
|
| 1160 |
+
{
|
| 1161 |
+
"type": "text",
|
| 1162 |
+
"text": "We do not see any prompt tokens that are meaningful in the context of the tasks. As expected, the verbalized tokens are more interpretable. For",
|
| 1163 |
+
"bbox": [
|
| 1164 |
+
509,
|
| 1165 |
+
860,
|
| 1166 |
+
882,
|
| 1167 |
+
910
|
| 1168 |
+
],
|
| 1169 |
+
"page_idx": 6
|
| 1170 |
+
},
|
| 1171 |
+
{
|
| 1172 |
+
"type": "image",
|
| 1173 |
+
"img_path": "images/1c15f1e61742af3895fa977d64e9716372756814a9a1b5d5cb49ae9f271faf4b.jpg",
|
| 1174 |
+
"image_caption": [
|
| 1175 |
+
"Figure 4: The effect of the training data size for SST-2 task (dev set). Horizontal axis is the number of training examples. Solid lines represent median over 10 runs, and the error bars show minimum and maximum performance. All methods use roberta-large model. The results for AutoPrompt and fine-tuning are taken from (Shin et al., 2020b)"
|
| 1176 |
+
],
|
| 1177 |
+
"image_footnote": [],
|
| 1178 |
+
"bbox": [
|
| 1179 |
+
117,
|
| 1180 |
+
70,
|
| 1181 |
+
489,
|
| 1182 |
+
273
|
| 1183 |
+
],
|
| 1184 |
+
"page_idx": 7
|
| 1185 |
+
},
|
| 1186 |
+
{
|
| 1187 |
+
"type": "text",
|
| 1188 |
+
"text": "example, the embedding for the \"contradiction\" class of MNLI is close to the token \"Unless\". The embeddings for \"negative\" and \"positive\" classes of SST-2 task are close to \"defective\" and \"important\", respectively. Other verbalized tokens are non-interpretable (e.g. \"470\" or word pieces with non-Latin characters).",
|
| 1189 |
+
"bbox": [
|
| 1190 |
+
114,
|
| 1191 |
+
420,
|
| 1192 |
+
489,
|
| 1193 |
+
533
|
| 1194 |
+
],
|
| 1195 |
+
"page_idx": 7
|
| 1196 |
+
},
|
| 1197 |
+
{
|
| 1198 |
+
"type": "text",
|
| 1199 |
+
"text": "6.2 Comparison with AutoPrompt",
|
| 1200 |
+
"text_level": 1,
|
| 1201 |
+
"bbox": [
|
| 1202 |
+
115,
|
| 1203 |
+
548,
|
| 1204 |
+
401,
|
| 1205 |
+
563
|
| 1206 |
+
],
|
| 1207 |
+
"page_idx": 7
|
| 1208 |
+
},
|
| 1209 |
+
{
|
| 1210 |
+
"type": "text",
|
| 1211 |
+
"text": "AutoPrompt (Shin et al., 2020b) learns a prompt for the given task in the finite space of vocabulary tokens. Their best version uses 3 or 6 prompt tokens and reaches $91.2\\%$ accuracy on the development set of SST-2. The search space of WARP is significantly larger, which allows WARP to get better performance with just a single prompt token $(93.8\\%)$ .",
|
| 1212 |
+
"bbox": [
|
| 1213 |
+
114,
|
| 1214 |
+
569,
|
| 1215 |
+
489,
|
| 1216 |
+
697
|
| 1217 |
+
],
|
| 1218 |
+
"page_idx": 7
|
| 1219 |
+
},
|
| 1220 |
+
{
|
| 1221 |
+
"type": "text",
|
| 1222 |
+
"text": "AutoPrompt does not achieve meaningful results on RTE or CB tasks. WARP succeeds on both without manual initialization. Moreover, with manual initialization, WARP gets good performance on both tasks even with just 32 examples (Table 3).",
|
| 1223 |
+
"bbox": [
|
| 1224 |
+
114,
|
| 1225 |
+
699,
|
| 1226 |
+
489,
|
| 1227 |
+
795
|
| 1228 |
+
],
|
| 1229 |
+
"page_idx": 7
|
| 1230 |
+
},
|
| 1231 |
+
{
|
| 1232 |
+
"type": "text",
|
| 1233 |
+
"text": "Figure 4 shows the dependence of the accuracy on SST-2 development set from the number of training samples. Both WARP and AutoPrompt use 10 prompt tokens. With a few hundred training samples or fewer, the difference between the two algorithms is not significant. WARP starts to perform better with more training samples.",
|
| 1234 |
+
"bbox": [
|
| 1235 |
+
114,
|
| 1236 |
+
797,
|
| 1237 |
+
489,
|
| 1238 |
+
910
|
| 1239 |
+
],
|
| 1240 |
+
"page_idx": 7
|
| 1241 |
+
},
|
| 1242 |
+
{
|
| 1243 |
+
"type": "table",
|
| 1244 |
+
"img_path": "images/0f6552d6fd8a632b3d31fc9c98c4089f2236df42ebfca8ea8593dc4911b5a54d.jpg",
|
| 1245 |
+
"table_caption": [],
|
| 1246 |
+
"table_footnote": [],
|
| 1247 |
+
"table_body": "<table><tr><td>Approach</td><td># of parameters to store</td></tr><tr><td>Linear probing</td><td>M + ECN</td></tr><tr><td>Full fine-tuning</td><td>MN</td></tr><tr><td>Single layer</td><td>M + NE(E + C)</td></tr><tr><td>TinyBERT</td><td>M0N</td></tr><tr><td>Adapters</td><td>M + NEE'</td></tr><tr><td>WARP</td><td>M + NE(C + K)</td></tr></table>",
"bbox": [512, 71, 857, 189],
"page_idx": 7
},
{
"type": "text",
"text": "Table 4: The number of parameters to be stored to serve $N$ text classification tasks with at most $C$ classes each, using a pretrained language model with $M$ parameters. $E$ is the dimension of embeddings (1024 in the case of RoBERTa). In TinyBERT, $M_0$ can be up to 10 times less than $M$ . In Adapters, $E'$ is roughly equal to $E$ , as the number of layers to which adapters are attached roughly compensates the smaller size of the bottleneck layer. In WARP, $K$ is the number of prompts (usually fewer than 10).",
"bbox": [509, 198, 884, 340],
"page_idx": 7
},
{
"type": "text",
"text": "Shin et al. (2020b) include results with a manually designed prompt which performs pretty well (shown as a dashed line). We also compare with the manually initialized version of WARP, which performs very well with just 100 examples.",
"bbox": [509, 367, 882, 448],
"page_idx": 7
},
{
"type": "text",
"text": "6.3 Real-world applications",
"text_level": 1,
"bbox": [510, 460, 744, 475],
"page_idx": 7
},
{
"type": "text",
"text": "The importance of NLP systems like WARP can be demonstrated by the following application. Suppose we want to build a system that needs to serve $N >> 1$ classification tasks simultaneously. Let the number of classes for each task be bounded by $C$ . The system can be based on a large pretrained language model with $M$ parameters, using word embedding size $E$ . How many parameters should the system store in the device memory to be able to serve all $N$ tasks?",
"bbox": [509, 481, 882, 640],
"page_idx": 7
},
{
"type": "text",
"text": "If we take the approach with frozen features, we can reuse $M$ parameters for all tasks and store additional ECN task-specific parameters. This is optimal in terms of storage but will not perform well. The other extreme is to fine-tune the whole model for each task and store at least MN parameters. Table 4 shows the trade-offs offered by the other solutions. Methods like TinyBERT decrease the number of parameters from $MN$ by only $M$ . WARP, on the other hand, needs to store only $M + NE(C + K)$ parameters, where $K$ is the number of trainable prompt tokens.",
"bbox": [509, 643, 882, 835],
"page_idx": 7
},
{
"type": "page_footnote",
"text": "6 SENT. this movie was ____ as a prompt, and “terrible” and “fantastic” as verbalizer tokens",
"bbox": [510, 847, 882, 876],
"page_idx": 7
},
{
"type": "page_footnote",
"text": "7 SENT, and finally, the movie overall was very ____! as a prompt, and \"good\" and \"bad\" as verbalizer tokens",
"bbox": [510, 878, 875, 909],
"page_idx": 7
},
{
"type": "text",
"text": "In practice, WARP additionally allows performing inference on inputs for different tasks in parallel, using samples of multiple tasks in the same batch. Every input sentence can be concatenated with task-specific pretrained prompts in advance. Then, the forward pass of the network is identical for all tasks. The final task-specific linear layers can be concatenated to form a single large linear layer with at most $NC$ output neurons.",
"bbox": [114, 74, 489, 219],
"page_idx": 8
},
{
"type": "text",
"text": "This approach can be especially useful in the systems that provide machine learning models as a service. By storing one copy of a pretrained language model, it is possible to serve a large number of user-specific models in parallel with little overhead.",
"bbox": [114, 222, 489, 317],
"page_idx": 8
},
{
"type": "text",
"text": "7 Conclusion",
"text_level": 1,
"bbox": [115, 338, 247, 355],
"page_idx": 8
},
{
"type": "text",
"text": "In this paper we have proposed an alternative way to transfer knowledge from large pretrained language models to downstream tasks by appending carefully optimized embeddings to the input text. The method outperforms existing methods with significantly more trainable parameters on GLUE benchmark tasks and shows an impressive performance in a few-shot setting on two SuperGLUE tasks. On the sentiment analysis task, the performance is comparable to the fully fine-tuned language models. This method can save a lot of storage in software applications designed to serve large numbers of sentence classification tasks.",
"bbox": [114, 370, 489, 580],
"page_idx": 8
},
{
"type": "text",
"text": "Acknowledgments",
"text_level": 1,
"bbox": [115, 601, 278, 618],
"page_idx": 8
},
{
"type": "text",
"text": "This work is based in part on research sponsored by Air Force Research Laboratory (AFRL) under agreement number FA8750-19-1-1000. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation therein. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Air Force Laboratory, DARPA or the U.S. Government.",
"bbox": [114, 633, 489, 825],
"page_idx": 8
},
{
"type": "text",
"text": "The work was supported by the RA Science Committee, in the frames of the research project No. 20TTAT-AIa024. Most experiments were performed on GPUs donated by NVIDIA.",
"bbox": [114, 829, 489, 892],
"page_idx": 8
},
{
"type": "text",
"text": "References",
"text_level": 1,
"bbox": [512, 74, 610, 89],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of monolingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4623–4637, Online. Association for Computational Linguistics.",
"Roy Bar Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. 2006. The second PASCAL recognising textual entailment challenge.",
"Luisa Bentivogli, Ido Dagan, Hoa Trang Dang, Danilo Giampiccolo, and Bernardo Magnini. 2009. The fifth PASCAL recognizing textual entailment challenge.",
"T. Brown, B. Mann, Nick Ryder, Melanie Subbiah, J. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, G. Krüger, T. Henighan, R. Child, Aditya Ramesh, D. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, E. Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, J. Clark, Christopher Berner, Sam McCandlish, A. Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. *ArXiv*, abs/2005.14165.",
"Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14, Vancouver, Canada. Association for Computational Linguistics.",
"Kevin Clark, Minh-Thang Luong, Quoc Le, and Christopher D. Manning. 2020. Pre-training transformers as energy-based cloze models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 285-294, Online. Association for Computational Linguistics.",
"Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge. In Machine learning challenges. evaluating predictive uncertainty, visual object classification, and recognising textual entailment, pages 177-190. Springer.",
"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics."
],
"bbox": [512, 98, 884, 910],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"William B Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the International Workshop on Paraphrasing.",
"Gamaleldin F. Elsayed, Ian Goodfellow, and Jascha Sohl-Dickstein. 2019. Adversarial reprogramming of neural networks. In International Conference on Learning Representations.",
"Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing, pages 1-9. Association for Computational Linguistics.",
"Thanh-Le Ha, Jan Niehues, and Alexander Waibel. 2016. Toward multilingual neural machine translation with universal encoder and decoder.",
"Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. DeBERTa: Decoding-enhanced bert with disentangled attention. arXiv preprint arXiv:2006.03654.",
"N. Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and S. Gelly. 2019. Parameter-efficient transfer learning for nlp. In ICML.",
"Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020. TinyBERT: Distilling BERT for natural language understanding. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4163-4174, Online. Association for Computational Linguistics.",
"Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351.",
"Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In International Conference on Learning Representations.",
"Hector J Levesque, Ernest Davis, and Leora Morgenstern. 2011. The Winograd schema challenge. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning, volume 46, page 47.",
"Xiang Lisa Li and Percy Liang. 2021. Prefix-Tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190.",
"Y. Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019."
],
"bbox": [117, 76, 489, 908],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"RoBERTa: A robustly optimized BERT pretraining approach. ArXiv, abs/1907.11692.",
"Marie-Catherine de Marneffe, Mandy Simons, and Judith Tonhauser. 2019. The Commitment Bank: Investigating projection in naturally occurring discourse. Proceedings of Sinn und Bedeutung, 23(2):107-124.",
"Paarth Neekhara, Shehzeen Hussain, Shlomo Dubnov, and Farinaz Koushanfar. 2019. Adversarial reprogramming of text classification neural networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5216-5225, Hong Kong, China. Association for Computational Linguistics.",
"Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.",
"Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019. To tune or not to tune? adapting pretrained representations to diverse tasks. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 7-14, Florence, Italy. Association for Computational Linguistics.",
"Jonas Pfeiffer, Aishwarya Kamath, Andreas Rückle, Kyunghyun Cho, and Iryna Gurevych. 2021. AdapterFusion: Non-destructive task composition for transfer learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 487-503, Online. Association for Computational Linguistics.",
"Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.",
"Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.",
"Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.",
"Timo Schick and Hinrich Schütze. 2021a. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the"
],
"bbox": [512, 76, 882, 908],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 255-269, Online. Association for Computational Linguistics.",
"Timo Schick and Hinrich Schütze. 2021b. It's not just size that matters: Small language models are also few-shot learners. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2339-2352, Online. Association for Computational Linguistics.",
"Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Controlling politeness in neural machine translation via side constraints. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 35-40, San Diego, California. Association for Computational Linguistics.",
"Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020a. AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4222-4235, Online. Association for Computational Linguistics.",
"Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020b. AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4222-4235, Online. Association for Computational Linguistics.",
"Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP, pages 1631-1642.",
"Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019a. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In NeurIPS.",
"Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019b. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations.",
"Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641.",
"Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American"
],
"bbox": [117, 76, 489, 908],
"page_idx": 10
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics.",
"Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics."
],
"bbox": [512, 76, 884, 309],
"page_idx": 10
},
{
"type": "text",
"text": "A Hyperparameters",
"text_level": 1,
"bbox": [115, 74, 309, 91],
"page_idx": 11
},
{
"type": "text",
"text": "For each of the tasks, we performed hyperparameter search in the following space:",
"bbox": [115, 99, 487, 131],
"page_idx": 11
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- Learning rate is chosen from the set $\\{10^{-2}, 3 \\cdot 10^{-3}, 10^{-3}, 3 \\cdot 10^{-4}, 10^{-4}, 3 \\cdot 10^{-5}\\}$ ,",
"- Number of epochs is chosen as either 10 or 20. This determines the behavior of the slanted triangular learning rate scheduler.",
"- Initialization is performed either with the embedding of the [MASK] token, or randomly initialized from a normal distribution, with the mean and variance taken from the matrix of RoBERTa's word embeddings."
],
"bbox": [137, 145, 487, 344],
"page_idx": 11
},
{
"type": "text",
"text": "The hyperparameter search took roughly 4 days on two Titan V GPUs. The final choices for each task are shown in Table 5. Initialization with [MASK] performed better than the random initialization.",
"bbox": [115, 357, 487, 436],
"page_idx": 11
},
{
"type": "text",
"text": "We disable all dropouts inside Transformer. We use huggingface implementation of AdamW optimizer with weight decay disabled. The gradient is normalized to the value 1.0. For the batch sampling we use bucketing with padding noise of 0.1. In order to use the device memory more effectively, we also set maximum number of tokens per batch to 2048. The maximum sequence length is truncated to 512 tokens. We enable mixed precision and pad all sequence lengths to the multiples of 8 for the effective usage of TensorCores<sup>8</sup>.",
"bbox": [115, 439, 489, 615],
"page_idx": 11
},
{
"type": "text",
"text": "B Learned Tokens",
"text_level": 1,
"bbox": [510, 74, 687, 87],
"page_idx": 11
},
{
"type": "text",
"text": "Table 6 lists the closest vocabulary words to the learned embeddings. Most tasks have two input sentences, so the prompts consist of three parts: one is added before the first sentence, the second one is added between the sentences and the third one is appended next to the second sentence. For the single-sentence tasks, the second and third parts of the prompt are simply concatenated. Each task has trainable verbalizer tokens, one per output class.",
"bbox": [509, 99, 882, 258],
"page_idx": 11
},
{
"type": "text",
"text": "The prompts of RTE, MRPC and STS-B are pretty similar to MNLI's prompts, as the models for these tasks were initialized from pretrained MNLI models. The other tasks were initialized with [MASK] tokens. The final model for CoLA didn't move too far from its initialization.",
"bbox": [509, 260, 882, 355],
"page_idx": 11
},
{
"type": "text",
"text": "$^{8}$ https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html",
"bbox": [115, 625, 495, 651],
"page_idx": 11
},
{
"type": "table",
"img_path": "images/a9295a0bccdf65735d4676d7c71494de3b7c950ed83cc9165b73d507bcf67ac9.jpg",
"table_caption": [],
"table_footnote": [
"Table 5: Hyperparameters of our best-performing models. [MASK] means the prompts are initialized with the word embedding of same token, and MNLI means the prompt is initialized with the prompts of out best MNLI run."
],
"table_body": "<table><tr><td>Task</td><td>Learning rate</td><td>Epochs</td><td>Init.</td></tr><tr><td>MNLI</td><td>0.001</td><td>10</td><td>[MASK]</td></tr><tr><td>QNLI</td><td>0.001</td><td>10</td><td>[MASK]</td></tr><tr><td>QQP</td><td>0.0003</td><td>20</td><td>[MASK]</td></tr><tr><td>RTE</td><td>0.001</td><td>20</td><td>MNLI</td></tr><tr><td>SST-2</td><td>0.003</td><td>20</td><td>[MASK]</td></tr><tr><td>MRPC</td><td>0.001</td><td>20</td><td>MNLI</td></tr><tr><td>CoLA</td><td>0.001</td><td>20</td><td>[MASK]</td></tr><tr><td>STS-B</td><td>0.001</td><td>20</td><td>MNLI</td></tr></table>",
"bbox": [117, 673, 484, 825],
"page_idx": 11
},
{
"type": "table",
"img_path": "images/2a9c0bf80fbf532476dc797dfe5c728c537dee038e1209939341f1179c64b3fc.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"6\">MNLI</td><td rowspan=\"3\">Prompts</td><td>before</td><td>A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-Tomorrow Ale .aGj * .</td></tr><tr><td>between</td><td>_MUCH irin [/ _a (@ [MASK] _dL aHJ E [MASK] _aKH</td></tr><tr><td>after</td><td>_<!-_informing inyl _entit dim</td></tr><tr><td rowspan=\"3\">Verbalizers</td><td>entailment</td><td>_categories</td></tr><tr><td>neutral</td><td>gomery</td></tr><tr><td>contradiction</td><td>Unless</td></tr><tr><td rowspan=\"5\">QNLI</td><td rowspan=\"3\">Prompts</td><td>before</td><td>*. _neigh [MASK] U {}</td></tr><tr><td>between</td><td>aG-aG- [MASK] olitan _pronouns [MASK] [MASK] [MASK] @@@[MASK]_Choi [MASK]</td></tr><tr><td>after</td><td></td></tr><tr><td rowspan=\"2\">Verbalizers</td><td>entailment</td><td>_VIDE</td></tr><tr><td>not_ entailment</td><td>470</td></tr><tr><td rowspan=\"5\">QQP</td><td rowspan=\"3\">Prompts</td><td>before</td><td>_resembling_swarm_Calm_Membership</td></tr><tr><td>between</td><td>.derive rics [MASK] alias iary [MASK] _omnip [MASK] [MASK] [MASK] _sham</td></tr><tr><td>after</td><td>[MASK] _forb [MASK] _Firefly _THEY</td></tr><tr><td rowspan=\"2\">Verbalizers</td><td>notDuplicate</td><td>ende</td></tr><tr><td>duplicate</td><td>_sugg</td></tr><tr><td rowspan=\"5\">RTE</td><td rowspan=\"3\">Prompts</td><td>before</td><td>A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-Tomorrow ALE .aGj * .</td></tr><tr><td>between</td><td>_MUCH irin [/ _a (@ [MASK] _aHJ femin [MASK] _aK</td></tr><tr><td>after</td><td>ahiahi _informing # _entit OOOO</td></tr><tr><td rowspan=\"2\">Verbalizers</td><td>entailment</td><td>e!</td></tr><tr><td>not_ entailment</td><td>_blames</td></tr><tr><td rowspan=\"5\">SST-2</td><td rowspan=\"3\">Prompts</td><td>before</td><td>choes _charms_sorely _"akijakij</td></tr><tr><td>between</td><td>a ffe Pae _charred _masked [MASK] _Fall _babys _smartest ik /</td></tr><tr><td>after</td><td>dL forums _bio _mang A+</td></tr><tr><td 
rowspan=\"2\">Verbalizers</td><td>negative</td><td>_defective</td></tr><tr><td>positive</td><td>很重要的</td></tr><tr><td rowspan=\"5\">MRPC</td><td rowspan=\"3\">Prompts</td><td>before</td><td>A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-Tomorrow rison .aGj * .</td></tr><tr><td>between</td><td>_MUCH irin [/ _a jay [MASK] _dL aHJ femin [MASK] .?</td></tr><tr><td>after</td><td>_> _informing # _entit OOOO</td></tr><tr><td rowspan=\"2\">Verbalizers</td><td>entailment</td><td>_categories</td></tr><tr><td>neutral</td><td>gomery</td></tr><tr><td rowspan=\"5\">CoLA</td><td rowspan=\"3\">Prompts</td><td>before</td><td>[MASK] [MASK] [MASK] [MASK]</td></tr><tr><td>between</td><td>[MASK] [MASK] [MASK] [MASK] [MASK] [MASK] [MASK] [MASK] [MASK]</td></tr><tr><td>after</td><td>[MASK] [MASK] [MASK] [MASK] [MASK]</td></tr><tr><td rowspan=\"2\">Verbalizers</td><td>unacceptable</td><td>_additionally</td></tr><tr><td>acceptable</td><td>o</td></tr><tr><td rowspan=\"3\">STS-B</td><td rowspan=\"3\">Prompts</td><td>before</td><td>A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A</td></tr><tr><td>between</td><td>_Kers irin [/ _a (@ [MASK] _dL AhAHAhAH femin [MASK] _aKH</td></tr><tr><td>after</td><td>A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A</td></tr></table>",
"bbox": [117, 101, 890, 820],
"page_idx": 12
},
{
"type": "text",
"text": "Table 6: The closest words to the prompt and verbalizer token embeddings for the best model for each task. We use cosine distance to measure the distance. [MASK] tokens highlighted in bold indicate the positions we use to output the prediction.",
"bbox": [114, 831, 884, 875],
"page_idx": 12
}
]
data/2021/2101_00xxx/2101.00121/10de5b4b-1a6e-40ba-bf75-79fb29a7975d_model.json CHANGED
The diff for this file is too large to render. See raw diff

data/2021/2101_00xxx/2101.00121/full.md CHANGED
@@ -1,3 +1,322 @@
# WARP: Word-level Adversarial ReProgramming

Karen Hambardzumyan<sup>1</sup>, Hrant Khachatrian<sup>1,2</sup>, Jonathan May<sup>3</sup>

$^{1}$ YerevaNN, $^{2}$ Yerevan State University,

<sup>3</sup>Information Sciences Institute, University of Southern California

mahnerak@yerevann.com, hrant@yerevann.com, jonmay@isi.edu

# Abstract

Transfer learning from pretrained language models has recently become the dominant approach for solving many NLP tasks. A common approach to transfer learning for multiple tasks that maximizes parameter sharing trains one or more task-specific layers on top of the language model. In this paper, we present an alternative approach based on adversarial reprogramming, which extends earlier work on automatic prompt generation. Adversarial reprogramming attempts to learn task-specific word embeddings that, when concatenated to the input text, instruct the language model to solve the specified task. Using up to 25K trainable parameters per task, this approach outperforms all existing methods with up to 25M trainable parameters on the public leaderboard of the GLUE benchmark. Our method, initialized with task-specific human-readable prompts, also works in a few-shot setting, outperforming GPT-3 on two SuperGLUE tasks with just 32 training samples.
# 1 Introduction

Language model pretraining has had a tremendous impact on solving many natural language processing tasks (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019; Liu et al., 2019). The two most popular approaches take a pretrained model and use a straightforward supervised learning objective. In the first approach, the parameters of the language model are frozen and a task-specific head is trained on top of them (Peters et al., 2018). The second approach fine-tunes all model parameters (Radford et al., 2018). The latter can sometimes yield better results (Peters et al., 2019), while the former usually offers better stability on smaller datasets. The frozen-features approach also does not require storing task-specific language models.
A recent alternative is based on so-called adapters (Houlsby et al., 2019; Pfeiffer et al., 2021), a technique that adds new weights at every layer of the pretrained language model while the original parameters are kept frozen. This enables a smaller set of task-specific parameters while achieving results comparable to the fine-tuning approach.

Another approach to leveraging pretrained language models for downstream tasks, introduced by Radford et al. (2019), provides "task descriptions" without using any labeled examples. GPT-3 (Brown et al., 2020) demonstrates impressive few-shot learning performance with priming: by providing the language model a few inputs and outputs ("analogies") as a context, the model contextually "learns" from these examples and outputs the answer with a single forward pass, without any trainable parameters. These methods, however, require huge language models (1.5B and 175B parameters, respectively).
The success of task reformulation-based approaches suggests that language models are capable of solving various natural language processing tasks given a well-crafted prompt. We hypothesize that it is possible to find such prompts automatically. In other words, we can discover extra tokens that, when added to the input, exploit language model capabilities better than manually designed prompts do.
In this paper, we introduce a novel technique to find optimal prompts. We call our method WARP: Word-level Adversarial ReProgramming<sup>1</sup>. The method is inspired by adversarial reprogramming (Elsayed et al., 2019), a method of adding adversarial perturbations to an input image that reprograms a pretrained neural network to perform classification on a task other than the one it was originally trained for.



Figure 1: An example of an adversarial program that causes the Inception V3 ImageNet model to function as an MNIST classifier, from Elsayed et al. (2019).
We show that our method, using up to 25K trainable parameters per task, achieves a test score of 81.6 on the GLUE Leaderboard, outperforming all other submissions that use up to three orders of magnitude more trainable parameters. We show that it is possible to inject knowledge into WARP models through manually designed initialization of the prompt, which is especially useful on tasks with a small number of examples. Moreover, WARP shows impressive few-shot performance on two tasks from the SuperGLUE benchmark with just 32 examples, outperforming GPT-3. Finally, we discuss the advantages of our method in real-life applications.
# 2 Related Work

# 2.1 Towards Fewer Trainable Parameters

Jiao et al. (2020) show that knowledge distillation may help reduce the size of their model 7.5 times while almost preserving the performance, but fine-tuning such models still requires storage of separate task-specific models. As seen in Section 6, this approach does not scale when we want to apply it to many tasks at once.

Another approach, called Adapters (Houlsby et al., 2019; Pfeiffer et al., 2021), introduces new task-specific parameters that are added at every layer of the Transformer network. Only these newly initialized weights are trained, which allows separation of general and task-specific knowledge. In contrast, our method does not inject task-specific knowledge inside the body of the pretrained language model. Instead, it focuses on learning task-specific input-level prompts.


Figure 2: WARP adds a few trainable embeddings around the input, which causes the masked language model to predict the sentiment of the sentence.
# 2.2 Task Reformulation

In GPT-2, Radford et al. (2019) introduce a completely unsupervised way of transferring knowledge to downstream tasks by reformulating various natural language understanding tasks into language modeling problems. This approach does not make use of the available training examples. Brown et al. (2020) demonstrate effective few-shot transfer by reformulating downstream tasks into input-output analogies in the context, without a need for further fine-tuning. Nonetheless, the number of training examples is limited by the context size, so the approach does not scale to a traditional supervised learning scenario.

Schick and Schütze (2021b) show the effectiveness of reformulating a number of tasks into Cloze-style tasks by fine-tuning masked language models (Devlin et al., 2019). The method, called Pattern-Exploiting Training (PET), additionally uses training samples and performs few-shot learning even without huge models such as GPT-3.
Our method is also based on masked language models, but unlike PET, we focus on finding the best prompt using the training examples. This eliminates the need for manually designed prompts; however, our method can also benefit from similar prior knowledge about the task through careful initialization of the prompts.
# 2.3 Adversarial Reprogramming

Adversarial Reprogramming (Elsayed et al., 2019) demonstrates the reprogramming of pretrained ImageNet classifiers by adding input-level adversarial perturbations to make them perform well on MNIST and CIFAR-10 image classification tasks. The adversarial perturbation is designed as image padding added around the original input, as illustrated in Figure 1. The perturbation parameters are then trained to optimize the target classification objective using the annotated image data.



Figure 3: Illustration of WARP. The prompt tokens [P_1], [P_2], ..., [P_N] are inserted before, between, and after the sentences. Only the prompt and class embeddings are trainable (colored in green). The masked language modeling head is applied without the decoder; instead, the matrix of [V_1], [V_2], ..., [V_C] embeddings is applied as a linear layer. Finally, a regular task-specific loss is computed on the resulting logits.
While in the case of image classification it is not obvious why adversarial reprogramming should ever work (e.g., why a network trained on ImageNet should have the capacity to solve MNIST when surrounded with a particular bitmap), for NLP tasks there is more intuition: many NLP tasks can be reformulated as language modeling problems, which provide a shared space for both the program and the data.

Adversarial reprogramming has been adapted to text classification tasks with LSTM networks by Neekhara et al. (2019). They operate in the vocabulary space and reprogram a model trained for one task to perform another task. More recently, AutoPrompt (Shin et al., 2020a) attempts to find prompts for large language models automatically without adding any parameters to the model. Unlike AutoPrompt, we perform gradient-based optimization in the space of word embeddings, which gives our model more degrees of freedom and eventually better performance on the downstream tasks (Section 6.2).
In a more general sense, guiding an NLP model with special tokens appended to the input is an even older idea. In particular, multilingual neural machine translation models use special tokens in the input to control the target language (Ha et al., 2016; Johnson et al., 2017) or the politeness of the translation (Sennrich et al., 2016). Another method to reprogram a BERT-based model is proposed by Artetxe et al. (2020), where a model tuned on an English version of a particular task is transformed to work in another language by changing only the embedding matrices.
In parallel work, Li and Liang (2021) propose a similar method and successfully apply it to two text generation tasks. Apart from the different types of tasks and our characterization of the task as a form of adversarial reprogramming, the main difference between their approach and ours is that they use an additional parameterization trick to stabilize the training.
# 3 WARP

We follow a setup similar to Elsayed et al. (2019), with some NLP-specific modifications depicted in Figure 2.
Our goal is to find the best prompt that will make a pretrained masked language model predict the desired answer (verbalizer token) for a training example's masked token$^2$. We search for such prompts in the (continuous) embedding space. In other words, we want to find parameters $\Theta = \{\Theta^P, \Theta^V\}$ for the prompt and verbalizer embeddings, respectively, such that:
$$
\Theta^{*} = \arg \min_{\Theta} \left( -\log P_{\Theta}(y \mid x) \right)
$$
and the probabilities are given by:

$$
P_{\Theta}(y \mid x) = \frac{\exp \left( \Theta_{y}^{V} \cdot f\left(T_{\Theta^{P}}(x)\right) \right)}{\sum_{i \in C} \exp \left( \Theta_{i}^{V} \cdot f\left(T_{\Theta^{P}}(x)\right) \right)}
$$
where $T_{\Theta^P}(x)$ is the template that inserts the prompt embeddings $\Theta^P$ into predefined positions, $C$ is the set of classes, and $f(x)$ is the masked language model output (without the last decoder layer, which is simply the transposed word embedding matrix). Both $\Theta^P$ and $\Theta^V$ are vectors in the same embedding space as the word embeddings.
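The probability formula above is just a softmax over dot products between the frozen MLM output vector at the [MASK] position and the trainable verbalizer embeddings. A minimal sketch in plain Python (the vectors and dimensions are toy values, not the model's):

```python
import math

def verbalizer_probs(f_mask, verbalizer_emb):
    """Softmax over dot products of the MLM output at the [MASK] position
    (f_mask) with one trainable verbalizer embedding per class."""
    logits = [sum(w * h for w, h in zip(v, f_mask)) for v in verbalizer_emb]
    m = max(logits)                          # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

# toy example: hidden size 3, three classes
probs = verbalizer_probs([0.5, -1.0, 2.0],
                         [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
```

The output is a proper distribution over classes; the gold class's probability then enters the cross-entropy loss.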
In Figure 2, the template $T_{\Theta^P}(x)$ prepends $\Theta_1^P$ and appends $\Theta_2^P, \Theta_3^P, \Theta_4^P$ parameters to the word embeddings, and uses $\Theta_+^V$ and $\Theta_-^V$ to calculate the probabilities at the masked token position for the positive and negative classes.
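At the token level, applying such a template amounts to splicing reserved prompt-token ids and a [MASK] id around the input ids. A sketch of one possible single-sentence layout mirroring Figure 2 (all id values here are made up for illustration):

```python
def apply_template(sentence_ids, prompt_ids, mask_id):
    """Build [P_1] <sentence> [P_2] [MASK] [P_3] [P_4], one possible layout.

    prompt_ids: ids reserved for the trainable prompt embeddings [P_1]..[P_4].
    Returns the full input ids and the index of the [MASK] position."""
    p1, p2, p3, p4 = prompt_ids
    ids = [p1] + sentence_ids + [p2, mask_id, p3, p4]
    return ids, ids.index(mask_id)

# hypothetical ids: a 3-token sentence, 4 prompt slots, one mask id
ids, mask_pos = apply_template([101, 2009, 3504],
                               [50001, 50002, 50003, 50004], 50264)
```

Only the embeddings looked up for the prompt ids (and the verbalizers) are trainable; the sentence ids keep their frozen pretrained embeddings.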
# 3.1 Method

Similar to Elsayed et al. (2019), we employ stochastic gradient descent to find the best adversarial perturbation of the text that will minimize the task objective. First, we insert special prompt tokens [P_1], [P_2], ..., [P_K] and an additional [MASK] token into the input sequence. These tokens may be placed before or after the sentences, depending on the prompt template.
We set the optimization objective to the cross-entropy loss between the head output of the masked language model and the verbalizer tokens [V_1], [V_2], ..., [V_C] for classes 1...C, respectively.
The only trainable parameters are the word embeddings for [P_1], ..., [P_K] and [V_1], ..., [V_C]. In case we want to train models for multiple tasks, these are the only task-specific parameters we need to store. The entire "body" of the large language model (all attention layers, feedforward layers, and all other word embeddings) remains untouched.

Note that, unlike most adversarial attacks, we do not update the embeddings of the original tokens of the input. This follows the intuition from Elsayed et al. (2019), where the pixels of MNIST or CIFAR images are left untouched and only the padding pixels are updated.
We train these parameters by minimizing the loss on the training set of the downstream task.
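Because only the prompt and verbalizer embeddings receive gradients, a single update is cheap. For the verbalizer side, the gradient of the cross-entropy loss with respect to the class embedding $\Theta_c^V$ is $(p_c - \mathbb{1}[c=y]) \cdot f$, where $f$ is the frozen MLM output at the mask. A toy, self-contained sketch of one such step (not the actual AllenNLP training loop):

```python
import math

def sgd_step_verbalizer(theta_v, f_mask, y, lr=0.1):
    """One SGD step on -log P(y|x) w.r.t. the verbalizer embeddings only;
    f_mask stays frozen. Gradient: dL/dtheta_v[c] = (p_c - [c == y]) * f_mask."""
    logits = [sum(w * h for w, h in zip(v, f_mask)) for v in theta_v]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    return [[w - lr * (p - (1.0 if c == y else 0.0)) * h
             for w, h in zip(v, f_mask)]
            for c, (v, p) in enumerate(zip(theta_v, probs))]

# two classes, gold class 0: its embedding moves toward f_mask
updated = sgd_step_verbalizer([[0.0, 0.0], [0.0, 0.0]], [1.0, 2.0], y=0)
```

After the step, the gold class's embedding has moved toward the hidden vector, increasing its logit on this example.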
# 3.2 Implementation Details

WARP is implemented in the AllenNLP framework. For all GLUE benchmark tasks, we use the roberta-large (Liu et al., 2019) model from the PyTorch implementation of the huggingface transformers (Wolf et al., 2020) library. For the few-shot experiments, we use albert-xxlarge-v2 in order to compare directly to iPET (Schick and Schütze, 2021b). For the GLUE and SuperGLUE tasks, we use dataset loaders and metric implementations from the huggingface datasets library.

The prompt tokens are initialized either with the word embedding of [MASK] or similarly to the vectors of the word embedding layer. For the answer prompts, we use the masked language model head, which usually consists of a feedforward network and a decoder on top of it, where the weights of the decoder are shared with the word embeddings used for the input. We calculate the softmax over the verbalizer tokens [V_1], ..., [V_C].
We choose the Adam optimizer with a slanted triangular learning rate schedule with $6\%$ warm-up steps and train for 10-20 epochs on each task. Each batch contains at most 8 examples and at most 1024 tokens in total.
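A slanted triangular schedule rises linearly to the peak learning rate over the warm-up fraction of steps and then decays linearly. A minimal sketch under our own assumptions (decay to zero at the last step; AllenNLP's scheduler exposes more knobs):

```python
def slanted_triangular_lr(step, total_steps, max_lr, warmup_frac=0.06):
    """Linear warm-up for the first warmup_frac of steps, linear decay after."""
    warmup_steps = max(1, int(total_steps * warmup_frac))
    if step < warmup_steps:
        return max_lr * step / warmup_steps
    return max_lr * (total_steps - step) / (total_steps - warmup_steps)

lrs = [slanted_triangular_lr(s, total_steps=100, max_lr=3e-4) for s in range(101)]
```

With 100 steps and a 6% warm-up, the peak is reached at step 6 and the rate returns to zero at step 100.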
In order to speed up the training, we disable the dropout of the pretrained language model. All experiments are performed on two Titan V and two RTX 3080 GPUs with mixed precision training. In practice, in terms of epoch duration with the same batch sizes, WARP is 2.5-3 times faster than regular fine-tuning and 2 times slower than frozen-features experiments.

Details about the hyperparameters can be found in the Supplementary material.
# 4 Experiments on GLUE

Following prior work, we evaluate our method on the GLUE Benchmark (Wang et al., 2019b), which consists of 9 natural language understanding tasks. Unless stated otherwise, we perform single-task WARP training, with early stopping and model selection using the original validation sets.
# 4.1 Tasks

Almost all the tasks in the GLUE Benchmark are either sentence classification or sentence pair classification tasks, so WARP requires very few modifications to adapt to each of them.
<table><tr><td></td><td>MNLI</td><td>QNLI</td><td>QQP</td><td>RTE</td><td>SST</td><td>MRPC</td><td>CoLA</td><td>STS-B</td><td>AVG</td><td>#</td></tr><tr><td>Human Baselines</td><td>92.0 / 92.8</td><td>91.2</td><td>59.5 / 80.4</td><td>93.6</td><td>97.8</td><td>86.3 / 80.8</td><td>66.4</td><td>92.7 / 92.6</td><td>87.1</td><td></td></tr><tr><td>DeBERTa</td><td>91.9 / 91.6</td><td>99.2</td><td>76.2 / 90.8</td><td>93.2</td><td>97.5</td><td>94.0 / 92.0</td><td>71.5</td><td>92.9 / 92.6</td><td>90.8</td><td>3·10<sup>9</sup></td></tr><tr><td>RoBERTa</td><td>90.8 / 90.2</td><td>95.4</td><td>74.3 / 90.2</td><td>88.2</td><td>96.7</td><td>92.3 / 89.8</td><td>67.8</td><td>92.2 / 91.9</td><td>88.1</td><td>355·10<sup>6</sup></td></tr><tr><td>BERT<sub>large</sub></td><td>86.7 / 85.9</td><td>92.7</td><td>72.1 / 89.3</td><td>70.1</td><td>94.9</td><td>89.3 / 85.4</td><td>60.5</td><td>87.6 / 86.5</td><td>80.5</td><td>355·10<sup>6</sup></td></tr><tr><td>BERT<sub>base</sub></td><td>84.6 / 83.4</td><td>90.5</td><td>71.2 / 89.2</td><td>66.4</td><td>93.5</td><td>88.9 / 84.8</td><td>52.1</td><td>87.1 / 85.8</td><td>78.3</td><td>110·10<sup>6</sup></td></tr><tr><td>TinyBERT<sub>6</sub></td><td>84.6 / 83.2</td><td>90.4</td><td>71.6 / 89.1</td><td>70.0</td><td>93.1</td><td>87.3 / 82.6</td><td>51.1</td><td>85.0 / 83.7</td><td>78.1</td><td>67·10<sup>6</sup></td></tr><tr><td>TinyBERT<sub>4</sub></td><td>82.5 / 81.8</td><td>87.7</td><td>71.3 / 89.2</td><td>66.6</td><td>92.6</td><td>86.4 / 81.2</td><td>44.1</td><td>81.9 / 80.4</td><td>75.9</td><td>15·10<sup>6</sup></td></tr><tr><td>ELECTRA<sub>small</sub></td><td>81.6 / 81.2</td><td>88.3</td><td>70.4 / 88.0</td><td>63.6</td><td>91.1</td><td>89.0 / 84.9</td><td>55.6</td><td>85.6 / 84.6</td><td>77.4</td><td>14·10<sup>6</sup></td></tr><tr><td>Adapters (BERT)</td><td>85.4 / 85.0</td><td>92.4</td><td>71.5 / 89.4</td><td>71.6</td><td>94.3</td><td>88.7 / 84.3</td><td>59.2</td><td>87.3 / 86.1</td><td>80.2</td><td>1.2·10<sup>6</sup></td></tr><tr><td>WARP (RoBERTa)</td><td>88.0 / 88.2</td><td>93.5</td><td>68.6 / 87.7</td><td>84.3</td><td>96.3</td><td>88.2 / 83.9</td><td>53.9</td><td>89.5 / 88.8</td><td>81.6</td><td>&lt;25K</td></tr></table>
Table 1: Test set results on the GLUE Benchmark. The results are obtained from the GLUE evaluation server. The subscript next to TinyBERT corresponds to the number of layers in the model. WARP models for RTE, STS-B and MRPC are initialized from the MNLI parameters. Results for WNLI are not shown, although they are counted in the averaged GLUE score (AVG column). The last column (#) shows the number of trainable parameters. WARP's average performance is higher than that of all models with up to three orders of magnitude more trainable parameters. Fully fine-tuned RoBERTa and the current state-of-the-art method (DeBERTa) score higher by 6.5 and 9.2 points, respectively.
SST-2 (Stanford Sentiment Treebank, Socher et al., 2013) is a single-sentence binary classification task. For the prompt, we put a [MASK] token after the sentence, and the trainable prompt tokens are both appended and prepended to the sentence.
CoLA (Corpus of Linguistic Acceptability, Warstadt et al., 2019) is a single-sentence classification task as well, so we treat both the same way; the only difference is the validation metric: accuracy for SST-2 and Matthews correlation for CoLA.
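The Matthews correlation coefficient used for CoLA balances all four confusion-matrix cells, which matters given CoLA's skewed label distribution. A self-contained sketch for binary labels:

```python
def matthews_corr(y_true, y_pred):
    """MCC for binary labels:
    (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

score = matthews_corr([1, 0, 1, 0, 1], [1, 0, 1, 0, 0])
```

Unlike accuracy, MCC stays at zero for a constant-majority predictor, so it is a more honest validation metric on this task.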
MNLI (MultiNLI, Multi-Genre Natural Language Inference, Williams et al., 2018), QNLI (Question Natural Language Inference, Rajpurkar et al., 2016) and RTE (Recognizing Textual Entailment, Dagan et al., 2006; Bar Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009) are sentence pair classification tasks. Similar to Schick and Schütze (2021a), we may have prompt tokens before, after and between the two sentences, but the [MASK] token is always put between the sentences. For MNLI, we use matched accuracy as the validation metric and use the same model for the mismatched version. In our few-shot attempt at the RTE task, we use a different training and evaluation setup, discussed in Section 5.2. QQP (Quora Question Pairs$^4$) and MRPC (Microsoft Research Paraphrase Corpus, Dolan and Brockett, 2005) follow the same prompt pattern as the NLI tasks; the $F_1$ score is used as the validation metric.
STS-B (Semantic Textual Similarity Benchmark, Cer et al., 2017), unlike the other tasks in the benchmark, is formulated as a regression task. The prompt pattern is the same, but instead of introducing new embeddings for the $[V\_1]$, $[V\_2]$, ..., $[V\_C]$ verbalizer tokens, we add a regression head to the last hidden state of the MLM head and use the mean squared error optimization objective, similar to Liu et al. (2019). Pearson correlation is used as the validation metric. During inference, we clip the scores to [1, 5].

We follow Liu et al. (2019) and train models for the MRPC, STS-B, and RTE tasks initialized with the parameters of the best MNLI model, but do not apply any task-specific tricks to WNLI (Winograd Schema Challenge NLI, Levesque et al., 2011) and always predict the majority label.
# 4.2 Results

Table 1 presents the results on the test set obtained from the GLUE evaluation server. Besides our best WARP models, we include the human baselines, the current state-of-the-art model (He et al., 2020), the regular fine-tuned pretrained model we use, and relatively small language models (Jiao et al., 2020; Clark et al., 2020; Houlsby et al., 2019).
In terms of the GLUE Score, WARP outperforms all models on the leaderboard that train fewer than 25 million parameters. We explain the relatively strong WARP results on textual entailment tasks by the easier reformulation of such tasks. Likewise, we explain the relatively weak performance on CoLA by the difficulty of reformulating the task into a Cloze task.

<table><tr><td>train size</td><td>MNLI 392702</td><td>QNLI 104743</td><td>QQP 363846</td><td>RTE 2490</td><td>SST 67349</td><td>MRPC 3668</td><td>CoLA 8551</td><td>STS-B 5749</td><td>AVG</td><td>#</td></tr><tr><td>Fine-Tuning</td><td>90.2</td><td>94.7</td><td>92.2</td><td>86.6</td><td>96.4</td><td>90.9</td><td>68.0</td><td>92.4</td><td>88.9</td><td>\( {355} \cdot {10}^{6} \)</td></tr><tr><td>Adapters</td><td>90.4</td><td>94.7</td><td>88.5</td><td>83.4</td><td>96.3</td><td>92.9</td><td>67.4</td><td>92.5</td><td>88.3</td><td>\( 3 \cdot {10}^{6} \)</td></tr><tr><td>Linear Classifier</td><td>64.2</td><td>78.1</td><td>74.9</td><td>59.2</td><td>88.4</td><td>82.5</td><td>48.9</td><td>71.8</td><td>71.0</td><td>≤ 3072</td></tr><tr><td>\( {\mathrm{{WARP}}}_{0} \)</td><td>70.9</td><td>78.8</td><td>77.1</td><td>72.2</td><td>89.8</td><td>83.8</td><td>32.8</td><td>73.8</td><td>72.4</td><td>≤ 3072</td></tr><tr><td>\( {\mathrm{{WARP}}}_{1} \)</td><td>83.9</td><td>87.6</td><td>81.6</td><td>72.6</td><td>93.8</td><td>84.7</td><td>46.1</td><td>80.4</td><td>78.8</td><td>≤ 4096</td></tr><tr><td>\( {\mathrm{{WARP}}}_{2} \)</td><td>85.4</td><td>88.0</td><td>81.5</td><td>69.7</td><td>94.3</td><td>85.3</td><td>54.4</td><td>80.8</td><td>79.9</td><td>≤ 5120</td></tr><tr><td>\( {\mathrm{{WARP}}}_{4} \)</td><td>86.9</td><td>92.4</td><td>83.1</td><td>68.2</td><td>95.9</td><td>85.0</td><td>56.0</td><td>75.5</td><td>80.4</td><td>≤ 7168</td></tr><tr><td>\( {\mathrm{{WARP}}}_{8} \)</td><td>87.6</td><td>93.0</td><td>83.8</td><td>72.9</td><td>95.4</td><td>85.6</td><td>57.4</td><td>81.0</td><td>82.1</td><td>&lt; 11K</td></tr><tr><td>\( {\mathrm{{WARP}}}_{\text{init }} \)</td><td>86.8</td><td>90.4</td><td>83.6</td><td>80.1</td><td>96.0</td><td>86.0</td><td>51.7</td><td>86.9</td><td>82.7</td><td>&lt; 11K</td></tr><tr><td>\( {\mathrm{{WARP}}}_{20} \)</td><td>\( \underline{88.2} \)</td><td>\( \underline{93.5} \)</td><td>\( \underline{84.5} \)</td><td>75.8</td><td>\( \underline{96.0} \)</td><td>90.8</td><td>\( \underline{60.6} \)</td><td>88.6</td><td>84.8</td><td>&lt; 25K</td></tr><tr><td>\( {\mathrm{{WARP}}}_{\mathrm{{MNLI}}} \)</td><td></td><td></td><td></td><td>\( \underline{86.3} \)</td><td></td><td>\( \underline{91.2} \)</td><td></td><td>\( \underline{91.0} \)</td><td>86.4</td><td>&lt; 25K</td></tr></table>

Table 2: Dev set results on GLUE tasks. The last column shows the number of trainable parameters only. $\mathsf{WARP}_i$ corresponds to WARP training with a prompt consisting of $i$ prompt tokens. $\mathsf{WARP}_{\mathsf{MNLI}}$ corresponds to WARP training initialized with the best MNLI parameters. All models are based on pretrained roberta-large; Adapters and WARP-based approaches additionally require storing $355 \cdot 10^{6}$ frozen parameters shared across all GLUE tasks. We show the primary validation metric for each task, described in Subsection 4.1. The AVG column shows the average of the shown metrics and is not comparable to the Test server GLUE Score. The number of parameters for WARP methods may vary because of differences in the number of classes. Underlined numbers correspond to our GLUE submission.
To further analyze WARP, we conduct several experiments and focus on dev set results. In order to directly compare WARP with existing methods, we report in Table 2 results for different methods that use RoBERTa, including fine-tuning, linear classifiers on top, AutoPrompt, and Adapters. For the WARP experiments, we compare performance with different numbers of prompt tokens.
The $\mathrm{WARP_0}$ model does not introduce any prompt parameters. The only difference between $\mathrm{WARP_0}$ and the Linear Classifier is that for $\mathrm{WARP_0}$, [MASK] is added to the input of each sample, and we take the sentence representation from the MLM head at the masked position. By contrast, for the Linear Classifier, we use the average of the non-special token embeddings as the sentence representation. As we can see, pooling with the MLM head is significantly better.
Table 2 shows that, as we decrease the number of trainable prompt parameters, the performance decreases, but the model still works. Similar behavior was observed by Elsayed et al. (2019) in experiments with different padding parameter sizes. However, in contrast to WARP, the number of trainable parameters in that work is much greater than the size of the input.
An important benefit of WARP is that it can be initialized with manual prompts. In addition to the regular models initialized with [MASK] tokens, we performed a run on the GLUE datasets with the same prompt [CLS] "S1"? [MASK]. "S2"! [SEP] for all tasks (without S2 for single-sentence tasks). We denote these results as $\mathrm{WARP_{init}}$ in Table 2. $\mathrm{WARP_{init}}$ outperforms $\mathrm{WARP_8}$ on the tasks with relatively few training examples (RTE, MRPC, and STS-B), which indicates its potential in the low-data regime.
# 5 Few-Shot Experiments

The fact that WARP can be initialized using manually designed natural-language prompts suggests that, similar to iPET (Schick and Schütze, 2021b), we can benefit from such human-provided knowledge, especially in scenarios with limited training data.
# 5.1 Setup

For our few-shot experiments we build WARP on top of ALBERT (Lan et al., 2020), the same pretrained model used by PET and iPET. To initialize WARP prompts, we use the same Pattern-Verbalizer Pairs (PVP) from Schick and Schütze (2021b): the embeddings for [P_1], [P_2], ..., [P_N] are initialized with the PVP's prompt token embeddings, and the embeddings for [V_1], [V_2], ..., [V_C] are initialized with the verbalizer token embeddings of their corresponding classes. Unlike roberta-large, albert-xxlarge-v2 uses word embeddings of size 128 (8 times smaller than RoBERTa's).
# 5.2 Tasks

In order to compare with GPT-3, PET, and iPET, we use two tasks from FewGLUE (Schick and Schütze, 2021b), a few-shot subset of the SuperGLUE benchmark (Wang et al., 2019a) consisting of 32 examples for each task. The dataset also provides 20000 additional unlabeled examples; however, we do not make use of them and work in a purely supervised setup.
CB: CommitmentBank (de Marneffe et al., 2019) is a textual entailment task, which we treat like the other sentence pair classification tasks. To initialize the prompt, we use the template [CLS] "h"? [MASK]. "p" [SEP]. We also initialize the [V_1], [V_2], [V_3] token embeddings with _yes, _no and _maybe (for entailment, contradiction, and neutral, respectively).
RTE: Unlike in our full-sized GLUE experiments on RTE, we do not initialize the model with vectors from MNLI. Instead, the prompt is initialized exactly the same way as for the CB task. The only difference is that we have only two tokens, [V_1] and [V_2], initialized with _yes and _instead (for entailment and not_entailment, respectively).
# 5.3 Model Selection

Although all trainable parameters are manually initialized in this setup, different random seeds can yield different results because of the order in which the training examples appear during an epoch.
In the few-shot setup we cannot access the original validation set, so we disable early stopping and simply pick the last checkpoint.
In order to find the best initial learning rate, we conduct 20 runs of WARP for each candidate learning rate, each time randomly choosing 16 training examples and holding out the remaining 16 as a development set. We choose the learning rate with the best average validation performance across all random seeds.
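The selection procedure amounts to repeated random 16/16 splits of the 32 labeled examples, scored separately per candidate learning rate. A sketch of the split machinery (function and parameter names are ours):

```python
import random

def make_splits(examples, n_runs=20, train_size=16, base_seed=0):
    """For each run, randomly pick train_size examples for training and
    hold out the rest as a development set."""
    splits = []
    for run in range(n_runs):
        rng = random.Random(base_seed + run)   # one deterministic seed per run
        shuffled = examples[:]
        rng.shuffle(shuffled)
        splits.append((shuffled[:train_size], shuffled[train_size:]))
    return splits

splits = make_splits(list(range(32)))
```

Each run's dev half is disjoint from its train half, so averaging dev scores over the 20 runs gives a seed-robust estimate for each learning rate.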
Finally, in order to eliminate the effect of different random seeds, we build an ensemble model from 20 WARP runs using a simple majority vote.
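The majority-vote ensemble described above can be sketched in a few lines; the labels here are illustrative:

```python
from collections import Counter

def majority_vote(all_predictions):
    """all_predictions: one list of labels per model, all the same length.
    Returns the most common label at each example position."""
    n_examples = len(all_predictions[0])
    return [Counter(preds[i] for preds in all_predictions).most_common(1)[0][0]
            for i in range(n_examples)]

ensemble = majority_vote([
    ["yes", "no", "yes"],
    ["yes", "yes", "no"],
    ["no", "no", "yes"],
])
```

Because each WARP run trains only about a thousand parameters, the whole 20-model ensemble remains tiny on top of the shared frozen language model.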
<table><tr><td></td><td>Model</td><td>CB F1 / Acc.</td><td>RTE Acc.</td></tr><tr><td rowspan="6">dev</td><td>GPT-3 Small</td><td>26.1 / 42.9</td><td>52.3</td></tr><tr><td>GPT-3 Med</td><td>40.4 / 58.9</td><td>48.4</td></tr><tr><td>GPT-3</td><td>57.2 / 82.1</td><td>72.9</td></tr><tr><td>PET (ALBERT)</td><td>59.4 / 85.1</td><td>69.8</td></tr><tr><td>iPET (ALBERT)</td><td>92.4 / 92.9</td><td>74.0</td></tr><tr><td>WARP<sub>init</sub> (ALBERT)</td><td>84.0 / 87.5</td><td>71.8</td></tr><tr><td rowspan="4">test</td><td>GPT-3</td><td>52.0 / 75.6</td><td>69.0</td></tr><tr><td>PET (ALBERT)</td><td>60.2 / 87.2</td><td>67.2</td></tr><tr><td>iPET (ALBERT)</td><td>79.9 / 88.8</td><td>70.8</td></tr><tr><td>WARP<sub>init</sub> (ALBERT)</td><td>70.2 / 82.4</td><td>69.1</td></tr></table>
Table 3: Results on the SuperGLUE benchmark. The results for the test set are obtained from the SuperGLUE evaluation server. We only show systems trained in a similar few-shot setup using 32 examples.
# 5.4 Results

As seen in Table 3, WARP outperforms the PET and GPT-3 baselines but stays behind iPET on both tasks. GPT-3 has 175B parameters, but none of them are trained for the given tasks. PET and iPET have 255M parameters, all of which are trained for these tasks; additionally, they leverage unlabeled examples using distillation. WARP has roughly the same 255M parameters, but only 1024 of them are trained for any single model. An ensemble of 20 WARP models has slightly more than 20K trainable parameters.
|
| 199 |
+
|
| 200 |
+
# 6 Discussion
|
| 201 |
+
|
| 202 |
+
# 6.1 Interpreting tokens learned by WARP
|
| 203 |
+
|
| 204 |
+
WARP learns prompt embeddings in a continuous space. In this section, we explore those embeddings by looking at the nearby token vectors. Table 6 in the Supplementary material lists the closest tokens (in terms of cosine similarity) to the learned embeddings. All GLUE tasks are initialized with the [MASK] token, except for RTE, MRPC, and STS-B, which are initialized from the pretrained MNLI model. The prompt tokens of the solutions for those three tasks are quite close to the ones from the MNLI solution. We have seen similar behavior in the SuperGLUE experiments with manual initializations. The solution for CoLA (one of the worst-performing tasks) remains close to its initialization point.
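The nearest-token lookup used for this interpretation can be sketched as below. The tiny hand-written vocabulary and its 2-d vectors are purely illustrative assumptions; in the paper the vectors come from RoBERTa's embedding matrix.

```python
# Rank vocabulary tokens by cosine similarity to a learned prompt embedding.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nearest_tokens(embedding, vocab, k=3):
    # vocab: token -> vector; returns the k tokens most similar to `embedding`.
    scored = sorted(vocab.items(), key=lambda kv: cosine(embedding, kv[1]),
                    reverse=True)
    return [tok for tok, _ in scored[:k]]

vocab = {"Unless": [1.0, 0.1], "important": [0.0, 1.0], "470": [-1.0, 0.2]}
print(nearest_tokens([0.9, 0.2], vocab, k=2))  # ['Unless', 'important']
```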
|
| 205 |
+
|
| 206 |
+
We do not see any prompt tokens that are meaningful in the context of the tasks. As expected, the verbalized tokens are more interpretable. For
|
| 207 |
+
|
| 208 |
+

|
| 209 |
+
Figure 4: The effect of training data size on the SST-2 task (dev set). The horizontal axis is the number of training examples. Solid lines represent the median over 10 runs, and the error bars show minimum and maximum performance. All methods use the roberta-large model. The results for AutoPrompt and fine-tuning are taken from Shin et al. (2020b).
|
| 210 |
+
|
| 211 |
+
example, the embedding for the "contradiction" class of MNLI is close to the token "Unless". The embeddings for "negative" and "positive" classes of SST-2 task are close to "defective" and "important", respectively. Other verbalized tokens are non-interpretable (e.g. "470" or word pieces with non-Latin characters).
|
| 212 |
+
|
| 213 |
+
# 6.2 Comparison with AutoPrompt
|
| 214 |
+
|
| 215 |
+
AutoPrompt (Shin et al., 2020b) learns a prompt for the given task in the finite space of vocabulary tokens. Their best version uses 3 or 6 prompt tokens and reaches $91.2\%$ accuracy on the development set of SST-2. The search space of WARP is significantly larger, which allows WARP to achieve better performance with just a single prompt token ($93.8\%$).
|
| 216 |
+
|
| 217 |
+
AutoPrompt does not achieve meaningful results on RTE or CB tasks. WARP succeeds on both without manual initialization. Moreover, with manual initialization, WARP gets good performance on both tasks even with just 32 examples (Table 3).
|
| 218 |
+
|
| 219 |
+
Figure 4 shows how accuracy on the SST-2 development set depends on the number of training samples. Both WARP and AutoPrompt use 10 prompt tokens. With a few hundred training samples or fewer, the difference between the two algorithms is not significant; WARP starts to perform better with more training samples.
|
| 220 |
+
|
| 221 |
+
<table><tr><td>Approach</td><td># of parameters to store</td></tr><tr><td>Linear probing</td><td>M + ECN</td></tr><tr><td>Full fine-tuning</td><td>MN</td></tr><tr><td>Single layer</td><td>M + NE(E + C)</td></tr><tr><td>TinyBERT</td><td>M<sub>0</sub>N</td></tr><tr><td>Adapters</td><td>M + NEE'</td></tr><tr><td>WARP</td><td>M + NE(C + K)</td></tr></table>
|
| 222 |
+
|
| 223 |
+
Table 4: The number of parameters to be stored to serve $N$ text classification tasks with at most $C$ classes each, using a pretrained language model with $M$ parameters. $E$ is the dimension of embeddings (1024 in the case of RoBERTa). In TinyBERT, $M_0$ can be up to 10 times smaller than $M$. In Adapters, $E'$ is roughly equal to $E$, as the number of layers to which adapters are attached roughly compensates for the smaller size of the bottleneck layer. In WARP, $K$ is the number of prompts (usually fewer than 10).
|
| 224 |
+
|
| 225 |
+
Shin et al. (2020b) include results with a manually designed prompt, which performs quite well (shown as a dashed line). We also compare with the manually initialized version of WARP, which performs very well with just 100 examples.
|
| 226 |
+
|
| 227 |
+
# 6.3 Real-world applications
|
| 228 |
+
|
| 229 |
+
The importance of NLP systems like WARP can be demonstrated by the following application. Suppose we want to build a system that needs to serve $N \gg 1$ classification tasks simultaneously. Let the number of classes for each task be bounded by $C$. The system can be based on a large pretrained language model with $M$ parameters, using word embedding size $E$. How many parameters should the system store in the device memory to be able to serve all $N$ tasks?
|
| 230 |
+
|
| 231 |
+
If we take the approach with frozen features, we can reuse the $M$ parameters for all tasks and store only $ECN$ additional task-specific parameters. This is optimal in terms of storage but will not perform well. The other extreme is to fine-tune the whole model for each task and store at least $MN$ parameters. Table 4 shows the trade-offs offered by the other solutions. Methods like TinyBERT reduce the per-task cost from $M$ to $M_0$, but the total still grows as $M_0N$. WARP, on the other hand, needs to store only $M + NE(C + K)$ parameters, where $K$ is the number of trainable prompt tokens.
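The storage formulas above can be made concrete with a small worked example. The numbers below ($M = 355\mathrm{M}$ for RoBERTa-large, $N = 1000$ tasks, $C = 10$ classes, $K = 10$ prompt tokens) are assumptions chosen only to illustrate the scaling.

```python
# Worked example of the parameter-storage formulas from Table 4.
M = 355_000_000  # pretrained model parameters (assumed RoBERTa-large scale)
E = 1024         # embedding dimension
N = 1000         # number of tasks to serve
C = 10           # max classes per task
K = 10           # trainable prompt tokens per task

full_finetuning = M * N              # one full copy of the model per task
linear_probing  = M + E * C * N      # shared model + one linear head per task
warp            = M + N * E * (C + K)  # shared model + prompts and verbalizers

print(f"full fine-tuning: {full_finetuning:,}")
print(f"linear probing:   {linear_probing:,}")
print(f"WARP:             {warp:,}")
```

Even with 1000 tasks, WARP stays within a few percent of the frozen-features lower bound, while full fine-tuning is three orders of magnitude larger.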
|
| 232 |
+
|
| 233 |
+
In practice, WARP additionally allows performing inference on inputs for different tasks in parallel, using samples of multiple tasks in the same batch. Every input sentence can be concatenated with task-specific pretrained prompts in advance. Then, the forward pass of the network is identical for all tasks. The final task-specific linear layers can be concatenated to form a single large linear layer with at most $NC$ output neurons.
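The concatenated task heads described above can be sketched as follows: per-task weight matrices of shape $(C_i, E)$ are stacked into one matrix, so a single matrix multiply serves all tasks and each task reads off its own output slice. The pure-Python matmul and tiny shapes are illustrative only.

```python
# Combine per-task linear heads into one matrix with per-task output slices.
def concat_heads(heads):
    # heads: list of per-task weight matrices, each a list of C_i rows of length E.
    combined, slices, offset = [], [], 0
    for w in heads:
        combined.extend(w)
        slices.append((offset, offset + len(w)))
        offset += len(w)
    return combined, slices

def apply(combined, x):
    # One "matmul": dot every row of the combined matrix with the input vector.
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in combined]

heads = [
    [[1.0, 0.0], [0.0, 1.0]],               # task A: 2 classes, E = 2
    [[1.0, 1.0], [2.0, 0.0], [0.0, 2.0]],   # task B: 3 classes
]
combined, slices = concat_heads(heads)
logits = apply(combined, [0.5, 2.0])
task_b = logits[slices[1][0]:slices[1][1]]
print(task_b)  # [2.5, 1.0, 4.0]
```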
|
| 234 |
+
|
| 235 |
+
This approach can be especially useful in the systems that provide machine learning models as a service. By storing one copy of a pretrained language model, it is possible to serve a large number of user-specific models in parallel with little overhead.
|
| 236 |
+
|
| 237 |
+
# 7 Conclusion
|
| 238 |
+
|
| 239 |
+
In this paper we have proposed an alternative way to transfer knowledge from large pretrained language models to downstream tasks by appending carefully optimized embeddings to the input text. The method outperforms existing methods that train significantly more parameters on GLUE benchmark tasks, and shows impressive performance in a few-shot setting on two SuperGLUE tasks. On the sentiment analysis task, its performance is comparable to fully fine-tuned language models. The method can save substantial storage in software applications designed to serve large numbers of sentence classification tasks.
|
| 240 |
+
|
| 241 |
+
# Acknowledgments
|
| 242 |
+
|
| 243 |
+
This work is based in part on research sponsored by Air Force Research Laboratory (AFRL) under agreement number FA8750-19-1-1000. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation therein. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Air Force Research Laboratory, DARPA or the U.S. Government.
|
| 244 |
+
|
| 245 |
+
The work was supported by the RA Science Committee within the framework of research project No. 20TTAT-AIa024. Most experiments were performed on GPUs donated by NVIDIA.
|
| 246 |
+
|
| 247 |
+
# References
|
| 248 |
+
|
| 249 |
+
Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of monolingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4623–4637, Online. Association for Computational Linguistics.
|
| 250 |
+
Roy Bar Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. 2006. The second PASCAL recognising textual entailment challenge.
|
| 251 |
+
Luisa Bentivogli, Ido Dagan, Hoa Trang Dang, Danilo Giampiccolo, and Bernardo Magnini. 2009. The fifth PASCAL recognizing textual entailment challenge.
|
| 252 |
+
T. Brown, B. Mann, Nick Ryder, Melanie Subbiah, J. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, G. Krueger, T. Henighan, R. Child, Aditya Ramesh, D. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, E. Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, J. Clark, Christopher Berner, Sam McCandlish, A. Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. ArXiv, abs/2005.14165.
|
| 253 |
+
Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14, Vancouver, Canada. Association for Computational Linguistics.
|
| 254 |
+
Kevin Clark, Minh-Thang Luong, Quoc Le, and Christopher D. Manning. 2020. Pre-training transformers as energy-based cloze models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 285-294, Online. Association for Computational Linguistics.
|
| 255 |
+
Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge. In Machine learning challenges. evaluating predictive uncertainty, visual object classification, and recognising textual entailment, pages 177-190. Springer.
|
| 256 |
+
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
|
| 257 |
+
|
| 258 |
+
William B Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the International Workshop on Paraphrasing.
|
| 259 |
+
Gamaleldin F. Elsayed, Ian Goodfellow, and Jascha Sohl-Dickstein. 2019. Adversarial reprogramming of neural networks. In International Conference on Learning Representations.
|
| 260 |
+
Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing, pages 1-9. Association for Computational Linguistics.
|
| 261 |
+
Thanh-Le Ha, Jan Niehues, and Alexander Waibel. 2016. Toward multilingual neural machine translation with universal encoder and decoder.
|
| 262 |
+
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. DeBERTa: Decoding-enhanced bert with disentangled attention. arXiv preprint arXiv:2006.03654.
|
| 263 |
+
N. Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and S. Gelly. 2019. Parameter-efficient transfer learning for nlp. In ICML.
|
| 264 |
+
Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2020. TinyBERT: Distilling BERT for natural language understanding. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4163-4174, Online. Association for Computational Linguistics.
|
| 265 |
+
Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351.
|
| 266 |
+
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In International Conference on Learning Representations.
|
| 267 |
+
Hector J Levesque, Ernest Davis, and Leora Morgenstern. 2011. The Winograd schema challenge. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning, volume 46, page 47.
|
| 268 |
+
Xiang Lisa Li and Percy Liang. 2021. Prefix-Tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190.
|
| 269 |
+
Y. Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. ArXiv, abs/1907.11692.
|
| 272 |
+
Marie-Catherine de Marneffe, Mandy Simons, and Judith Tonhauser. 2019. The Commitment Bank: Investigating projection in naturally occurring discourse. Proceedings of Sinn und Bedeutung, 23(2):107-124.
|
| 273 |
+
Paarth Neekhara, Shehzeen Hussain, Shlomo Dubnov, and Farinaz Koushanfar. 2019. Adversarial reprogramming of text classification neural networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5216-5225, Hong Kong, China. Association for Computational Linguistics.
|
| 274 |
+
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.
|
| 275 |
+
Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019. To tune or not to tune? adapting pretrained representations to diverse tasks. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 7-14, Florence, Italy. Association for Computational Linguistics.
|
| 276 |
+
Jonas Pfeiffer, Aishwarya Kamath, Andreas Rückle, Kyunghyun Cho, and Iryna Gurevych. 2021. AdapterFusion: Non-destructive task composition for transfer learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 487-503, Online. Association for Computational Linguistics.
|
| 277 |
+
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
|
| 278 |
+
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
|
| 279 |
+
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.
|
| 280 |
+
Timo Schick and Hinrich Schütze. 2021a. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 255-269, Online. Association for Computational Linguistics.
|
| 283 |
+
Timo Schick and Hinrich Schütze. 2021b. It's not just size that matters: Small language models are also few-shot learners. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2339-2352, Online. Association for Computational Linguistics.
|
| 284 |
+
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Controlling politeness in neural machine translation via side constraints. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 35-40, San Diego, California. Association for Computational Linguistics.
|
| 285 |
+
Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020a. AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4222-4235, Online. Association for Computational Linguistics.
|
| 286 |
+
Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020b. AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4222-4235, Online. Association for Computational Linguistics.
|
| 287 |
+
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP, pages 1631-1642.
|
| 288 |
+
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019a. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In NeurIPS.
|
| 289 |
+
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019b. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations.
|
| 290 |
+
Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641.
|
| 291 |
+
Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics.
|
| 294 |
+
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
|
| 295 |
+
|
| 296 |
+
# A Hyperparameters
|
| 297 |
+
|
| 298 |
+
For each of the tasks, we performed hyperparameter search in the following space:
|
| 299 |
+
|
| 300 |
+
- Learning rate is chosen from the set $\{10^{-2}, 3 \cdot 10^{-3}, 10^{-3}, 3 \cdot 10^{-4}, 10^{-4}, 3 \cdot 10^{-5}\}$,
|
| 301 |
+
- Number of epochs is chosen as either 10 or 20. This determines the behavior of the slanted triangular learning rate scheduler.
|
| 302 |
+
- Initialization is performed either with the embedding of the [MASK] token, or randomly initialized from a normal distribution, with the mean and variance taken from the matrix of RoBERTa's word embeddings.
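The slanted triangular schedule mentioned above (linear warm-up followed by linear decay) can be sketched as below; the 10% warm-up fraction is an assumption for illustration, not a value stated in the paper.

```python
# Hedged sketch of a slanted triangular learning-rate schedule.
def slanted_triangular(step, total_steps, peak_lr, warmup_frac=0.1):
    cut = max(1, int(total_steps * warmup_frac))
    if step < cut:
        # rising edge: linear warm-up from 0 to peak_lr
        return peak_lr * step / cut
    # falling edge: linear decay from peak_lr back to 0
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - cut))

total = 100
lrs = [slanted_triangular(s, total, 1e-3) for s in range(total)]
print(max(lrs))
```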
|
| 303 |
+
|
| 304 |
+
The hyperparameter search took roughly 4 days on two Titan V GPUs. The final choices for each task are shown in Table 5. Initialization with [MASK] performed better than the random initialization.
|
| 305 |
+
|
| 306 |
+
We disable all dropouts inside the Transformer. We use the Hugging Face implementation of the AdamW optimizer with weight decay disabled. Gradients are clipped to a norm of 1.0. For batch sampling we use bucketing with a padding noise of 0.1. To use device memory more effectively, we also cap the number of tokens per batch at 2048. The maximum sequence length is truncated to 512 tokens. We enable mixed precision and pad all sequence lengths to multiples of 8 for effective usage of Tensor Cores<sup>8</sup>.
|
| 307 |
+
|
| 308 |
+
# B Learned Tokens
|
| 309 |
+
|
| 310 |
+
Table 6 lists the closest vocabulary words to the learned embeddings. Most tasks have two input sentences, so the prompts consist of three parts: the first is added before the first sentence, the second between the two sentences, and the third after the second sentence. For single-sentence tasks, the second and third parts of the prompt are simply concatenated. Each task also has trainable verbalizer tokens, one per output class.
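The three-part prompt assembly can be sketched as follows; the `[P_i]` placeholders stand in for trainable prompt embeddings and `[MASK]` marks the prediction slot, with the layout (before / between / after) matching the description above. The concrete strings are illustrative assumptions.

```python
# Assemble a WARP-style input from prompt parts and one or two sentences.
def build_input(before, between, after, sent1, sent2=None):
    if sent2 is None:
        # single-sentence tasks: second and third parts are concatenated
        return " ".join(before + [sent1] + between + after)
    return " ".join(before + [sent1] + between + [sent2] + after)

print(build_input(["[P1]"], ["[P2]", "[MASK]"], ["[P3]"],
                  "A man is playing.", "Someone plays."))
print(build_input(["[P1]"], ["[P2]"], ["[MASK]"], "Great movie."))
```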
|
| 311 |
+
|
| 312 |
+
The prompts of RTE, MRPC and STS-B are quite similar to MNLI's prompts, as the models for these tasks were initialized from pretrained MNLI models. The other tasks were initialized with [MASK] tokens. The final model for CoLA did not move far from its initialization.
|
| 313 |
+
|
| 314 |
+
$^{8}$ https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html
|
| 315 |
+
|
| 316 |
+
<table><tr><td>Task</td><td>Learning rate</td><td>Epochs</td><td>Init.</td></tr><tr><td>MNLI</td><td>0.001</td><td>10</td><td>[MASK]</td></tr><tr><td>QNLI</td><td>0.001</td><td>10</td><td>[MASK]</td></tr><tr><td>QQP</td><td>0.0003</td><td>20</td><td>[MASK]</td></tr><tr><td>RTE</td><td>0.001</td><td>20</td><td>MNLI</td></tr><tr><td>SST-2</td><td>0.003</td><td>20</td><td>[MASK]</td></tr><tr><td>MRPC</td><td>0.001</td><td>20</td><td>MNLI</td></tr><tr><td>CoLA</td><td>0.001</td><td>20</td><td>[MASK]</td></tr><tr><td>STS-B</td><td>0.001</td><td>20</td><td>MNLI</td></tr></table>
|
| 317 |
+
|
| 318 |
+
Table 5: Hyperparameters of our best-performing models. [MASK] means the prompts are initialized with the word embedding of the same token, and MNLI means the prompt is initialized with the prompts of our best MNLI run.
|
| 319 |
+
|
| 320 |
+
<table><tr><td rowspan="6">MNLI</td><td rowspan="3">Prompts</td><td>before</td><td>A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-Tomorrow Ale .aGj * .</td></tr><tr><td>between</td><td>_MUCH irin [/ _a (@ [MASK] _dL aHJ E [MASK] _aKH</td></tr><tr><td>after</td><td>_<!-_informing inyl _entit dim</td></tr><tr><td rowspan="3">Verbalizers</td><td>entailment</td><td>_categories</td></tr><tr><td>neutral</td><td>gomery</td></tr><tr><td>contradiction</td><td>Unless</td></tr><tr><td rowspan="5">QNLI</td><td rowspan="3">Prompts</td><td>before</td><td>*. _neigh [MASK] U {}</td></tr><tr><td>between</td><td>aG-aG- [MASK] olitan _pronouns [MASK] [MASK] [MASK] @@@[MASK]_Choi [MASK]</td></tr><tr><td>after</td><td></td></tr><tr><td rowspan="2">Verbalizers</td><td>entailment</td><td>_VIDE</td></tr><tr><td>not_ entailment</td><td>470</td></tr><tr><td rowspan="5">QQP</td><td rowspan="3">Prompts</td><td>before</td><td>_resembling_swarm_Calm_Membership</td></tr><tr><td>between</td><td>.derive rics [MASK] alias iary [MASK] _omnip [MASK] [MASK] [MASK] _sham</td></tr><tr><td>after</td><td>[MASK] _forb [MASK] _Firefly _THEY</td></tr><tr><td rowspan="2">Verbalizers</td><td>notDuplicate</td><td>ende</td></tr><tr><td>duplicate</td><td>_sugg</td></tr><tr><td rowspan="5">RTE</td><td rowspan="3">Prompts</td><td>before</td><td>A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-Tomorrow ALE .aGj * .</td></tr><tr><td>between</td><td>_MUCH irin [/ _a (@ [MASK] _aHJ femin [MASK] _aK</td></tr><tr><td>after</td><td>ahiahi _informing # _entit OOOO</td></tr><tr><td rowspan="2">Verbalizers</td><td>entailment</td><td>e!</td></tr><tr><td>not_ entailment</td><td>_blames</td></tr><tr><td rowspan="5">SST-2</td><td rowspan="3">Prompts</td><td>before</td><td>choes _charms_sorely _"akijakij</td></tr><tr><td>between</td><td>a ffe Pae _charred _masked [MASK] _Fall _babys _smartest ik /</td></tr><tr><td>after</td><td>dL forums _bio _mang A+</td></tr><tr><td 
rowspan="2">Verbalizers</td><td>negative</td><td>_defective</td></tr><tr><td>positive</td><td>很重要的</td></tr><tr><td rowspan="5">MRPC</td><td rowspan="3">Prompts</td><td>before</td><td>A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-Tomorrow rison .aGj * .</td></tr><tr><td>between</td><td>_MUCH irin [/ _a jay [MASK] _dL aHJ femin [MASK] .?</td></tr><tr><td>after</td><td>_> _informing # _entit OOOO</td></tr><tr><td rowspan="2">Verbalizers</td><td>entailment</td><td>_categories</td></tr><tr><td>neutral</td><td>gomery</td></tr><tr><td rowspan="5">CoLA</td><td rowspan="3">Prompts</td><td>before</td><td>[MASK] [MASK] [MASK] [MASK]</td></tr><tr><td>between</td><td>[MASK] [MASK] [MASK] [MASK] [MASK] [MASK] [MASK] [MASK] [MASK]</td></tr><tr><td>after</td><td>[MASK] [MASK] [MASK] [MASK] [MASK]</td></tr><tr><td rowspan="2">Verbalizers</td><td>unacceptable</td><td>_additionally</td></tr><tr><td>acceptable</td><td>o</td></tr><tr><td rowspan="3">STS-B</td><td rowspan="3">Prompts</td><td>before</td><td>A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A</td></tr><tr><td>between</td><td>_Kers irin [/ _a (@ [MASK] _dL AhAHAhAH femin [MASK] _aKH</td></tr><tr><td>after</td><td>A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A-A</td></tr></table>
|
| 321 |
+
|
| 322 |
+
Table 6: The closest words to the prompt and verbalizer token embeddings of the best model for each task, measured by cosine distance. [MASK] tokens highlighted in bold indicate the positions we use to output the prediction.
|
data/2021/2101_00xxx/2101.00121/layout.json
CHANGED
|
The diff for this file is too large to render.
See raw diff
|
|
|
data/2021/2101_00xxx/2101.00133/a7973c66-170a-4f81-81e1-4c4a0eed068c_content_list.json
CHANGED
|
The diff for this file is too large to render.
See raw diff
|
|
|
data/2021/2101_00xxx/2101.00133/a7973c66-170a-4f81-81e1-4c4a0eed068c_model.json
CHANGED
|
The diff for this file is too large to render.
See raw diff
|
|
|
data/2021/2101_00xxx/2101.00133/full.md
CHANGED
|
@@ -1,3 +1,412 @@
|
|
| 1 |
-
|
| 2 |
-
|
| 3 |
-
|
| 1 |
+
# NeurIPS 2020 EfficientQA Competition: Systems, Analyses and Lessons Learned
|
| 2 |
+
|
| 3 |
+
<table><tr><td>Sewon Min2</td><td>SEWON@CS.WASHINGTON.EDU</td></tr><tr><td>Jordan Boyd-Graber3</td><td>JBG@UMIACS.umd.EDU</td></tr><tr><td>Chris Alberti1</td><td>CHRISALBERTI@google.com</td></tr><tr><td>Danqi Chen4</td><td>DANQIC@CS.PRINCETON.EDU</td></tr><tr><td>Eunsol Choi5</td><td>EUNSOL@CS.UTEXAS.EDU</td></tr><tr><td>Michael Collins1</td><td>MJCOLLINS@google.com</td></tr><tr><td>Kelvin Guu1</td><td>KGUU@google.com</td></tr><tr><td>Hannaneh Hajishirzi1</td><td>HANNANEH@CS.WASHINGTON.EDU</td></tr><tr><td>Kenton Lee1</td><td>KENTONL@google.com</td></tr><tr><td>Jennimaria Palomaki1</td><td>JPALOMAKI@google.com</td></tr><tr><td>Colin Raffel1</td><td>CRAFFEL@google.com</td></tr><tr><td>Adam Roberts1</td><td>ADAROB@google.com</td></tr><tr><td>Tom Kwiatkowski1</td><td>TOMKWIAT@google.com</td></tr></table>
|
| 4 |
+
|
| 5 |
+
<sup>1</sup>Google Research, New York City, NY, USA; Mountain View, CA, USA; Seattle, WA, USA
|
| 6 |
+
$^{2}$ University of Washington, Seattle, WA, USA
|
| 7 |
+
<sup>3</sup> University of Maryland, College Park, MD, USA
|
| 8 |
+
4Princeton University, Princeton, NJ, USA
|
| 9 |
+
<sup>5</sup>University of Texas at Austin, Austin, TX, USA
|
| 10 |
+
|
| 11 |
+
Patrick Lewis $^{T1}$ , Yuxiang Wu $^{T1}$ , Heinrich Kuttler $^{T1}$ , Linqing Liu $^{T1}$ , Pasquale Minervini $^{T1}$ , Pontus Stenetorp $^{T1}$ , Sebastian Riedel $^{T1,3}$ , Sohee Yang $^{T2}$ , Minjoon Seo $^{T2}$ , Gautier Izacard $^{T3,7}$ , Fabio Petroni $^{T3,7}$ , Lucas Hosseini $^{T3,7}$ , Nicola De Cao $^{T3}$ , Edouard Grave $^{T3,7}$ , Ikuya Yamada $^{T4}$ , Sonse Shimaoka $^{T4}$ , Masatoshi Suzuki $^{T4}$ , Shumpei Miyawaki $^{T4}$ , Shun Sato $^{T4}$ , Ryo Takahashi $^{T4}$ , Jun Suzuki $^{T4}$ , Martin Fajcik $^{T5}$ , Martin Docekal $^{T5}$ , Karel Ondrej $^{T5}$ , Pavel Smrz $^{T5}$ , Hao Cheng $^{T6}$ , Yelong Shen $^{T6}$ , Xiaodong Liu $^{T6}$ , Pengcheng He $^{T6}$ , Weizhu Chen $^{T6}$ , Jianfeng Gao $^{T6}$ , Barlas Oguz $^{T7}$ , Xilun Chen $^{T7}$ , Vladimir Karpukhin $^{T7}$ , Stan Peshterliev $^{T7}$ , Dmytro Okhonko $^{T7}$ , Michael Schlichtkrull $^{T7}$ , Sonal Gupta $^{T7}$ , Yashar Mehdad $^{T7}$ , Wen-tau Yih $^{T7}$
|
| 12 |
+
|
| 13 |
+
$^{\mathrm{T1}}$ UCLNLP & Facebook AI; $^{\mathrm{T2}}$ NAVER Clova; $^{\mathrm{T3}}$ Facebook AI Paris & London;
|
| 14 |
+
$^{\mathrm{T4}}$ Studio Ousia, Tohoku University & RIKEN; $^{\mathrm{T5}}$ Brno University of Technology;
|
| 15 |
+
$^{\mathrm{T6}}$ Microsoft Research & Dynamics 365 AI; ${}^{\mathrm{T7}}$ Facebook AI
|
| 16 |
+
|
| 17 |
+
Editors: Hugo Jair Escalante and Katja Hofmann
|
| 18 |
+
|
| 19 |
+
# Abstract
|
| 20 |
+
|
| 21 |
+
We review the EfficientQA competition<sup>1</sup> from NeurIPS 2020<sup>2</sup>. The competition focused on open-domain question answering (QA), where systems take natural language questions as input and return natural language answers. The aim of the competition was to build systems that can predict correct answers while also satisfying strict on-disk memory budgets. These memory budgets were designed to encourage contestants to explore the trade-off between storing retrieval corpora or the parameters of learned models. In this report, we describe the motivation and organization of the competition, review the best submissions, and analyze system predictions to inform a discussion of evaluation for open-domain QA.
Keywords: Question answering, Memory efficiency, Knowledge representation
# 1. Introduction
Open-domain question answering (QA) is emerging as a benchmark method of measuring computational systems' abilities to retrieve, represent, and read knowledge (Voorhees and Tice, 2000; Chen et al., 2017; Seo et al., 2019; Lee et al., 2019). Recently, this task has been addressed by a diverse set of approaches that navigate multiple documents (Min et al., 2019b; Asai et al., 2019), index large corpora of text (Lee et al., 2019; Guu et al., 2020; Karpukhin et al., 2020), represent world knowledge in the parameters of a neural network (Roberts et al., 2020), or consolidate knowledge from multiple passages (Izacard and Grave, 2020). More comprehensive background is provided in Chen and Yih (2020).
The EfficientQA competition, held at NeurIPS 2020, required contestants to build self-contained systems that contain all of the knowledge required to answer open-domain questions. There were no constraints on how the knowledge is stored—it could be in documents, databases, the parameters of a neural network, or any other form. However, the competition encouraged systems that store and access this knowledge using the smallest number of bytes, including code, corpora, and model parameters. Specifically, EfficientQA had four tracks: 1) best accuracy overall (unconstrained); 2) best accuracy, system size under 6GiB; 3) best accuracy, system size under 500MiB; 4) smallest system to get $25\%$ accuracy. These memory budgets were designed to encourage contestants to explore the trade-off between storing and accessing large, redundant, retrieval corpora, structured data stores, or the parameters of large learned models.
This paper summarizes the findings from the competition. Section 2 describes the competition in detail; Section 3 presents a description of the best performing submissions in each track; Section 4 introduces a new human evaluation of system accuracy, and Section 5 provides results and an analysis based on automatic and human evaluation. Finally, Section 6 pits the top systems against human trivia experts.
# 1.1. Key takeaways
The top submissions in each of EfficientQA's four tracks significantly outperformed the provided baselines. All top submissions use a retrieval corpus and a neural network answering module. However, the nature of the retrieval corpus and answering module differs drastically across the tracks (Table 1).
<table><tr><td>Track</td><td>Model</td><td>Affiliation</td><td>retr</td><td>answer</td><td>others</td></tr><tr><td rowspan="4">Unrestricted</td><td>REALM</td><td>Organizers</td><td>p</td><td>ext</td><td></td></tr><tr><td>DPR</td><td>Organizers</td><td>p</td><td>ext</td><td></td></tr><tr><td>MS UnitedQA</td><td>Microsoft & Dynamics 365</td><td>p</td><td>ext+gen</td><td></td></tr><tr><td>FB Hybrid</td><td>Facebook AI</td><td>p</td><td>gen</td><td>lists/tables</td></tr><tr><td rowspan="5">6GiB</td><td>DPR-subset</td><td>Organizers</td><td>p (pruned)</td><td>ext</td><td></td></tr><tr><td>T5-XL+SSM</td><td>Organizers</td><td>X</td><td>gen</td><td></td></tr><tr><td>FB system</td><td>FAIR-Paris&London</td><td>p (pruned)</td><td>gen</td><td>lists, lrzip compression</td></tr><tr><td>Ousia-Tohoku Soseki</td><td>Studio Ousia, Tohoku U & RIKEN</td><td>p (pruned)</td><td>ext</td><td>ZPAQ compression</td></tr><tr><td>BUT R2-D2</td><td>Brno U of Technology</td><td>p (pruned)</td><td>ext+gen</td><td></td></tr><tr><td rowspan="3">500MiB</td><td>T5-Small+SSM</td><td>Organizers</td><td>X</td><td>gen</td><td></td></tr><tr><td>UCLNLP-FB system</td><td>UCL & FAIR</td><td>(q,a)</td><td>-</td><td>data augmentation</td></tr><tr><td>NAVER RDR</td><td>NAVER Clova</td><td>p (pruned)</td><td>ext</td><td>single Transformer</td></tr><tr><td rowspan="2">25% smallest</td><td>T5-XL+SSM</td><td>Organizers</td><td>X</td><td>gen</td><td></td></tr><tr><td>UCLNLP-FB system (29M)</td><td>UCL & FAIR</td><td>(q,a)</td><td>-</td><td>data augmentation</td></tr></table>
Table 1: A list of the baselines and systems from participants, along with the team affiliation and key distinctions between systems. retr indicates what the system retrieves, e.g., passages $(p)$ , question-answer pairs $((q, a))$ , or none (X). pruned indicates that the Wikipedia corpus is pruned. answer indicates whether the answer is extracted (ext) or generated (gen). Other key distinctions are shown in the last column.
Unrestricted and 6GiB tracks The top submissions to the unrestricted track and the 6GiB track (Sections 3.1; 3.2) outperformed the state-of-the-art baselines from April 2020 by nearly $20\%$ . They achieved this improvement by combining state-of-the-art retrieval systems (Karpukhin et al., 2020; Mao et al., 2020) with answer generation (Izacard and Grave, 2020); leveraging the state of the art in text generation (Raffel et al., 2020) and text encoding (Clark et al., 2020); modeling not only text but also tables and lists from Wikipedia; and combining extractive and generative answer prediction. The top submissions to the 6GiB track additionally massively reduced the size of their indexed corpus and made use of the state of the art in compression, with minimal impact on accuracy.
500MiB and smallest tracks To get under 500MiB (Section 3.3), the systems made more drastic changes. The submission from NAVER Clova drastically reduced the size of its indexed corpus and reused a single Transformer model for both the retriever and the reader, winning the 500MiB track according to the human evaluation. The even smaller UCLNLP-FB system took a novel approach: generating a large corpus of question-answer pairs, indexing it, and retrieving the most similar question to the input question. This approach, instantiated as two systems whose question-answer corpora differ in size, won both the 500MiB track and the smallest $25\%$ track according to the automatic evaluation.
Automatic vs. human evaluation The human evaluation supports the observation that automatic metrics often incorrectly penalize correct predictions in the open-domain setting (Voorhees and Tice, 2000; Roberts et al., 2020). We also investigate the effect of question ambiguity on evaluation—the questions from NQ are often ambiguous without the associated evidence document (Min et al., 2020). In Section 4 we define an annotation scheme that supports multiple estimations of accuracy, corresponding to different definitions of correctness for answers to ambiguous questions. Almost all systems' accuracy increased by $20\% - 25\%$ under the strictest definition of correctness. The increase doubled when we relaxed the definition of correctness to permit any semantically valid interpretation of the question. We present a discussion as well as suggestions for future evaluation (Section 5.2).
# 2. Competition Overview
Data The competition uses English questions and answers from the Natural Questions dataset (Kwiatkowski et al., 2019, NQ): real user questions issued to the Google search engine, along with reference answers from Wikipedia<sup>3</sup>. Real user questions in NQ are interpretable outside of a document context (unlike SQuAD (Rajpurkar et al., 2016)), and less amenable to traditional IR approaches than over-complete trivia questions (Joshi et al., 2017), as discussed in Lee et al. (2019). While the original NQ task was posed as a reading comprehension task, in which evidence documents are provided, recent work has adapted the NQ data to the open-domain setting (Lee et al., 2019; Min et al., 2019a,b; Asai et al., 2019; Guu et al., 2020; Roberts et al., 2020; Karpukhin et al., 2020) by taking examples with short answers (up to five tokens) and discarding evidence documents. In the open-domain setting, NQ contains 88k one-way annotated training examples and 4k five-way annotated development examples. For EfficientQA, we introduce a new test and development set constructed in the same way as the original NQ, but labeled slightly after (early 2019 rather than through 2018).<sup>4</sup> Our test set was kept hidden from contestants, and submissions were made by uploading solutions to an automatic leaderboard.
System size All submissions to the three restricted tracks were submitted as self-contained Docker<sup>5</sup> images. We defined system size to be the on-disk, at-rest size of this image. We chose this approach to avoid confusion about distinctions between data, model parameters, and code. However, this choice did lead to significant engineering effort for the very smallest systems, which were not able to build on the standard Docker templates for the predominant deep-learning libraries. While the organizers did provide a small reference system based on T5 (Roberts et al., 2020) that used TensorFlow Serving<sup>6</sup>, this did not support most submissions, and the very smallest systems required clever compilation strategies on top of their modeling enhancements.
Evaluation metrics We measured the performance of different systems through automatic and human evaluation. For automatic evaluation, the accuracy of each system's predicted answers is judged against reference annotations from five human workers. We follow the literature in using exact match between predicted and reference answers after minor normalization (Lee et al., 2019). Due to the ambiguities inherent in language and question answering in general, five reference answers are often not exhaustive, and systems predict correct answers that are judged incorrect according to the automatic metrics.
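Concretely, the normalization used for exact match lowercases the text and strips punctuation, English articles, and extra whitespace; a prediction counts as correct if it matches any reference after normalization. A minimal sketch (function names are ours, for illustration):

```python
import re
import string

def normalize_answer(s):
    """Lowercase and remove punctuation, articles, and extra whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction, references):
    """A prediction is correct if it matches ANY reference after normalization."""
    norm_pred = normalize_answer(prediction)
    return any(norm_pred == normalize_answer(ref) for ref in references)
```

For example, `exact_match("The Beatles!", ["beatles"])` is true, while a semantically equivalent but lexically different answer (e.g. "the Fab Four") would still be scored as wrong, which is exactly the failure mode the human evaluation in Section 4 addresses.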

Figure 1: Memory footprint of each system component.
To rectify this, we sent predictions from each of the systems for further human rating by three raters, to get a better estimation of accuracy, as detailed in Section 4.
Competition schedule The competition was announced in June 2020, along with baselines and tutorials. The official leaderboard for the restricted settings was launched on September 14th, 2020, and participants had two months to submit their systems (until November 14th, 2020). Finally, predictions from the top systems on each leaderboard were sent for human evaluation, which was completed at the end of November. The same set of top systems was invited to present their systems at the NeurIPS event on December 12th and in this paper. The human vs. computer competition was held on December 6th (details in Section 6).
In total, we had 39 submissions from 18 unique teams, seven of which were affiliated or co-affiliated with universities.
# 3. Systems
We describe the provided baselines and the systems from participants. Systems from participants include the top 1-3 submissions per track based on the automatic metric, taking the margins between submissions into account. Table 1 summarizes the key distinctions between the systems, and Figure 1 shows the memory footprint of each system component.
# 3.1. Unconstrained track
The top two submissions both build on previous state-of-the-art systems, namely DPR (Karpukhin et al., 2020) and a generative reader (Izacard and Grave, 2020). They substantially enhance these systems through better training objectives, by aggregating the answers from extractive and generative models, or by incorporating lists and tables from Wikipedia.
Baselines: REALM/DPR Both REALM (Guu et al., 2020) and DPR (Karpukhin et al., 2020) use the retriever-reader framework. They retrieve the top $K$ passages from Wikipedia by mapping the question and each passage to dense vectors and employing maximum inner product search. The top $K$ passages are then fed into an extractive reader which predicts the start and the end position of the answer. A key distinction between REALM and DPR is that REALM is trained with a self-supervised, joint objective, while DPR is trained in a pipelined manner with gold positives and distantly supervised negatives.
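At its core, the retrieval step is maximum inner product search over precomputed passage vectors. A brute-force toy sketch of that step (real systems use an approximate-search library such as FAISS over millions of vectors):

```python
def top_k_passages(question_vec, passage_vecs, k=2):
    """Score every passage by inner product with the question vector and
    return the indices of the k highest-scoring passages (brute-force MIPS)."""
    scores = [
        sum(q * p for q, p in zip(question_vec, vec))
        for vec in passage_vecs
    ]
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
```

In REALM and DPR the vectors come from learned BERT-style encoders; only the scoring and top-$K$ selection are shown here.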
MS UnitedQA The UnitedQA system consists of three components: retrieval, reading, and re-ranking. First, it uses DPR to fetch the top 100 passages from the English Wikipedia dump for a given question. Second, a hybrid approach, combining both generative and extractive readers, is used to produce answer candidates from the collection of retrieved passages. The generative reader is Fusion-in-Decoder (Izacard and Grave, 2020) based on T5 (Raffel et al., 2020), and the extractive reader is based on ELECTRA (Clark et al., 2020). Several techniques are applied to facilitate the training of the readers: posterior differential regularization (PDR; Cheng et al., 2020b) and an improved loss term (Cheng et al., 2020a) are explored for the extractive model, and adversarial training (Ju et al., 2019) and attention bias (Lewis et al., 2020a) for the generative model. At the final re-ranking stage, it combines the generative and extractive model predictions via linear interpolation to produce the final answer. More details of the system can be found in Cheng et al. (2021).
FB Hybrid This system uses a retriever-reader architecture. The retriever is a combination of DPR and Generation-Augmented Retrieval (GAR) (Mao et al., 2020), each of which retrieves 50 passages. The DPR encoders are trained iteratively, where better hard negatives are mined at each step using the model from the previous step—two iterations are found to be sufficient. Its dense index includes lists and tables, as well as regular text passages. Specifically, 455,907 tables and infoboxes are processed from the NQ training corpus using a trivial linearization, which concatenates the text representation of each row with a newline character.<sup>7</sup> Tables are chunked at 100 tokens with the header included in each chunk. The reader is Fusion-in-Decoder based on T5-large, which is given 100 passages from the retriever and generates an answer. More details of the system are in Oguz et al. (2020).
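The row-wise linearization and 100-token chunking described above can be sketched as follows. This is a simplified illustration using whitespace tokens; the function names and the greedy packing strategy are our assumptions, not necessarily the system's exact implementation:

```python
def linearize_table(header, rows):
    """Concatenate the text representation of each row with newlines."""
    lines = [" ".join(header)] + [" ".join(row) for row in rows]
    return "\n".join(lines)

def chunk_table(header, rows, max_tokens=100):
    """Greedily pack rows into chunks of at most max_tokens whitespace
    tokens, repeating the header at the start of every chunk."""
    header_text = " ".join(header)
    chunks, current = [], [header_text]
    n_tokens = len(header_text.split())
    for row in rows:
        row_text = " ".join(row)
        row_len = len(row_text.split())
        if current[1:] and n_tokens + row_len > max_tokens:
            chunks.append("\n".join(current))
            current, n_tokens = [header_text], len(header_text.split())
        current.append(row_text)
        n_tokens += row_len
    chunks.append("\n".join(current))
    return chunks
```

Each resulting chunk is then indexed exactly like a regular text passage.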
# 3.2. 6GiB track
For this track, we provided two baselines: one retrieval-based and one generative. However, all of the top three submissions are retrieval-based, and they reduce the memory footprint by drastically pruning the Wikipedia corpus, either with a learned model or based on page views. They also made use of model quantization and state-of-the-art compression to limit their systems' footprint, and they enhanced the previous state of the art in the same ways as the submissions to the unrestricted track.
Retrieval-based baseline: DPR-subset We create a variant of DPR with pruned Wikipedia. Specifically, we only keep passages from the pages that are paired with the questions on the training set of NQ, reducing the number of passages from 21M to 1.6M.
Generative baseline: T5-XL+SSM T5 (Raffel et al., 2020) is a text-to-text Transformer language model with an encoder-decoder architecture that was pre-trained using a "span corruption" objective on Common Crawl data. This approach is based on Roberts et al. (2020), which demonstrated a "closed-book" setting where the model can only access the knowledge stored in its parameters after fine-tuning on the task. Following Roberts et al. (2020), the model is additionally pre-trained using a "salient span masking" objective (Guu et al., 2020) before fine-tuning. The XL model has approximately 3B parameters.
FB system This system is based on a retriever-reader approach. The retriever is an ensemble of a dense retriever and GAR. The dense retriever is initialized from BERT-base and is trained by distilling the cross-attention scores of the reader. GAR (Mao et al., 2020) is a retriever with generative query augmentation, based on BM25. The Lucene $^8$ index is built on the fly. The reader is Fusion-in-Decoder initialized from T5-large.
The text corpus initially included 26M passages, comprising plain text and lists, and was reduced to 18.8M passages by article-level filtering. Specifically, the filter is a linear classifier in which each Wikipedia article is represented by its title and list of categories.
The model weights are stored using float16, taking 1.6GB of memory. The dense index is compressed by relying on three strategies, described in Izacard et al. (2020): 1) the vector representations go through dimension reduction from 768 to 256, 2) they are further quantized through product quantization, using 2 bits per dimension, and 3) the text of Wikipedia is compressed using $\mathrm{lrzip}^9$ .
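A back-of-envelope calculation shows why the first two steps matter. Assuming the 18.8M passages described above, we can compare raw 768-dimensional float32 vectors (32 bits per dimension) against 256-dimensional codes product-quantized at 2 bits per dimension:

```python
def index_bytes(n_vectors, dim, bits_per_dim):
    """Size in bytes of a flat index storing n_vectors codes of
    dim dimensions at bits_per_dim bits each."""
    return n_vectors * dim * bits_per_dim // 8

N = 18_800_000  # passages kept after article-level filtering

full = index_bytes(N, 768, 32)       # uncompressed float32 index
compressed = index_bytes(N, 256, 2)  # reduced dims + product quantization

print(f"full index:       {full / 2**30:.1f} GiB")
print(f"compressed index: {compressed / 2**30:.1f} GiB")
print(f"compression:      {full / compressed:.0f}x")
```

Roughly 54 GiB shrinks to about 1.1 GiB, a 48x reduction in index size, before lrzip is even applied to the Wikipedia text. (This ignores the small codebook overhead of product quantization.)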
Ousia-Tohoku Soseki Soseki is an open-domain QA system that adopts a two-stage approach consisting of Retriever for passage retrieval, and Reader for reading comprehension. Given a question, Retriever obtains top-k candidate passages from Wikipedia. Then, Reader selects the most relevant passage from them and extracts an answer from the passage.
Retriever is based on a re-implementation of DPR, where two BERT-base-uncased models are used to embed questions and passages. We also quantize passage embeddings to reduce the system size. Embeddings of Wikipedia passages are precomputed and stored using Faiss (Johnson et al., 2017). Reader is a reading comprehension model based on ELECTRA-large (Clark et al., 2020). We added one dense layer for selecting the most relevant passage, and two dense layers for detecting the start and end positions of the answer. All modules are trained on the NQ dataset.
To reduce the system size to under 6GiB, we compressed the models, passages, and other data files using ZPAQ $^{10}$ and excluded Wikipedia passages with fewer than 40 monthly page views, resulting in 18M passages. These techniques do not reduce accuracy.
BUT R2-D2 R2-D2 is composed of a dense retriever, a re-ranker and two readers. The dense retriever is based on RoBERTa (Liu et al., 2019) and is trained via the objective known from DPR. It retrieves $K = 400$ passages from a pruned version of Wikipedia, reduced from 21M to 1.6M passages. Pruning is done via a simple binary classifier based on RoBERTa, trained on data created from the golden passages and negative passages randomly sampled from the index. This classifier obtains $90.2\%$ accuracy. The re-ranker (based on Longformer (Beltagy et al., 2020)) concatenates retrieved passages, assigns a score to each passage, and selects $V = 20$ passages. The extractive reader (based on ELECTRA) reads the $V$ passages in a similar way as in Fajcik et al. (2020). It is trained via the marginal likelihood objective combined with the compound objective. The generative reader follows the Fusion-in-Decoder scheme and generates the answers. R2-D2 aggregates the output from these two readers using two fusion methods. First, it reranks the top spans from the extractive reader by feeding them to the generative reader, combining the resulting log-likelihood with the log-likelihood from the extractive reader through a linear combination. These scores are then further aggregated, through another linear combination, with abstractive answers that the generative reader generates independently. The parameters are stored in float16 and compressed using ZIP. More details of the system are in Fajcik et al. (2021).
# 3.3. 500MiB track
The top-performing submissions in this track made more drastic changes to get under 500MiB, and took completely different approaches from each other.
Baseline: T5-Small+SSM This is a smaller version of the T5-XL+SSM baseline above, with a hidden dimension of 512 instead of 4096, 6 layers instead of 24, and 8 attention heads instead of 64. It contains 60M parameters in total.
UCLNLP-FB system This system is based on the approach from Lewis et al. (2020c), consisting of a database of question-answer pairs and a retriever which returns the answer of the most similar stored question to an input question. This approach is attractive as it performs well using low parameter-count models, and less space is needed to store question-answer pairs than full Wikipedia passages.
The Question-Answer Pair Generator is similar to Alberti et al. (2019); Lewis et al. (2019). First, a passage selection model $P(c)$ is trained to identify appropriate passages from Wikipedia. For high-probability passages (w.r.t. $P(c)$ ), the system performs Named Entity Recognition to extract likely answers $a$ , and generates questions $q$ from $(c, a)$ using a BART (Lewis et al., 2020b) model $P(q \mid a, c)$ fine-tuned on NQ. Finally, the system filters question-answer pairs with a "global consistency filter"—if an open-domain QA system (Izacard and Grave, 2020) is given $q$ and generates an answer that is consistent with $a$ , the $(q, a)$ pair is added to the database. This is because generated $(q, a)$ pairs are sometimes incorrect or unanswerable without the passage they were generated from. The final database consists of NQ, the EfficientQA development set and 2.3M generated question-answer pairs. The Retriever consists of TF-IDF and a bi-encoder retriever based on ALBERT-base (Lan et al., 2020). The Reranker is a cross-encoder module based on ALBERT-large; it reranks the top-10 question-answer pairs from the retriever, and the answer of the top-1 question-answer pair is chosen as the final answer.
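The core lookup of this approach — return the stored answer of the question most similar to the input — can be illustrated with a toy similarity function. The Jaccard token overlap below is a stand-in for illustration; the actual system combines TF-IDF with a learned bi-encoder and an ALBERT reranker:

```python
def retrieve_answer(question, qa_database):
    """Return the answer of the stored question most similar to the input,
    scored here by Jaccard overlap of lowercase tokens (a toy stand-in for
    the system's TF-IDF + bi-encoder + reranker pipeline)."""
    q_tokens = set(question.lower().split())

    def score(stored_q):
        s_tokens = set(stored_q.lower().split())
        return len(q_tokens & s_tokens) / len(q_tokens | s_tokens)

    best_q = max(qa_database, key=score)
    return qa_database[best_q]
```

For example, with a database `{"who wrote hamlet": "Shakespeare", "who wrote faust": "Goethe"}`, the query "who actually wrote hamlet" returns "Shakespeare". Because only question-answer strings are stored, the "index" is orders of magnitude smaller than a full passage corpus.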
The system is further compressed using TFLite, quantization, and Alpine Linux. More details can be found in Lewis et al. (2021).
NAVER RDR RDR is a lightweight retrieve-and-read system, consisting of a single Transformer (MobileBERT (Sun et al., 2020)) that performs both retrieval and reading, an index of dense passage vectors and the filtered Wikipedia corpus. The Transformer serves as a dense retriever as DPR does with two differences: (1) a single Transformer is used to encode both the question and the passage, and (2) the embeddings are defined as the first 128 out of 512 dimensions of the output vectors from the Transformer that correspond to the [CLS] token. The same Transformer serves as an extractive reader by producing the scores of the start and the end of the answer span as well as the passage reranking score. A distillation technique is used for training, by minimizing the KL divergence between its start, end, and reranking scores and those of a fully trained DPR reader. The Transformer is further finetuned in an iterative manner as a retriever and as a reader.
The index consists of 1.2M 128-dimensional INT8 vectors (scalar-quantized and unsigned), which are the dense embeddings of the subset of the 21M passages in Wikipedia, filtered by a RoBERTa (Liu et al., 2019)-based binary classifier trained with logistic regression to exclude uninformative passages. Positives to train this classifier are the top 200 passages for each question on the NQ dataset and the EfficientQA development set, retrieved by Yang and Seo (2020), a DPR retriever further finetuned on hard negatives using knowledge distillation from a DPR reader. Negatives are uniformly drawn from the set of 21M passages, excluding positives. More details can be found in Yang and Seo (2021).
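Scalar quantization of this kind maps each float coordinate to an unsigned 8-bit code, cutting index size by 4x versus float32. A generic sketch (the exact calibration of the value range used by the system may differ):

```python
def quantize_uint8(vec, lo, hi):
    """Map each float in [lo, hi] to an unsigned 8-bit code in 0..255."""
    scale = (hi - lo) / 255.0
    return [min(255, max(0, round((x - lo) / scale))) for x in vec]

def dequantize_uint8(codes, lo, hi):
    """Approximate reconstruction of the original floats."""
    scale = (hi - lo) / 255.0
    return [lo + c * scale for c in codes]
```

Inner-product search then runs over the (dequantized or directly rescaled) codes, trading a small amount of retrieval accuracy for a much smaller index.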
# 3.4. $25\%$ smallest track
Baseline: T5-XL+SSM Among the provided baselines, the smallest system with an accuracy of over $25\%$ is T5-XL+SSM, the same system described in Section 3.2. This system is 5.65GB and achieves an accuracy of $28\%$ .
UCLNLP-FB system (29M) This is the same system as the UCLNLP-FB system in the 500MiB track, with the following minor modifications to further decrease memory: (1) the retriever is just TF-IDF instead of a combination of TF-IDF and a bi-encoder, (2) the reranker uses ALBERT-base instead of ALBERT-large, and (3) there are 40k generated question-answer pairs, instead of 2.3M.
# 4. Human Annotations of Correctness
The EfficientQA development and test sets have up to five reference answers per question. Due to the variability of language, these five answer strings are often not exhaustive, and systems predict correct answers that are judged incorrect according to the automatic metrics. To rectify this, we sent predictions from each of the systems in Section 3 for human rating to get a better estimation of accuracy.
Each system prediction was sent for rating by three separate annotators: 1) the annotator first works on understanding the meaning and intent of the question (with a web search if necessary). 2) The annotator then determines whether the question is ambiguous, i.e., whether the question can lead to multiple different answers depending on factors such as: when the query was asked; where the query was asked; some unspoken intent of the questioner; or the opinion of the person giving the answer. 3) Finally, the annotator determines whether each answer is "definitely correct" (correct given a usual interpretation of the question), "possibly correct" (could be correct, given some interpretation of the question), or "definitely incorrect".
Since the original NQ data was collected more than a year before the start of the EfficientQA competition, the denotation of some questions may have changed over time (e.g. "who won the last season of bake-off"). Rather than determine a single correct point in time for these questions, we asked our annotators to assume that the query could have been asked at any time since the web has existed and choose the "possibly correct" label for answers that may or may not have been correct when the question was asked.
The final rating is an aggregation of the ratings from three annotators: if at least 2/3 of raters determined a prediction to be "definitely correct", the label is "definitely correct". If at least 2/3 of raters determined it to be either "definitely correct" or "possibly correct", the label is "possibly correct". The pairwise agreements of the human ratings are $69.2\%$ (Cohen's $\kappa = 53.8$ ) for 3-way ratings, $85.7\%$ (Cohen's $\kappa = 71.4$ ) for whether the prediction is definitely correct or not, and $76.7\%$ (Cohen's $\kappa = 53.4$ ) for whether the prediction is possibly correct or not.<sup>11</sup>

<table><tr><td rowspan="2">Track</td><td rowspan="2">Model</td><td rowspan="2">Automatic eval</td><td colspan="2">Human eval</td></tr><tr><td>Definitely</td><td>Possibly</td></tr><tr><td rowspan="2">Unrestricted</td><td>MS UnitedQA</td><td>54.00</td><td>65.80 (+21.9%)</td><td>78.12 (+44.7%)</td></tr><tr><td>FB Hybrid</td><td>53.89</td><td>67.38 (+25.0%)</td><td>79.88 (+48.2%)</td></tr><tr><td rowspan="3">6GiB</td><td>FB system</td><td>53.33</td><td>65.18 (+22.2%)</td><td>76.09 (+42.7%)</td></tr><tr><td>Ousia-Tohoku Soseki</td><td>50.17</td><td>62.01 (+23.6%)</td><td>73.83 (+47.2%)</td></tr><tr><td>BUT R2-D2</td><td>47.28</td><td>58.96 (+24.7%)</td><td>70.33 (+49.2%)</td></tr><tr><td rowspan="2">500MiB</td><td>UCLNLP-FB system</td><td>33.44</td><td>39.40 (+17.8%)</td><td>47.37 (+41.7%)</td></tr><tr><td>NAVER RDR</td><td>32.06</td><td>42.23 (+31.7%)</td><td>54.95 (+71.4%)</td></tr><tr><td>25% smallest</td><td>UCLNLP-FB system (29M)</td><td>26.78</td><td>32.45 (+21.2%)</td><td>41.21 (+53.9%)</td></tr></table>

Table 2: Summary of the results. For the human evaluation results, relative improvements over the automatic evaluation are shown in parentheses. Following our analysis of the annotations, we use "definitely correct" human ratings as the primary metric.
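The 2-of-3 aggregation rule can be stated compactly as a function over the three per-annotator labels; a sketch:

```python
def aggregate_rating(ratings):
    """Aggregate three annotator ratings into a final label.
    Each rating is one of: "definitely", "possibly", "incorrect"."""
    definitely = sum(r == "definitely" for r in ratings)
    possibly_or_better = sum(r in ("definitely", "possibly") for r in ratings)
    if definitely >= 2:
        return "definitely correct"
    if possibly_or_better >= 2:
        return "possibly correct"
    return "definitely incorrect"
```

For instance, the ratings ("definitely", "possibly", "incorrect") aggregate to "possibly correct", since only one rater chose "definitely" but two chose at least "possibly".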
# 5. Results & Analyses
# 5.1. Results
All five systems in the unrestricted track and the 6GiB track significantly outperform the state of the art at the beginning of the competition (Table 2)—DPR $(36.6\%)$ and REALM $(35.9\%)$ . Systems in the 6GiB track approach the unrestricted track's accuracy; for instance, the accuracy of the FB system is comparable to that of the top systems in the unrestricted track. The improvements in the $500\mathrm{MiB}$ track are also impressive; both of the top two systems significantly beat T5-Small $(17.6\%)$ .
Discrepancy between automatic eval and human eval Human raters find $13\%$ and $17\%$ of the predictions that do not match the reference answers to be definitely correct or possibly correct, respectively, overall increasing the measured accuracy of the systems. Most systems gain $17 - 25\%$ and $41 - 54\%$ in accuracy under the definitely-correct and possibly-correct human evaluations, respectively, compared to the automatic evaluation metric, which only considers exact string matches against the existing reference answers. An exception is NAVER RDR, which achieves significantly larger improvements ( $32\%$ and $71\%$ , respectively). We also found that when the gap between systems under the automatic metric is marginal (around $1\%$ or smaller), human evaluation may change the rankings of the models.
Agreement between system predictions Figure 2 (left) shows the agreement between system predictions, based on exact match in the automatic evaluation. The largest agreement is between FB Hybrid and FB system, likely because they are both based on DPR and Fusion-in-Decoder. Agreements between systems in the unrestricted and the 6GiB tracks are generally higher, likely because they are all based on the retriever-reader framework and pruning Wikipedia does not hurt accuracy too much. The two systems in the 500MiB track have smaller agreement with the other systems, and agree with each other even less.





Figure 2: (Left) Agreement between system predictions. (Right) Ensemble oracle accuracy, which considers a prediction correct if at least one of the system predictions is correct (based on "definitely correct" human evaluation).
Ensemble oracle accuracy of the systems Figure 2 (right) reports the ensemble oracle accuracy for each system pair, which considers a prediction to be correct if either system's prediction is correct. The FB Hybrid & Ousia-Tohoku Soseki pair achieves the highest ensemble oracle accuracy, indicating that their predictions differ from each other substantially more than those of other pairs of top-performing systems.
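Given per-question correctness labels for two systems, the ensemble oracle accuracy is simply the fraction of questions where at least one system is right; a sketch:

```python
def ensemble_oracle_accuracy(correct_a, correct_b):
    """Fraction of questions where at least one of two systems is correct.
    correct_a / correct_b are parallel lists of booleans, one per question."""
    hits = sum(a or b for a, b in zip(correct_a, correct_b))
    return hits / len(correct_a)
```

A high oracle accuracy for a pair whose individual accuracies are similar indicates that the two systems make complementary errors, which is why the metric is useful for spotting promising ensembles.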
# 5.2. Analyses
We present an analysis of 50 sampled questions for which at least one prediction does not match the gold answer but is judged as correct by human raters, with a "definitely correct" or a "possibly correct" label, respectively. The samples largely divide into three classes: as valid as gold (judged as either definitely correct or possibly correct), valid but not the best (judged as possibly correct), and closer to invalid (judged as possibly correct). We describe fine-grained categories and their percentages $^{12}$ (definitely correct, possibly correct) here, with examples shown in Appendix A (Table 7).
The following describes categories on predictions that are as valid as the gold answers.
- Semantically the same (60%, 22%): The prediction is semantically equivalent to the gold, e.g., "about 930 BCE" and "around 930 BCE".<sup>13</sup>
- Open-ended question (6%, 4%): There is a large set of distinct, plausible answers, mainly because the question is vague or open-ended.
- Ambiguous entity/event references (20%, 20%): There is a set of distinct answers because the question contains ambiguous references of entities or events. For instance,
"Gold woman" in the example in Table 7 may refer to the fictional character ("Ayesha") or the actress ("Elizabeth Debicki").
- Different granularity (14%, 6%): The prediction and the gold answer have different granularity, such as the year (1982) vs. the month (October 1982), or the city (Pawtucket) vs. the state (Rhode Island).
- Incorrect gold (2%, 6%): The gold annotation is incorrect.
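The prevalence of the "Semantically the same" category reflects how automatic evaluation works: predictions are compared to reference answers by normalized string match, which cannot credit paraphrases. A sketch of such SQuAD-style normalization (illustrative only, not necessarily the competition's exact scoring script):

```python
import re
import string

PUNCT = set(string.punctuation)

def normalize(text):
    # Lowercase, drop punctuation, drop articles, collapse whitespace.
    text = "".join(ch for ch in text.lower() if ch not in PUNCT)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, references):
    return any(normalize(prediction) == normalize(ref) for ref in references)

print(exact_match("The Steelers!", ["steelers"]))        # True
print(exact_match("about 930 BCE", ["around 930 BCE"]))  # False: paraphrases still fail
```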
The following describes categories on possibly correct predictions that are valid but not the best answers.
- Ambiguous answer type (20%): There is a unique entity or event that can be the answer to the question, but there is ambiguity about which exact text should be presented as the answer. For instance, both the episode title ("Somber News") and the air date ("February 23, 2013") are valid answers to the question in Table 7.
- Answer is time-dependent (18%): The answer depends on the time of the question being asked. Questions in this category usually involve recurring events such as sports games or elections.
Finally, we present categories for predictions that are closer to invalid answers.
- Conflicting information in Wikipedia (4%): While the gold is the only valid answer, English Wikipedia contains incorrect information supporting the prediction as an answer to the question. Consider the question in Table 7: although "Fort Hood" is factually incorrect, its Wikipedia page<sup>14</sup> states "Fort Hood is the most populous U.S. military installation in the world."<sup>15</sup>
- Plausible only in certain conditions (4%): The prediction may be valid in certain conditions, but is incorrect in general. For instance, for the question in the table, "president of India" may only be valid in India.
- Mismatch with question intent (8%): The prediction is somewhat valid but supposedly not the intended answer to the question. For instance, the example question in Table 7 is supposedly asking for an answer different from the Great Depression.
- Incorrect prediction (2%): The prediction is definitely incorrect.
Our two main takeaways are as follows. First, the automatic evaluation confirms the observation of Voorhees and Tice (2000) that exact match is insufficient for capturing semantically equivalent answers, which are responsible for $60\%$ of the definitely correct predictions. Second, ambiguity arises frequently in the questions at different levels, allowing multiple, semantically different answers to be valid. This is consistent with Min et al. (2020), who reported that around half of the questions in NQ contain ambiguity due to ambiguous references of entities, events, or properties, or time-dependence of the answer. In our human evaluation, annotations of ambiguity have a low agreement rate ( $61.3\%$ , Cohen's $\kappa = 0.226$ ), and predictions with the same level of plausibility are often marked as "definitely correct" or "possibly correct" by different human raters. We note that the notions of
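Cohen's κ here is the standard chance-corrected agreement between two raters. A minimal sketch of its computation (the labels below are toy data, not the actual annotations):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    # Chance-corrected agreement: 1.0 is perfect, 0.0 is chance level.
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    counts1, counts2 = Counter(rater1), Counter(rater2)
    expected = sum(counts1[label] * counts2[label]
                   for label in set(counts1) | set(counts2)) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy ambiguity labels from two raters (hypothetical).
r1 = ["ambiguous", "ambiguous", "clear", "clear"]
r2 = ["ambiguous", "clear", "clear", "clear"]
print(cohens_kappa(r1, r2))  # prints 0.5
```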
<table><tr><td rowspan="2">Model</td><td colspan="2">All</td><td colspan="2">Unambiguous Qs</td></tr><tr><td>Definitely</td><td>Possibly</td><td>Definitely</td><td>Possibly</td></tr><tr><td>MS UnitedQA</td><td>65.80</td><td>78.12</td><td>78.24</td><td>81.18</td></tr><tr><td>FB Hybrid</td><td>67.38</td><td>79.88</td><td>82.65</td><td>85.59</td></tr><tr><td>FB system</td><td>65.18</td><td>76.09</td><td>79.12</td><td>81.47</td></tr><tr><td>Ousia-Tohoku Soseki</td><td>62.01</td><td>73.83</td><td>72.94</td><td>75.00</td></tr><tr><td>BUT R2-D2</td><td>58.96</td><td>70.55</td><td>69.71</td><td>72.06</td></tr><tr><td>UCLNLP-FB system</td><td>39.40</td><td>47.37</td><td>42.06</td><td>43.24</td></tr><tr><td>NAVER RDR</td><td>42.23</td><td>54.95</td><td>49.71</td><td>53.24</td></tr><tr><td>UCLNLP-FB system (29M)</td><td>32.45</td><td>41.21</td><td>28.53</td><td>30.29</td></tr></table>
Table 3: Human evaluation on the original set and a subset of unambiguous questions.
"correctness" and "plausibility" are not binary, and are instead often dependent on pragmatic interpretation of the questioner's intent. For example, the question "who has the most superbowl rings" could be read as "which person (including coaches) has the most superbowl rings", "which player has the most superbowl rings", or "which team has the most superbowl rings". All three annotators identified this question as ambiguous, but they disagreed about the validity of the different readings. The three raters were split three ways when rating the correct answer ("Pittsburgh Steelers") for the last interpretation. Meanwhile, there were no "incorrect" ratings, and two of three "definitely correct" ratings were given to the correct answer ("Tom Brady") for the second interpretation, despite the fact that two coaches have more superbowl rings. Clearly, the annotators apply some personal interpretation of the questioner's intent and answer plausibility.
While we believe that many real world questions do require some non-literal assumptions about the questioner's intent, and we believe that the natural language processing community should not shy away from that task, we also acknowledge that there is work to be done in creating better, non-binary, definitions of correctness. Section 6 contrasts these interpretations of correctness with the more rigid definition used by the Trivia community.
Performance on unambiguous questions To better understand the effect of ambiguity on the ranking of different solutions, we also evaluate system performance on a subset of the questions that are unambiguous. We define unambiguous questions to be those that (1) have at least three of the five reference annotations containing a valid short answer $^{16}$ , and (2) are not labeled as ambiguous by any of the three human raters, resulting in $51.5\%$ of the original set. Table 3 shows human evaluation on the original set and this subset of unambiguous questions. Most systems, except UCLNLP-FB system, achieve higher accuracy on unambiguous questions, with the top three systems achieving near or above $80\%$ . Unsurprisingly, the gap between "definitely correct" accuracy and "possibly correct" accuracy is marginal on unambiguous questions.
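Under the two stated criteria, the filter can be sketched as follows (the data layout, one list of short answers per reference annotation plus one flag per rater, is an assumption, not the competition's actual format):

```python
def is_unambiguous(reference_short_answers, rater_ambiguity_flags):
    # Criterion 1: at least three of the five reference annotations
    # contain a valid short answer (modeled here as a non-empty list).
    enough_refs = sum(1 for refs in reference_short_answers if refs) >= 3
    # Criterion 2: none of the three raters labeled the question ambiguous.
    no_ambiguity = not any(rater_ambiguity_flags)
    return enough_refs and no_ambiguity

# Hypothetical question: 4 of 5 annotations have a short answer, no ambiguity flags.
print(is_unambiguous([["1991"], ["1991"], ["in 1991"], ["1991"], []],
                     [False, False, False]))  # True
```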
Importantly, the overall rankings are unchanged when we restrict our evaluation set to only unambiguous questions. This suggests that, while question ambiguity may lead to disagreement between annotators at a per-example level, it does not adversely impact our ability to consistently rank solutions. More analyses can be found in Appendix A.

Figure 3: (Left) Screenshot from our final game between computer systems and trivia experts. Full videos of the human-computer competition are available at https://go.umd.edu/2020eqa. (Right) The human-computer competition is broken into three phases, where each phase allows the participants to use more resources. While most questions were answered in Phase 1 (where humans had a distinct advantage), in Phase 2 computers had a slight advantage; there were diminishing returns in Phase 3.

# 6. Trivia Experts vs Computer Systems
The questions in NQ are posed by humans to computers, and the competition attracted some of the strongest and most efficient QA systems available today. However, humans also answer questions for fun and recreation (Jennings, 2006), and the ultimate goal of artificial intelligence is to create machines that answer questions as well as humans (Turing, 1995, known as the Turing test). Moreover, existing comparisons of human question answering ability often use unskilled humans (Rajpurkar et al., 2016), leading to claims of computers "putting millions of jobs at risk" (Cuthbertson, 2018). And in competitions with trivia experts (Ferrucci et al., 2010), arcane competition rules can tilt the playing field toward computers (Boyd-Graber and Borschinger, 2020), or the questions can be unnatural (Boyd-Graber et al., 2012; Rodriguez et al., 2019).
# 6.1. A Fair Comparison
We advertised our competition to trivia enthusiasts on social media. Teams of up to eight players applied to be part of the competition. We selected five teams to participate in the preliminary competition (results in Section 6.2).
To create a fair competition and to showcase all of the tiers of the EfficientQA competition, we offered three ways to answer each question, in which either humans or computers have more resources (Table 4).
To complement the 500MiB systems, humans had to instantly signal when they knew the answer to a question. This reflects instant recall of a fact by a single individual. In the next phase, in competition with the 6GiB systems, both humans and computers had more resources: the human team could discuss the answer for thirty seconds, arguing why they believed their answer was correct, and the computers had over ten times the memory. Finally, to focus on reading comprehension, unlimited systems faced off against the human teams, who also had access to snippets from search results using the question as a query. As in the previous phase, they had thirty seconds to discuss their answer.
We selected questions for the human evaluation based on the following criteria:
<table><tr><td>Phase</td><td>Human</td><td>Computer</td><td>Points</td></tr><tr><td>1</td><td>Single player “buzz in”</td><td>500MiB</td><td>3</td></tr><tr><td>2</td><td>Team discussion</td><td>6GiB</td><td>2</td></tr><tr><td>3</td><td>Team discussion w/ search results</td><td>Unlimited</td><td>1</td></tr></table>
Table 4: Phases of the human-computer competition. On odd questions the humans go first, while on even questions the computers go first. If neither team has a correct answer in a phase, we move on to the next phase.
<table><tr><td>Team</td><td>Margin</td></tr><tr><td>B</td><td>0.4 ppq</td></tr><tr><td>C</td><td>0.1 ppq</td></tr><tr><td>D</td><td>-0.1 ppq</td></tr><tr><td>A</td><td>-0.2 ppq</td></tr><tr><td>E</td><td>-0.6 ppq</td></tr></table>
Table 5: Margin per question of the human teams at the preliminary competition.
- Diverse in topic, ensuring there were questions about history, literature, philosophy, sports, and popular culture. This results in fewer questions about sports and popular culture than the standard NQ distribution.
- Not tied to 2018; we excluded questions whose answers depend on the date on which they are asked.
- Interesting questions. While not strictly adversarial, we wanted to showcase both human and computer ability, so we excluded questions that many humans would not know (e.g., "how many us states are there") or questions with answers that are difficult to evaluate in the NQ framework ("how many words are in Les Misérables?").
- To avoid the issues described in Section 4, we avoided questions that were overly ambiguous (the answer changes based on when the question was asked, unclear answer type, mismatch with question intent, etc.).
Thus, for the human competition, we narrowed the broader interpretation of correctness adopted in Section 5.2 to a standard we called "actually correct", consistent with traditional trivia interpretations (Jennings, 2006). A human judge researched all of the answers to the questions and evaluated whether an answer was correct or not. This allows more permissive answer lines such as "to pass away" to be accepted for questions like "what does it mean to cross over the rainbow bridge" even though the answer line only lists "heaven". Again, consistent with trivia best practice, we only required full names in cases of possible confusion (e.g., "Theodore Roosevelt" or "Franklin D. Roosevelt" instead of just "Roosevelt").
# 6.2. Preliminary Competition
To select which of the human teams faced off against the top computers in the final competition, and to let the human teams practice this unconventional format, we had the human teams face off against the baseline systems: T5 (500MiB), DPR (6GiB), and REALM (Unlimited).
We set aside an hour for each team. Because better teams can get through more questions, we computed the average margin per question to normalize comparisons across teams (Table 5). Only two teams scored higher than the computer baselines. The top team (Team B), which included multiple professors and trivia champions, was clearly the best human team on this set of questions.
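The per-question normalization can be sketched as follows (the point totals below are hypothetical; per Table 4, correct answers earn 3, 2, or 1 points depending on the phase):

```python
def margin_per_question(human_points, computer_points, num_questions):
    # Positive values mean the human team outscored the computer baselines.
    return (human_points - computer_points) / num_questions

# Hypothetical hour of play: 40 questions, humans score 52 points, computers 36.
print(margin_per_question(52, 36, 40))  # 0.4 ppq
```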
# 6.3. Final Game
The winning team (Team B) went up against the winning computer systems in a final match with fifty questions. Most questions were answered in Phase 1, and those that were not were typically answered in Phase 2 (Figure 3, right). Only four questions reached Phase 3; these questions stumped both humans and computers:
- Kimi no na wa what does it mean
- Who said there are old pilots and there are bold pilots
- What is the second largest city in ethiopia
- Who said i'll make mincemeat out of you
Of these questions, only the second largest city in Ethiopia was answered correctly (the humans worked through a list of Ethiopian cities they knew). The others represent questions that cannot be answered from Wikipedia ("old pilots"), questions that require language capabilities beyond question answering ("kimi no na wa"), or questions with a more nuanced answer than NQ can provide. For example, on the question "who said i'll make mincemeat out of you", the human team reasonably thought the question asked who originated the phrase, while NQ's answer was the cartoon character most associated with it (Klondike Kat).
# 6.4. Reflection
For question answering, there are multiple interpretations of the phrase "human evaluation". Human evaluation of answers is important for exposing problems in the dataset, revealing ambiguity, and measuring whether the answers are useful. However, for the ultimate question of whether we have achieved artificial intelligence (Turing, 1995), we need fair comparisons with skilled humans. Moreover, to measure whether question answering systems are useful for people, we need to create socio-technical systems where humans and computers can answer questions together (Feng and Boyd-Graber, 2019). More importantly, trivia games are fun. They help illustrate the strengths and weaknesses of the underlying dataset and the QA methods with a spoonful of sugar to make otherwise dull evaluations more interesting.
# 7. Conclusions and Future Work
The EfficientQA competition was held to encourage research in open-domain question answering that focuses on both accuracy and memory footprint. All top-performing submissions use a retriever-reader framework. Submissions to the unrestricted and 6GiB tracks enhance the state of the art, retrieving a subset of Wikipedia passages and employing extractive and/or generative answer modules; systems in the 6GiB track additionally select a small subset of Wikipedia cleverly, sacrificing only marginal accuracy when combined with state-of-the-art compression. In the more restricted tracks, systems explore novel approaches and achieve impressive improvements over the baselines. They either generate a large corpus of question-answer pairs and retrieve the question closest to the input question, or dramatically filter Wikipedia and use a single Transformer model for
retrieval and answer extraction. Still, they trail the unrestricted systems by $20\%$ in accuracy, indicating significant room for improvement in memory-restricted settings.
A human analysis shows that automatic evaluation of QA systems is not sufficient for thorough evaluation. Human raters find that $30\%$ of the predictions that do not match reference answers are nonetheless correct. This does not affect all systems equally: relative accuracy rises under the definitely correct standard by between $18\%$ and $32\%$ , and under the possibly correct standard by between $42\%$ and a whopping $71\%$ . The rise is mainly due to automatic evaluation failing to capture semantically equivalent answers, time-dependence of the answers, or underlying ambiguity in the questions (Min et al., 2020).
Future work in efficient open-domain question answering should continue to explore the tradeoff between system size, accuracy, and abstaining (He et al., 2016; Rajpurkar et al., 2018). Moreover, it is important to continue to refine the quality of QA evaluation: not all annotations and not all annotators are created equal. Using trivia enthusiasts for annotation and human benchmarks is a fun and effective evaluation of relative computer QA ability. We encourage future leaderboards to use some element of human verification in important evaluations (e.g., an annual bake off), removing ambiguity in questions (Min et al., 2020), crafting adversarial examples (Jia and Liang, 2017; Wallace et al., 2019; Dua et al., 2019; Bartolo et al., 2020), or evaluating whether a response is useful to a user (Fan et al., 2019; Feng and Boyd-Graber, 2019).
# Acknowledgments
We thank all the participants for taking part and making this a successful competition. We thank Google for providing prizes for computer participants. Boyd-Graber is supported by NSF Grant IIS-1822494. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsor.
# References
Chris Alberti, Daniel Andor, Emily Pitler, Jacob Devlin, and Michael Collins. Synthetic QA corpora generation with roundtrip consistency. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, July 2019.
Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, and Caiming Xiong. Learning to retrieve reasoning paths over wikipedia graph for question answering. In Proceedings of the International Conference on Learning Representations, 2019.
Max Bartolo, A. Roberts, Johannes Welbl, Sebastian Riedel, and Pontus Stenetorp. Beat the AI: Investigating adversarial human annotation for reading comprehension. Transactions of the Association for Computational Linguistics, 8:662-678, 2020.
Iz Beltagy, Matthew E Peters, and Arman Cohan. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150, 2020.
Jordan Boyd-Graber and Benjamin Borschinger. What question answering can learn from trivia nerds. ArXiv, 10, 2020. URL https://arxiv.org/abs/1910.14464.
Jordan Boyd-Graber, Brianna Satinoff, He He, and Hal Daume III. Besting the quiz master: Crowdsourcing incremental classification games. In Proceedings of Empirical Methods in Natural Language Processing, 2012.
Danqi Chen and Wen-tau Yih. Open-domain question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts, pages 34-37, 2020.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. Reading Wikipedia to answer open-domain questions. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, 2017.
Hao Cheng, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Probabilistic assumptions matter: Improved models for distantly-supervised document-level question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5657-5667, Online, July 2020a. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.501. URL https://www.aclweb.org/anthology/2020.acl-main.501.
Hao Cheng, Xiaodong Liu, Lis Pereira, Yaoliang Yu, and Jianfeng Gao. Posterior differential regularization with f-divergence for improving model robustness, 2020b.
Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. Unitedqa: A hybrid approach for open domain question answering. arXiv preprint arXiv:2101.00178, 2021.
Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. Electra: Pretraining text encoders as discriminators rather than generators. In Proceedings of the International Conference on Learning Representations, 2020.
Anthony Cuthbertson. Robots can now read better than humans, putting millions of jobs at risk, 2018. URL https://www.newsweek.com/robots-can-now-read-better-humans-putting-million-jobs-risk-781393.
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In *Conference of the North American Chapter of the Association for Computational Linguistics*, pages 2368-2378, Minneapolis, Minnesota, June 2019. doi: 10.18653/v1/N19-1246. URL https://www.aclweb.org/anthology/N19-1246.
Martin Fajcik, Josef Jon, Santosh Kesiraju, and Pavel Smrz. Rethinking the objectives of extractive question answering. arXiv preprint arXiv:2008.12804, 2020.
Martin Fajcik, Martin Docekal, Karel Ondrej, and Pavel Smrz. Pruning the index contents for memory efficient open-domain qa. arXiv preprint arXiv:2102.10697, 2021.
Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. Eli5: Long form question answering. arXiv preprint arXiv:1907.09190, 2019.
Shi Feng and Jordan Boyd-Graber. What AI can do for me: Evaluating machine learning interpretations in cooperative play. In International Conference on Intelligent User Interfaces, 2019.
David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A. Kalyanpur, Adam Lally, J. William Murdock, Eric Nyberg, John Prager, Nico Schlaefer, and Chris Welty. Building Watson: An Overview of the DeepQA Project. AI Magazine, 31(3), 2010.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. REALM: Retrieval-augmented language model pre-training. In Proceedings of the Conference on Machine Learning, 2020.
He He, Jordan Boyd-Graber, Kevin Kwok, and Hal Daumé III. Opponent modeling in deep reinforcement learning. In Proceedings of the International Conference of Machine Learning, 2016.
Gautier Izacard and Edouard Grave. Leveraging passage retrieval with generative models for open domain question answering. arXiv preprint arXiv:2007.01282, 2020.
Gautier Izacard, Fabio Petroni, Lucas Hosseini, Nicola De Cao, Sebastian Riedel, and Edouard Grave. A memory efficient baseline for open domain question answering, 2020.
Ken Jennings. *Brainiac: adventures in the curious, competitive, compulsive world of trivia buffs*. Villard, 2006. ISBN 9781400064458.
Robin Jia and Percy Liang. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021-2031, Copenhagen, Denmark, September 2017. Association for Computational Linguistics. doi: 10.18653/v1/D17-1215. URL https://www.aclweb.org/anthology/D17-1215.
Jeff Johnson, Matthijs Douze, and Hervé Jégou. Billion-scale similarity search with GPUs. arXiv preprint arXiv:1702.08734, 2017.
Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1601-1611, 2017.
Ying Ju, Fubang Zhao, Shijie Chen, Bowen Zheng, Xuefeng Yang, and Yunfeng Liu. Technical report on conversational question answering, 2019.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. Dense passage retrieval for open-domain question answering. In Proceedings of Empirical Methods in Natural Language Processing, 2020.
Divyansh Kaushik and Zachary C. Lipton. How much reading does reading comprehension require? a critical investigation of popular benchmarks. In Proceedings of Empirical Methods in Natural Language Processing, 2018.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. Natural Questions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics, 2019.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. ALBERT: A lite BERT for self-supervised learning of language representations. In Proceedings of the International Conference on Learning Representations, 2020.
Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the Conference of Association for Computational Linguistics, 2019.
Mike Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida Wang, and Luke Zettlemoyer. Pre-training via paraphrasing, 2020a.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, 2020b.
Patrick Lewis, Ludovic Denoyer, and Sebastian Riedel. Unsupervised question answering by cloze translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, July 2019.
Patrick Lewis, Pontus Stenetorp, and Sebastian Riedel. Question and answer test-train overlap in open-domain question answering datasets, 2020c.
Patrick Lewis, Yuxiang Wu, Linqing Liu, Pasquale Minervini, Heinrich Küttler, Aleksandra Piktus, Pontus Stenetorp, and Sebastian Riedel. Paq: 65 million probably-asked questions and what you can do with them. arXiv preprint arXiv:2102.07033, 2021.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
Yuning Mao, Pengcheng He, Xiaodong Liu, Yelong Shen, Jianfeng Gao, Jiawei Han, and Weizhu Chen. Generation-augmented retrieval for open-domain question answering. arXiv preprint arXiv:2009.08553, 2020.
Sewon Min, Danqi Chen, Hannaneh Hajishirzi, and Luke Zettlemoyer. A discrete hard EM approach for weakly supervised question answering. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, November 2019a.
Sewon Min, Danqi Chen, Luke Zettlemoyer, and Hannaneh Hajishirzi. Knowledge guided text retrieval and reading for open domain question answering, 2019b.
Sewon Min, Julian Michael, Hannaneh Hajishirzi, and Luke Zettlemoyer. AmbigQA: Answering ambiguous open-domain questions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2020.
Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, and Wen-tau Yih. Unified open-domain question answering with structured and unstructured knowledge. arXiv preprint, 2020.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 2020.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of Empirical Methods in Natural Language Processing, 2016.
Pranav Rajpurkar, Robin Jia, and Percy Liang. Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the Association for Computational Linguistics, July 2018. doi: 10.18653/v1/P18-2124. URL https://www.aclweb.org/anthology/P18-2124.
Adam Roberts, Colin Raffel, and Noam Shazeer. How much knowledge can you pack into the parameters of a language model? In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2020.
Pedro Rodriguez, Shi Feng, Mohit Iyyer, He He, and Jordan Boyd-Graber. Quizbowl: The case for incremental question answering. CoRR, abs/1904.04792, 2019. URL http://arxiv.org/abs/1904.04792.
Minjoon Seo, Jinhyuk Lee, Tom Kwiatkowski, Ankur Parikh, Ali Farhadi, and Hannaneh Hajishirzi. Real-time open-domain question answering with dense-sparse phrase index. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, July 2019.
Saku Sugawara, Kentaro Inui, Satoshi Sekine, and Akiko Aizawa. What makes reading comprehension questions easier? In Proceedings of Empirical Methods in Natural Language Processing, 2018. doi: 10.18653/v1/D18-1453. URL https://www.aclweb.org/anthology/D18-1453.
Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. MobileBERT: a compact task-agnostic bert for resource-limited devices. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, 2020.
Alan M. Turing. Computers & thought. Mind, pages 11-35, 1995. URL http://dl.acm.org/citation.cfm?id=216408.216410.
Ellen M Voorhees and Dawn M Tice. Building a question answering test collection. In Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval, pages 200-207. ACM, 2000.
Eric Wallace, Pedro Rodriguez, Shi Feng, Ikuya Yamada, and Jordan Boyd-Graber. Trick me if you can: Human-in-the-loop generation of adversarial question answering examples. Transactions of the Association of Computational Linguistics, 10, 2019.
Sohee Yang and Minjoon Seo. Is retriever merely an approximator of reader? arXiv preprint arXiv:2010.10999, 2020.
Sohee Yang and Minjoon Seo. Designing a minimal retrieve-and-read system for open-domain question answering. In NAACL-HLT, 2021.
<table><tr><td>Category</td><td>definitely (%)</td><td>possibly (%)</td></tr><tr><td colspan="3">As valid as gold</td></tr><tr><td>Semantically the same</td><td>60</td><td>22</td></tr><tr><td>Open-ended question</td><td>6</td><td>4</td></tr><tr><td>Ambiguous entity/event references</td><td>20</td><td>20</td></tr><tr><td>Answer has different granularity</td><td>14</td><td>6</td></tr><tr><td>Gold is incorrect</td><td>2</td><td>6</td></tr><tr><td colspan="3">Valid but not the best</td></tr><tr><td>Ambiguous answer type</td><td>0</td><td>20</td></tr><tr><td>Answer is time-dependent</td><td>0</td><td>18</td></tr><tr><td colspan="3">Less plausible</td></tr><tr><td>Conflicting info in Wikipedia</td><td>0</td><td>4</td></tr><tr><td>Plausible only in certain condition</td><td>0</td><td>4</td></tr><tr><td>Mismatch with question intent</td><td>0</td><td>8</td></tr><tr><td>Incorrect prediction</td><td>0</td><td>2</td></tr></table>
|
| 337 |
+
|
| 338 |
+
Table 6: Analysis of predictions that are rated as definitely correct or possibly correct by human raters. Examples of each category are shown in Table 7. Note that the total is over $100\%$ as one question may fall into multiple categories.
|
| 339 |
+
|
| 340 |
+
# Appendix A. Analyses
|
| 341 |
+
|
| 342 |
+
Characteristics of easy and hard questions Table 8 shows questions that are answered correctly by all seven systems (easy), or answered incorrectly by all seven systems (hard). A common feature of easy questions (Sugawara et al., 2018; Kaushik and Lipton, 2018) is that a sentence from Wikipedia provides explicit support for the question. Such supporting sentences have high lexical overlap with the question and require little paraphrasing (e.g. "the highest population" vs. "the most populous"). Even when the question is not well-formed, all systems may find the correct answer if there is high lexical overlap between the question and the supporting sentence (e.g. the fifth example question in Table 8).
|
| 343 |
+
|
| 344 |
+
Many of the hard questions have their answers in tables (e.g. the first question in Table 8). This likely makes them "hard" because no system except FB Hybrid considers tables, and even FB Hybrid may miss such cases. In other cases, systems make mistakes even when there is text that supports the answer (e.g. the next two questions in Table 8). Though the reason for the mistake likely differs across systems, we conjecture that such supporting sentences are harder to retrieve or do not support the answer as explicitly as in other examples. Occasionally, the question is not well-formed, such as the last one in Table 8.
|
| 345 |
+
|
| 346 |
+
Error Analysis We further present an error analysis of the top two systems in the 500MiB track: the UCLNLP-FB system and Naver RDR. We choose these two systems because their approaches differ significantly from each other and the 500MiB track fits the main motivation of the EfficientQA competition. We randomly sample 100 questions from the development data.
|
| 347 |
+
|
| 348 |
+
<table><tr><td>Category</td><td>Example</td></tr><tr><td colspan="2">As valid as gold</td></tr><tr><td>Semantically the same</td><td>Q: When did israel split into israel and judah / Gold: about 930 BCE / Prediction: around 930 BCE<br>Q: What movie is bring it on the musical based off of / Gold: Bring It On / Prediction: 2000 film of the same name</td></tr><tr><td>Open-ended question</td><td>Q: City belonging to mid west of united states / Gold: Des Moines / Prediction: kansas city</td></tr><tr><td>Ambiguous entity/event references</td><td>Q: Gold woman in guardians of the galaxy 2 / Gold: Ayesha / Prediction: Elizabeth Debicki</td></tr><tr><td>Answer has different granularity</td><td>Q: When did the musical cats open on broadway / Gold: 1982, 7 October 1982 / Prediction: october 1982<br>Q: Where did the industrial revolution in the united states begin / Gold: in Pawtucket / Prediction: rhode island</td></tr><tr><td>Gold is incorrect</td><td>Q: When did universal studios become a theme park / Gold: 1965 / Prediction: july 15 1964, 1964</td></tr><tr><td colspan="2">Valid but not the best</td></tr><tr><td>Ambiguous answer type</td><td>Q: When did naruto find out about jiraiya's death / Gold: “Somber News” / Prediction: february 23 2013</td></tr><tr><td>Answer is time-dependent</td><td>Q: Who goes to the big 12 championship game / Gold: Oklahoma Sooners / Prediction: tcu horned frogs</td></tr><tr><td colspan="2">Less plausible</td></tr><tr><td>Conflicting info in Wikipedia</td><td>Q: Based on population what is the largest military base in the united states / Gold: Fort Bragg / Prediction: fort hood</td></tr><tr><td>Plausible only in certain condition</td><td>Q: Who appointed the chief justice of high court / Gold: President / Prediction: president of india</td></tr><tr><td>Mismatch with question intent</td><td>Q: While the united states dealt with the depression it was also facing / Gold: Racial tensions / Prediction: great depression, economic downturns</td></tr><tr><td>Incorrect prediction</td><td>Q: When were banded iron formations formed on the sea floor / Gold: Precambrian / Prediction: some 185 billion years ago</td></tr></table>
|
| 349 |
+
|
| 350 |
+
Table 7: Examples of predictions rated as definitely correct or possibly correct by human raters.
|
| 351 |
+
|
| 352 |
+
40, 33, and 27 questions are answered correctly by both systems, by exactly one of the systems, and by neither system, respectively.
|
| 353 |
+
|
| 354 |
+
Table 9 shows the breakdown of the predictions from the UCLNLP-FB system, where 50 questions are answered correctly and the other 50 are not. A majority of the error cases (47 out of 50) are due to retrieving an incorrect question. Of these 47, 25 cases retrieve a question on a different topic from the input question, many of which discuss different entities (e.g., "don't it make you feel like dancing" vs. "you make me feel like dancing"). Another 19 cases retrieve a question that discusses the same topic or contains the same key entity but asks about different details. For instance, in the example in Table 9, the input question asks about the amount spent to make a film, while the retrieved question asks about the amount
|
| 355 |
+
|
| 356 |
+
<table><tr><td>Easy questions
|
| 357 |
+
Q: What city in america has the highest population / Gold: New York City
|
| 358 |
+
New York City: New York City is the most populous city in the United States ...</td></tr><tr><td>Q: Who plays 11 in stranger things on netflix / Gold: Millie Bobby Brown
|
| 359 |
+
Millie Bobby Brown: She gained notability for her role as Eleven in the first season of Netflix science fiction series Stranger Things.</td></tr><tr><td>Q: Where did leave it to beaver take place / Gold: Mayfield
|
| 360 |
+
Leave It to Beaver: Leave It to Beaver is set in the fictitious community of Mayfield and its environs.</td></tr><tr><td>Q: The substance that is dissolved in the solution / Gold: solute
|
| 361 |
+
Solution: A solute is a substance dissolved in another substance, known as a solvent.</td></tr><tr><td>Q: This means that in dna cytosine is always paired with / Gold: guanine
|
| 362 |
+
Guanine: In DNA, guanine is paired with cytosine.</td></tr><tr><td>Hard questions
|
| 363 |
+
Q: What is the area code for colombo sri lanka / Gold: 036
|
| 364 |
+
Telephone numbers in Sri Lanka: (Answer in the table)</td></tr><tr><td>Q: Which country is highest oil producer in the world / Gold: United States / Predictions: Russia
|
| 365 |
+
History of the petroleum industry in the United States: For much of the 19th and 20th centuries, the US was the largest oil producing country in the world.
|
| 366 |
+
List of countries by oil production: (Answer in the table)
|
| 367 |
+
Russia: Russia is the world's leading natural gas exporter and second largest natural gas producer, while also the second largest oil exporter and the third largest oil producer.</td></tr><tr><td>Q: Who did gibbs shoot in season 10 finale / Gold: Mendez
|
| 368 |
+
Predictions: fbi agent tobias fornell (from all systems in unrestricted/6GiB track)
|
| 369 |
+
Past, Present and Future (NCIS): Gibbs realizes that Mendez is ... The show then returns to the scene depicted in a scene from season 10 finale's ... shoots and kills Mendez.</td></tr><tr><td>Q: The girl next door film based on a true story / Gold: loosely / Prediction: girl next door</td></tr></table>
|
| 370 |
+
|
| 371 |
+
Table 8: Samples of easy questions (all systems predict correct answers) and hard questions (all systems predict incorrect answers).
|
| 372 |
+
|
| 373 |
+
made by the film. It is also worth noting that the system sometimes gets the correct answer from a retrieved question whose meaning differs from the input question's, e.g., "What republican is running for mayor of phoenix" vs. "Who is the current mayor of phoenix", which have the same answer because the current mayor of Phoenix is a Republican.
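The question-retrieval strategy described above can be sketched as a nearest-neighbor lookup over stored question-answer pairs. The snippet below is a toy illustration only: it uses a bag-of-words dot product as a stand-in for the system's learned dense question encoder, and the stored pairs are illustrative examples, not the actual training data.

```python
import numpy as np

# Toy question-retrieval QA: answer a new question by returning the
# stored answer of the most similar known question. The stored pairs
# and bag-of-words scoring are stand-ins for the real system's data
# and trained question encoder.
STORED_QA = [
    ("who played apollo creed in the first rocky movie", "carl weathers"),
    ("who sings the song you make me feel like dancing", "leo sayer"),
]

def embed(question, vocab):
    # Bag-of-words vector over a fixed vocabulary (a stand-in for a
    # trained dense question encoder).
    tokens = question.lower().split()
    return np.array([tokens.count(w) for w in vocab], dtype=float)

def answer(question):
    # Score the input question against every stored question and
    # return the answer of the nearest neighbor.
    vocab = sorted({w for q, _ in STORED_QA for w in q.split()})
    q_vec = embed(question, vocab)
    scores = [q_vec @ embed(q, vocab) for q, _ in STORED_QA]
    return STORED_QA[int(np.argmax(scores))][1]

print(answer("who played apollo creed in the original rocky"))  # prints "carl weathers"
```

As the error breakdown shows, this strategy fails exactly when the nearest stored question shares surface words but asks about something else.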
|
| 374 |
+
|
| 375 |
+
Table 10 shows the breakdown of the predictions from Naver RDR, where 63 questions are answered correctly and 37 are not. Failure due to pruning the gold passage is rare, accounting for only $3\%$ of cases. More of the failures come from missing the gold passage, either when selecting 80 passages through dense retrieval $(12\%)$ or when re-ranking those 80 passages with cross-attention to choose the top-1 passage $(15\%)$. Finally, in $7\%$ of the cases, the top-1 passage contains the gold answer but the system fails to extract the correct answer. The gold passages in this category provide valid but implicit support for the answer, e.g., the last example in Table 10.
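The three-stage retrieve, re-rank, and read pipeline discussed above can be sketched as follows. This is a minimal illustration, not Naver RDR's implementation: the scoring functions are toy stand-ins (dot products and token overlap) for the trained dense retriever, cross-attention reranker, and span-extraction reader, and all data is made up.

```python
import numpy as np

# Toy sketch of the three failure-prone stages: dense retrieval over
# passage vectors, reranking the candidates, then answer extraction.

def dense_retrieve(question_vec, passage_vecs, k=2):
    # Stage 1: dot-product search over precomputed passage vectors
    # (the real system keeps the top 80 passages).
    scores = passage_vecs @ question_vec
    return list(np.argsort(-scores)[:k])

def rerank(question, passages, candidates):
    # Stage 2: stand-in for cross-attention reranking down to the
    # top-1 passage; here scored by token overlap with the question.
    q_tokens = set(question.lower().split())
    overlap = [len(q_tokens & set(passages[i].lower().split()))
               for i in candidates]
    return candidates[int(np.argmax(overlap))]

def extract_answer(question, passage):
    # Stage 3: stand-in span extraction; return the first passage token
    # not already in the question (the real reader scores all spans).
    q_tokens = set(question.lower().split())
    leftover = [t for t in passage.lower().split() if t not in q_tokens]
    return leftover[0] if leftover else ""

question = "which is the smallest continent"
passages = ["australia is the smallest continent",
            "asia is the largest continent"]
# Toy vectors; in practice these come from learned encoders.
passage_vecs = np.array([[1.0, 0.0], [0.0, 1.0]])
question_vec = np.array([0.9, 0.1])

top_k = dense_retrieve(question_vec, passage_vecs)
best = rerank(question, passages, top_k)
print(extract_answer(question, passages[best]))  # prints "australia"
```

Each stage corresponds to one error category in Table 10: missing the gold passage at retrieval, dropping it at reranking, or picking the wrong span at extraction.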
|
| 376 |
+
|
| 377 |
+
<table><tr><td rowspan="2">Correct</td><td>Captured by automatic eval (32%)
|
| 378 |
+
Input: Who played apollo creed in the original rocky (A: carl weathers)
|
| 379 |
+
Retrieved: Who played apollo creed in the first rocky movie (A: carl weathers)</td></tr><tr><td>Captured by manual eval (18%)
|
| 380 |
+
Input: When does the new season of snl air (A: September 29, 2018)
|
| 381 |
+
Retrieved: When does the new season of snl 2017 start (A: September 30, 2017)</td></tr><tr><td rowspan="5">Incorrect</td><td>Retrieved Q is not related to input Q in topics (25%)
|
| 382 |
+
Input: Who sings don't it make you feel like dancing
|
| 383 |
+
Retrieved: Who sings the song you make me feel like dancing</td></tr><tr><td>Retrieved Q is related but is asking different detail (19%)
|
| 384 |
+
Input: How much did it cost to make bohemian rhapsody film
|
| 385 |
+
Retrieved: How much money did the movie rhapsody make</td></tr><tr><td>Retrieved Q is related but contains incorrect ‘wh’ word (3%)
|
| 386 |
+
Input: When was the current season of top chef filmed
|
| 387 |
+
Retrieved: What is the new season of top chef</td></tr><tr><td>Retrieved Q is context-dependent (1%)
|
| 388 |
+
Input: Who won the war of 1812 between russia and france
|
| 389 |
+
Retrieved: Who did we fight in the war of 1812</td></tr><tr><td>Retrieved Q has incorrect answer (2%)
|
| 390 |
+
Input: When was the last time florida state had a losing record
|
| 391 |
+
Retrieved: When was the last time florida state football had a losing season</td></tr></table>
|
| 392 |
+
|
| 393 |
+
Table 9: Breakdown of the predictions from UCLNLP-FB system on 100 random samples.
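The "captured by automatic eval" rows in Table 9 depend on exact-match scoring. A common normalization, popularized by the SQuAD evaluation script (lowercasing, removing punctuation and English articles, collapsing whitespace), is sketched below; the competition's official scorer may differ in details.

```python
import re
import string

def normalize_answer(s):
    # Common open-domain QA answer normalization: lowercase, drop
    # punctuation and English articles, collapse whitespace.
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction, golds):
    # 1.0 if the normalized prediction matches any normalized gold.
    return float(any(normalize_answer(prediction) == normalize_answer(g)
                     for g in golds))

print(exact_match("Carl Weathers", ["carl weathers"]))  # prints 1.0
```

Under this normalization, answers like "the Headpins" and "Headpins" count as matches, which is why manual evaluation recovers additional correct predictions beyond the automatic score.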
|
| 394 |
+
|
| 395 |
+
<table><tr><td rowspan="2">Correct</td><td>Captured by automatic eval (34%)
|
| 396 |
+
Q: Who played apollo creed in the original rocky / Gold: Carl Weathers / Prediction: Carl Weathers
|
| 397 |
+
Retrieved P: Carl Weathers: He is best known for portraying apollo creed in the “rocky” series of films.</td></tr><tr><td>Captured by manual eval (29%)
|
| 398 |
+
Q: what was the shelby in gone in 60 seconds / Gold: Shelby Mustang GT500 / Prediction: gt500
|
| 399 |
+
Retrieved P: Eleanor: The eleanor name is reused for a shelby mustang gt500 in the 2000 gone in 60 seconds remake.</td></tr><tr><td rowspan="4">Incorrect</td><td>No answer found in pruned Wiki (3%)
|
| 400 |
+
Q: Who has been chosen for the 2017 saraswati samman
|
| 401 |
+
Gold: Sitanshu Yashaschandra / Prediction: Vijayendra saraswathi
|
| 402 |
+
Retrieved P: jayendra saraswathi: The mutt's pontiff sri vijayendra saraswathi performed the poojas for his guru and predecessor.
|
| 403 |
+
Gold P: Sitanshu Yashaschandra: He received Saraswati Samman (2017) for his poetry collection..</td></tr><tr><td>Fail to retrieve the gold passage (12%)
|
| 404 |
+
Q: Who sings don't it make you feel like dancing / Gold: the Headpins / Prediction: leo sayer
|
| 405 |
+
Retrieved P: you make me feel like dancing is a song by the british singer leo sayer ..</td></tr><tr><td>Fail to rerank the gold passage (15%)
|
| 406 |
+
Q: Who says all animals are equal but some are more equal than others
|
| 407 |
+
Gold: The pigs / Prediction: aristotle
|
| 408 |
+
Retrieved P: moral status of animals in the ancient world: Aristotle perceived some similarities between humans and other species and developed a sort of “psychological continuum”, recognising that human and non-human animals differ only by degree in possessing certain temperaments.</td></tr><tr><td>Fail to extract the correct answer (7%)
|
| 409 |
+
Q: Which is the smallest continent in size in the world / Gold: Australia / Prediction: Asia
|
| 410 |
+
Retrieved P: Continent ...Ordered from largest in area to smallest, they are: Asia, Africa, North America, South America, Antarctica, Europe, and Australia</td></tr></table>
|
| 411 |
+
|
| 412 |
+
Table 10: Breakdown of the predictions from Naver RDR on 100 random samples.
|
data/2021/2101_00xxx/2101.00133/layout.json
CHANGED
|
The diff for this file is too large to render.
See raw diff
|
|
|
data/2021/2101_00xxx/2101.00178/431507cf-6cad-41d1-9637-1521a5ac3935_content_list.json
CHANGED
|
@@ -1,3 +1,1479 @@
|
|
| 1 |
-
|
| 2 |
-
|
| 3 |
-
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "UnitedQA: A Hybrid Approach for Open Domain Question Answering",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
134,
|
| 8 |
+
79,
|
| 9 |
+
867,
|
| 10 |
+
101
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Hao Cheng $^{1*}$ , Yelong Shen $^{2*}$ , Xiaodong Liu $^{1}$ , Pengcheng He $^{2}$ , Weizhu Chen $^{2}$ , Jianfeng Gao $^{1}$",
|
| 17 |
+
"bbox": [
|
| 18 |
+
242,
|
| 19 |
+
131,
|
| 20 |
+
766,
|
| 21 |
+
167
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "<sup>1</sup> Microsoft Research <sup>2</sup> Microsoft Azure AI",
|
| 28 |
+
"bbox": [
|
| 29 |
+
327,
|
| 30 |
+
167,
|
| 31 |
+
678,
|
| 32 |
+
181
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "{chehao, yeshe, xiaodl, penhe, wzchen, jfgao}@microsoft.com",
|
| 39 |
+
"bbox": [
|
| 40 |
+
178,
|
| 41 |
+
185,
|
| 42 |
+
830,
|
| 43 |
+
200
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "Abstract",
|
| 50 |
+
"text_level": 1,
|
| 51 |
+
"bbox": [
|
| 52 |
+
263,
|
| 53 |
+
263,
|
| 54 |
+
341,
|
| 55 |
+
278
|
| 56 |
+
],
|
| 57 |
+
"page_idx": 0
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"type": "text",
|
| 61 |
+
"text": "To date, most recent work under the retrieval-reader framework for open-domain QA focuses exclusively on either an extractive or a generative reader. In this paper, we study a hybrid approach for leveraging the strengths of both models. We apply novel techniques to enhance both extractive and generative readers built upon recent pretrained neural language models, and find that proper training methods can provide large improvements over previous state-of-the-art models. We demonstrate that a hybrid approach combining answers from both readers can effectively take advantage of extractive and generative answer inference strategies and outperform single models as well as homogeneous ensembles. Our approach outperforms previous state-of-the-art models by 3.3 and 2.7 points in exact match on NaturalQuestions and TriviaQA respectively.",
|
| 62 |
+
"bbox": [
|
| 63 |
+
142,
|
| 64 |
+
294,
|
| 65 |
+
460,
|
| 66 |
+
565
|
| 67 |
+
],
|
| 68 |
+
"page_idx": 0
|
| 69 |
+
},
|
| 70 |
+
{
|
| 71 |
+
"type": "text",
|
| 72 |
+
"text": "1 Introduction",
|
| 73 |
+
"text_level": 1,
|
| 74 |
+
"bbox": [
|
| 75 |
+
115,
|
| 76 |
+
583,
|
| 77 |
+
260,
|
| 78 |
+
596
|
| 79 |
+
],
|
| 80 |
+
"page_idx": 0
|
| 81 |
+
},
|
| 82 |
+
{
|
| 83 |
+
"type": "text",
|
| 84 |
+
"text": "Open-domain question answering (QA) has been a long-standing problem in natural language understanding, information retrieval, and related fields (Chen and Yih, 2020). A typical open-domain QA system follows the retrieval-reader framework (Chen et al., 2017; Guu et al., 2020; Karpukhin et al., 2020), where the relevant passages are first retrieved from a large text corpus, and a reader module then navigates multiple passages for answer inference. In this work, we study two paradigms of reader modules, i.e. extractive (Karpukhin et al., 2020; Guu et al., 2020) and generative (Lewis et al., 2020; Izacard and Grave, 2021) readers. The extractive reader extracts contiguous spans from the retrieved passages whereas the generative reader sequentially decodes the answer string which might not be contained in the retrieved passages.",
|
| 85 |
+
"bbox": [
|
| 86 |
+
114,
|
| 87 |
+
609,
|
| 88 |
+
490,
|
| 89 |
+
883
|
| 90 |
+
],
|
| 91 |
+
"page_idx": 0
|
| 92 |
+
},
|
| 93 |
+
{
|
| 94 |
+
"type": "text",
|
| 95 |
+
"text": "Recent work on open-domain QA (Karpukhin et al., 2020; Guu et al., 2020; Lewis et al., 2020; Izacard and Grave, 2021) explores either an extractive reader or a generative reader exclusively. We hypothesize that extractive and generative readers adopt different answer inference strategies, thus a hybrid extractive/generative reader can be a better option for open-domain QA tasks. As shown in Figure 1, compared with prediction agreement among only generative or extractive readers (top-left and bottom-right), the cross prediction agreement between extractive and generative readers (bottom-left) is relatively low ( $<50\\%$ ). It indicates that answers produced by those two types of models are different and they can be complementary to each other. Therefore, we propose a hybrid reader approach, UnitedQA, which is a simple ensemble approach to combine the predictions from extractive and generative readers. It achieves state-of-the-art results on NaturalQuestions (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017).",
|
| 96 |
+
"bbox": [
|
| 97 |
+
509,
|
| 98 |
+
263,
|
| 99 |
+
885,
|
| 100 |
+
601
|
| 101 |
+
],
|
| 102 |
+
"page_idx": 0
|
| 103 |
+
},
|
| 104 |
+
{
|
| 105 |
+
"type": "text",
|
| 106 |
+
"text": "In UnitedQA, the extractive reader (UnitedQA-E) and generative reader (UnitedQA-G) are built upon the pretrained language models, ELECTRA (Clark et al., 2020) and T5 (Raffel et al., 2020), respectively. For the UnitedQA-E, we adopt a weakly-supervised training objective to address the noisy supervision issue caused by the heuristics-based labeling and incorporate the posterior differential regularization (PDR) (Cheng et al., 2021) to improve the model robustness. The UnitedQA-G follows the T5 Fusion-in-Decoder (FID) (Izacard and Grave, 2021) and we make two improvements: first, we add a group of attention bias parameters into the decoder cross-attention block to feature the ranking information of retrieved contexts; second, we add the adversarial training (Ju et al., 2019; Jiang et al., 2020; Pereira et al., 2021) to improve the model generalization ability.",
|
| 107 |
+
"bbox": [
|
| 108 |
+
509,
|
| 109 |
+
602,
|
| 110 |
+
885,
|
| 111 |
+
891
|
| 112 |
+
],
|
| 113 |
+
"page_idx": 0
|
| 114 |
+
},
|
| 115 |
+
{
|
| 116 |
+
"type": "text",
|
| 117 |
+
"text": "The experimental results highlight the effec",
|
| 118 |
+
"bbox": [
|
| 119 |
+
529,
|
| 120 |
+
892,
|
| 121 |
+
884,
|
| 122 |
+
909
|
| 123 |
+
],
|
| 124 |
+
"page_idx": 0
|
| 125 |
+
},
|
| 126 |
+
{
|
| 127 |
+
"type": "aside_text",
|
| 128 |
+
"text": "arXiv:2101.00178v2 [cs.CL] 2 Jun 2021",
|
| 129 |
+
"bbox": [
|
| 130 |
+
21,
|
| 131 |
+
319,
|
| 132 |
+
60,
|
| 133 |
+
717
|
| 134 |
+
],
|
| 135 |
+
"page_idx": 0
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"type": "page_footnote",
|
| 139 |
+
"text": "* Equal Contribution",
|
| 140 |
+
"bbox": [
|
| 141 |
+
144,
|
| 142 |
+
895,
|
| 143 |
+
270,
|
| 144 |
+
909
|
| 145 |
+
],
|
| 146 |
+
"page_idx": 0
|
| 147 |
+
},
|
| 148 |
+
{
|
| 149 |
+
"type": "image",
|
| 150 |
+
"img_path": "images/e639adf81015e2d37b03aae3c3c09bd6ca0af5b6c190dc1da6d84a01523c4b93.jpg",
|
| 151 |
+
"image_caption": [
|
| 152 |
+
"Figure 1: Pairwise prediction agreement ratio. G-1, G-2, G-3 and E-1, E-2, E-3 are three different generative and extractive readers respectively. All readers achieve similar performance ( $\\approx$ $52\\%$ exact match) on NaturalQuestions. Higher agreement ( $>50\\%$ ) in red and lower agreement ( $<50\\%$ ) in gray. The agreement is calculated based on exact string match."
|
| 153 |
+
],
|
| 154 |
+
"image_footnote": [],
|
| 155 |
+
"bbox": [
|
| 156 |
+
147,
|
| 157 |
+
95,
|
| 158 |
+
433,
|
| 159 |
+
272
|
| 160 |
+
],
|
| 161 |
+
"page_idx": 1
|
| 162 |
+
},
|
| 163 |
+
{
|
| 164 |
+
"type": "text",
|
| 165 |
+
"text": "tiveness of the simple hybrid approach of UnitedQA. With both improved extractive and generative readers, UnitedQA sets new state-of-the-art results on two popular open-domain QA datasets, i.e. 54.7 and 70.3 in exact match on NaturalQuestions (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017), respectively. It is worth noting that our UnitedQA model not only outperforms each single model but also brings more pronounced improvements over homogeneous ensembles of either extractive or generative readers. Last, based on our analyses, UnitedQA-E and UnitedQA-G have advantages in different cases, suggesting they may use different reasoning strategies.",
|
| 166 |
+
"bbox": [
|
| 167 |
+
114,
|
| 168 |
+
428,
|
| 169 |
+
492,
|
| 170 |
+
653
|
| 171 |
+
],
|
| 172 |
+
"page_idx": 1
|
| 173 |
+
},
|
| 174 |
+
{
|
| 175 |
+
"type": "text",
|
| 176 |
+
"text": "2 Method",
|
| 177 |
+
"text_level": 1,
|
| 178 |
+
"bbox": [
|
| 179 |
+
115,
|
| 180 |
+
669,
|
| 181 |
+
220,
|
| 182 |
+
686
|
| 183 |
+
],
|
| 184 |
+
"page_idx": 1
|
| 185 |
+
},
|
| 186 |
+
{
|
| 187 |
+
"type": "text",
|
| 188 |
+
"text": "In this section, we present the overall pipeline of the UnitedQA system, which consists of three components: Retrieval, Reading, and Re-ranking. First, the retrieval module fetches a list of relevant passages from a Wikipedia dump for a given question. Then, the module of hybrid readers produces answer candidates from the set of retrieved passages. Last, the re-ranking module combines the answer candidates with linear interpolation and produces the final answer.",
|
| 189 |
+
"bbox": [
|
| 190 |
+
114,
|
| 191 |
+
697,
|
| 192 |
+
490,
|
| 193 |
+
858
|
| 194 |
+
],
|
| 195 |
+
"page_idx": 1
|
| 196 |
+
},
|
| 197 |
+
{
|
| 198 |
+
"type": "text",
|
| 199 |
+
"text": "Retrieval Following Karpukhin et al. (2020), we consider two methods, BM25 and dense passage retrieval (DPR), for retrieving the support passages",
|
| 200 |
+
"bbox": [
|
| 201 |
+
114,
|
| 202 |
+
860,
|
| 203 |
+
489,
|
| 204 |
+
910
|
| 205 |
+
],
|
| 206 |
+
"page_idx": 1
|
| 207 |
+
},
|
| 208 |
+
{
|
| 209 |
+
"type": "text",
|
| 210 |
+
"text": "for a given question. For BM25, passages are encoded as bag of words (BOW), and inverse document frequencies are used as the ranking function. For DPR, passages and questions are represented as dense vectors based on two BERT (Devlin et al., 2019) models. The relevance score is then computed based on the dot product between the query and passage vectors. In this paper, we adopt the same implementation as Karpukhin et al. (2020) for retrieving passages. Specifically, the English Wikipedia dump from Dec. 20, 2018 is used as the source documents for retrieval, with the removal of semi-structured data, such as tables or lists. Each document is split into disjoint 100-word passages as the basic retrieval unit. The top-100 passages are then passed for reading.",
|
| 211 |
+
"bbox": [
|
| 212 |
+
509,
|
| 213 |
+
74,
|
| 214 |
+
885,
|
| 215 |
+
332
|
| 216 |
+
],
|
| 217 |
+
"page_idx": 1
|
| 218 |
+
},
|
| 219 |
{
"type": "text",
"text": "Reading We combine the generative reader and the extractive reader to produce answer candidates over the retrieved passages. Here, we only give a high-level description of our approach. More details regarding our improved extractive and generative models are presented in §2.1 and §2.2, respectively.",
"bbox": [
509,
334,
885,
431
],
"page_idx": 1
},
{
"type": "text",
"text": "The generative reader is based on a sequence-to-sequence model pre-trained in a forward-generation fashion on a large corpus, i.e. T5 (Raffel et al., 2020). Similar to Izacard and Grave (2021), the model takes the question and its relevant passages as input, and then generates the answer string token by token. Specifically, the concatenation of all retrieved passages and the corresponding question is used as the encoder input. Then, the decoder performs reasoning over the concatenation of all evidence through an attention mechanism.",
"bbox": [
509,
432,
885,
608
],
"page_idx": 1
},
{
"type": "text",
"text": "Following state-of-the-art extractive QA models (Devlin et al., 2019; Karpukhin et al., 2020), our extractive reader is based on a Transformer neural network pre-trained with a cloze-style self-supervised objective, i.e. ELECTRA (Clark et al., 2020). Here, a pair of a given question and a support passage is jointly encoded into neural text representations. These representations are then used to define scores or probabilities of possible answer begin and end positions, which are in turn used to define probabilities over possible answer spans. Finally, the answer string probabilities are based on the aggregation over all possible answer spans from the entire set of support passages.",
"bbox": [
509,
611,
885,
837
],
"page_idx": 1
},
{
"type": "text",
"text": "2.1 UnitedQA-E",
"text_level": 1,
"bbox": [
510,
852,
658,
869
],
"page_idx": 1
},
{
"type": "text",
"text": "In §2.1.1, we give the problem definition of open-domain QA for the extractive reader. Then, we detail",
"bbox": [
509,
877,
885,
910
],
"page_idx": 1
},
{
"type": "text",
"text": "the improvements of UnitedQA-E in $\\S 2.1.2$.",
"bbox": [
115,
74,
443,
90
],
"page_idx": 2
},
{
"type": "text",
"text": "2.1.1 Extractive Reader",
"text_level": 1,
"bbox": [
115,
99,
319,
115
],
"page_idx": 2
},
{
"type": "text",
"text": "Given a question $\\mathbf{q}$ and a set of $K$ retrieved passages $\\mathbf{p}_1, \\ldots, \\mathbf{p}_K$ , a text encoder produces contextualized representations $\\mathbf{h}_1^k, \\ldots, \\mathbf{h}_T^k \\in \\mathbb{R}^d$ for the question-passage pair $(\\mathbf{q}, \\mathbf{p}_k)$ in the form of \"[CLS] question [SEP] passage [SEP]\", where [CLS] and [SEP] are special tokens for encoding inputs, $T$ is the maximum sequence length of the input text, and $\\mathbf{h}_i^k$ indicates the contextualized embedding of the $i$ -th token in $(\\mathbf{q}, \\mathbf{p}_k)$ .",
"bbox": [
112,
118,
490,
265
],
"page_idx": 2
},
{
"type": "text",
"text": "The extractive reader computes the span-begin score of the $i$ -th token as $s_b(i^k) = \\mathbf{w}_b^T\\mathbf{h}_i^k$ using a weight vector $\\mathbf{w}_b \\in \\mathbb{R}^d$ . The span-end score $s_e(j^k)$ is defined in the same way. Thus, the probabilities of a start position $i^k$ and an end position $j^k$ are $P_b(i^k) = \\frac{\\exp(s_b(i^k))}{Z_b}$ and $P_e(j^k) = \\frac{\\exp(s_e(j^k))}{Z_e}$ , where $Z_b, Z_e$ are normalizing factors defined by the corresponding probability space. The probability of an answer span from $i^k$ to $j^k$ is defined as $P_s(i^k, j^k) = P_b(i^k)P_e(j^k)$ .",
"bbox": [
114,
266,
489,
428
],
"page_idx": 2
},
{
"type": "text",
"text": "Here, we consider two probability spaces, passage level and multi-passage level, with the only difference being the computation of $Z_{b}, Z_{e}$ . Specifically, the passage-level probabilities of answer begin and end positions are computed by normalizing over all possible positions in the respective passage, i.e. $Z_{b} = Z_{b}^{k} = \\sum_{\\mathcal{I}^{k} \\cup \\mathrm{NULL}} \\exp(s_{b}(i))$ , $Z_{e} = Z_{e}^{k} = \\sum_{\\mathcal{I}^{k} \\cup \\mathrm{NULL}} \\exp(s_{e}(j))$ , where $\\mathcal{I}^{k}$ is the set of all possible positions in the $k$ -th passage and NULL indicates a special position used when $p_{k}$ does not support answering the question. Similarly, the multi-passage-level probabilities are computed by normalizing over all answer positions across all $K$ relevant passages, i.e. $Z_{b} = Z_{b}^{*} = \\sum_{k} \\sum_{\\mathcal{I}^{k}} \\exp(s_{b}(i))$ , $Z_{e} = Z_{e}^{*} = \\sum_{k} \\sum_{\\mathcal{I}^{k}} \\exp(s_{e}(j))$ , respectively.",
"bbox": [
114,
430,
490,
671
],
"page_idx": 2
},
{
"type": "text",
"text": "Since there are usually multiple plausible mentions for open-domain QA, during training it is typical to maximize either the marginal log-likelihood (MML) of all correct spans (Karpukhin et al., 2020) or the log-likelihood of the most likely correct span (HardEM) (Min et al., 2019). During inference, the prediction is made based on the candidate answer string score, obtained as $P_{a}(y) = \\sum_{(i,j)\\in \\mathcal{Y}}P_{s}(i,j)$ , where $\\mathcal{Y}$ is the set of spans corresponding to the answer string $y$ .",
"bbox": [
114,
671,
490,
832
],
"page_idx": 2
},
{
"type": "text",
"text": "2.1.2 Improvement Method",
"text_level": 1,
"bbox": [
115,
841,
349,
857
],
"page_idx": 2
},
{
"type": "text",
"text": "In addition to better text representations from Clark et al. (2020), we consider two methods for improving the training of the extractive reader.",
"bbox": [
114,
860,
490,
910
],
"page_idx": 2
},
{
"type": "text",
"text": "Multi-objective for Weakly-supervised QA The multi-objective formulation is introduced in Cheng et al. (2020) for improving weakly supervised document-level QA. Different from Cheng et al. (2020), where only MML is considered for the multi-objective formulation, we found that combining HardEM with MML is more effective for open-domain QA based on our experiments (§4.1). Specifically, we combine a multi-passage HardEM loss with $K$ passage-level MML losses over a batch of $K$ passages",
"bbox": [
509,
74,
885,
252
],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\n\\mathcal{L}_{\\mathrm{EXT}} = \\log \\max_{(i,j)} P_{s}^{M}(i,j) + \\frac{1}{K} \\sum_{k} \\log \\sum_{(i^{k}, j^{k})} P_{s}^{P}(i^{k}, j^{k}), \\tag{1}\n$$\n",
"text_format": "latex",
"bbox": [
544,
262,
882,
332
],
"page_idx": 2
},
{
"type": "text",
"text": "where $P_{s}^{M}$ and $P_{s}^{P}$ are the multi-passage-level and passage-level span probabilities, respectively.",
"bbox": [
509,
344,
885,
376
],
"page_idx": 2
},
{
"type": "text",
"text": "Posterior Differential Regularization Due to the noisy supervision for open-domain QA (Chen et al., 2017), we investigate posterior differential regularization (PDR) (Cheng et al., 2021) to improve the robustness of the extractive reader. Different from Cheng et al. (2021), where only the clean supervision setting is considered, in this work we apply PDR to the weakly supervised open-domain QA scenario. Given that it is computationally expensive to enumerate all possible spans, we apply two separate regularization terms to the begin and end probabilities at the multi-passage level, respectively,",
"bbox": [
509,
376,
885,
570
],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\n\\mathcal{L}_{\\mathrm{PDR}} = D\\left(P_{b}(i) \\mid P_{b}^{\\prime}(i)\\right) + D\\left(P_{e}(j) \\mid P_{e}^{\\prime}(j)\\right), \\tag{2}\n$$\n",
"text_format": "latex",
"bbox": [
514,
581,
882,
600
],
"page_idx": 2
},
{
"type": "text",
"text": "where $D(\\cdot |\\cdot)$ is the squared Hellinger distance, and $P_b^{\\prime},P_e^{\\prime}$ are the probabilities of start and end positions computed with additive input noise on the token embeddings. Specifically, we sample noise vectors $\\epsilon_{1},\\ldots ,\\epsilon_{T}$ from $\\mathcal{N}(0,c^2 I)$ and add them to the token embeddings to form the noisy input, i.e. $\\mathbf{v}_1 + \\epsilon_1,\\dots ,\\mathbf{v}_T + \\epsilon_T$ , where $c$ is fixed to 1e-3 throughout our experiments.",
"bbox": [
509,
611,
885,
739
],
"page_idx": 2
},
{
"type": "text",
"text": "Based on this, the overall training objective for the extractive reader is",
"bbox": [
509,
740,
882,
771
],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\n\\mathcal{L}^{1} = \\mathcal{L}_{\\mathrm{EXT}} + \\gamma \\mathcal{L}_{\\mathrm{PDR}}, \\tag{3}\n$$\n",
"text_format": "latex",
"bbox": [
596,
783,
882,
801
],
"page_idx": 2
},
{
"type": "text",
"text": "where $\\gamma$ is a regularization scalar hyperparameter.",
"bbox": [
509,
814,
877,
829
],
"page_idx": 2
},
{
"type": "text",
"text": "2.2 UnitedQA-G",
"text_level": 1,
"bbox": [
510,
840,
660,
856
],
"page_idx": 2
},
{
"type": "text",
"text": "Here, we first formally define the setup of the generative reader for open-domain QA in §2.2.1 and then present our improvements in §2.2.2.",
"bbox": [
509,
860,
885,
910
],
"page_idx": 2
},
{
"type": "text",
"text": "2.2.1 Generative Reader",
"text_level": 1,
"bbox": [
115,
74,
326,
89
],
"page_idx": 3
},
{
"type": "text",
"text": "Given a question $\\mathbf{q}$ and a set of $K$ retrieved passages $\\mathbf{p}_1, \\ldots, \\mathbf{p}_K$ , the encoder model encodes each $(\\mathbf{q}, \\mathbf{p}_k)$ pair independently, and produces contextualized representations for each token: $\\mathbf{h}_i^k \\in \\mathbb{R}^d$ for the $i$ -th token of the $k$ -th pair. The decoder then performs attention over the concatenation of the representations of all the retrieved passages, and generates the answer string.",
"bbox": [
114,
93,
490,
222
],
"page_idx": 3
},
{
"type": "text",
"text": "Let $\\mathbf{x}$ denote the input of the question and all retrieved passages, $\\mathbf{x} = ((\\mathbf{q},\\mathbf{p}_1),\\dots,(\\mathbf{q},\\mathbf{p}_K))$ , and $\\mathbf{y}$ the answer string with its tokens as $(y_{1},\\ldots ,y_{N})$ . The generative reader is trained to maximize a sequence-to-sequence objective for a given $(\\mathbf{x},\\mathbf{y})$ ,",
"bbox": [
114,
223,
490,
305
],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\n\\mathcal{L}(\\mathbf{x}, \\mathbf{y}; \\theta) = \\sum_{i=1}^{N} \\log P_{\\theta}\\left(y_{i} \\mid \\mathbf{x}, y_{1:i-1}\\right), \\tag{4}\n$$\n",
"text_format": "latex",
"bbox": [
161,
309,
489,
353
],
"page_idx": 3
},
{
"type": "text",
"text": "where $\\theta$ denotes the model parameters. During inference, greedy decoding is used to produce the answer.",
"bbox": [
115,
357,
489,
390
],
"page_idx": 3
},
{
"type": "text",
"text": "2.2.2 Improvement Method",
"text_level": 1,
"bbox": [
115,
398,
349,
413
],
"page_idx": 3
},
{
"type": "text",
"text": "Decoder Attention Bias The decoder in the T5 transformer model adopts a cross-attention mechanism to compute attention scores between the decoded answer tokens and all the retrieved passage tokens. Specifically, let $\\mathbf{y}_i\\in \\mathbb{R}^d$ be the query vector of the $i$ -th decoded token $^1$ , and $\\mathbf{m}_j^k\\in \\mathbb{R}^d$ be the key vector of the $j$ -th token in $(\\mathbf{q},\\mathbf{p}_k)$ . The multi-head cross-attention scores $\\mathbf{s}_{i,j}^{k}$ in T5 (Raffel et al., 2020) are calculated as",
"bbox": [
114,
417,
490,
563
],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\n\\mathbf{s}_{i,j}^{k} = \\operatorname{MultiHeadAtt}\\left(\\mathbf{y}_{i}, \\mathbf{m}_{j}^{k}\\right) \\in \\mathbb{R}^{|\\mathrm{Head}|} \\tag{5}\n$$\n",
"text_format": "latex",
"bbox": [
142,
570,
487,
592
],
"page_idx": 3
},
{
"type": "text",
"text": "where $|\\mathrm{Head}|$ is the number of attention heads. However, the attention in (5) does not incorporate the relevance of the retrieved passages into the reader. To add this relevance feature to the attention block, we revise (5) by incorporating an attention bias",
"bbox": [
114,
596,
490,
677
],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\n\\mathbf{s}_{i,j}^{k} = \\operatorname{MultiHeadAtt}\\left(\\mathbf{y}_{i}, \\mathbf{m}_{j}^{k}\\right) + \\mathbf{b}_{k}, \\tag{6}\n$$\n",
"text_format": "latex",
"bbox": [
164,
683,
489,
703
],
"page_idx": 3
},
{
"type": "text",
"text": "where $\\mathbf{b}_k\\in \\mathbb{R}^{|\\mathrm{Head}|}$ is a trainable attention bias vector shared by all the tokens in the $k$ -th retrieved passage. In our experiments, the maximum number of retrieved passages is set to 100 by default. Thus, the decoder attention bias introduces an additional $100\\times |\\mathrm{Head}|$ parameters per layer.",
"bbox": [
114,
709,
490,
806
],
"page_idx": 3
},
{
"type": "text",
"text": "Adversarial Training Adversarial training creates adversarial examples by adding small perturbations to the embedding layer. Assume the word(-piece) embedding layer is parameterized by a matrix $\\mathbf{V} \\in \\mathbb{R}^{|V| \\times d}$ , where $|V|$ is the vocabulary size and $d$",
"bbox": [
114,
807,
490,
888
],
"page_idx": 3
},
{
"type": "table",
"img_path": "images/901718d482af3bc4e31ec34c0dfb481a1f2d1e9fcf260bc422dc3b8086ca5ffa.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Dataset</td><td>Train</td><td>Dev</td><td>Test</td></tr><tr><td>NQ</td><td>79168</td><td>8757</td><td>3610</td></tr><tr><td>TriviaQA</td><td>78785</td><td>8837</td><td>11313</td></tr><tr><td>EfficientQA</td><td>-</td><td>1800</td><td>-</td></tr></table>",
"bbox": [
549,
72,
848,
154
],
"page_idx": 3
},
{
"type": "text",
"text": "Table 1: Number of questions in each QA dataset.",
"bbox": [
526,
162,
867,
178
],
"page_idx": 3
},
{
"type": "text",
"text": "is the embedding dimension. The adversarial embedding matrix $\\hat{\\mathbf{V}}$ can be obtained by",
"bbox": [
509,
200,
885,
233
],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\ng_{\\mathbf{V}} = -\\nabla_{\\mathbf{V}} \\mathcal{L}(\\mathbf{x}, \\mathbf{y}; \\theta), \\tag{7}\n$$\n",
"text_format": "latex",
"bbox": [
630,
243,
882,
260
],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\n\\hat{\\mathbf{V}} = \\mathbf{V} + \\operatorname{SG}\\left(\\epsilon g_{\\mathbf{V}} / \\|g_{\\mathbf{V}}\\|_{2}\\right), \\tag{8}\n$$\n",
"text_format": "latex",
"bbox": [
589,
263,
882,
281
],
"page_idx": 3
},
{
"type": "text",
"text": "where $\\mathrm{SG}(\\cdot)$ is the stop-gradient operation. We use the adversarial embedding matrix $\\hat{\\mathbf{V}}$ to replace the original $\\mathbf{V}$ in the model parameters $\\theta$ , and obtain $\\hat{\\theta}$ . Thus, the adversarial loss is calculated as",
"bbox": [
509,
290,
885,
354
],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\n\\mathcal{L}_{\\mathrm{AT}}(\\mathbf{x}, \\mathbf{y}; \\theta) = \\mathcal{L}(\\mathbf{x}, \\mathbf{y}; \\hat{\\theta}). \\tag{9}\n$$\n",
"text_format": "latex",
"bbox": [
596,
363,
882,
381
],
"page_idx": 3
},
{
"type": "text",
"text": "Therefore, the overall training objective of the generative reader is",
"bbox": [
509,
390,
882,
422
],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\n\\mathcal{L}^{2} = \\alpha \\mathcal{L}(\\mathbf{x}, \\mathbf{y}; \\theta) + \\beta \\mathcal{L}_{\\mathrm{AT}}(\\mathbf{x}, \\mathbf{y}; \\theta), \\tag{10}\n$$\n",
"text_format": "latex",
"bbox": [
547,
430,
882,
449
],
"page_idx": 3
},
{
"type": "text",
"text": "where $\\alpha = 0.5, \\beta = 0.5$ in all of the experiments.",
"bbox": [
509,
457,
878,
473
],
"page_idx": 3
},
{
"type": "text",
"text": "2.3 UnitedQA System",
"text_level": 1,
"bbox": [
510,
483,
699,
499
],
"page_idx": 3
},
{
"type": "text",
"text": "The UnitedQA system combines outputs from both extractive and generative models for a given question during inference. Since the output spaces of extractive and generative models are different, we use a simple linear interpolation based on the best predictions from each model<sup>2</sup>. Denote the predicted strings from $M$ extractive and $N$ generative models as $y_1^E, \\ldots, y_M^E$ and $y_1^G, \\ldots, y_N^G$ , respectively. The hybrid prediction $y^*$ is obtained by",
"bbox": [
509,
504,
885,
649
],
"page_idx": 3
},
{
"type": "equation",
"text": "\n$$\n\\underset{y \\in \\mathcal{Y}}{\\operatorname{argmax}} \\ \\tau \\sum_{m=1}^{M} \\mathbf{1}\\left(y, y_{m}^{E}\\right) + \\delta \\sum_{n=1}^{N} \\mathbf{1}\\left(y, y_{n}^{G}\\right), \\tag{11}\n$$\n",
"text_format": "latex",
"bbox": [
524,
656,
882,
700
],
"page_idx": 3
},
{
"type": "text",
"text": "where $\\mathcal{Y}$ is the set of all predicted strings, $\\mathbf{1}(y,y^{\\prime})$ is an indicator function, and $\\tau = 0.6$ , $\\delta = 0.4$ .",
"bbox": [
509,
707,
885,
739
],
"page_idx": 3
},
{
"type": "text",
"text": "3 Experiments",
"text_level": 1,
"bbox": [
510,
751,
658,
766
],
"page_idx": 3
},
{
"type": "text",
"text": "3.1 Experiment Setup",
"text_level": 1,
"bbox": [
510,
776,
700,
791
],
"page_idx": 3
},
{
"type": "text",
"text": "We use two representative QA datasets and adopt the same training/dev/testing splits as in previous",
"bbox": [
509,
796,
882,
829
],
"page_idx": 3
},
{
"type": "page_footnote",
"text": "2We have also tried a few more complex approaches for combining the extractive and generative models. For example, we first train an extractive model, and then append the top-k answer strings from the extractive model to the end of the input for training a generative model. None of them is as good as the simple ensemble approach.",
"bbox": [
509,
835,
885,
910
],
"page_idx": 3
},
{
"type": "page_footnote",
"text": "1We omit the layer notation for simplicity.",
"bbox": [
137,
894,
415,
909
],
"page_idx": 3
},
+
{
|
| 848 |
+
"type": "table",
|
| 849 |
+
"img_path": "images/fa5e3a0cc0b9374adf1babbc874451ce0c1811a212298d881329b5d691266938.jpg",
|
| 850 |
+
"table_caption": [],
|
| 851 |
+
"table_footnote": [],
|
| 852 |
+
"table_body": "<table><tr><td>Model</td><td>Reader Type</td><td>Reader Size (M)</td><td>NQ</td><td>TriviaQA</td></tr><tr><td>REALM(Guu et al., 2020)</td><td>Extractive</td><td>110</td><td>40.4</td><td>N/A</td></tr><tr><td>RAG(Lewis et al., 2020)</td><td>Generative</td><td>400</td><td>44.5</td><td>56.1</td></tr><tr><td>DPR(Karpukhin et al., 2020)</td><td>Extractive</td><td>110</td><td>41.5</td><td>57.9</td></tr><tr><td>T5-FIDbase(Izacard and Grave, 2021)</td><td>Generative</td><td>220</td><td>48.2</td><td>65.0</td></tr><tr><td>T5-FIDlarge(Izacard and Grave, 2021)</td><td>Generative</td><td>770</td><td>51.4</td><td>67.6</td></tr><tr><td>UnitedQA-Ebase(Ours)</td><td>Extractive</td><td>110</td><td>47.7</td><td>66.3</td></tr><tr><td>UnitedQA-ELarge(Ours)</td><td>Extractive</td><td>330</td><td>51.8</td><td>68.9</td></tr><tr><td>UnitedQA-Glarge(Ours)</td><td>Generative</td><td>770</td><td>52.3</td><td>68.6</td></tr><tr><td>UnitedQA-ELarge++ (Ours)</td><td>Ensemble</td><td>3x330</td><td>52.4</td><td>69.6</td></tr><tr><td>UnitedQA-GLarge++ (Ours)</td><td>Ensemble</td><td>3x770</td><td>53.3</td><td>69.2</td></tr><tr><td>UnitedQA (Ours)</td><td>Hybrid</td><td>2x770+330</td><td>54.7</td><td>70.5</td></tr></table>",
|
| 853 |
+
"bbox": [
|
| 854 |
+
144,
|
| 855 |
+
72,
|
| 856 |
+
853,
|
| 857 |
+
303
|
| 858 |
+
],
|
| 859 |
+
"page_idx": 4
|
| 860 |
+
},
|
| 861 |
+
{
|
| 862 |
+
"type": "text",
|
| 863 |
+
"text": "Table 2: Comparison to state-of-the-art models on the test sets of NaturalQuestions (NQ) and TriviaQA. Exact match score is used for evaluation. The overall best model is in $\\square$ , the best single model is in $\\square$ , and the best model with the smallest reader size is in $\\square$ .",
|
| 864 |
+
"bbox": [
|
| 865 |
+
114,
|
| 866 |
+
313,
|
| 867 |
+
882,
|
| 868 |
+
356
|
| 869 |
+
],
|
| 870 |
+
"page_idx": 4
|
| 871 |
+
},
|
| 872 |
+
{
|
| 873 |
+
"type": "text",
|
| 874 |
+
"text": "work (Lee et al., 2019; Karpukhin et al., 2020). Both datasets (see Table 1 for statistics) have been heavily studied in recent work (Lee et al., 2019; Min et al., 2019; Karpukhin et al., 2020; Guu et al., 2020). We follow the standard evaluation protocol and use exact match (EM) as the evaluation metric.",
|
| 875 |
+
"bbox": [
|
| 876 |
+
114,
|
| 877 |
+
381,
|
| 878 |
+
490,
|
| 879 |
+
476
|
| 880 |
+
],
|
| 881 |
+
"page_idx": 4
|
| 882 |
+
},
|
| 883 |
+
{
|
| 884 |
+
"type": "text",
|
| 885 |
+
"text": "NaturalQuestions (Kwiatkowski et al., 2019) is composed of questions by real users to Google Search, each with answers identified by human annotators in Wikipedia. The open-domain version of NaturalQuestions (Lee et al., 2019) only consider questions with short answers, i.e. answers with less than 5 tokens. In the NaturalQuestions, the questions are considered to be more information seeking given that the question askers didn't know the answer beforehand. In addition, we use another evaluation set, i.e. the dev set introduced recently by the EfficientQA competition (Min et al., 2021), which is constructed in the same way as the original NaturalQuestions dataset.",
|
| 886 |
+
"bbox": [
|
| 887 |
+
114,
|
| 888 |
+
482,
|
| 889 |
+
490,
|
| 890 |
+
707
|
| 891 |
+
],
|
| 892 |
+
"page_idx": 4
|
| 893 |
+
},
|
| 894 |
+
{
|
| 895 |
+
"type": "text",
|
| 896 |
+
"text": "TriviaQA (Joshi et al., 2017) contains trivia question-answer pairs that were scraped from the web. Different from NaturalQuestions, the questions here are written with known answers in mind. Specifically, the unfiltered set has been used for developing open-domain QA models.",
|
| 897 |
+
"bbox": [
|
| 898 |
+
114,
|
| 899 |
+
712,
|
| 900 |
+
490,
|
| 901 |
+
808
|
| 902 |
+
],
|
| 903 |
+
"page_idx": 4
|
| 904 |
+
},
|
| 905 |
+
{
|
| 906 |
+
"type": "text",
|
| 907 |
+
"text": "Implementation details For a fair comparison, we use the same retrieval module as Karpukhin et al. (2020) for NaturalQuestions and TriviaQA to mitigate the impact of retrieval difference. Specifically, we use DPR (single) for NaturalQuestions and BM25+DPR (multi) for TriviaQA because of",
|
| 908 |
+
"bbox": [
|
| 909 |
+
114,
|
| 910 |
+
813,
|
| 911 |
+
490,
|
| 912 |
+
909
|
| 913 |
+
],
|
| 914 |
+
"page_idx": 4
|
| 915 |
+
},
|
| 916 |
+
{
|
| 917 |
+
"type": "text",
|
| 918 |
+
"text": "their best end-to-end performance (Karpukhin et al. 2020). For all the experiments, we use 8 and 16 V100-32GB for base and large model training respectively. We train our models with Adam optimizer of a linear scheduler with a warmup raito of 0.1. The extractive models are trained for up to 8 epochs with a learning rate of $2\\mathrm{e} - 5$ and a batch passage size per question of 16. The generative models are trained for up to 10 epochs with a learning rate of $1\\mathrm{e} - 4$ , a batch size of 64, and 100 retrieved passages per question for model training. We select $\\gamma$ in $\\{4,8\\}$ . After the best configuration is selected based on the dev set, we run our best models 3 times independently with different random seeds and report the median performance on the test set. We also report ensemble results which are based on the linear interpolation over answer predictions from the 3 models.",
|
| 919 |
+
"bbox": [
|
| 920 |
+
509,
|
| 921 |
+
381,
|
| 922 |
+
885,
|
| 923 |
+
671
|
| 924 |
+
],
|
| 925 |
+
"page_idx": 4
|
| 926 |
+
},
|
| 927 |
+
{
"type": "text",
"text": "3.2 Main results",
"text_level": 1,
"bbox": [510, 690, 658, 703],
"page_idx": 4
},
{
"type": "text",
"text": "Single Model Results: We first compare our models to two recent models, REALM (Guu et al., 2020) and RAG (Lewis et al., 2020), which are first pre-trained with different retrieval augmented objectives and then fine-tuned for open-domain QA. In addition, we include as baselines DPR (Karpukhin et al., 2020) and T5-FID (Izacard and Grave, 2021), both of which are based on the same retriever as ours. As shown in Table 2, both our extractive and generative models achieve new state-of-the-art results for both studied datasets. Compared with the recent state-of-the-art extractive",
"bbox": [509, 715, 885, 909],
"page_idx": 4
},
{
"type": "text",
"text": "model (DPR), our base model leads to pronounced $15\\%$ relative improvements for both NaturalQuestions ($+6.2$ absolute improvement) and TriviaQA ($+8.4$ absolute improvement). More importantly, UnitedQA- $\\mathbf{E}_{\\mathrm{base}}$ achieves performance comparable to or even better than that of generative models of larger size, i.e. RAG and T5-FIDbase. This highlights the importance of proper training strategies for open-domain QA models.",
"bbox": [114, 74, 492, 219],
"page_idx": 5
},
{
"type": "text",
"text": "Hybrid Model Results: In order to evaluate the advantage of the hybrid of extractive and generative models (UnitedQA), we include two homogeneous ensemble baselines, one consisting of only extractive readers (UnitedQA-E++) and the other of only generative models (UnitedQA-G++). For the homogeneous ensemble cases, the three-way majority prediction is used. For the hybrid of extractive and generative readers, we select a three-model combination from the set of three generative and three extractive models based on the dev set. We observe that combining predictions from two generative models and one extractive model results in the best hybrid model for both datasets. As expected, all ensemble models show an improvement over their single-model counterparts. However, the two homogeneous ensemble baselines, UnitedQA-E++ and UnitedQA-G++, only provide marginal gains over the corresponding best single models. The significant improvement brought by our proposed hybrid approach indicates the benefit of combining extractive and generative readers for open-domain QA.",
"bbox": [114, 225, 492, 596],
"page_idx": 5
},
{
"type": "text",
"text": "Discussion: Although the proposed hybrid approach has been shown to be highly effective for open-domain QA, we point out that the improved performance comes with increased computational cost. The best combination requires approximately three times the computational cost of a single generative model. Therefore, it would be interesting to explore more efficient hybrid methods, such as effective parameter-sharing strategies or unified formulations. Another interesting future direction is to explore customized compression approaches for reducing the model size of the retriever and reader separately or jointly through pruning (Han et al., 2016), quantization (Hubara et al., 2018), and knowledge distillation (Hinton et al., 2015). Specifically, given that the hybrid model is more effective, it is likely that a student model can learn more effectively from a hybrid teacher model via knowledge distillation for open-domain QA.",
"bbox": [114, 602, 492, 910],
"page_idx": 5
},
{
"type": "table",
"img_path": "images/e417a4fbc4b8ca697c576e4d37f04ef17665fc4c19dff0f506d7b6a1b2b32cc2.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Model</td><td>NQ</td><td>TriviaQA</td></tr><tr><td>(Cheng et al., 2020) +PDR</td><td>43.3</td><td>60.1</td></tr><tr><td>BERTbase</td><td>44.2</td><td>62.2</td></tr><tr><td>-Multi-object</td><td>43.5</td><td>61.3</td></tr><tr><td>-PDR</td><td>41.8</td><td>60.2</td></tr><tr><td>-Multi-object &amp; PDR</td><td>40.6</td><td>58.5</td></tr><tr><td>UnitedQA-Ebase</td><td>46.0</td><td>65.4</td></tr><tr><td>-Multi-object</td><td>45.2</td><td>64.3</td></tr><tr><td>-PDR</td><td>43.1</td><td>63.8</td></tr><tr><td>-Multi-object &amp; PDR</td><td>42.5</td><td>61.2</td></tr></table>",
"bbox": [517, 72, 880, 256],
"page_idx": 5
},
{
"type": "text",
"text": "Table 3: Ablation experiments of the extractive model on the dev sets of NaturalQuestions (NQ) and TriviaQA. Exact match score is reported. The top and bottom models are built on BERTbase and ELECTRAbase, respectively.",
"bbox": [509, 266, 885, 338],
"page_idx": 5
},
{
"type": "text",
"text": "4 Analysis",
"text_level": 1,
"bbox": [510, 367, 620, 384],
"page_idx": 5
},
{
"type": "text",
"text": "In this section, we first carry out an ablation study on the extractive and generative model improvements. Moreover, we aim to take a deeper look and understand the difference between the two models.",
"bbox": [509, 399, 885, 463],
"page_idx": 5
},
{
"type": "text",
"text": "4.1 Ablation Study",
"text_level": 1,
"bbox": [510, 482, 675, 497],
"page_idx": 5
},
{
"type": "text",
"text": "In Table 3, we present ablation experiments on the effectiveness of different textual representations and methods for improving the extractive model UnitedQA-Ebase. Here, we focus on base models, i.e. BERTbase and ELECTRAbase. Note that the row UnitedQA-Ebase is the corresponding base model reported in Table 2. Compared with the MML-based multi-objective (Cheng et al., 2020), we find that a new multi-objective with HardEM at the multi-passage level and MML at the passage level is more effective for open-domain QA. In addition to the multi-objective training, there is a noticeable improvement brought by the regularization method (PDR), which indicates the importance of proper regularization for learning with noisy supervision. Last but not least, the large improvement of ELECTRA over BERT indicates the importance of deriving better text representations for weakly supervised NLP problems. For UnitedQA-G, we present an ablation study of the decoder attention bias component and the adversarial training mechanism in Table 4. Both techniques contribute to decent improvements over T5-FID, with more pronounced gains brought by adversarial training.",
"bbox": [509, 507, 885, 910],
"page_idx": 5
},
{
"type": "table",
"img_path": "images/942fdb4eb5e81ba91a80a3df3506104f14c8d7eec207ceeafc088d83778fe1a6.jpg",
"table_caption": [
"Table 4: Ablation experiments of the generative model on the test sets of NaturalQuestions (NQ) and TriviaQA. Exact match score is reported."
],
"table_footnote": [],
"table_body": "<table><tr><td>Model</td><td>NQ</td><td>TriviaQA</td></tr><tr><td>T5-FIDlarge</td><td>51.4</td><td>67.6</td></tr><tr><td>UnitedQA-Glarge</td><td>52.3</td><td>68.6</td></tr><tr><td>-Adv Training</td><td>52.0</td><td>68.2</td></tr><tr><td>-Attention Bias</td><td>51.8</td><td>68.1</td></tr></table>",
"bbox": [132, 72, 470, 168],
"page_idx": 6
},
{
"type": "table",
"img_path": "images/a98b932ca098aea95e827b393acc7fcfcf37ad9b647b1f7d66c0f11e9cbfce1a.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td></td><td></td><td>Top-20</td><td>Top-100</td><td>Δ</td></tr><tr><td rowspan=\"3\">NQ</td><td>Retrieval</td><td>78.4</td><td>85.4</td><td>+9%</td></tr><tr><td>United-E</td><td>49.8</td><td>51.8</td><td>+4%</td></tr><tr><td>United-G</td><td>49.3</td><td>52.3</td><td>+6%</td></tr><tr><td rowspan=\"3\">TriviaQA</td><td>Retrieval</td><td>79.9</td><td>84.4</td><td>+6%</td></tr><tr><td>United-E</td><td>67.1</td><td>68.9</td><td>+3%</td></tr><tr><td>United-G</td><td>65.4</td><td>68.6</td><td>+5%</td></tr></table>",
"bbox": [117, 237, 492, 372],
"page_idx": 6
},
{
"type": "text",
"text": "Table 5: Retrieval top-$k$ accuracy and end-to-end QA exact match scores on the test sets of NaturalQuestions (NQ) and TriviaQA. United-E and United-G stand for our extractive and generative models, respectively.",
"bbox": [114, 382, 489, 441],
"page_idx": 6
},
{
"type": "text",
"text": "4.2 Impact of Retrieval Accuracy",
"text_level": 1,
"bbox": [115, 467, 394, 483],
"page_idx": 6
},
{
"type": "text",
"text": "Here, we vary the number of retrieved passages during inference and report the evaluation results in terms of the end-to-end QA exact match score of UnitedQA-E and UnitedQA-G, along with the corresponding top-$k$ retrieval accuracy. The results are summarized in Table 5. As expected, when the number of retrieved passages increases, both the top-$k$ retrieval accuracy and the end-to-end QA performance improve. However, there is a noticeable gap between the improvement from retrieving more passages (i.e., recall) and that of the corresponding end-to-end QA performance, especially for the extractive reader. This is likely caused by additional noise introduced with improved retrieval recall. Specifically, only half of the retriever improvement can be effectively utilized by the extractive model, while the generative model can benefit more from retrieving more passages. This suggests that, by concatenating all passages in vector space, the generative model is more effective at de-noising than the extractive model.",
"bbox": [114, 489, 489, 826],
"page_idx": 6
},
{
"type": "text",
"text": "4.3 Breakdown Evaluation",
"text_level": 1,
"bbox": [115, 839, 344, 853],
"page_idx": 6
},
{
"type": "text",
"text": "Following Lewis et al. (2021), we carry out a breakdown evaluation of model performance over the NaturalQuestions and TriviaQA test sets. Given",
"bbox": [114, 860, 489, 909],
"page_idx": 6
},
{
"type": "text",
"text": "their superior performance, we again only consider our improved extractive and generative models, i.e. UnitedQA-Elarge and UnitedQA-G, respectively. The evaluation is summarized in Table 6. In comparison to their corresponding overall performance, both the extractive and generative models achieve much better performance on the \"Overlap\" categories (i.e. \"Question Overlap\" and \"Answer Overlap\") for both NaturalQuestions and TriviaQA, which indicates that both models perform well at question and answer memorization. In contrast to question and answer memorization, there is a pronounced performance drop for both models on the \"Answer Overlap Only\" category, where a certain amount of relevance inference capability is required to succeed. Lastly, we see that both extractive and generative models suffer significant performance degradation on the \"No Overlap\" category, which measures model generalization. Nevertheless, the extractive model demonstrates better QA generalization by achieving better overall performance on the \"No Overlap\" category for both datasets.",
"bbox": [509, 74, 885, 445],
"page_idx": 6
},
{
"type": "text",
"text": "4.4 Error Analysis",
"text_level": 1,
"bbox": [510, 463, 673, 479],
"page_idx": 6
},
{
"type": "text",
"text": "Here, we conduct an analysis of the prediction errors made by the extractive and generative models based on automatic evaluation. For this study, we use the EfficientQA dev set (Min et al., 2021), which is constructed in the same way as the original NaturalQuestions dataset. Specifically, we group prediction errors into three categories: 1) common prediction errors made by both the extractive and generative models, 2) prediction errors made only by the extractive model, and 3) prediction errors made only by the generative model. In the following, we first carry out a manual inspection of the common errors. Then, we compare the prediction errors made by the extractive and generative models, respectively.",
"bbox": [509, 488, 885, 714],
"page_idx": 6
},
{
"type": "text",
"text": "First of all, according to the automatic evaluation, the consensus predictions made by both the extractive and generative models have an error rate of $29\\%$. Based on 30 randomly selected examples, we find that around $30\\%$ of those predictions are actually valid answers, as shown in the top part of Table 7. In addition to predictions that are answers at different granularity or semantically equivalent ones, some of those prediction errors are likely caused by ambiguity in the questions. As in the example given in Table 7, depending on the specificity, the model prediction is also a valid answer. This high-",
"bbox": [509, 715, 885, 909],
"page_idx": 6
},
{
"type": "table",
"img_path": "images/c7684f13c12511c878bfd6b84e617aa88df996943558556557fb1c19a838009e.jpg",
"table_caption": [
"Table 6: Breakdown evaluation on NaturalQuestions (NQ) and TriviaQA based on test splits defined in (Lewis et al., 2021). Exact match scores are reported. UnitedQA-E and UnitedQA-G denote our extractive and generative models respectively."
],
"table_footnote": [],
"table_body": "<table><tr><td>Dataset</td><td>Model</td><td>Total</td><td>Question Overlap</td><td>No Question Overlap</td><td>Answer Overlap</td><td>Answer Overlap Only</td><td>No Overlap</td></tr><tr><td rowspan=\"2\">NQ</td><td>UnitedQA-G</td><td>52.3</td><td>72.2</td><td>40.5</td><td>62.7</td><td>45.4</td><td>34.0</td></tr><tr><td>UnitedQA-E</td><td>51.8</td><td>69.4</td><td>41.5</td><td>60.1</td><td>45.1</td><td>37.6</td></tr><tr><td rowspan=\"2\">TriviaQA</td><td>UnitedQA-G</td><td>68.6</td><td>88.4</td><td>62.5</td><td>78.1</td><td>69.6</td><td>44.5</td></tr><tr><td>UnitedQA-E</td><td>68.9</td><td>89.3</td><td>62.7</td><td>78.6</td><td>70.6</td><td>44.3</td></tr></table>",
"bbox": [144, 72, 860, 209],
"page_idx": 7
},
{
"type": "table",
"img_path": "images/2578ed96651d5a7d7e3f7d011e29ff323ed181652d2dfe46f5d109c30d289961.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td colspan=\"2\">Valid Answers</td></tr><tr><td>Different granularity</td><td>Q: When was harry potter and the deathly hallows part 2 movie released\nPrediction: 2011 / Gold: 15 July 2011</td></tr><tr><td>Semantically equivalent</td><td>Q: minimum age limit for chief justice of india\nPrediction: 65 / Gold: 65 years</td></tr><tr><td>Ambiguity question</td><td>Q: who won her first tennis grand slam in 2018\nPrediction: Caroline Wozniacki / Gold: Simona Halep</td></tr><tr><td colspan=\"2\">Wrong Answers</td></tr><tr><td>Part as whole error</td><td>Q: the official U.S. poverty line is based on the cost of what\nPrediction: food / Gold: ICP purchasing power</td></tr><tr><td>Entity confusion</td><td>Q: actor who played tommy in terms of endearment\nPrediction: Jeff Daniels / Gold: Troy Bishop</td></tr><tr><td>Event confusion</td><td>Q: when did the saskatchewan roughriders last won the grey cup\nPrediction: 2007 / Gold: 2013</td></tr></table>",
"bbox": [137, 274, 863, 532],
"page_idx": 7
},
{
"type": "text",
"text": "Table 7: Examples of prediction errors as judged by the automatic evaluation.",
"bbox": [235, 539, 759, 555],
"page_idx": 7
},
{
"type": "text",
"text": "lights the limitation of the current evaluation metric, which does not accurately estimate the capabilities of existing open-domain QA systems. As shown in the bottom part of Table 7, most of the representative errors are due to the confusion of related concepts, entities, or events that are frequently mentioned together with the corresponding gold answers.",
"bbox": [114, 580, 490, 693],
"page_idx": 7
},
{
"type": "text",
"text": "Next, all questions from the dev set are categorized based on the WH question word, i.e. what, which, when, who, how, where. We then report the relative performance change of each WH category for both the extractive and generative models over their corresponding overall prediction accuracy in Figure 2. First, it is easy to see that both extractive and generative models achieve the best performance for entity-related who questions, which is likely the result of the high ratio of samples of this type seen during training. In contrast, the answers to what questions can play a much richer syntactic role in context, making it more difficult for both extractive",
"bbox": [114, 700, 492, 910],
"page_idx": 7
},
{
"type": "text",
"text": "and generative models to perform well. Interestingly, the generative model exhibits strength in temporal reasoning, whereas the extractive model does not. This difference suggests that it is worth exploring better temporal modeling strategies to improve the extractive model in the future.",
"bbox": [509, 580, 885, 677],
"page_idx": 7
},
{
"type": "text",
"text": "5 Related Work",
"text_level": 1,
"bbox": [510, 689, 665, 703],
"page_idx": 7
},
{
"type": "text",
"text": "Open-domain QA Open-domain QA requires a system to answer questions based on evidence retrieved from a large corpus such as Wikipedia (Voorhees, 2000; Chen et al., 2017). Recent progress has been made towards improving evidence retrieval through both sparse vector models like TF-IDF or BM25 (Chen et al., 2017; Min et al., 2019), and dense vector models based on BERT (Lee et al., 2019; Karpukhin et al., 2020; Guu et al., 2020; Qu et al., 2021). Generally, the dense representations complement the sparse vector methods for passage retrieval as they can potentially give",
"bbox": [509, 715, 885, 910],
"page_idx": 7
},
{
"type": "image",
"img_path": "images/d788d1edae44080357e825ab943ff3d3bd9ccb34708b4cf69c0396d357b5d961.jpg",
"image_caption": [
"Figure 2: Relative accuracy of different $WH$ questions. The relative accuracy is the relative change of a $WH$ category accuracy to the overall model accuracy."
],
"image_footnote": [],
"bbox": [119, 73, 480, 243],
"page_idx": 8
},
{
"type": "text",
"text": "high similarity to semantically related text pairs, even without exact lexical overlap. Unlike most work focusing on a pipeline model, Lee et al. (2019) propose a pre-training objective for jointly training both the retrieval encoder and reader. It is further extended by Guu et al. (2020) with a dynamic update of the passage index during the training. Instead, in this work, we focus on a hybrid reader approach for open-domain QA. By simply combining answer predictions from extractive and generative models, our UnitedQA achieves significant improvements over state-of-the-art models.",
"bbox": [114, 328, 490, 521],
"page_idx": 8
},
{
"type": "text",
"text": "Reading Comprehension with Noisy Labels There has been a line of work on improving distantly-supervised reading comprehension models by developing learning methods and model architectures that can better use noisy labels. Most of them focus on document-level QA, where all paragraphs share the same document context. Clark and Gardner (2018) propose a paragraph-pair ranking objective for learning with multiple paragraphs so that the model can distinguish relevant paragraphs from irrelevant ones. In (Lin et al., 2018), a coarse-to-fine model is proposed to handle label noise by aggregating information from relevant paragraphs and then extracting answers from selected ones. Min et al. (2019) propose a hard EM learning scheme where only passage-level loss is considered for document-level QA. More recently, different probabilistic assumptions with corresponding training and inference methods are examined in (Cheng et al., 2020), again for document-level QA with distant supervision. In our work, we further extend the multi-objective formulation proposed in (Cheng et al., 2020) with the hard EM learning (Min et al., 2019) for enhancing extrac-",
"bbox": [114, 523, 492, 910],
"page_idx": 8
},
{
"type": "text",
"text": "tive open-domain QA, where the input passages are given by a retrieval model and are typically from different documents.",
"bbox": [509, 74, 884, 121],
"page_idx": 8
},
{
"type": "text",
"text": "6 Conclusion",
"text_level": 1,
"bbox": [510, 135, 643, 149],
"page_idx": 8
},
{
"type": "text",
"text": "In this study, we propose a hybrid model for open-domain QA, called UnitedQA, which combines the strengths of extractive and generative readers. We demonstrate the effectiveness of UnitedQA on two popular open-domain QA benchmarks, NaturalQuestions and TriviaQA. Our results show that the proposed UnitedQA model significantly outperforms single extractive and generative models as well as their corresponding homogeneous ensembles, and sets new state-of-the-art on both benchmarks. We also perform a comprehensive empirical study to investigate the relative contributions of different components of our model and the techniques we use to improve the readers.",
"bbox": [507, 162, 887, 385],
"page_idx": 8
},
{
"type": "text",
"text": "For future work, it would be interesting to explore model compression approaches for reducing the model size of retriever and reader separately or jointly through pruning, quantization, and knowledge distillation.",
"bbox": [507, 387, 885, 467],
"page_idx": 8
},
{
"type": "text",
"text": "Acknowledgments",
"text_level": 1,
"bbox": [510, 480, 672, 495],
"page_idx": 8
},
{
"type": "text",
"text": "We would like to thank the anonymous reviewers for valuable suggestions, Yuning Mao for valuable discussions and comments, and the Microsoft Research Technology Engineering team for computing support.",
"bbox": [509, 506, 885, 586],
"page_idx": 8
},
{
"type": "text",
"text": "References",
"text_level": 1,
"bbox": [510, 613, 610, 629],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870-1879. Association for Computational Linguistics.",
"Danqi Chen and Wen-tau Yih. 2020. Open-domain question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts, pages 34-37, Online. Association for Computational Linguistics.",
"Hao Cheng, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2020. Probabilistic assumptions matter: Improved models for distantly-supervised document-level question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5657-5667, Online. Association for Computational Linguistics."
],
"bbox": [510, 636, 885, 910],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Hao Cheng, Xiaodong Liu, Lis Pereira, Yaoliang Yu, and Jianfeng Gao. 2021. Posterior differential regularization with f-divergence for improving model robustness. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1078-1089, Online. Association for Computational Linguistics.",
"Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehension. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 845-855. Association for Computational Linguistics.",
"Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pretraining text encoders as discriminators rather than generators. In International Conference on Learning Representations (ICLR).",
"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. Retrieval augmented language model pre-training. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 3929-3938. PMLR.",
"Song Han, Huizi Mao, and William J. Dally. 2016. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.",
"Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop.",
"Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. 2018. Quantized neural networks: Training neural networks with low precision weights and activations. Journal of Machine Learning Research, 18(187):1-30.",
"Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874-880, Online. Association for Computational Linguistics."
],
"bbox": [117, 76, 490, 909],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Tuo Zhao. 2020. SMART: Robust and efficient fine-tuning for pretrained natural language models through principled regularized optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2177-2190, Online. Association for Computational Linguistics.",
"Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601-1611, Vancouver, Canada. Association for Computational Linguistics.",
"Ying Ju, Fubang Zhao, Shijie Chen, Bowen Zheng, Xuefeng Yang, and Yunfeng Liu. 2019. Technical report on conversational question answering.",
"Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781, Online. Association for Computational Linguistics.",
"Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453-466.",
"Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086-6096. Association for Computational Linguistics.",
"Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems, volume 33, pages 9459-9474. Curran Associates, Inc.",
"Patrick Lewis, Pontus Stenetorp, and Sebastian Riedel. 2021. Question and answer test-train overlap in open-domain question answering datasets. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1000-1008, Online. Association for Computational Linguistics."
],
"bbox": [512, 76, 885, 909],
"page_idx": 9
},
{
|
| 1460 |
+
"type": "list",
|
| 1461 |
+
"sub_type": "ref_text",
|
| 1462 |
+
"list_items": [
|
| 1463 |
+
"Yankai Lin, Haozhe Ji, Zhiyuan Liu, and Maosong Sun. 2018. Denoising distantly supervised open-domain question answering. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1736-1745.",
|
| 1464 |
+
"Sewon Min, Jordan Boyd-Graber, Chris Alberti, Danqi Chen, Eunsol Choi, Michael Collins, Kelvin Guu, Hannaneh Hajishirzi, Kenton Lee, Jennimaria Palomaki, Colin Raffel, Adam Roberts, Tom Kwiatkowski, Patrick Lewis, Yuxiang Wu, Heinrich Kuttler, Linqing Liu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel, Sohee Yang, Minjoon Seo, Gautier Izacard, Fabio Petroni, Lucas Hosseini, Nicola De Cao, Edouard Grave, Ikuya Yamada, Sonse Shimaoka, Masatoshi Suzuki, Shumpei Miyawaki, Shun Sato, Ryo Takahashi, Jun Suzuki, Martin Fajcik, Martin Docekal, Korel Ondrej, Pavel Smrz, Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao, Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, and Wen tau Yih. 2021. NeurIPS 2020 EfficientQA competition: Systems, analyses and lessons learned.",
|
| 1465 |
+
"Sewon Min, Danqi Chen, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2019. A discrete hard EM approach for weakly supervised question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2851-2864, Hong Kong, China. Association for Computational Linguistics.",
|
| 1466 |
+
"Lis Pereira, Xiaodong Liu, Hao Cheng, Hoifung Poon, Jianfeng Gao, and Ichiro Kobayashi. 2021. Targeted adversarial training for natural language understanding. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5385-5393, Online. Association for Computational Linguistics.",
|
| 1467 |
+
"Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, and Haifeng Wang. 2021. RocketQA: An optimized training approach to dense passage retrieval for open-domain question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5835-5847, Online. Association for Computational Linguistics.",
|
| 1468 |
+
"Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.",
|
| 1469 |
+
"Ellen Voorhees. 2000. The TREC-8 question answering track report."
|
| 1470 |
+
],
|
| 1471 |
+
"bbox": [
|
| 1472 |
+
117,
|
| 1473 |
+
76,
|
| 1474 |
+
490,
|
| 1475 |
+
908
|
| 1476 |
+
],
|
| 1477 |
+
"page_idx": 10
|
| 1478 |
+
}
|
| 1479 |
+
]
|
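The `content_list.json` blocks above follow a simple schema: each block has a `type` (here `"list"` with `sub_type` `"ref_text"`), its text in `list_items`, an absolute-pixel `bbox` `[x0, y0, x1, y1]`, and a `page_idx`. A minimal sketch of walking that schema (the function name and sample are illustrative, not part of the dataset):

```python
import json

def iter_ref_entries(content_list):
    """Yield (page_idx, bbox, text) for every reference string in a
    content_list, per the schema shown above: blocks of type "list"
    with sub_type "ref_text" carry their strings in "list_items"."""
    for block in content_list:
        if block.get("type") == "list" and block.get("sub_type") == "ref_text":
            for text in block.get("list_items", []):
                yield block["page_idx"], block["bbox"], text

# Tiny sample mirroring the structure in the diff above.
sample = json.loads("""
[{"type": "list", "sub_type": "ref_text",
  "list_items": ["Ellen Voorhees. 2000. The TREC-8 question answering track report."],
  "bbox": [117, 76, 490, 908], "page_idx": 10}]
""")
entries = list(iter_ref_entries(sample))
```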
data/2021/2101_00xxx/2101.00178/431507cf-6cad-41d1-9637-1521a5ac3935_model.json
CHANGED
|
@@ -1,3 +1,1740 @@
|
|
| 1 |
-
|
| 2 |
-
|
| 3 |
-
|
| 1 |
+
[
|
| 2 |
+
[
|
| 3 |
+
{
|
| 4 |
+
"type": "aside_text",
|
| 5 |
+
"bbox": [
|
| 6 |
+
0.023,
|
| 7 |
+
0.32,
|
| 8 |
+
0.061,
|
| 9 |
+
0.718
|
| 10 |
+
],
|
| 11 |
+
"angle": 270,
|
| 12 |
+
"content": "arXiv:2101.00178v2 [cs.CL] 2 Jun 2021"
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "title",
|
| 16 |
+
"bbox": [
|
| 17 |
+
0.136,
|
| 18 |
+
0.08,
|
| 19 |
+
0.868,
|
| 20 |
+
0.102
|
| 21 |
+
],
|
| 22 |
+
"angle": 0,
|
| 23 |
+
"content": "UnitedQA: A Hybrid Approach for Open Domain Question Answering"
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"bbox": [
|
| 28 |
+
0.243,
|
| 29 |
+
0.133,
|
| 30 |
+
0.768,
|
| 31 |
+
0.168
|
| 32 |
+
],
|
| 33 |
+
"angle": 0,
|
| 34 |
+
"content": "Hao Cheng\\(^{1*}\\), Yelong Shen\\(^{2*}\\), Xiaodong Liu\\(^{1}\\), Pengcheng He\\(^{2}\\), Weizhu Chen\\(^{2}\\), Jianfeng Gao\\(^{1}\\)"
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"bbox": [
|
| 39 |
+
0.328,
|
| 40 |
+
0.168,
|
| 41 |
+
0.679,
|
| 42 |
+
0.183
|
| 43 |
+
],
|
| 44 |
+
"angle": 0,
|
| 45 |
+
"content": "<sup>1</sup> Microsoft Research <sup>2</sup> Microsoft Azure AI"
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"bbox": [
|
| 50 |
+
0.179,
|
| 51 |
+
0.186,
|
| 52 |
+
0.831,
|
| 53 |
+
0.201
|
| 54 |
+
],
|
| 55 |
+
"angle": 0,
|
| 56 |
+
"content": "{chehao, yeshe, xiaodl, penhe, wzchen, jfgao}@microsoft.com"
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "title",
|
| 60 |
+
"bbox": [
|
| 61 |
+
0.264,
|
| 62 |
+
0.265,
|
| 63 |
+
0.342,
|
| 64 |
+
0.279
|
| 65 |
+
],
|
| 66 |
+
"angle": 0,
|
| 67 |
+
"content": "Abstract"
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "text",
|
| 71 |
+
"bbox": [
|
| 72 |
+
0.144,
|
| 73 |
+
0.296,
|
| 74 |
+
0.462,
|
| 75 |
+
0.566
|
| 76 |
+
],
|
| 77 |
+
"angle": 0,
|
| 78 |
+
"content": "To date, most of recent work under the retrieval-reader framework for open-domain QA focuses on either extractive or generative reader exclusively. In this paper, we study a hybrid approach for leveraging the strengths of both models. We apply novel techniques to enhance both extractive and generative readers built upon recent pretrained neural language models, and find that proper training methods can provide large improvements over previous state-of-the-art models. We demonstrate that an hybrid approach by combining answers from both readers can effectively take advantages of extractive and generative answer inference strategies and outperform single models as well as homogeneous ensembles. Our approach outperforms previous state-of-the-art models by 3.3 and 2.7 points in exact match on NaturalQuestions and TriviaQA respectively."
|
| 79 |
+
},
|
| 80 |
+
{
|
| 81 |
+
"type": "title",
|
| 82 |
+
"bbox": [
|
| 83 |
+
0.117,
|
| 84 |
+
0.584,
|
| 85 |
+
0.262,
|
| 86 |
+
0.598
|
| 87 |
+
],
|
| 88 |
+
"angle": 0,
|
| 89 |
+
"content": "1 Introduction"
|
| 90 |
+
},
|
| 91 |
+
{
|
| 92 |
+
"type": "text",
|
| 93 |
+
"bbox": [
|
| 94 |
+
0.115,
|
| 95 |
+
0.611,
|
| 96 |
+
0.491,
|
| 97 |
+
0.884
|
| 98 |
+
],
|
| 99 |
+
"angle": 0,
|
| 100 |
+
"content": "Open-domain question answering (QA) has been a long standing problem in natural language understanding, information retrieval, and related fields (Chen and Yih, 2020). An typical open-domain QA system follows the retrieval-reader framework (Chen et al., 2017; Guu et al., 2020; Karpukhin et al., 2020), where the relevant passages are first retrieved from a large text corpus, and a reader module then navigates multiple passages for answer inference. In this work, we study two paradigms of reader modules, i.e. extractive (Karpukhin et al., 2020; Guu et al., 2020) and generative (Lewis et al., 2020; Izacard and Grave, 2021) readers. The extractive reader extracts contiguous spans from the retrieved passages whereas the generative reader sequentially decodes the answer string which might not be contained in the retrieved passages."
|
| 101 |
+
},
|
| 102 |
+
{
|
| 103 |
+
"type": "text",
|
| 104 |
+
"bbox": [
|
| 105 |
+
0.51,
|
| 106 |
+
0.265,
|
| 107 |
+
0.887,
|
| 108 |
+
0.602
|
| 109 |
+
],
|
| 110 |
+
"angle": 0,
|
| 111 |
+
"content": "Recent work on open-domain QA (Karpukhin et al., 2020; Guu et al., 2020; Lewis et al., 2020; Izacard and Grave, 2021) explores either an extractive reader or a generative reader exclusively. We hypothesize that extractive and generative readers adopt different answer inference strategies, thus a hybrid extractive/generative reader can be a better option for open-domain QA tasks. As shown in Figure 1, compared with prediction agreement among only generative or extractive readers (top-left and bottom-right), the cross prediction agreement between extractive and generative readers (bottom-left) is relatively low (\\(<50\\%\\)). It indicates that answers produced by those two types of models are different and they can be complementary to each other. Therefore, we propose a hybrid reader approach, UnitedQA, which is a simple ensemble approach to combine the predictions from extractive and generative readers. It achieves state-of-the-art results on NaturalQuestions (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017)."
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"type": "text",
|
| 115 |
+
"bbox": [
|
| 116 |
+
0.51,
|
| 117 |
+
0.604,
|
| 118 |
+
0.886,
|
| 119 |
+
0.892
|
| 120 |
+
],
|
| 121 |
+
"angle": 0,
|
| 122 |
+
"content": "In UnitedQA, the extractive reader (UnitedQA-E) and generative reader (UnitedQA-G) are built upon the pretrained language models, ELECTRA (Clark et al., 2020) and T5 (Raffel et al., 2020), respectively. For the UnitedQA-E, we adopt a weakly-supervised training objective to address the noisy supervision issue caused by the heuristics-based labeling and incorporate the posterior differential regularization (PDR) (Cheng et al., 2021) to improve the model robustness. The UnitedQA-G follows the T5 Fusion-in-Decoder (FID) (Izacard and Grave, 2021) and we make two improvements: first, we add a group of attention bias parameters into the decoder cross-attention block to feature the ranking information of retrieved contexts; second, we add the adversarial training (Ju et al., 2019; Jiang et al., 2020; Pereira et al., 2021) to improve the model generalization ability."
|
| 123 |
+
},
|
| 124 |
+
{
|
| 125 |
+
"type": "text",
|
| 126 |
+
"bbox": [
|
| 127 |
+
0.53,
|
| 128 |
+
0.894,
|
| 129 |
+
0.885,
|
| 130 |
+
0.91
|
| 131 |
+
],
|
| 132 |
+
"angle": 0,
|
| 133 |
+
"content": "The experimental results highlight the effec"
|
| 134 |
+
},
|
| 135 |
+
{
|
| 136 |
+
"type": "page_footnote",
|
| 137 |
+
"bbox": [
|
| 138 |
+
0.145,
|
| 139 |
+
0.896,
|
| 140 |
+
0.272,
|
| 141 |
+
0.91
|
| 142 |
+
],
|
| 143 |
+
"angle": 0,
|
| 144 |
+
"content": "* Equal Contribution"
|
| 145 |
+
}
|
| 146 |
+
],
|
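In the `model.json` above, each top-level array is one page, and block `bbox` values are page-normalized floats in [0, 1] (unlike the absolute-pixel boxes in `content_list.json`). A hedged sketch of mapping one back to pixels, assuming the caller supplies the rendered page size (the file itself does not store it):

```python
def to_pixels(bbox, page_w, page_h):
    """Convert a page-normalized model.json bbox [x0, y0, x1, y1]
    (floats in 0..1, as in the blocks above) to absolute pixel
    coordinates for a page rendered at page_w x page_h -- the page
    size is a caller-supplied assumption."""
    x0, y0, x1, y1 = bbox
    return (round(x0 * page_w), round(y0 * page_h),
            round(x1 * page_w), round(y1 * page_h))

# e.g. the title block of 2101.00178 on a hypothetical 1000x1294 render
print(to_pixels([0.136, 0.08, 0.868, 0.102], 1000, 1294))  # (136, 104, 868, 132)
```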
| 147 |
+
[
|
| 148 |
+
{
|
| 149 |
+
"type": "image",
|
| 150 |
+
"bbox": [
|
| 151 |
+
0.149,
|
| 152 |
+
0.096,
|
| 153 |
+
0.434,
|
| 154 |
+
0.273
|
| 155 |
+
],
|
| 156 |
+
"angle": 0,
|
| 157 |
+
"content": null
|
| 158 |
+
},
|
| 159 |
+
{
|
| 160 |
+
"type": "image_caption",
|
| 161 |
+
"bbox": [
|
| 162 |
+
0.115,
|
| 163 |
+
0.3,
|
| 164 |
+
0.493,
|
| 165 |
+
0.402
|
| 166 |
+
],
|
| 167 |
+
"angle": 0,
|
| 168 |
+
"content": "Figure 1: Pairwise prediction agreement ratio. G-1, G-2, G-3 and E-1, E-2, E-3 are three different generative and extractive readers respectively. All readers achieve similar performance (\\(\\approx\\) \\(52\\%\\) exact match) on NaturalQuestions. Higher agreement (\\(>50\\%\\)) in red and lower agreement (\\(<50\\%\\)) in gray. The agreement is calculated based on exact string match."
|
| 169 |
+
},
|
| 170 |
+
{
|
| 171 |
+
"type": "text",
|
| 172 |
+
"bbox": [
|
| 173 |
+
0.115,
|
| 174 |
+
0.429,
|
| 175 |
+
0.493,
|
| 176 |
+
0.655
|
| 177 |
+
],
|
| 178 |
+
"angle": 0,
|
| 179 |
+
"content": "tiveness of the simple hybrid approach of UnitedQA. With both improved extractive and generative readers, UnitedQA sets new state-of-the-art results on two popular open-domain QA datasets, i.e. 54.7 and 70.3 in exact match on NaturalQuestions (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017), respectively. It is worth noting that our UnitedQA model not only outperforms each single model but also brings more pronounced improvements over homogeneous ensembles of either extractive or generative readers. Last, based on our analyses, UnitedQA-E and UnitedQA-G have advantages in different cases, suggesting they may use different reasoning strategies."
|
| 180 |
+
},
|
| 181 |
+
{
|
| 182 |
+
"type": "title",
|
| 183 |
+
"bbox": [
|
| 184 |
+
0.116,
|
| 185 |
+
0.67,
|
| 186 |
+
0.221,
|
| 187 |
+
0.687
|
| 188 |
+
],
|
| 189 |
+
"angle": 0,
|
| 190 |
+
"content": "2 Method"
|
| 191 |
+
},
|
| 192 |
+
{
|
| 193 |
+
"type": "text",
|
| 194 |
+
"bbox": [
|
| 195 |
+
0.115,
|
| 196 |
+
0.699,
|
| 197 |
+
0.491,
|
| 198 |
+
0.859
|
| 199 |
+
],
|
| 200 |
+
"angle": 0,
|
| 201 |
+
"content": "In this section, we present the overall pipeline of the UnitedQA system, which consists of three components: Retrieval, Reading, and Re-ranking. First, the retrieval module fetches a list of relevant passages from a Wikipedia dump for a given question. Then, the module of hybrid readers produces answer candidates from the set of retrieved passages. Last, the re-ranking module combines the answer candidates with linear interpolation and produces the final answer."
|
| 202 |
+
},
|
| 203 |
+
{
|
| 204 |
+
"type": "text",
|
| 205 |
+
"bbox": [
|
| 206 |
+
0.115,
|
| 207 |
+
0.862,
|
| 208 |
+
0.49,
|
| 209 |
+
0.911
|
| 210 |
+
],
|
| 211 |
+
"angle": 0,
|
| 212 |
+
"content": "Retrieval Following Karpukhin et al. (2020), we consider two methods, BM25 and dense passage retrieval (DPR), for retrieving the support passages"
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"type": "text",
|
| 216 |
+
"bbox": [
|
| 217 |
+
0.51,
|
| 218 |
+
0.076,
|
| 219 |
+
0.887,
|
| 220 |
+
0.333
|
| 221 |
+
],
|
| 222 |
+
"angle": 0,
|
| 223 |
+
"content": "for a given question. For BM25, passages are encoded as bag of words (BOW), and inverse document frequencies are used as the ranking function. For DPR, passages and questions are represented as dense vectors based on two BERT (Devlin et al., 2019) models. The relevance score is then computed based on the dot production between the query and passage vectors. In this paper, we adopt the same implementation as Karpukhin et al. (2020) for retrieving passages. Specifically, the English Wikipedia dump from Dec. 20, 2018 is used as the source documents for retrieval, with the removal of semi-structured data, such as tables or lists. Each document is split into disjoint 100-word passages as the basic retrieval unit. The top-100 passages are then passed for reading."
|
| 224 |
+
},
|
| 225 |
+
{
|
| 226 |
+
"type": "text",
|
| 227 |
+
"bbox": [
|
| 228 |
+
0.51,
|
| 229 |
+
0.335,
|
| 230 |
+
0.887,
|
| 231 |
+
0.432
|
| 232 |
+
],
|
| 233 |
+
"angle": 0,
|
| 234 |
+
"content": "Reading We combine the generative reader and the extractive reader to produce answer candidates over the retrieved passages. Here, we only give a high-level description of our approach. More details regarding our improved extractive and generative models are presented in §2.1 and §2.2 respectively."
|
| 235 |
+
},
|
| 236 |
+
{
|
| 237 |
+
"type": "text",
|
| 238 |
+
"bbox": [
|
| 239 |
+
0.51,
|
| 240 |
+
0.433,
|
| 241 |
+
0.887,
|
| 242 |
+
0.609
|
| 243 |
+
],
|
| 244 |
+
"angle": 0,
|
| 245 |
+
"content": "The generative reader is based on a sequence-to-sequence model pre-trained in a forward-generation fashion on a large corpus, i.e. T5 (Raffel et al., 2020). Similar to Izacard and Grave (2021), the model takes the question and its relevant passages as input, and then generates the answer string token by token. Specifically, the concatenation of all retrieved passages and the corresponding question is used as the encoder input. Then, the decoder performs reasoning over the concatenation of all evidence through an attention mechanism."
|
| 246 |
+
},
|
| 247 |
+
{
|
| 248 |
+
"type": "text",
|
| 249 |
+
"bbox": [
|
| 250 |
+
0.51,
|
| 251 |
+
0.612,
|
| 252 |
+
0.887,
|
| 253 |
+
0.838
|
| 254 |
+
],
|
| 255 |
+
"angle": 0,
|
| 256 |
+
"content": "Following state-of-the-art extractive QA models (Devlin et al., 2019; Karpukhin et al., 2020), our extractive reader is based on a Transformer neural network pre-trained with a cloze style self-supervised objective, i.e. ELECTRA (Clark et al., 2020). Here, a pair of a given question and a support passage is jointly encoded into neural text representations. These representations are then used to define scores or probabilities of possible answer begin and end positions, which are in turn used to define probabilities over possible answer spans. Finally, the answer string probabilities are based on the aggregation over all possible answer spans from the entire set of support passages."
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"type": "title",
|
| 260 |
+
"bbox": [
|
| 261 |
+
0.511,
|
| 262 |
+
0.853,
|
| 263 |
+
0.66,
|
| 264 |
+
0.87
|
| 265 |
+
],
|
| 266 |
+
"angle": 0,
|
| 267 |
+
"content": "2.1 UnitedQA-E"
|
| 268 |
+
},
|
| 269 |
+
{
|
| 270 |
+
"type": "text",
|
| 271 |
+
"bbox": [
|
| 272 |
+
0.51,
|
| 273 |
+
0.878,
|
| 274 |
+
0.886,
|
| 275 |
+
0.911
|
| 276 |
+
],
|
| 277 |
+
"angle": 0,
|
| 278 |
+
"content": "In §2.1.2, we give the problem definition of open-domain QA for extractive reader. Then, we detail"
|
| 279 |
+
}
|
| 280 |
+
],
|
| 281 |
+
[
|
| 282 |
+
{
|
| 283 |
+
"type": "text",
|
| 284 |
+
"bbox": [
|
| 285 |
+
0.116,
|
| 286 |
+
0.076,
|
| 287 |
+
0.444,
|
| 288 |
+
0.091
|
| 289 |
+
],
|
| 290 |
+
"angle": 0,
|
| 291 |
+
"content": "the improvements of UnitedQA-E in \\(\\S 2.1.2\\)"
|
| 292 |
+
},
|
| 293 |
+
{
|
| 294 |
+
"type": "title",
|
| 295 |
+
"bbox": [
|
| 296 |
+
0.116,
|
| 297 |
+
0.101,
|
| 298 |
+
0.32,
|
| 299 |
+
0.116
|
| 300 |
+
],
|
| 301 |
+
"angle": 0,
|
| 302 |
+
"content": "2.1.1 Extractive Reader"
|
| 303 |
+
},
|
| 304 |
+
{
|
| 305 |
+
"type": "text",
|
| 306 |
+
"bbox": [
|
| 307 |
+
0.114,
|
| 308 |
+
0.12,
|
| 309 |
+
0.491,
|
| 310 |
+
0.266
|
| 311 |
+
],
|
| 312 |
+
"angle": 0,
|
| 313 |
+
"content": "Given a question \\( \\mathbf{q} \\) and a set of \\( K \\) retrieved passages \\( \\mathfrak{p}_1, \\ldots, \\mathfrak{p}_K \\), a text encoder produces contextualized representations: \\( \\mathbf{h}_1^k, \\ldots, \\mathbf{h}_T^k \\in \\mathbb{R}^n \\) for the question-passage pair \\( (\\mathbf{q}, \\mathbf{p}_k) \\) in the form of \"[CLS] question [SEP] passage [SEP]\", where [CLS] and [SEP] are special tokens for encoding inputs, \\( T \\) is the maximum sequence length of the input text, and \\( \\mathbf{h}_i^k \\) indicates the contextualized embedding of the \\( i \\)-th token in \\( (\\mathbf{q}, \\mathbf{p}_k) \\)."
|
| 314 |
+
},
|
| 315 |
+
{
|
| 316 |
+
"type": "text",
|
| 317 |
+
"bbox": [
|
| 318 |
+
0.115,
|
| 319 |
+
0.267,
|
| 320 |
+
0.49,
|
| 321 |
+
0.429
|
| 322 |
+
],
|
| 323 |
+
"angle": 0,
|
| 324 |
+
"content": "The extractive reader computes the span-begin score of the \\(i\\)-th token as \\(s_b(i^k) = \\mathbf{w}_b^T\\mathbf{h}_i^k\\) using a weight vector \\(\\mathbf{w}_b \\in \\mathbb{R}^d\\). The span-end score \\(s_e(j^k)\\) is defined in the same way. Thus, the probabilities of a start position \\(i^k\\) and an end position \\(j^k\\) are \\(P_b(i^k) = \\frac{\\exp(s_b(i^k))}{Z_b}\\), \\(P_e(j^k) = \\frac{\\exp(s_e(j^k))}{Z_e}\\), where \\(Z_b, Z_e\\) are normalizing factors defined by the corresponding probability space. The probability of an answer span from \\(i^k\\) to \\(j^k\\) is defined as \\(P_s(i^k, j^k) = P_b(i^k)P_e(j^k)\\)."
|
| 325 |
+
},
|
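The span probabilities described in the block above (softmax-normalized begin/end scores, with \(P_s(i,j) = P_b(i)P_e(j)\)) can be sketched for a single passage as follows (a toy illustration of the formulas, not the UnitedQA implementation):

```python
import math

def span_probs(begin_scores, end_scores):
    """Passage-level span probabilities as defined above:
    P_b and P_e are softmaxes over begin/end position scores,
    and P_s(i, j) = P_b(i) * P_e(j), kept only for spans j >= i."""
    zb = sum(math.exp(s) for s in begin_scores)
    ze = sum(math.exp(s) for s in end_scores)
    pb = [math.exp(s) / zb for s in begin_scores]
    pe = [math.exp(s) / ze for s in end_scores]
    return {(i, j): pb[i] * pe[j]
            for i in range(len(pb))
            for j in range(i, len(pe))}
```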
| 326 |
+
{
|
| 327 |
+
"type": "text",
|
| 328 |
+
"bbox": [
|
| 329 |
+
0.115,
|
| 330 |
+
0.431,
|
| 331 |
+
0.492,
|
| 332 |
+
0.672
|
| 333 |
+
],
|
| 334 |
+
"angle": 0,
|
| 335 |
+
"content": "Here, we consider two probability spaces, passage level and multi-passage level, with the only difference in the computing of \\( Z_{b}, Z_{e} \\). Specifically, the passage-level probability of each answer begins and ends is computed by normalizing all possible positions in the respective passage, i.e. \\( Z_{b} = Z_{b}^{k} = \\sum_{\\mathcal{I}^{k} \\cup \\mathrm{NULL}} \\exp(s_{b}(i)) \\), \\( Z_{e} = Z_{e}^{k} = \\sum_{\\mathcal{I}^{k} \\cup \\mathrm{NULL}} \\exp(s_{e}(j)) \\), where \\( \\mathcal{I}^{k} \\) is the set of all possible positions from the \\( k \\)-th passage and NULL indicates special positions if \\( p_{k} \\) does not support answering the question. Similarly, the multi-passage level probability is computed by normalizing over each answer positions across all \\( K \\) relevant passages, i.e. \\( Z_{b} = Z_{b}^{*} = \\sum_{k} \\sum_{\\mathcal{I}^{k}} \\exp(s_{b}(i)) \\), \\( Z_{e} = Z_{e}^{*} = \\sum_{k} \\sum_{\\mathcal{I}^{k}} \\exp(s_{e}(j)) \\), respectively."
|
| 336 |
+
},
|
| 337 |
+
{
|
| 338 |
+
"type": "text",
|
| 339 |
+
"bbox": [
|
| 340 |
+
0.115,
|
| 341 |
+
0.672,
|
| 342 |
+
0.492,
|
| 343 |
+
0.833
|
| 344 |
+
],
|
| 345 |
+
"angle": 0,
|
| 346 |
+
"content": "Since there are usually multiple plausible mentions for open-domain QA, during training, it is typical to maximize either the marginal log-likelihood (MML) of all correct spans (Karpukhin et al., 2020) or the log-likelihood of the most likely correct span (HardEM) (Min et al., 2019). During inference, the prediction is made based on the candidate answer string score, obtaining as \\( P_{a}(y) = \\sum_{(i,j)\\in \\mathcal{Y}}P_{s}(i,j) \\), where \\( \\mathcal{Y} \\) is the set of spans corresponding to the answer string \\( y \\)."
|
| 347 |
+
},
|
| 348 |
+
{
|
| 349 |
+
"type": "title",
|
| 350 |
+
"bbox": [
|
| 351 |
+
0.116,
|
| 352 |
+
0.843,
|
| 353 |
+
0.35,
|
| 354 |
+
0.858
|
| 355 |
+
],
|
| 356 |
+
"angle": 0,
|
| 357 |
+
"content": "2.1.2 Improvement Method"
|
| 358 |
+
},
|
| 359 |
+
{
|
| 360 |
+
"type": "text",
|
| 361 |
+
"bbox": [
|
| 362 |
+
0.115,
|
| 363 |
+
0.862,
|
| 364 |
+
0.491,
|
| 365 |
+
0.911
|
| 366 |
+
],
|
| 367 |
+
"angle": 0,
|
| 368 |
+
"content": "In addition to better text representations from Clark et al. (2020), we consider two methods for improving the training of the extractive reader."
|
| 369 |
+
},
|
| 370 |
+
{
|
| 371 |
+
"type": "text",
|
| 372 |
+
"bbox": [
|
| 373 |
+
0.51,
|
| 374 |
+
0.075,
|
| 375 |
+
0.886,
|
| 376 |
+
0.253
|
| 377 |
+
],
|
| 378 |
+
"angle": 0,
|
| 379 |
+
"content": "Multi-objective for Weakly-supervised QA The multi-objective formulation is introduced in Cheng et al. (2020) for improving weakly supervised document-level QA. Different from Cheng et al. (2020) where only MML is considered for the multi-objective formulation, we found combining HardEM with MML is more effective for open-domain QA based on our experiments (§4.1). Specifically, we combine a multi-passage HardEM loss with \\(K\\) passage-level MML losses over a batch of \\(K\\) passages"
|
| 380 |
+
},
|
| 381 |
+
{
|
| 382 |
+
"type": "equation",
|
| 383 |
+
"bbox": [
|
| 384 |
+
0.545,
|
| 385 |
+
0.263,
|
| 386 |
+
0.884,
|
| 387 |
+
0.334
|
| 388 |
+
],
|
| 389 |
+
"angle": 0,
|
| 390 |
+
"content": "\\[\n\\begin{array}{l} \\mathcal{L}_{\\mathrm{EXT}} = \\log \\max_{(i,j)}P_{s}^{M}(i,j) + \\\\ \\frac {1}{K} \\sum_ {k} \\log \\sum_ {\\left(i ^ {k}, j ^ {k}\\right)} P _ {s} ^ {P} \\left(i ^ {k}, j ^ {k}\\right), \\tag {1} \\\\ \\end{array}\n\\]"
|
| 391 |
+
},
|
| 392 |
+
{
|
| 393 |
+
"type": "text",
|
| 394 |
+
"bbox": [
|
| 395 |
+
0.51,
|
| 396 |
+
0.345,
|
| 397 |
+
0.886,
|
| 398 |
+
0.378
|
| 399 |
+
],
|
| 400 |
+
"angle": 0,
|
| 401 |
+
"content": "where \\(P_{s}^{M}, P_{s}^{P}\\) is the multi-passage level and passage level span probabilities respectively."
|
| 402 |
+
},
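The combined objective in Eq. (1) can be sketched as follows. This is a hypothetical helper taking precomputed span probabilities (HardEM over the multi-passage-level probabilities, MML averaged over the K passage-level distributions); it assumes every passage contributes at least one correct span.

```python
import math

def multi_objective_loss(multi_span_probs, passage_span_probs):
    """Sketch of Eq. (1): HardEM at the multi-passage level plus
    passage-level MML averaged over the K passages.

    multi_span_probs: dict {(i, j): P_s^M(i, j)} over correct spans.
    passage_span_probs: list of K dicts {(i, j): P_s^P(i, j)}, one per
    passage, each assumed non-empty.
    Returns the (to-be-maximized) objective value.
    """
    # HardEM term: log-probability of the single most likely correct span.
    hard_em = math.log(max(multi_span_probs.values()))
    # MML term: log of the summed probability of all correct spans,
    # computed per passage and averaged.
    k = len(passage_span_probs)
    mml = sum(math.log(sum(p.values())) for p in passage_span_probs) / k
    return hard_em + mml
```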
|
| 403 |
+
{
|
| 404 |
+
"type": "text",
|
| 405 |
+
"bbox": [
|
| 406 |
+
0.51,
|
| 407 |
+
0.378,
|
| 408 |
+
0.886,
|
| 409 |
+
0.571
|
| 410 |
+
],
|
| 411 |
+
"angle": 0,
|
| 412 |
+
"content": "Posterior Differential Regularization Due to the noisy supervision for open-domain QA (Chen et al., 2017), we investigate the posterior differential regularization (PDR) (Cheng et al., 2021) to improve the robustness of the extractive reader. Different from Cheng et al. (2021) where only clean supervision setting is considered, in this work, we apply PDR to the weakly supervised open-domain QA scenario. Given it is computationally expensive to enumerate all possible spans, we apply two separate regularization terms for the begin and end probabilities at the multi-passage level, respectively,"
|
| 413 |
+
},
|
| 414 |
+
{
|
| 415 |
+
"type": "equation",
|
| 416 |
+
"bbox": [
|
| 417 |
+
0.515,
|
| 418 |
+
0.582,
|
| 419 |
+
0.884,
|
| 420 |
+
0.601
|
| 421 |
+
],
|
| 422 |
+
"angle": 0,
|
| 423 |
+
"content": "\\[\n\\mathcal {L} _ {\\mathrm {P D R}} = D \\left(P _ {b} (i) \\mid P _ {b} ^ {\\prime} (i)\\right) + D \\left(P _ {e} (j) \\mid P _ {e} ^ {\\prime} (j)\\right), \\tag {2}\n\\]"
|
| 424 |
+
},
|
| 425 |
+
{
|
| 426 |
+
"type": "text",
|
| 427 |
+
"bbox": [
|
| 428 |
+
0.51,
|
| 429 |
+
0.612,
|
| 430 |
+
0.886,
|
| 431 |
+
0.74
|
| 432 |
+
],
|
| 433 |
+
"angle": 0,
|
| 434 |
+
"content": "where \\(D(\\cdot |\\cdot)\\) is the squared Hellinger distance, and \\(P_b^{\\prime},P_e^{\\prime}\\) are the probabilities of start and end positions with additive input noise to the token embeddings. Specifically, we sample noise vectors \\(\\epsilon_{1},\\ldots ,\\epsilon_{T}\\) from \\(\\mathcal{N}(0,c^2 I)\\), and add them to the token embeddings as the noisy input, i.e. \\(\\mathbf{v}_1 + \\epsilon_1,\\dots ,\\mathbf{v}_T + \\epsilon_T\\) where \\(c\\) is fixed to 1e-3 throughout our experiments."
|
| 435 |
+
},
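A minimal sketch of the PDR term in Eq. (2), using the squared Hellinger distance over discrete begin/end distributions. The helper names are hypothetical, and the noisy distributions P_b', P_e' are assumed to be precomputed from the noise-perturbed input.

```python
import math

def squared_hellinger(p, q):
    """Squared Hellinger distance between two discrete distributions."""
    return 0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(p, q))

def pdr_loss(p_b, p_b_noisy, p_e, p_e_noisy):
    """Sketch of Eq. (2): D(P_b | P_b') + D(P_e | P_e').

    p_b, p_e: begin/end distributions from the clean input;
    p_b_noisy, p_e_noisy: the same distributions from the noisy input.
    """
    return squared_hellinger(p_b, p_b_noisy) + squared_hellinger(p_e, p_e_noisy)
```

The loss is zero when the clean and noisy posteriors agree, so minimizing it pushes the reader to be insensitive to small input perturbations.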
|
| 436 |
+
{
|
| 437 |
+
"type": "text",
|
| 438 |
+
"bbox": [
|
| 439 |
+
0.51,
|
| 440 |
+
0.741,
|
| 441 |
+
0.884,
|
| 442 |
+
0.772
|
| 443 |
+
],
|
| 444 |
+
"angle": 0,
|
| 445 |
+
"content": "Based on this, the overall training objective for the extractive reader is"
|
| 446 |
+
},
|
| 447 |
+
{
|
| 448 |
+
"type": "equation",
|
| 449 |
+
"bbox": [
|
| 450 |
+
0.598,
|
| 451 |
+
0.784,
|
| 452 |
+
0.884,
|
| 453 |
+
0.802
|
| 454 |
+
],
|
| 455 |
+
"angle": 0,
|
| 456 |
+
"content": "\\[\n\\mathcal {L} ^ {1} = \\mathcal {L} _ {\\mathrm {E X T}} + \\gamma \\mathcal {L} _ {\\mathrm {P D R}}, \\tag {3}\n\\]"
|
| 457 |
+
},
|
| 458 |
+
{
|
| 459 |
+
"type": "text",
|
| 460 |
+
"bbox": [
|
| 461 |
+
0.51,
|
| 462 |
+
0.815,
|
| 463 |
+
0.878,
|
| 464 |
+
0.831
|
| 465 |
+
],
|
| 466 |
+
"angle": 0,
|
| 467 |
+
"content": "where \\(\\gamma\\) is a regularization scalar hyperparameter."
|
| 468 |
+
},
|
| 469 |
+
{
|
| 470 |
+
"type": "title",
|
| 471 |
+
"bbox": [
|
| 472 |
+
0.511,
|
| 473 |
+
0.841,
|
| 474 |
+
0.661,
|
| 475 |
+
0.857
|
| 476 |
+
],
|
| 477 |
+
"angle": 0,
|
| 478 |
+
"content": "2.2 UnitedQA-G"
|
| 479 |
+
},
|
| 480 |
+
{
|
| 481 |
+
"type": "text",
|
| 482 |
+
"bbox": [
|
| 483 |
+
0.51,
|
| 484 |
+
0.862,
|
| 485 |
+
0.886,
|
| 486 |
+
0.911
|
| 487 |
+
],
|
| 488 |
+
"angle": 0,
|
| 489 |
+
"content": "Here, we first formally define the setup of generative reader for open-domain QA in § 2.2.1 and then present our improvements in § 2.2.2."
|
| 490 |
+
}
|
| 491 |
+
],
|
| 492 |
+
[
|
| 493 |
+
{
|
| 494 |
+
"type": "title",
|
| 495 |
+
"bbox": [
|
| 496 |
+
0.116,
|
| 497 |
+
0.076,
|
| 498 |
+
0.327,
|
| 499 |
+
0.09
|
| 500 |
+
],
|
| 501 |
+
"angle": 0,
|
| 502 |
+
"content": "2.2.1 Generative Reader"
|
| 503 |
+
},
|
| 504 |
+
{
|
| 505 |
+
"type": "text",
|
| 506 |
+
"bbox": [
|
| 507 |
+
0.115,
|
| 508 |
+
0.094,
|
| 509 |
+
0.491,
|
| 510 |
+
0.223
|
| 511 |
+
],
|
| 512 |
+
"angle": 0,
|
| 513 |
+
"content": "Given a question \\( \\mathbf{q} \\) and a set of \\( K \\) retrieved passages \\( \\mathfrak{p}_1, \\ldots, \\mathfrak{p}_K \\), the encoder model encodes each \\( (\\mathfrak{q}, \\mathfrak{p}_k) \\) pair independently, and produces contextualized representation for each token: \\( \\mathbf{h}_i^k \\in \\mathbb{R}^d \\) for the \\( i \\)-th token of the \\( k \\)-th pair. The decoder then performs attention over the concatenation of the representations of all the retrieved passages, and generates the answer string."
|
| 514 |
+
},
|
| 515 |
+
{
|
| 516 |
+
"type": "text",
|
| 517 |
+
"bbox": [
|
| 518 |
+
0.115,
|
| 519 |
+
0.224,
|
| 520 |
+
0.492,
|
| 521 |
+
0.306
|
| 522 |
+
],
|
| 523 |
+
"angle": 0,
|
| 524 |
+
"content": "Let \\(\\mathbf{x}\\) denote the input of the question and all retrieved passages \\(\\mathbf{x} = ((\\mathbf{q},\\mathbf{p}_1),\\dots,(\\mathbf{q},\\mathbf{p}_K))\\) ,and \\(\\mathbf{y}\\) the answer string with its tokens as \\((y_{1},\\ldots ,y_{N})\\) The generative reader is trained to maximize a sequence-to-sequence objective for a given \\((\\mathbf{x},\\mathbf{y})\\)"
|
| 525 |
+
},
|
| 526 |
+
{
|
| 527 |
+
"type": "equation",
|
| 528 |
+
"bbox": [
|
| 529 |
+
0.162,
|
| 530 |
+
0.31,
|
| 531 |
+
0.49,
|
| 532 |
+
0.354
|
| 533 |
+
],
|
| 534 |
+
"angle": 0,
|
| 535 |
+
"content": "\\[\n\\mathcal {L} (\\mathbf {x}, \\mathbf {y}; \\theta) = \\sum_ {i} ^ {N} \\log P _ {\\theta} \\left(y _ {i} \\mid \\mathbf {x}, y _ {1: i - 1}\\right), \\tag {4}\n\\]"
|
| 536 |
+
},
|
| 537 |
+
{
|
| 538 |
+
"type": "text",
|
| 539 |
+
"bbox": [
|
| 540 |
+
0.116,
|
| 541 |
+
0.359,
|
| 542 |
+
0.49,
|
| 543 |
+
0.391
|
| 544 |
+
],
|
| 545 |
+
"angle": 0,
|
| 546 |
+
"content": "where \\(\\theta\\) is the model parameter. During inference, a greedy decoding is used to produce the answer."
|
| 547 |
+
},
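The sequence-to-sequence objective in Eq. (4) reduces to a sum of per-token log-probabilities under teacher forcing. A minimal sketch, assuming (hypothetically) that the decoder's probabilities for the gold answer tokens have already been extracted:

```python
import math

def seq2seq_log_likelihood(token_probs):
    """Sketch of Eq. (4): sum of log P(y_i | x, y_{1:i-1}) over the N
    answer tokens. token_probs[i] is the model's probability of the
    i-th gold token given the input and the gold prefix."""
    return sum(math.log(p) for p in token_probs)
```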
|
| 548 |
+
{
|
| 549 |
+
"type": "title",
|
| 550 |
+
"bbox": [
|
| 551 |
+
0.116,
|
| 552 |
+
0.399,
|
| 553 |
+
0.35,
|
| 554 |
+
0.414
|
| 555 |
+
],
|
| 556 |
+
"angle": 0,
|
| 557 |
+
"content": "2.2.2 Improvement Method"
|
| 558 |
+
},
|
| 559 |
+
{
|
| 560 |
+
"type": "text",
|
| 561 |
+
"bbox": [
|
| 562 |
+
0.115,
|
| 563 |
+
0.418,
|
| 564 |
+
0.491,
|
| 565 |
+
0.564
|
| 566 |
+
],
|
| 567 |
+
"angle": 0,
|
| 568 |
+
"content": "Decoder Attention Bias The decoder in the T5 transformer model adopts a cross-attention mechanism to compute attention scores between the decoding answer tokens and all the retrieved passage tokens. Specifically, let \\(\\mathbf{y}_i\\in \\mathbb{R}^d\\) be the query vector of the \\(i\\)-th decoding token\\(^1\\), and \\(\\mathbf{m}_j^k\\in \\mathbb{R}^d\\) be the key vector of the \\(j\\)-th token in \\((q),p_k)\\). The multi-head cross-attention scores in T5 (Raffel et al., 2020) \\(\\mathbf{s}_{i,j}^{k}\\) is calculated as"
|
| 569 |
+
},
|
| 570 |
+
{
|
| 571 |
+
"type": "equation",
|
| 572 |
+
"bbox": [
|
| 573 |
+
0.143,
|
| 574 |
+
0.571,
|
| 575 |
+
0.489,
|
| 576 |
+
0.593
|
| 577 |
+
],
|
| 578 |
+
"angle": 0,
|
| 579 |
+
"content": "\\[\n\\mathbf {s} _ {i, j} ^ {k} = \\operatorname {M u l t i H e a d A t t} \\left(\\mathbf {y} _ {i}, \\mathbf {m} _ {j} ^ {k}\\right) \\in \\mathbb {R} ^ {\\left| \\text {H e a d} \\right|} \\tag {5}\n\\]"
|
| 580 |
+
},
|
| 581 |
+
{
|
| 582 |
+
"type": "text",
|
| 583 |
+
"bbox": [
|
| 584 |
+
0.115,
|
| 585 |
+
0.598,
|
| 586 |
+
0.492,
|
| 587 |
+
0.678
|
| 588 |
+
],
|
| 589 |
+
"angle": 0,
|
| 590 |
+
"content": "where \\(|\\mathrm{Head}|\\) is the number of attention heads. However, it doesn't capture the relevance information of retrieved passages into the reader in (5). To add the relevance feature into the attention block, we revise (5) by incorporating the attention bias"
|
| 591 |
+
},
|
| 592 |
+
{
|
| 593 |
+
"type": "equation",
|
| 594 |
+
"bbox": [
|
| 595 |
+
0.166,
|
| 596 |
+
0.684,
|
| 597 |
+
0.49,
|
| 598 |
+
0.705
|
| 599 |
+
],
|
| 600 |
+
"angle": 0,
|
| 601 |
+
"content": "\\[\n\\mathbf {s} _ {i, j} ^ {k} = \\operatorname {M u l t i H e a d A t t} \\left(\\mathbf {y} _ {i}, \\mathbf {m} _ {j} ^ {k}\\right) + \\mathbf {b} _ {k}, \\tag {6}\n\\]"
|
| 602 |
+
},
|
| 603 |
+
{
|
| 604 |
+
"type": "text",
|
| 605 |
+
"bbox": [
|
| 606 |
+
0.115,
|
| 607 |
+
0.71,
|
| 608 |
+
0.491,
|
| 609 |
+
0.807
|
| 610 |
+
],
|
| 611 |
+
"angle": 0,
|
| 612 |
+
"content": "where \\(\\mathbf{b}_k\\in \\mathbb{R}^{|Head|}\\) is a trainable attention bias vector for all the tokens in the \\(k\\) -th retrieved passage. In the experiments, the maximum retrieved passages is by default set to 100. Thus, the decoder attention bias introduces additional \\(100*|\\mathrm{Head}|\\) parameters for each layer."
|
| 613 |
+
},
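Eq. (6) can be sketched as a pure function over precomputed per-head attention scores. The nested-list layout here is a hypothetical illustration; a real implementation would add the bias inside the attention kernel before the softmax.

```python
def biased_attention_scores(att_scores, bias):
    """Sketch of Eq. (6): add the per-passage trainable bias b_k (one
    value per attention head) to every cross-attention score of passage k.

    att_scores[k][j] is a |Head|-sized list of scores for token j of
    passage k; bias[k] is the |Head|-sized bias vector b_k.
    """
    return [[[s + b for s, b in zip(head_scores, bias[k])]
             for head_scores in att_scores[k]]
            for k in range(len(att_scores))]
```

Because the same bias is added to every token of a passage, it shifts the attention mass toward (or away from) whole passages, injecting passage-level relevance into the decoder.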
|
| 614 |
+
{
|
| 615 |
+
"type": "text",
|
| 616 |
+
"bbox": [
|
| 617 |
+
0.115,
|
| 618 |
+
0.808,
|
| 619 |
+
0.492,
|
| 620 |
+
0.889
|
| 621 |
+
],
|
| 622 |
+
"angle": 0,
|
| 623 |
+
"content": "Adversarial Training Adversarial training creates adversarial examples by adding small perturbations to the embedding layer. Assuming the word(-piece) embedding layer is parameterized by a matrix \\(\\mathbf{V} \\in \\mathcal{R}^{|V| \\times d}\\), \\(|V|\\) is the vocabulary size, and \\(d\\)"
|
| 624 |
+
},
|
| 625 |
+
{
|
| 626 |
+
"type": "table",
|
| 627 |
+
"bbox": [
|
| 628 |
+
0.55,
|
| 629 |
+
0.073,
|
| 630 |
+
0.849,
|
| 631 |
+
0.155
|
| 632 |
+
],
|
| 633 |
+
"angle": 0,
|
| 634 |
+
"content": "<table><tr><td>Dataset</td><td>Train</td><td>Dev</td><td>Test</td></tr><tr><td>NQ</td><td>79168</td><td>8757</td><td>3610</td></tr><tr><td>TriviaQA</td><td>78785</td><td>8837</td><td>11313</td></tr><tr><td>EffcientQA</td><td>-</td><td>1800</td><td>-</td></tr></table>"
|
| 635 |
+
},
|
| 636 |
+
{
|
| 637 |
+
"type": "table_caption",
|
| 638 |
+
"bbox": [
|
| 639 |
+
0.527,
|
| 640 |
+
0.164,
|
| 641 |
+
0.868,
|
| 642 |
+
0.179
|
| 643 |
+
],
|
| 644 |
+
"angle": 0,
|
| 645 |
+
"content": "Table 1: Number of questions in each QA dataset."
|
| 646 |
+
},
|
| 647 |
+
{
|
| 648 |
+
"type": "text",
|
| 649 |
+
"bbox": [
|
| 650 |
+
0.51,
|
| 651 |
+
0.202,
|
| 652 |
+
0.886,
|
| 653 |
+
0.234
|
| 654 |
+
],
|
| 655 |
+
"angle": 0,
|
| 656 |
+
"content": "is the embed-dimension. The adversarial embedding matrix \\(\\hat{\\mathbf{V}}\\) can be obtained by"
|
| 657 |
+
},
|
| 658 |
+
{
|
| 659 |
+
"type": "equation",
|
| 660 |
+
"bbox": [
|
| 661 |
+
0.631,
|
| 662 |
+
0.244,
|
| 663 |
+
0.884,
|
| 664 |
+
0.261
|
| 665 |
+
],
|
| 666 |
+
"angle": 0,
|
| 667 |
+
"content": "\\[\ng _ {\\mathbf {V}} = - \\nabla_ {\\mathbf {V}} \\mathcal {L} (\\mathbf {x}, \\mathbf {y}; \\theta), \\tag {7}\n\\]"
|
| 668 |
+
},
|
| 669 |
+
{
|
| 670 |
+
"type": "equation",
|
| 671 |
+
"bbox": [
|
| 672 |
+
0.59,
|
| 673 |
+
0.264,
|
| 674 |
+
0.884,
|
| 675 |
+
0.282
|
| 676 |
+
],
|
| 677 |
+
"angle": 0,
|
| 678 |
+
"content": "\\[\n\\hat {\\mathbf {V}} = \\mathbf {V} + \\operatorname {S G} \\left(\\epsilon g _ {\\mathbf {V}} / \\| g _ {\\mathbf {V}} \\| _ {2}\\right), \\tag {8}\n\\]"
|
| 679 |
+
},
|
| 680 |
+
{
|
| 681 |
+
"type": "text",
|
| 682 |
+
"bbox": [
|
| 683 |
+
0.51,
|
| 684 |
+
0.291,
|
| 685 |
+
0.886,
|
| 686 |
+
0.355
|
| 687 |
+
],
|
| 688 |
+
"angle": 0,
|
| 689 |
+
"content": "where \\(\\mathrm{SG}(\\cdot)\\) is the stop-gradient operation. We use the adversarial embedding matrix \\(\\hat{\\mathbf{V}}\\) to replace the original \\(\\mathbf{V}\\) in model parameters \\(\\theta\\), and obtain \\(\\hat{\\theta}\\). Thus the adversarial loss can be calculated as"
|
| 690 |
+
},
|
| 691 |
+
{
|
| 692 |
+
"type": "equation",
|
| 693 |
+
"bbox": [
|
| 694 |
+
0.597,
|
| 695 |
+
0.364,
|
| 696 |
+
0.884,
|
| 697 |
+
0.382
|
| 698 |
+
],
|
| 699 |
+
"angle": 0,
|
| 700 |
+
"content": "\\[\n\\mathcal {L} _ {\\mathrm {A T}} (\\mathbf {x}, \\mathbf {y}; \\theta) = \\mathcal {L} (\\mathbf {x}, \\mathbf {y}; \\hat {\\theta}). \\tag {9}\n\\]"
|
| 701 |
+
},
|
| 702 |
+
{
|
| 703 |
+
"type": "text",
|
| 704 |
+
"bbox": [
|
| 705 |
+
0.51,
|
| 706 |
+
0.391,
|
| 707 |
+
0.884,
|
| 708 |
+
0.423
|
| 709 |
+
],
|
| 710 |
+
"angle": 0,
|
| 711 |
+
"content": "Therefore, the overall training objective of the generative reader is"
|
| 712 |
+
},
|
| 713 |
+
{
|
| 714 |
+
"type": "equation",
|
| 715 |
+
"bbox": [
|
| 716 |
+
0.549,
|
| 717 |
+
0.431,
|
| 718 |
+
0.884,
|
| 719 |
+
0.45
|
| 720 |
+
],
|
| 721 |
+
"angle": 0,
|
| 722 |
+
"content": "\\[\n\\mathcal {L} ^ {2} = \\alpha \\mathcal {L} (\\mathbf {x}, \\mathbf {y}; \\theta) + \\beta \\mathcal {L} _ {\\mathrm {A T}} (\\mathbf {x}, \\mathbf {y}; \\theta), \\tag {10}\n\\]"
|
| 723 |
+
},
|
| 724 |
+
{
|
| 725 |
+
"type": "text",
|
| 726 |
+
"bbox": [
|
| 727 |
+
0.51,
|
| 728 |
+
0.458,
|
| 729 |
+
0.88,
|
| 730 |
+
0.474
|
| 731 |
+
],
|
| 732 |
+
"angle": 0,
|
| 733 |
+
"content": "where \\(\\alpha = 0.5, \\beta = 0.5\\) in all of the experiments."
|
| 734 |
+
},
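Eqs. (7)-(8) amount to one normalized gradient step on the embedding matrix. A minimal sketch with plain nested lists; the stop-gradient of Eq. (8) is implicit here, since the perturbation is built from detached numbers, and `adversarial_embeddings` is a hypothetical helper name.

```python
import math

def adversarial_embeddings(V, grad_V, eps=1e-3):
    """Sketch of Eqs. (7)-(8): perturb the embedding matrix V along the
    normalized direction g_V = -grad L.

    V, grad_V: nested lists of floats (|V| x d), grad_V = dL/dV.
    """
    # Eq. (7): g_V is the negated loss gradient.
    g = [[-x for x in row] for row in grad_V]
    # Eq. (8): add the L2-normalized perturbation, scaled by eps.
    norm = math.sqrt(sum(x * x for row in g for x in row)) or 1.0
    return [[v + eps * x / norm for v, x in zip(rv, rg)]
            for rv, rg in zip(V, g)]
```

Training then evaluates the same loss with the perturbed embeddings (Eq. (9)) and mixes it with the clean loss as in Eq. (10).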
|
| 735 |
+
{
|
| 736 |
+
"type": "title",
|
| 737 |
+
"bbox": [
|
| 738 |
+
0.511,
|
| 739 |
+
0.484,
|
| 740 |
+
0.7,
|
| 741 |
+
0.5
|
| 742 |
+
],
|
| 743 |
+
"angle": 0,
|
| 744 |
+
"content": "2.3 UnitedQA System"
|
| 745 |
+
},
|
| 746 |
+
{
|
| 747 |
+
"type": "text",
|
| 748 |
+
"bbox": [
|
| 749 |
+
0.51,
|
| 750 |
+
0.505,
|
| 751 |
+
0.886,
|
| 752 |
+
0.65
|
| 753 |
+
],
|
| 754 |
+
"angle": 0,
|
| 755 |
+
"content": "The UnitedQA system combines outputs from both extractive and generative models for a given question during inference. Since the output spaces of extractive and generative models are different, we use a simple linear interpolation based on best predictions from each model<sup>2</sup>. Denote the predicted strings from \\( M \\) extractive and \\( N \\) generative models as \\( y_1^E, \\ldots, y_M^E \\) and \\( y_1^G, \\ldots, y_N^G \\), respectively. The hybrid prediction \\( y^* \\) is obtained by"
|
| 756 |
+
},
|
| 757 |
+
{
|
| 758 |
+
"type": "equation",
|
| 759 |
+
"bbox": [
|
| 760 |
+
0.525,
|
| 761 |
+
0.657,
|
| 762 |
+
0.884,
|
| 763 |
+
0.701
|
| 764 |
+
],
|
| 765 |
+
"angle": 0,
|
| 766 |
+
"content": "\\[\n\\underset {y \\in \\mathcal {Y}} {\\operatorname {a r g m a x}} \\tau \\sum_ {m = 1} ^ {M} \\mathbf {1} \\left(y, y _ {m} ^ {E}\\right) + \\delta \\sum_ {n = 1} ^ {N} \\mathbf {1} \\left(y, y _ {n} ^ {G}\\right), \\tag {11}\n\\]"
|
| 767 |
+
},
|
| 768 |
+
{
|
| 769 |
+
"type": "text",
|
| 770 |
+
"bbox": [
|
| 771 |
+
0.51,
|
| 772 |
+
0.708,
|
| 773 |
+
0.886,
|
| 774 |
+
0.74
|
| 775 |
+
],
|
| 776 |
+
"angle": 0,
|
| 777 |
+
"content": "where \\(\\mathcal{V}\\) is the set of all predicted strings, \\(\\mathbf{1}(y,y^{\\prime})\\) is an indicator function and \\(\\tau = 0.6\\), \\(\\delta = 0.4\\)."
|
| 778 |
+
},
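The interpolation in Eq. (11) is just a weighted vote over the models' best predictions. A minimal sketch with the paper's weights tau = 0.6, delta = 0.4 (the helper name is ours):

```python
from collections import Counter

def hybrid_prediction(extractive_preds, generative_preds, tau=0.6, delta=0.4):
    """Sketch of Eq. (11): weighted vote over the best predictions of M
    extractive and N generative models; the indicator sums reduce to
    counting how many models predicted each string."""
    scores = Counter()
    for y in extractive_preds:
        scores[y] += tau      # tau * sum_m 1(y, y_m^E)
    for y in generative_preds:
        scores[y] += delta    # delta * sum_n 1(y, y_n^G)
    return max(scores, key=scores.__getitem__)
```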
|
| 779 |
+
{
|
| 780 |
+
"type": "title",
|
| 781 |
+
"bbox": [
|
| 782 |
+
0.511,
|
| 783 |
+
0.752,
|
| 784 |
+
0.659,
|
| 785 |
+
0.768
|
| 786 |
+
],
|
| 787 |
+
"angle": 0,
|
| 788 |
+
"content": "3 Experiments"
|
| 789 |
+
},
|
| 790 |
+
{
|
| 791 |
+
"type": "title",
|
| 792 |
+
"bbox": [
|
| 793 |
+
0.511,
|
| 794 |
+
0.777,
|
| 795 |
+
0.702,
|
| 796 |
+
0.793
|
| 797 |
+
],
|
| 798 |
+
"angle": 0,
|
| 799 |
+
"content": "3.1 Experiment Setup"
|
| 800 |
+
},
|
| 801 |
+
{
|
| 802 |
+
"type": "text",
|
| 803 |
+
"bbox": [
|
| 804 |
+
0.51,
|
| 805 |
+
0.797,
|
| 806 |
+
0.884,
|
| 807 |
+
0.83
|
| 808 |
+
],
|
| 809 |
+
"angle": 0,
|
| 810 |
+
"content": "We use two representative QA datasets and adopt the same training/dev/testing splits as in previous"
|
| 811 |
+
},
|
| 812 |
+
{
|
| 813 |
+
"type": "page_footnote",
|
| 814 |
+
"bbox": [
|
| 815 |
+
0.51,
|
| 816 |
+
0.836,
|
| 817 |
+
0.886,
|
| 818 |
+
0.911
|
| 819 |
+
],
|
| 820 |
+
"angle": 0,
|
| 821 |
+
"content": "2We have also tried a few more complex approaches for combining the extractive and generative models. For example, we first train an extractive model, and then append the top-k answer strings from the extractive model at the end of the input for training a generative model. None of them is as good as the simple ensemble approach."
|
| 822 |
+
},
|
| 823 |
+
{
|
| 824 |
+
"type": "page_footnote",
|
| 825 |
+
"bbox": [
|
| 826 |
+
0.138,
|
| 827 |
+
0.895,
|
| 828 |
+
0.416,
|
| 829 |
+
0.91
|
| 830 |
+
],
|
| 831 |
+
"angle": 0,
|
| 832 |
+
"content": "we omit the layer notation for simplification"
|
| 833 |
+
}
|
| 834 |
+
],
|
| 835 |
+
[
|
| 836 |
+
{
|
| 837 |
+
"type": "table",
|
| 838 |
+
"bbox": [
|
| 839 |
+
0.146,
|
| 840 |
+
0.073,
|
| 841 |
+
0.855,
|
| 842 |
+
0.304
|
| 843 |
+
],
|
| 844 |
+
"angle": 0,
|
| 845 |
+
"content": "<table><tr><td>Model</td><td>Reader Type</td><td>Reader Size (M)</td><td>NQ</td><td>TriviaQA</td></tr><tr><td>REALM(Guu et al., 2020)</td><td>Extractive</td><td>110</td><td>40.4</td><td>N/A</td></tr><tr><td>RAG(Lewis et al., 2020)</td><td>Generative</td><td>400</td><td>44.5</td><td>56.1</td></tr><tr><td>DPR(Karpukhin et al., 2020)</td><td>Extractive</td><td>110</td><td>41.5</td><td>57.9</td></tr><tr><td>T5-FIDbase(Izacard and Grave, 2021)</td><td>Generative</td><td>220</td><td>48.2</td><td>65.0</td></tr><tr><td>T5-FIDlarge(Izacard and Grave, 2021)</td><td>Generative</td><td>770</td><td>51.4</td><td>67.6</td></tr><tr><td>UnitedQA-Ebase(Ours)</td><td>Extractive</td><td>110</td><td>47.7</td><td>66.3</td></tr><tr><td>UnitedQA-ELarge(Ours)</td><td>Extractive</td><td>330</td><td>51.8</td><td>68.9</td></tr><tr><td>UnitedQA-Glarge(Ours)</td><td>Generative</td><td>770</td><td>52.3</td><td>68.6</td></tr><tr><td>UnitedQA-ELarge++ (Ours)</td><td>Ensemble</td><td>3x330</td><td>52.4</td><td>69.6</td></tr><tr><td>UnitedQA-GLarge++ (Ours)</td><td>Ensemble</td><td>3x770</td><td>53.3</td><td>69.2</td></tr><tr><td>UnitedQA (Ours)</td><td>Hybrid</td><td>2x770+330</td><td>54.7</td><td>70.5</td></tr></table>"
|
| 846 |
+
},
|
| 847 |
+
{
|
| 848 |
+
"type": "table_caption",
|
| 849 |
+
"bbox": [
|
| 850 |
+
0.115,
|
| 851 |
+
0.314,
|
| 852 |
+
0.884,
|
| 853 |
+
0.357
|
| 854 |
+
],
|
| 855 |
+
"angle": 0,
|
| 856 |
+
"content": "Table 2: Comparison to state-of-the-art models on the test sets of NaturalQuestions (NQ) and TriviaQA. Exact match score is used for evaluation. The overall best model is in \\(\\square\\), the best single model is in \\(\\square\\), and the best model with the smallest reader size is in \\(\\square\\)."
|
| 857 |
+
},
|
| 858 |
+
{
|
| 859 |
+
"type": "text",
|
| 860 |
+
"bbox": [
|
| 861 |
+
0.115,
|
| 862 |
+
0.382,
|
| 863 |
+
0.492,
|
| 864 |
+
0.478
|
| 865 |
+
],
|
| 866 |
+
"angle": 0,
|
| 867 |
+
"content": "work (Lee et al., 2019; Karpukhin et al., 2020). Both datasets (see Table 1 for statistics) have been heavily studied in recent work (Lee et al., 2019; Min et al., 2019; Karpukhin et al., 2020; Guu et al., 2020). We follow the standard evaluation protocol and use exact match (EM) as the evaluation metric."
|
| 868 |
+
},
|
| 869 |
+
{
|
| 870 |
+
"type": "text",
|
| 871 |
+
"bbox": [
|
| 872 |
+
0.115,
|
| 873 |
+
0.483,
|
| 874 |
+
0.491,
|
| 875 |
+
0.708
|
| 876 |
+
],
|
| 877 |
+
"angle": 0,
|
| 878 |
+
"content": "NaturalQuestions (Kwiatkowski et al., 2019) is composed of questions by real users to Google Search, each with answers identified by human annotators in Wikipedia. The open-domain version of NaturalQuestions (Lee et al., 2019) only consider questions with short answers, i.e. answers with less than 5 tokens. In the NaturalQuestions, the questions are considered to be more information seeking given that the question askers didn't know the answer beforehand. In addition, we use another evaluation set, i.e. the dev set introduced recently by the EfficientQA competition (Min et al., 2021), which is constructed in the same way as the original NaturalQuestions dataset."
|
| 879 |
+
},
|
| 880 |
+
{
|
| 881 |
+
"type": "text",
|
| 882 |
+
"bbox": [
|
| 883 |
+
0.115,
|
| 884 |
+
0.713,
|
| 885 |
+
0.492,
|
| 886 |
+
0.809
|
| 887 |
+
],
|
| 888 |
+
"angle": 0,
|
| 889 |
+
"content": "TriviaQA (Joshi et al., 2017) contains trivia question-answer pairs that were scraped from the web. Different from NaturalQuestions, the questions here are written with known answers in mind. Specifically, the unfiltered set has been used for developing open-domain QA models."
|
| 890 |
+
},
|
| 891 |
+
{
|
| 892 |
+
"type": "text",
|
| 893 |
+
"bbox": [
|
| 894 |
+
0.115,
|
| 895 |
+
0.814,
|
| 896 |
+
0.492,
|
| 897 |
+
0.91
|
| 898 |
+
],
|
| 899 |
+
"angle": 0,
|
| 900 |
+
"content": "Implementation details For a fair comparison, we use the same retrieval module as Karpukhin et al. (2020) for NaturalQuestions and TriviaQA to mitigate the impact of retrieval difference. Specifically, we use DPR (single) for NaturalQuestions and BM25+DPR (multi) for TriviaQA because of"
|
| 901 |
+
},
|
| 902 |
+
{
|
| 903 |
+
"type": "text",
|
| 904 |
+
"bbox": [
|
| 905 |
+
0.51,
|
| 906 |
+
0.382,
|
| 907 |
+
0.887,
|
| 908 |
+
0.673
|
| 909 |
+
],
|
| 910 |
+
"angle": 0,
|
| 911 |
+
"content": "their best end-to-end performance (Karpukhin et al. 2020). For all the experiments, we use 8 and 16 V100-32GB for base and large model training respectively. We train our models with Adam optimizer of a linear scheduler with a warmup raito of 0.1. The extractive models are trained for up to 8 epochs with a learning rate of \\(2\\mathrm{e} - 5\\) and a batch passage size per question of 16. The generative models are trained for up to 10 epochs with a learning rate of \\(1\\mathrm{e} - 4\\), a batch size of 64, and 100 retrieved passages per question for model training. We select \\(\\gamma\\) in \\(\\{4,8\\}\\). After the best configuration is selected based on the dev set, we run our best models 3 times independently with different random seeds and report the median performance on the test set. We also report ensemble results which are based on the linear interpolation over answer predictions from the 3 models."
|
| 912 |
+
},
|
| 913 |
+
{
|
| 914 |
+
"type": "title",
|
| 915 |
+
"bbox": [
|
| 916 |
+
0.511,
|
| 917 |
+
0.691,
|
| 918 |
+
0.659,
|
| 919 |
+
0.705
|
| 920 |
+
],
|
| 921 |
+
"angle": 0,
|
| 922 |
+
"content": "3.2 Main results"
|
| 923 |
+
},
|
| 924 |
+
{
|
| 925 |
+
"type": "text",
|
| 926 |
+
"bbox": [
|
| 927 |
+
0.51,
|
| 928 |
+
0.717,
|
| 929 |
+
0.886,
|
| 930 |
+
0.91
|
| 931 |
+
],
|
| 932 |
+
"angle": 0,
|
| 933 |
+
"content": "Single Model Results: We first compare our models to two recent models, REALM (Guu et al., 2020) and RAG (Lewis et al., 2020), which are first pre-trained with different retrieval augmented objectives and then fine-tuned for open-domain QA. In addition, we include as baselines DPR (Karpukhin et al., 2020) and T5-FID (Izacard and Grave, 2021), both of which are based on the same retriever as ours. As shown in Table 2, both our extractive and generative models achieve new state-of-the-art results for both studied datasets. Compared with the recent state-of-the-art extractive"
|
| 934 |
+
}
|
| 935 |
+
],
|
| 936 |
+
[
|
| 937 |
+
{
|
| 938 |
+
"type": "text",
|
| 939 |
+
"bbox": [
|
| 940 |
+
0.115,
|
| 941 |
+
0.075,
|
| 942 |
+
0.493,
|
| 943 |
+
0.22
|
| 944 |
+
],
|
| 945 |
+
"angle": 0,
|
| 946 |
+
"content": "model (DPR), our base model leads to pronounced \\(15\\%\\) relative improvements for both NaturalQuestions \\((+6.2\\) absolute improvement) and TriviaQA \\((+8.4\\) absolute improvement). More importantly, UnitedQA- \\(\\mathbf{E}_{\\mathrm{base}}\\) achieves comparable or even better performance with regard to generative models of larger size, i.e. RAG and T5-FIDbase. It highlights the importance of proper training strategies for open-domain QA models."
|
| 947 |
+
},
|
| 948 |
+
{
|
| 949 |
+
"type": "text",
|
| 950 |
+
"bbox": [
|
| 951 |
+
0.115,
|
| 952 |
+
0.227,
|
| 953 |
+
0.493,
|
| 954 |
+
0.597
|
| 955 |
+
],
|
| 956 |
+
"angle": 0,
|
| 957 |
+
"content": "Hybrid Model Results: In order to evaluate the advantage of the hybrid of the extractive and generative models (UnitedQA), we include two homogeneous ensemble baselines, one consisting of only extractive readers (UnitedQA-E++) and the other ensemble of exclusively generative models (UnitedQA-G++.). For homogeneous ensemble cases, the three-way majority prediction is used. For the hybrid of extractive and generative readers, we select a three-model combination from the set of three generative and three extractive models based on the dev set. We observed that combining predictions from two generative models and one extractive model results in the best hybrid model for both datasets. As expected, all ensemble models show an improvement over their single model counterparts. However, the two homogeneous ensemble baselines, UnitedQA-E++ and UnitedQA-G++, only provide marginal gains over the corresponding best single models. The significant improvement brought by our proposed hybrid approach indicates the benefit of combining extractive and generative readers for open-domain QA."
|
| 958 |
+
},
|
| 959 |
+
{
|
| 960 |
+
"type": "text",
|
| 961 |
+
"bbox": [
|
| 962 |
+
0.115,
|
| 963 |
+
0.604,
|
| 964 |
+
0.493,
|
| 965 |
+
0.911
|
| 966 |
+
],
|
| 967 |
+
"angle": 0,
|
| 968 |
+
"content": "Discussion: Although the proposed hybrid approach has been shown to be highly effective for open-domain QA, we point out that the improved performance comes with increased computational cost. The best combination requires approximately three times the computational cost of a single generative model. Therefore, it would be interesting to explore more efficient hybrid methods, such as effective parameter sharing strategies or unified formulations. Another interesting future direction is to explore customized compression approaches for reducing the model size of retriever and reader separately or jointly through pruning (Han et al., 2016), quantization (Hubara et al., 2018), and knowledge distillation (Hinton et al., 2015). Specifically, given that the hybrid model is more effective, it is likely that a student model can learn more effectively from a hybrid teacher model via knowledge distillation for open-domain QA."
|
| 969 |
+
},
|
| 970 |
+
{
|
| 971 |
+
"type": "table",
|
| 972 |
+
"bbox": [
|
| 973 |
+
0.518,
|
| 974 |
+
0.073,
|
| 975 |
+
0.881,
|
| 976 |
+
0.257
|
| 977 |
+
],
|
| 978 |
+
"angle": 0,
|
| 979 |
+
"content": "<table><tr><td>Model</td><td>NQ</td><td>TriviaQA</td></tr><tr><td>(Cheng et al., 2020) +PDR</td><td>43.3</td><td>60.1</td></tr><tr><td>BERTbase</td><td>44.2</td><td>62.2</td></tr><tr><td>-Multi-object</td><td>43.5</td><td>61.3</td></tr><tr><td>-PDR</td><td>41.8</td><td>60.2</td></tr><tr><td>-Multi-object & PDR</td><td>40.6</td><td>58.5</td></tr><tr><td>UnitedQA-Ebase</td><td>46.0</td><td>65.4</td></tr><tr><td>-Multi-object</td><td>45.2</td><td>64.3</td></tr><tr><td>-PDR</td><td>43.1</td><td>63.8</td></tr><tr><td>-Multi-object & PDR</td><td>42.5</td><td>61.2</td></tr></table>"
|
| 980 |
+
},
|
| 981 |
+
{
|
| 982 |
+
"type": "table_caption",
|
| 983 |
+
"bbox": [
|
| 984 |
+
0.51,
|
| 985 |
+
0.267,
|
| 986 |
+
0.887,
|
| 987 |
+
0.34
|
| 988 |
+
],
|
| 989 |
+
"angle": 0,
|
| 990 |
+
"content": "Table 3: Ablation experiments of the extractive model on the dev sets of NaturalQuestions (NQ) and TriviaQA. Exact match score is reported. The top and bottom models are built on BERTbase and ELECTRAbase, respectively."
|
| 991 |
+
},
|
| 992 |
+
{
|
| 993 |
+
"type": "title",
|
| 994 |
+
"bbox": [
|
| 995 |
+
0.511,
|
| 996 |
+
0.368,
|
| 997 |
+
0.621,
|
| 998 |
+
0.385
|
| 999 |
+
],
|
| 1000 |
+
"angle": 0,
|
| 1001 |
+
"content": "4 Analysis"
|
| 1002 |
+
},
|
| 1003 |
+
{
|
| 1004 |
+
"type": "text",
|
| 1005 |
+
"bbox": [
|
| 1006 |
+
0.51,
|
| 1007 |
+
0.4,
|
| 1008 |
+
0.886,
|
| 1009 |
+
0.464
|
| 1010 |
+
],
|
| 1011 |
+
"angle": 0,
|
| 1012 |
+
"content": "In this section, we first carry out ablation study on the extractive and generative model improvements. Moreover, we aim to take a deeper look and understand the difference between the two models."
|
| 1013 |
+
},
|
| 1014 |
+
{
|
| 1015 |
+
"type": "title",
|
| 1016 |
+
"bbox": [
|
| 1017 |
+
0.511,
|
| 1018 |
+
0.483,
|
| 1019 |
+
0.677,
|
| 1020 |
+
0.498
|
| 1021 |
+
],
|
| 1022 |
+
"angle": 0,
|
| 1023 |
+
"content": "4.1 Ablation Study"
|
| 1024 |
+
},
|
| 1025 |
+
{
|
| 1026 |
+
"type": "text",
|
| 1027 |
+
"bbox": [
|
| 1028 |
+
0.51,
|
| 1029 |
+
0.508,
|
| 1030 |
+
0.886,
|
| 1031 |
+
0.911
|
| 1032 |
+
],
|
| 1033 |
+
"angle": 0,
|
| 1034 |
+
"content": "In Table 3, we present ablation experiments on the effectiveness of different textual representations and methods for improving the extractive model UnitedQA-Ebase. Here, we focus on base models, i.e. BERTbase and ELECTRAbase. Note that the row UnitedQA-Ebase is the corresponding base model reported in Table 2. Compared with the MML-based multi-objective (Cheng et al., 2020), we find that a new multi-objective with HardEM at the multi-passage level and MML at the passage level is more effective for open-domain QA. In addition to the multi-objective training, there is a noticeable improvement brought by the regularization method (PDR) which indicates the importance of proper regularization for learning with noisy supervision. Last but not least, the large improvement of ELECTRA over BERT indicates the importance of deriving better text representations for weakly supervised NLP problems. For the UnitedQA-G, we present the ablation study on analyzing the effectiveness of decoder attention bias component and adversarial training mechanism in Table 4. Both techniques contribute to decent improvements over T5-FID with more pronounced gains brought by adversarial training."
|
| 1035 |
+
}
|
| 1036 |
+
],
|
| 1037 |
+
[
|
| 1038 |
+
{
|
| 1039 |
+
"type": "table",
|
| 1040 |
+
"bbox": [
|
| 1041 |
+
0.134,
|
| 1042 |
+
0.073,
|
| 1043 |
+
0.472,
|
| 1044 |
+
0.169
|
| 1045 |
+
],
|
| 1046 |
+
"angle": 0,
|
| 1047 |
+
"content": "<table><tr><td>Model</td><td>NQ</td><td>TriviaQA</td></tr><tr><td>T5-FIDlarge</td><td>51.4</td><td>67.6</td></tr><tr><td>UnitiedQA-Glarge</td><td>52.3</td><td>68.6</td></tr><tr><td>-Adv Training</td><td>52.0</td><td>68.2</td></tr><tr><td>-Attention Bias</td><td>51.8</td><td>68.1</td></tr></table>"
|
| 1048 |
+
},
|
| 1049 |
+
{
|
| 1050 |
+
"type": "table_caption",
|
| 1051 |
+
"bbox": [
|
| 1052 |
+
0.116,
|
| 1053 |
+
0.18,
|
| 1054 |
+
0.49,
|
| 1055 |
+
0.224
|
| 1056 |
+
],
|
| 1057 |
+
"angle": 0,
|
| 1058 |
+
"content": "Table 4: Ablation experiments of the generative model on the test sets of NaturalQuestions (NQ) and TriviaQA. Exact match score is reported."
|
| 1059 |
+
},
|
| 1060 |
+
{
|
| 1061 |
+
"type": "table",
|
| 1062 |
+
"bbox": [
|
| 1063 |
+
0.118,
|
| 1064 |
+
0.238,
|
| 1065 |
+
0.493,
|
| 1066 |
+
0.373
|
| 1067 |
+
],
|
| 1068 |
+
"angle": 0,
|
| 1069 |
+
"content": "<table><tr><td></td><td></td><td>Top-20</td><td>Top-100</td><td>Δ</td></tr><tr><td rowspan=\"3\">NQ</td><td>Retrieval</td><td>78.4</td><td>85.4</td><td>+9%</td></tr><tr><td>United-E</td><td>49.8</td><td>51.8</td><td>+4%</td></tr><tr><td>United-G</td><td>49.3</td><td>52.3</td><td>+6%</td></tr><tr><td rowspan=\"3\">TriviaQA</td><td>Retrieval</td><td>79.9</td><td>84.4</td><td>+6%</td></tr><tr><td>United-E</td><td>67.1</td><td>68.9</td><td>+3%</td></tr><tr><td>United-G</td><td>65.4</td><td>68.6</td><td>+5%</td></tr></table>"
|
| 1070 |
+
},
|
| 1071 |
+
{
|
| 1072 |
+
"type": "table_caption",
|
| 1073 |
+
"bbox": [
|
| 1074 |
+
0.115,
|
| 1075 |
+
0.384,
|
| 1076 |
+
0.49,
|
| 1077 |
+
0.442
|
| 1078 |
+
],
|
| 1079 |
+
"angle": 0,
|
| 1080 |
+
"content": "Table 5: Retrieval top- \\(k\\) accuracy and end-to-end QA extract match scores on the test sets of NaturalQuestions (NQ) and TriviaQA. United-E and United-G stand for our extractive and generative models respectively."
|
| 1081 |
+
},
|
| 1082 |
+
{
|
| 1083 |
+
"type": "title",
|
| 1084 |
+
"bbox": [
|
| 1085 |
+
0.116,
|
| 1086 |
+
0.468,
|
| 1087 |
+
0.395,
|
| 1088 |
+
0.484
|
| 1089 |
+
],
|
| 1090 |
+
"angle": 0,
|
| 1091 |
+
"content": "4.2 Impact of Retrieval Accuracy"
|
| 1092 |
+
},
|
| 1093 |
+
{
|
| 1094 |
+
"type": "text",
|
| 1095 |
+
"bbox": [
|
| 1096 |
+
0.115,
|
| 1097 |
+
0.49,
|
| 1098 |
+
0.49,
|
| 1099 |
+
0.827
|
| 1100 |
+
],
|
| 1101 |
+
"angle": 0,
|
| 1102 |
+
"content": "Here, we vary the number of retrieved passages during inference and report the evaluation results in terms of end-to-end QA exact match score of UnitedQA-E and UnitedQA-G along with the corresponding top-\\(k\\) retrieval accuracy. The results are summarized in Table 5. As expected, when the number of retrieved passages increases, both top-\\(k\\) retrieval accuracy and the end-to-end QA performance improve. However, there is a noticeable gap between the improvement of retrieving more passages (i.e., recall) and that of the corresponding end-to-end QA performance, especially for the extractive reader. This is likely caused by additional noise introduced with improved retrieval recall. Specifically, only half of the retriever improvement can be effectively utilized by the extractive model while the generative model can benefit more from retrieving more passages. This suggests that by concatenating all passages in vector space, the generative model are more effective in de-noising in comparison to the extractive model."
|
| 1103 |
+
},
|
| 1104 |
+
{
|
| 1105 |
+
"type": "title",
|
| 1106 |
+
"bbox": [
|
| 1107 |
+
0.116,
|
| 1108 |
+
0.84,
|
| 1109 |
+
0.345,
|
| 1110 |
+
0.854
|
| 1111 |
+
],
|
| 1112 |
+
"angle": 0,
|
| 1113 |
+
"content": "4.3 Breakdown Evaluation"
|
| 1114 |
+
},
|
| 1115 |
+
{
|
| 1116 |
+
"type": "text",
|
| 1117 |
+
"bbox": [
|
| 1118 |
+
0.115,
|
| 1119 |
+
0.862,
|
| 1120 |
+
0.49,
|
| 1121 |
+
0.91
|
| 1122 |
+
],
|
| 1123 |
+
"angle": 0,
|
| 1124 |
+
"content": "Following Lewis et al. (2021), we carry out a breakdown evaluation of model performance over the NaturalQuestions and TriviaQA test sets. Given"
|
| 1125 |
+
},
|
| 1126 |
+
{
|
| 1127 |
+
"type": "text",
|
| 1128 |
+
"bbox": [
|
| 1129 |
+
0.51,
|
| 1130 |
+
0.076,
|
| 1131 |
+
0.887,
|
| 1132 |
+
0.446
|
| 1133 |
+
],
|
| 1134 |
+
"angle": 0,
|
| 1135 |
+
"content": "their superior performance, we again only consider our improved extractive and generative models, i.e. UnitedQA-Elarge and UnitedQA-G respectively. The evaluation is summarized in Table 6. In comparison to their corresponding overall performance, both the extractive and generative models achieve much better performance on the \"Overlap\" categories (i.e. \"Question Overlap\" and \"Answer Overlap\") for both NaturalQuestions and TrivaQA, which indicates that both models perform well for question and answer memorization. Different from question and answer memorization, there is a pronounced performance drop for both models on the \"Answer Overlap Only\" category where certain amount of relevance inference capability is required to succeed. Lastly, we see that both extractive and generative models suffer some significant performance degradation for the \"No Overlap\" column which highlights model's generalization evaluation. Nevertheless, the extractive model demonstrate a better QA generalization by achieving a better overall performance on the \"No Overlap\" category for both datasets."
|
| 1136 |
+
},
|
| 1137 |
+
{
|
| 1138 |
+
"type": "title",
|
| 1139 |
+
"bbox": [
|
| 1140 |
+
0.511,
|
| 1141 |
+
0.464,
|
| 1142 |
+
0.674,
|
| 1143 |
+
0.48
|
| 1144 |
+
],
|
| 1145 |
+
"angle": 0,
|
| 1146 |
+
"content": "4.4 Error Analysis"
|
| 1147 |
+
},
|
| 1148 |
+
{
|
| 1149 |
+
"type": "text",
|
| 1150 |
+
"bbox": [
|
| 1151 |
+
0.51,
|
| 1152 |
+
0.489,
|
| 1153 |
+
0.886,
|
| 1154 |
+
0.715
|
| 1155 |
+
],
|
| 1156 |
+
"angle": 0,
|
| 1157 |
+
"content": "Here, we conduct analyses into prediction errors made by the extractive and generative models based on automatic evaluation. For this study, we use the EfficientQA dev set (Min et al., 2021) which is constructed in the same way as the original NaturalQuestions dataset. Specifically, we group prediction errors into three categorizes: 1) common prediction errors made by both the extractive and generative models, 2) prediction errors made by the extractive model, 3) prediction errors produced by the generative model. In the following, we first carry out a manual inspection into the common errors. Then, we compare the prediction errors made by extractive and generative models, respectively."
|
| 1158 |
+
},
|
| 1159 |
+
{
|
| 1160 |
+
"type": "text",
|
| 1161 |
+
"bbox": [
|
| 1162 |
+
0.51,
|
| 1163 |
+
0.717,
|
| 1164 |
+
0.886,
|
| 1165 |
+
0.91
|
| 1166 |
+
],
|
| 1167 |
+
"angle": 0,
|
| 1168 |
+
"content": "First of all, there is an error rate of \\(29\\%\\) of those consensus predictions made by both extractive and generative models according to the automatic evaluation. Based on 30 randomly selected examples, we find that around \\(30\\%\\) of those predictions are actually valid answers as shown in the top part of Table 7. In addition to predictions that are answers at different granularity or semantically equivalent ones, some of those prediction errors are likely caused by the ambiguity in questions. As the given example in Table 7, based on the specificity, the model prediction is also a valid answer. This high-"
|
| 1169 |
+
}
|
| 1170 |
+
],
|
| 1171 |
+
[
|
| 1172 |
+
{
|
| 1173 |
+
"type": "table",
|
| 1174 |
+
"bbox": [
|
| 1175 |
+
0.145,
|
| 1176 |
+
0.073,
|
| 1177 |
+
0.861,
|
| 1178 |
+
0.21
|
| 1179 |
+
],
|
| 1180 |
+
"angle": 0,
|
| 1181 |
+
"content": "<table><tr><td>Dataset</td><td>Model</td><td>Total</td><td>Question Overlap</td><td>No Question Overlap</td><td>Answer Overlap</td><td>Answer Overlap Only</td><td>No Overlap</td></tr><tr><td rowspan=\"2\">NQ</td><td>UnitedQA-G</td><td>52.3</td><td>72.2</td><td>40.5</td><td>62.7</td><td>45.4</td><td>34.0</td></tr><tr><td>UnitedQA-E</td><td>51.8</td><td>69.4</td><td>41.5</td><td>60.1</td><td>45.1</td><td>37.6</td></tr><tr><td rowspan=\"2\">TriviaQA</td><td>UnitedQA-G</td><td>68.6</td><td>88.4</td><td>62.5</td><td>78.1</td><td>69.6</td><td>44.5</td></tr><tr><td>UnitedQA-E</td><td>68.9</td><td>89.3</td><td>62.7</td><td>78.6</td><td>70.6</td><td>44.3</td></tr></table>"
|
| 1182 |
+
},
|
| 1183 |
+
{
|
| 1184 |
+
"type": "table_caption",
|
| 1185 |
+
"bbox": [
|
| 1186 |
+
0.115,
|
| 1187 |
+
0.219,
|
| 1188 |
+
0.885,
|
| 1189 |
+
0.263
|
| 1190 |
+
],
|
| 1191 |
+
"angle": 0,
|
| 1192 |
+
"content": "Table 6: Breakdown evaluation on NaturalQuestions (NQ) and TriviaQA based on test splits defined in (Lewis et al., 2021). Exact match scores are reported. UnitedQA-E and UnitedQA-G denote our extractive and generative models respectively."
|
| 1193 |
+
},
|
| 1194 |
+
{
|
| 1195 |
+
"type": "table",
|
| 1196 |
+
"bbox": [
|
| 1197 |
+
0.139,
|
| 1198 |
+
0.275,
|
| 1199 |
+
0.864,
|
| 1200 |
+
0.533
|
| 1201 |
+
],
|
| 1202 |
+
"angle": 0,
|
| 1203 |
+
"content": "<table><tr><td colspan=\"2\">Valid Answers</td></tr><tr><td>Different granularity</td><td>Q: When was harry potter and the deathly hallows part 2 movie released\nPrediction: 2011 / Gold: 15 July 2011</td></tr><tr><td>Semantically equivalent</td><td>Q: minimum age limit for chief justic of india\nPrediction: 65 / Gold: 65 years</td></tr><tr><td>Ambiguity question</td><td>Q: who won her first tennis grand slam in 2018\nPrediction: Carolin Wozniacki / Gold: Simona Halep</td></tr><tr><td colspan=\"2\">Wrong Answers</td></tr><tr><td>Part as whole error</td><td>Q: the official U.S. poverty line is based on the cost of what\nPrediction: food / Gold: ICP purchasing power</td></tr><tr><td>Entity confusion</td><td>Q: actor who played tommy in terms of endearment\nPrediction: Jeff Daniels / Gold: Troy Bishop</td></tr><tr><td>Event confusion</td><td>Q: when did the saturdaywanan roughriders last won the grey cup\nPrediction: 2007 / Gold: 2013</td></tr></table>"
|
| 1204 |
+
},
|
| 1205 |
+
{
|
| 1206 |
+
"type": "table_caption",
|
| 1207 |
+
"bbox": [
|
| 1208 |
+
0.236,
|
| 1209 |
+
0.541,
|
| 1210 |
+
0.761,
|
| 1211 |
+
0.556
|
| 1212 |
+
],
|
| 1213 |
+
"angle": 0,
|
| 1214 |
+
"content": "Table 7: Examples of prediction errors as judged by the automatic evaluation."
|
| 1215 |
+
},
|
| 1216 |
+
{
|
| 1217 |
+
"type": "text",
|
| 1218 |
+
"bbox": [
|
| 1219 |
+
0.115,
|
| 1220 |
+
0.581,
|
| 1221 |
+
0.491,
|
| 1222 |
+
0.694
|
| 1223 |
+
],
|
| 1224 |
+
"angle": 0,
|
| 1225 |
+
"content": "lights the limitation of the current evaluation metric, which does not accurately estimate the existing open-domain QA system capabilities. As shown in the bottom part of Table 7, most of representative errors are due to the confusion of related concepts, entities or events that are mentioned frequently together with the corresponding gold answers."
|
| 1226 |
+
},
|
| 1227 |
+
{
|
| 1228 |
+
"type": "text",
|
| 1229 |
+
"bbox": [
|
| 1230 |
+
0.115,
|
| 1231 |
+
0.701,
|
| 1232 |
+
0.494,
|
| 1233 |
+
0.911
|
| 1234 |
+
],
|
| 1235 |
+
"angle": 0,
|
| 1236 |
+
"content": "Next, all questions from the dev set are categorized based on WH question word, i.e. what, which, when, who, how, where. We then report the relative performance change of each WH category for both extractive and generative models over their corresponding overall prediction accuracy in Figure 2. First, it is easy to see that both extractive and generative models achieve the best performance for entity related who questions, which is likely to be the result of high ratio of samples of this type seen during training. In contrast, the answers to what questions can play a much richer syntactic role in context, making it more difficult for both extractive"
|
| 1237 |
+
},
|
| 1238 |
+
{
|
| 1239 |
+
"type": "text",
|
| 1240 |
+
"bbox": [
|
| 1241 |
+
0.51,
|
| 1242 |
+
0.581,
|
| 1243 |
+
0.886,
|
| 1244 |
+
0.678
|
| 1245 |
+
],
|
| 1246 |
+
"angle": 0,
|
| 1247 |
+
"content": "and generative models to perform well. Interestingly, the generative model exhibits the strength for temporal reasoning, whereas the extractive model does not. This difference suggests that it is worth exploring better temporal modeling strategies to improve the extractive model in the future."
|
| 1248 |
+
},
|
| 1249 |
+
{
|
| 1250 |
+
"type": "title",
|
| 1251 |
+
"bbox": [
|
| 1252 |
+
0.511,
|
| 1253 |
+
0.69,
|
| 1254 |
+
0.667,
|
| 1255 |
+
0.705
|
| 1256 |
+
],
|
| 1257 |
+
"angle": 0,
|
| 1258 |
+
"content": "5 Related Work"
|
| 1259 |
+
},
|
| 1260 |
+
{
|
| 1261 |
+
"type": "text",
|
| 1262 |
+
"bbox": [
|
| 1263 |
+
0.51,
|
| 1264 |
+
0.717,
|
| 1265 |
+
0.886,
|
| 1266 |
+
0.911
|
| 1267 |
+
],
|
| 1268 |
+
"angle": 0,
|
| 1269 |
+
"content": "Open-domain QA Open-domain QA requires a system to answer questions based on evidence retrieved from a large corpus such as Wikipedia (Voorhees, 2000; Chen et al., 2017). Recent progress has been made towards improving evidence retrieval through both sparse vector models like TF-IDF or BM25 (Chen et al., 2017; Min et al., 2019), and dense vector models based on BERT (Lee et al., 2019; Karpukhin et al., 2020; Guu et al., 2020; Qu et al., 2021). Generally, the dense representations complement the sparse vector methods for passage retrieval as they can potentially give"
|
| 1270 |
+
}
|
| 1271 |
+
],
|
| 1272 |
+
[
|
| 1273 |
+
{
|
| 1274 |
+
"type": "image",
|
| 1275 |
+
"bbox": [
|
| 1276 |
+
0.121,
|
| 1277 |
+
0.074,
|
| 1278 |
+
0.482,
|
| 1279 |
+
0.244
|
| 1280 |
+
],
|
| 1281 |
+
"angle": 0,
|
| 1282 |
+
"content": null
|
| 1283 |
+
},
|
| 1284 |
+
{
|
| 1285 |
+
"type": "image_caption",
|
| 1286 |
+
"bbox": [
|
| 1287 |
+
0.115,
|
| 1288 |
+
0.255,
|
| 1289 |
+
0.49,
|
| 1290 |
+
0.3
|
| 1291 |
+
],
|
| 1292 |
+
"angle": 0,
|
| 1293 |
+
"content": "Figure 2: Relative accuracy of different \\( WH \\) questions. The relative accuracy is the relative change of a \\( WH \\) category accuracy to the overall model accuracy."
|
| 1294 |
+
},
|
| 1295 |
+
{
|
| 1296 |
+
"type": "text",
|
| 1297 |
+
"bbox": [
|
| 1298 |
+
0.115,
|
| 1299 |
+
0.329,
|
| 1300 |
+
0.492,
|
| 1301 |
+
0.522
|
| 1302 |
+
],
|
| 1303 |
+
"angle": 0,
|
| 1304 |
+
"content": "high similarity to semantically related text pairs, even without exact lexical overlap. Unlike most work focusing on a pipeline model, Lee et al. (2019) propose a pre-training objective for jointly training both the retrieval encoder and reader. It is further extended by Guu et al. (2020) with a dynamic update of the passage index during the training. Instead, in this work, we focus on a hybrid reader approach for open-domain QA. By simply combining answer predictions from extractive and generative models, our UnitedQA achieves significant improvements over state-of-the-art models."
|
| 1305 |
+
},
|
| 1306 |
+
{
|
| 1307 |
+
"type": "text",
|
| 1308 |
+
"bbox": [
|
| 1309 |
+
0.115,
|
| 1310 |
+
0.524,
|
| 1311 |
+
0.493,
|
| 1312 |
+
0.911
|
| 1313 |
+
],
|
| 1314 |
+
"angle": 0,
|
| 1315 |
+
"content": "Reading Comprehension with Noisy Labels There has been a line of work on improving distantly-supervised reading comprehension models by developing learning methods and model architectures that can better use noisy labels. Most of them focus on the document-level QA, where all paragraphs share the same document context. Clark and Gardner (2018) propose a paragraph-pair ranking objective for learning with multiple paragraphs so that the model can distinguish relevant paragraphs from irrelevant ones. In (Lin et al., 2018), a coarse-to-fine model is proposed to handle label noise by aggregating information from relevant paragraphs and then extracting answers from selected ones. Min et al. (2019) propose a hard EM learning scheme where only passage-level loss is considered for document-level QA. More recently, different probabilistic assumptions with corresponding training and inference methods are examined in (Cheng et al., 2020) again for document-level QA with distant supervision. In our work, we further extend the multi-objective formulation proposed in (Cheng et al., 2020) with the hard EM learning (Min et al., 2019) for enhancing extrac"
|
| 1316 |
+
},
|
| 1317 |
+
{
|
| 1318 |
+
"type": "text",
|
| 1319 |
+
"bbox": [
|
| 1320 |
+
0.51,
|
| 1321 |
+
0.076,
|
| 1322 |
+
0.885,
|
| 1323 |
+
0.122
|
| 1324 |
+
],
|
| 1325 |
+
"angle": 0,
|
| 1326 |
+
"content": "tive open-domain QA, where the input passages are given by a retrieval model and are typically from different documents."
|
| 1327 |
+
},
|
| 1328 |
+
{
|
| 1329 |
+
"type": "title",
|
| 1330 |
+
"bbox": [
|
| 1331 |
+
0.511,
|
| 1332 |
+
0.136,
|
| 1333 |
+
0.644,
|
| 1334 |
+
0.151
|
| 1335 |
+
],
|
| 1336 |
+
"angle": 0,
|
| 1337 |
+
"content": "6 Conclusion"
|
| 1338 |
+
},
|
| 1339 |
+
{
|
| 1340 |
+
"type": "text",
|
| 1341 |
+
"bbox": [
|
| 1342 |
+
0.509,
|
| 1343 |
+
0.163,
|
| 1344 |
+
0.888,
|
| 1345 |
+
0.386
|
| 1346 |
+
],
|
| 1347 |
+
"angle": 0,
|
| 1348 |
+
"content": "In this study, we propose a hybrid model for open-domain QA, called UnitedQA, which combines the strengths of extractive and generative readers. We demonstrate the effectiveness of UnitedQA on two popular open-domain QA benchmarks, NaturalQuestions and TriviaQA. Our results show that the proposed UnitedQA model significantly outperforms single extractive and generative models as well as their corresponding homogeneous ensembles, and sets new state-of-the-art on both benchmarks. We also perform a comprehensive empirical study to investigate the relative contributions of different components of our model and the techniques we use to improve the readers."
|
| 1349 |
+
},
|
| 1350 |
+
{
|
| 1351 |
+
"type": "text",
|
| 1352 |
+
"bbox": [
|
| 1353 |
+
0.509,
|
| 1354 |
+
0.388,
|
| 1355 |
+
0.887,
|
| 1356 |
+
0.468
|
| 1357 |
+
],
|
| 1358 |
+
"angle": 0,
|
| 1359 |
+
"content": "For future work, it would be interesting to explore model compression approaches for reducing the model size of retriever and reader separately or jointly through pruning, quantization, and knowledge distillation."
|
| 1360 |
+
},
|
| 1361 |
+
{
|
| 1362 |
+
"type": "title",
|
| 1363 |
+
"bbox": [
|
| 1364 |
+
0.511,
|
| 1365 |
+
0.481,
|
| 1366 |
+
0.673,
|
| 1367 |
+
0.497
|
| 1368 |
+
],
|
| 1369 |
+
"angle": 0,
|
| 1370 |
+
"content": "Acknowledgments"
|
| 1371 |
+
},
|
| 1372 |
+
{
|
| 1373 |
+
"type": "text",
|
| 1374 |
+
"bbox": [
|
| 1375 |
+
0.51,
|
| 1376 |
+
0.507,
|
| 1377 |
+
0.886,
|
| 1378 |
+
0.587
|
| 1379 |
+
],
|
| 1380 |
+
"angle": 0,
|
| 1381 |
+
"content": "We would like to thank the anonymous reviewers for valuable suggestions, Yuning Mao for valuable discussions and comments, and Microsoft Research Technology Engineering team for computing support."
|
| 1382 |
+
},
|
| 1383 |
+
{
|
| 1384 |
+
"type": "title",
|
| 1385 |
+
"bbox": [
|
| 1386 |
+
0.512,
|
| 1387 |
+
0.614,
|
| 1388 |
+
0.611,
|
| 1389 |
+
0.63
|
| 1390 |
+
],
|
| 1391 |
+
"angle": 0,
|
| 1392 |
+
"content": "References"
|
| 1393 |
+
},
|
| 1394 |
+
{
|
| 1395 |
+
"type": "ref_text",
|
| 1396 |
+
"bbox": [
|
| 1397 |
+
0.512,
|
| 1398 |
+
0.637,
|
| 1399 |
+
0.887,
|
| 1400 |
+
0.718
|
| 1401 |
+
],
|
| 1402 |
+
"angle": 0,
|
| 1403 |
+
"content": "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870-1879. Association for Computational Linguistics."
|
| 1404 |
+
},
|
| 1405 |
+
{
|
| 1406 |
+
"type": "ref_text",
|
| 1407 |
+
"bbox": [
|
| 1408 |
+
0.512,
|
| 1409 |
+
0.727,
|
| 1410 |
+
0.887,
|
| 1411 |
+
0.794
|
| 1412 |
+
],
|
| 1413 |
+
"angle": 0,
|
| 1414 |
+
"content": "Danqi Chen and Wen-tau Yih. 2020. Open-domain question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts, pages 34-37, Online. Association for Computational Linguistics."
|
| 1415 |
+
},
|
| 1416 |
+
{
|
| 1417 |
+
"type": "ref_text",
|
| 1418 |
+
"bbox": [
|
| 1419 |
+
0.512,
|
| 1420 |
+
0.804,
|
| 1421 |
+
0.887,
|
| 1422 |
+
0.911
|
| 1423 |
+
],
|
| 1424 |
+
"angle": 0,
|
| 1425 |
+
"content": "Hao Cheng, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2020. Probabilistic assumptions matter: Improved models for distantly-supervised document-level question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5657-5667, Online. Association for Computational Linguistics."
|
| 1426 |
+
},
|
| 1427 |
+
{
|
| 1428 |
+
"type": "list",
|
| 1429 |
+
"bbox": [
|
| 1430 |
+
0.512,
|
| 1431 |
+
0.637,
|
| 1432 |
+
0.887,
|
| 1433 |
+
0.911
|
| 1434 |
+
],
|
| 1435 |
+
"angle": 0,
|
| 1436 |
+
"content": null
|
| 1437 |
+
}
|
| 1438 |
+
],
|
| 1439 |
+
[
|
| 1440 |
+
{
|
| 1441 |
+
"type": "ref_text",
|
| 1442 |
+
"bbox": [
|
| 1443 |
+
0.118,
|
| 1444 |
+
0.077,
|
| 1445 |
+
0.491,
|
| 1446 |
+
0.182
|
| 1447 |
+
],
|
| 1448 |
+
"angle": 0,
|
| 1449 |
+
"content": "Hao Cheng, Xiaodong Liu, Lis Pereira, Yaoliang Yu, and Jianfeng Gao. 2021. Posterior differential regularization with f-divergence for improving model robustness. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1078-1089, Online. Association for Computational Linguistics."
|
| 1450 |
+
},
|
| 1451 |
+
{
|
| 1452 |
+
"type": "ref_text",
|
| 1453 |
+
"bbox": [
|
| 1454 |
+
0.118,
|
| 1455 |
+
0.192,
|
| 1456 |
+
0.492,
|
| 1457 |
+
0.271
|
| 1458 |
+
],
|
| 1459 |
+
"angle": 0,
|
| 1460 |
+
"content": "Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehension. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 845-855. Association for Computational Linguistics."
|
| 1461 |
+
},
|
| 1462 |
+
{
|
| 1463 |
+
"type": "ref_text",
|
| 1464 |
+
"bbox": [
|
| 1465 |
+
0.118,
|
| 1466 |
+
0.281,
|
| 1467 |
+
0.492,
|
| 1468 |
+
0.348
|
| 1469 |
+
],
|
| 1470 |
+
"angle": 0,
|
| 1471 |
+
"content": "Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pretraining text encoders as discriminators rather than generators. In International Conference on Learning Representations (ICLR)."
|
| 1472 |
+
},
|
| 1473 |
+
{
|
| 1474 |
+
"type": "ref_text",
|
| 1475 |
+
"bbox": [
|
| 1476 |
+
0.118,
|
| 1477 |
+
0.357,
|
| 1478 |
+
0.491,
|
| 1479 |
+
0.476
|
| 1480 |
+
],
|
| 1481 |
+
"angle": 0,
|
| 1482 |
+
"content": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics."
|
| 1483 |
+
},
|
| 1484 |
+
{
|
| 1485 |
+
"type": "ref_text",
|
| 1486 |
+
"bbox": [
|
| 1487 |
+
0.118,
|
| 1488 |
+
0.485,
|
| 1489 |
+
0.492,
|
| 1490 |
+
0.566
|
| 1491 |
+
],
|
| 1492 |
+
"angle": 0,
|
| 1493 |
+
"content": "Kelvin Guu, Kenton Lee, Zora Tung, Panupong Papat, and Mingwei Chang. 2020. Retrieval augmented language model pre-training. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 3929-3938. PMLR."
|
| 1494 |
+
},
|
| 1495 |
+
{
|
| 1496 |
+
"type": "ref_text",
|
| 1497 |
+
"bbox": [
|
| 1498 |
+
0.118,
|
| 1499 |
+
0.575,
|
| 1500 |
+
0.492,
|
| 1501 |
+
0.668
|
| 1502 |
+
],
|
| 1503 |
+
"angle": 0,
|
| 1504 |
+
"content": "Song Han, Huizi Mao, and William J. Dally. 2016. Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings."
|
| 1505 |
+
},
|
| 1506 |
+
{
|
| 1507 |
+
"type": "ref_text",
|
| 1508 |
+
"bbox": [
|
| 1509 |
+
0.118,
|
| 1510 |
+
0.678,
|
| 1511 |
+
0.492,
|
| 1512 |
+
0.731
|
| 1513 |
+
],
|
| 1514 |
+
"angle": 0,
|
| 1515 |
+
"content": "Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop."
|
| 1516 |
+
},
|
| 1517 |
+
{
|
| 1518 |
+
"type": "ref_text",
|
| 1519 |
+
"bbox": [
|
| 1520 |
+
0.118,
|
| 1521 |
+
0.741,
|
| 1522 |
+
0.492,
|
| 1523 |
+
0.807
|
| 1524 |
+
],
|
| 1525 |
+
"angle": 0,
|
| 1526 |
+
"content": "Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. 2018. Quantized neural networks: Training neural networks with low precision weights and activations. Journal of Machine Learning Research, 18(187):1-30."
|
| 1527 |
+
},
|
| 1528 |
+
{
|
| 1529 |
+
"type": "ref_text",
|
| 1530 |
+
"bbox": [
|
| 1531 |
+
0.118,
|
| 1532 |
+
0.817,
|
| 1533 |
+
0.492,
|
| 1534 |
+
0.91
|
| 1535 |
+
],
|
| 1536 |
+
"angle": 0,
|
| 1537 |
+
"content": "Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874-880, Online. Association for Computational Linguistics."
|
| 1538 |
+
},
|
| 1539 |
+
{
|
| 1540 |
+
"type": "list",
|
| 1541 |
+
"bbox": [
|
| 1542 |
+
0.118,
|
| 1543 |
+
0.077,
|
| 1544 |
+
0.492,
|
| 1545 |
+
0.91
|
| 1546 |
+
],
|
| 1547 |
+
"angle": 0,
|
| 1548 |
+
"content": null
|
| 1549 |
+
},
|
| 1550 |
+
{
|
| 1551 |
+
"type": "ref_text",
|
| 1552 |
+
"bbox": [
|
| 1553 |
+
0.513,
|
| 1554 |
+
0.077,
|
| 1555 |
+
0.886,
|
| 1556 |
+
0.182
|
| 1557 |
+
],
|
| 1558 |
+
"angle": 0,
|
| 1559 |
+
"content": "Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Tuo Zhao. 2020. SMART: Robust and efficient fine-tuning for pretrained natural language models through principled regularized optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2177-2190, Online. Association for Computational Linguistics."
|
| 1560 |
+
},
|
| 1561 |
+
{
|
| 1562 |
+
"type": "ref_text",
|
| 1563 |
+
"bbox": [
|
| 1564 |
+
0.513,
|
| 1565 |
+
0.192,
|
| 1566 |
+
0.886,
|
| 1567 |
+
0.299
|
| 1568 |
+
],
|
| 1569 |
+
"angle": 0,
|
| 1570 |
+
"content": "Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601-1611, Vancouver, Canada. Association for Computational Linguistics."
|
| 1571 |
+
},
|
| 1572 |
+
{
|
| 1573 |
+
"type": "ref_text",
|
| 1574 |
+
"bbox": [
|
| 1575 |
+
0.513,
|
| 1576 |
+
0.31,
|
| 1577 |
+
0.886,
|
| 1578 |
+
0.352
|
| 1579 |
+
],
|
| 1580 |
+
"angle": 0,
|
| 1581 |
+
"content": "Ying Ju, Fubang Zhao, Shijie Chen, Bowen Zheng, Xuefeng Yang, and Yunfeng Liu. 2019. Technical report on conversational question answering."
|
| 1582 |
+
},
|
| 1583 |
+
{
|
| 1584 |
+
"type": "ref_text",
|
| 1585 |
+
"bbox": [
|
| 1586 |
+
0.513,
|
| 1587 |
+
0.362,
|
| 1588 |
+
0.886,
|
| 1589 |
+
0.468
|
| 1590 |
+
],
|
| 1591 |
+
"angle": 0,
|
| 1592 |
+
"content": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781, Online. Association for Computational Linguistics."
|
| 1593 |
+
},
|
| 1594 |
+
{
|
| 1595 |
+
"type": "ref_text",
|
| 1596 |
+
"bbox": [
|
| 1597 |
+
0.513,
|
| 1598 |
+
0.479,
|
| 1599 |
+
0.886,
|
| 1600 |
+
0.598
|
| 1601 |
+
],
|
| 1602 |
+
"angle": 0,
|
| 1603 |
+
"content": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453-466."
|
| 1604 |
+
},
|
| 1605 |
+
{
|
| 1606 |
+
"type": "ref_text",
|
| 1607 |
+
"bbox": [
|
| 1608 |
+
0.513,
|
| 1609 |
+
0.609,
|
| 1610 |
+
0.886,
|
| 1611 |
+
0.689
|
| 1612 |
+
],
|
| 1613 |
+
"angle": 0,
|
| 1614 |
+
"content": "Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086-6096. Association for Computational Linguistics."
|
| 1615 |
+
},
|
| 1616 |
+
{
|
| 1617 |
+
"type": "ref_text",
|
| 1618 |
+
"bbox": [
|
| 1619 |
+
0.513,
|
| 1620 |
+
0.7,
|
| 1621 |
+
0.886,
|
| 1622 |
+
0.805
|
| 1623 |
+
],
|
| 1624 |
+
"angle": 0,
|
| 1625 |
+
"content": "Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. In Advances in Neural Information Processing Systems, volume 33, pages 9459-9474. Curran Associates, Inc."
|
| 1626 |
+
},
|
| 1627 |
+
{
|
| 1628 |
+
"type": "ref_text",
|
| 1629 |
+
"bbox": [
|
| 1630 |
+
0.513,
|
| 1631 |
+
0.817,
|
| 1632 |
+
0.886,
|
| 1633 |
+
0.91
|
| 1634 |
+
],
|
| 1635 |
+
"angle": 0,
|
| 1636 |
+
"content": "Patrick Lewis, Pontus Stenetorp, and Sebastian Riedel. 2021. Question and answer test-train overlap in open-domain question answering datasets. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1000-1008, Online. Association for Computational Linguistics."
|
| 1637 |
+
},
|
| 1638 |
+
{
|
| 1639 |
+
"type": "list",
|
| 1640 |
+
"bbox": [
|
| 1641 |
+
0.513,
|
| 1642 |
+
0.077,
|
| 1643 |
+
0.886,
|
| 1644 |
+
0.91
|
| 1645 |
+
],
|
| 1646 |
+
"angle": 0,
|
| 1647 |
+
"content": null
|
| 1648 |
+
}
|
| 1649 |
+
],
|
| 1650 |
+
[
|
| 1651 |
+
{
|
| 1652 |
+
"type": "ref_text",
|
| 1653 |
+
"bbox": [
|
| 1654 |
+
0.119,
|
| 1655 |
+
0.077,
|
| 1656 |
+
0.491,
|
| 1657 |
+
0.154
|
| 1658 |
+
],
|
| 1659 |
+
"angle": 0,
|
| 1660 |
+
"content": "Yankai Lin, Haozhe Ji, Zhiyuan Liu, and Maosong Sun. 2018. Denoising distantly supervised open-domain question answering. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1736-1745."
|
| 1661 |
+
},
|
| 1662 |
+
{
|
| 1663 |
+
"type": "ref_text",
|
| 1664 |
+
"bbox": [
|
| 1665 |
+
0.119,
|
| 1666 |
+
0.163,
|
| 1667 |
+
0.49,
|
| 1668 |
+
0.411
|
| 1669 |
+
],
|
| 1670 |
+
"angle": 0,
|
| 1671 |
+
"content": "Sewon Min, Jordan Boyd-Graber, Chris Alberti, Danqi Chen, Eunsol Choi, Michael Collins, Kelvin Guu, Hannaneh Hajishirzi, Kenton Lee, Jennimaria Palomaki, Colin Raffel, Adam Roberts, Tom Kwiatkowski, Patrick Lewis, Yuxiang Wu, Heinrich Kuttler, Linqing Liu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel, Sohee Yang, Minjoon Seo, Gautier Izacard, Fabio Petroni, Lucas Hosseini, Nicola De Cao, Edouard Grave, Ikuya Yamada, Sonse Shimaoka, Masatoshi Suzuki, Shumpei Miyawaki, Shun Sato, Ryo Takahashi, Jun Suzuki, Martin Fajcik, Martin Docekal, Korel Ondrej, Pavel Smrz, Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao, Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, and Wen tau Yih. 2021. NeurIPS 2020 EfficientQA competition: Systems, analyses and lessons learned."
|
| 1672 |
+
},
|
| 1673 |
+
{
|
| 1674 |
+
"type": "ref_text",
|
| 1675 |
+
"bbox": [
|
| 1676 |
+
0.119,
|
| 1677 |
+
0.419,
|
| 1678 |
+
0.49,
|
| 1679 |
+
0.537
|
| 1680 |
+
],
|
| 1681 |
+
"angle": 0,
|
| 1682 |
+
"content": "Sewon Min, Danqi Chen, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2019. A discrete hard EM approach for weakly supervised question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2851-2864, Hong Kong, China. Association for Computational Linguistics."
|
| 1683 |
+
},
|
| 1684 |
+
{
|
| 1685 |
+
"type": "ref_text",
|
| 1686 |
+
"bbox": [
|
| 1687 |
+
0.119,
|
| 1688 |
+
0.544,
|
| 1689 |
+
0.49,
|
| 1690 |
+
0.649
|
| 1691 |
+
],
|
| 1692 |
+
"angle": 0,
|
| 1693 |
+
"content": "Lis Pereira, Xiaodong Liu, Hao Cheng, Hoifung Poon, Jianfeng Gao, and Ichiro Kobayashi. 2021. Targeted adversarial training for natural language understanding. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5385-5393, Online. Association for Computational Linguistics."
|
| 1694 |
+
},
|
| 1695 |
+
{
|
| 1696 |
+
"type": "ref_text",
|
| 1697 |
+
"bbox": [
|
| 1698 |
+
0.119,
|
| 1699 |
+
0.657,
|
| 1700 |
+
0.49,
|
| 1701 |
+
0.788
|
| 1702 |
+
],
|
| 1703 |
+
"angle": 0,
|
| 1704 |
+
"content": "Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, and Haifeng Wang. 2021. RocketQA: An optimized training approach to dense passage retrieval for open-domain question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5835-5847, Online. Association for Computational Linguistics."
|
| 1705 |
+
},
|
| 1706 |
+
{
|
| 1707 |
+
"type": "ref_text",
|
| 1708 |
+
"bbox": [
|
| 1709 |
+
0.119,
|
| 1710 |
+
0.795,
|
| 1711 |
+
0.49,
|
| 1712 |
+
0.874
|
| 1713 |
+
],
|
| 1714 |
+
"angle": 0,
|
| 1715 |
+
"content": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67."
|
| 1716 |
+
},
|
| 1717 |
+
{
|
| 1718 |
+
"type": "ref_text",
|
| 1719 |
+
"bbox": [
|
| 1720 |
+
0.119,
|
| 1721 |
+
0.882,
|
| 1722 |
+
0.49,
|
| 1723 |
+
0.909
|
| 1724 |
+
],
|
| 1725 |
+
"angle": 0,
|
| 1726 |
+
"content": "Ellen Voorhees. 2000. The TREC-8 question answering track report."
|
| 1727 |
+
},
|
| 1728 |
+
{
|
| 1729 |
+
"type": "list",
|
| 1730 |
+
"bbox": [
|
| 1731 |
+
0.119,
|
| 1732 |
+
0.077,
|
| 1733 |
+
0.491,
|
| 1734 |
+
0.909
|
| 1735 |
+
],
|
| 1736 |
+
"angle": 0,
|
| 1737 |
+
"content": null
|
| 1738 |
+
}
|
| 1739 |
+
]
|
| 1740 |
+
]
|
data/2021/2101_00xxx/2101.00178/full.md
CHANGED
@@ -1,3 +1,298 @@
# UnitedQA: A Hybrid Approach for Open Domain Question Answering

Hao Cheng $^{1*}$ , Yelong Shen $^{2*}$ , Xiaodong Liu $^{1}$ , Pengcheng He $^{2}$ , Weizhu Chen $^{2}$ , Jianfeng Gao $^{1}$

<sup>1</sup> Microsoft Research <sup>2</sup> Microsoft Azure AI

{chehao, yeshe, xiaodl, penhe, wzchen, jfgao}@microsoft.com

# Abstract

To date, most recent work under the retrieval-reader framework for open-domain QA has focused exclusively on either extractive or generative readers. In this paper, we study a hybrid approach that leverages the strengths of both models. We apply novel techniques to enhance both extractive and generative readers built upon recent pretrained neural language models, and find that proper training methods can provide large improvements over previous state-of-the-art models. We demonstrate that a hybrid approach, which combines answers from both readers, can effectively take advantage of extractive and generative answer inference strategies and outperforms single models as well as homogeneous ensembles. Our approach outperforms previous state-of-the-art models by 3.3 and 2.7 points in exact match on NaturalQuestions and TriviaQA respectively.

# 1 Introduction

Open-domain question answering (QA) has been a long-standing problem in natural language understanding, information retrieval, and related fields (Chen and Yih, 2020). A typical open-domain QA system follows the retrieval-reader framework (Chen et al., 2017; Guu et al., 2020; Karpukhin et al., 2020), where relevant passages are first retrieved from a large text corpus, and a reader module then navigates multiple passages for answer inference. In this work, we study two paradigms of reader modules, i.e. extractive (Karpukhin et al., 2020; Guu et al., 2020) and generative (Lewis et al., 2020; Izacard and Grave, 2021) readers. The extractive reader extracts contiguous spans from the retrieved passages, whereas the generative reader sequentially decodes the answer string, which might not be contained in the retrieved passages.

Recent work on open-domain QA (Karpukhin et al., 2020; Guu et al., 2020; Lewis et al., 2020; Izacard and Grave, 2021) explores either an extractive reader or a generative reader exclusively. We hypothesize that extractive and generative readers adopt different answer inference strategies, and thus a hybrid extractive/generative reader can be a better option for open-domain QA tasks. As shown in Figure 1, compared with prediction agreement among only generative or only extractive readers (top-left and bottom-right), the cross prediction agreement between extractive and generative readers (bottom-left) is relatively low ( $<50\%$ ). This indicates that answers produced by the two types of models are different and can be complementary to each other. Therefore, we propose a hybrid reader approach, UnitedQA, a simple ensemble that combines the predictions from extractive and generative readers. It achieves state-of-the-art results on NaturalQuestions (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017).

In UnitedQA, the extractive reader (UnitedQA-E) and generative reader (UnitedQA-G) are built upon the pretrained language models ELECTRA (Clark et al., 2020) and T5 (Raffel et al., 2020), respectively. For UnitedQA-E, we adopt a weakly-supervised training objective to address the noisy supervision caused by heuristics-based labeling, and incorporate posterior differential regularization (PDR) (Cheng et al., 2021) to improve model robustness. UnitedQA-G follows the T5 Fusion-in-Decoder (FID) model (Izacard and Grave, 2021), to which we make two improvements: first, we add a group of attention bias parameters to the decoder cross-attention block to capture the ranking information of retrieved contexts; second, we apply adversarial training (Ju et al., 2019; Jiang et al., 2020; Pereira et al., 2021) to improve generalization.

![](images/0bcee8df6b8a1a25ee3cc7eec93b70fcc95e1f1aa8e84851de1a27ccc33e6a9c.jpg)

Figure 1: Pairwise prediction agreement ratio. G-1, G-2, G-3 and E-1, E-2, E-3 are three different generative and extractive readers respectively. All readers achieve similar performance ( $\approx 52\%$ exact match) on NaturalQuestions. Higher agreement ( $>50\%$ ) is shown in red and lower agreement ( $<50\%$ ) in gray. The agreement is calculated based on exact string match.

The experimental results highlight the effectiveness of the simple hybrid approach of UnitedQA. With both improved extractive and generative readers, UnitedQA sets new state-of-the-art results on two popular open-domain QA datasets, i.e. 54.7 and 70.3 in exact match on NaturalQuestions (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017), respectively. It is worth noting that our UnitedQA model not only outperforms each single model but also brings more pronounced improvements over homogeneous ensembles of either extractive or generative readers. Last, based on our analyses, UnitedQA-E and UnitedQA-G have advantages in different cases, suggesting that they may use different reasoning strategies.

# 2 Method

In this section, we present the overall pipeline of the UnitedQA system, which consists of three components: Retrieval, Reading, and Re-ranking. First, the retrieval module fetches a list of relevant passages from a Wikipedia dump for a given question. Then, the hybrid reader module produces answer candidates from the set of retrieved passages. Last, the re-ranking module combines the answer candidates with linear interpolation and produces the final answer.

Retrieval Following Karpukhin et al. (2020), we consider two methods, BM25 and dense passage retrieval (DPR), for retrieving the support passages for a given question. For BM25, passages are encoded as bags of words (BOW), and inverse document frequencies are used in the ranking function. For DPR, passages and questions are represented as dense vectors produced by two BERT (Devlin et al., 2019) models. The relevance score is then computed as the dot product between the query and passage vectors. In this paper, we adopt the same implementation as Karpukhin et al. (2020) for retrieving passages. Specifically, the English Wikipedia dump from Dec. 20, 2018 is used as the source corpus, with semi-structured data, such as tables and lists, removed. Each document is split into disjoint 100-word passages as the basic retrieval unit. The top-100 passages are then passed to the reader.
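The dense-retrieval ranking step can be sketched as follows. This is a minimal illustration with toy vectors; real DPR encodes queries and passages with two BERT models into high-dimensional vectors, and `rank_passages` is a hypothetical helper, not the authors' code:

```python
def dot(u, v):
    # Dot-product relevance between a query vector and a passage vector.
    return sum(a * b for a, b in zip(u, v))

def rank_passages(q_vec, passage_vecs, top_k=2):
    # Sort passage indices by descending relevance score and keep the top-k.
    order = sorted(range(len(passage_vecs)),
                   key=lambda k: dot(q_vec, passage_vecs[k]),
                   reverse=True)
    return order[:top_k]

# Toy 2-dimensional "encodings" standing in for BERT vectors.
q = [1.0, 0.0]
passages = [[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]]
print(rank_passages(q, passages))  # [1, 2]
```

In the actual system, `top_k` would be 100, matching the top-100 passages passed to the reader.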

Reading We combine the generative reader and the extractive reader to produce answer candidates over the retrieved passages. Here, we only give a high-level description of our approach. More details regarding our improved extractive and generative models are presented in §2.1 and §2.2 respectively.

The generative reader is based on a sequence-to-sequence model pre-trained in a forward-generation fashion on a large corpus, i.e. T5 (Raffel et al., 2020). Similar to Izacard and Grave (2021), the model takes the question and its relevant passages as input, and then generates the answer string token by token. Specifically, the concatenation of all retrieved passages and the corresponding question is used as the encoder input. The decoder then performs reasoning over the concatenation of all evidence through an attention mechanism.

Following state-of-the-art extractive QA models (Devlin et al., 2019; Karpukhin et al., 2020), our extractive reader is based on a Transformer network pre-trained with a cloze-style self-supervised objective, i.e. ELECTRA (Clark et al., 2020). Here, a given question and a support passage are jointly encoded into neural text representations. These representations are then used to define scores of possible answer begin and end positions, which in turn define probabilities over possible answer spans. Finally, answer string probabilities are obtained by aggregating over all possible answer spans from the entire set of support passages.

# 2.1 UnitedQA-E

In §2.1.1, we give the problem definition of open-domain QA for the extractive reader. Then, we detail the improvements of UnitedQA-E in §2.1.2.

# 2.1.1 Extractive Reader

Given a question $\mathbf{q}$ and a set of $K$ retrieved passages $\mathbf{p}_1, \ldots, \mathbf{p}_K$ , a text encoder produces contextualized representations $\mathbf{h}_1^k, \ldots, \mathbf{h}_T^k \in \mathbb{R}^d$ for the question-passage pair $(\mathbf{q}, \mathbf{p}_k)$ encoded in the form "[CLS] question [SEP] passage [SEP]", where [CLS] and [SEP] are special tokens for encoding inputs, $T$ is the maximum sequence length of the input text, and $\mathbf{h}_i^k$ is the contextualized embedding of the $i$ -th token in $(\mathbf{q}, \mathbf{p}_k)$ .

The extractive reader computes the span-begin score of the $i$ -th token as $s_b(i^k) = \mathbf{w}_b^T\mathbf{h}_i^k$ using a weight vector $\mathbf{w}_b \in \mathbb{R}^d$ . The span-end score $s_e(j^k)$ is defined in the same way. Thus, the probabilities of a start position $i^k$ and an end position $j^k$ are $P_b(i^k) = \frac{\exp(s_b(i^k))}{Z_b}$ , $P_e(j^k) = \frac{\exp(s_e(j^k))}{Z_e}$ , where $Z_b, Z_e$ are normalizing factors defined by the corresponding probability space. The probability of an answer span from $i^k$ to $j^k$ is defined as $P_s(i^k, j^k) = P_b(i^k)P_e(j^k)$ .

Here, we consider two probability spaces, passage level and multi-passage level, which differ only in the computation of $Z_{b}, Z_{e}$ . Specifically, the passage-level probabilities of answer begin and end positions are computed by normalizing over all possible positions in the respective passage, i.e. $Z_{b} = Z_{b}^{k} = \sum_{\mathcal{I}^{k} \cup \mathrm{NULL}} \exp(s_{b}(i))$ , $Z_{e} = Z_{e}^{k} = \sum_{\mathcal{I}^{k} \cup \mathrm{NULL}} \exp(s_{e}(j))$ , where $\mathcal{I}^{k}$ is the set of all possible positions in the $k$ -th passage and NULL indicates a special position used when $\mathbf{p}_{k}$ does not support answering the question. Similarly, the multi-passage level probabilities are computed by normalizing over all answer positions across all $K$ relevant passages, i.e. $Z_{b} = Z_{b}^{*} = \sum_{k} \sum_{\mathcal{I}^{k}} \exp(s_{b}(i))$ , $Z_{e} = Z_{e}^{*} = \sum_{k} \sum_{\mathcal{I}^{k}} \exp(s_{e}(j))$ .
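The difference between the two probability spaces is just where the softmax normalizer is taken. A self-contained sketch with plain softmax arithmetic (ignoring the NULL position for brevity; not the authors' implementation):

```python
import math

def softmax(scores):
    # Numerically stable softmax over a flat list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def passage_level(begin_scores):
    # Z_b = Z_b^k: normalize each passage's begin scores independently,
    # so every passage's positions form their own distribution.
    return [softmax(s) for s in begin_scores]

def multi_passage_level(begin_scores):
    # Z_b = Z_b^*: one shared normalizer over all positions of all K passages.
    flat = softmax([s for p in begin_scores for s in p])
    out, i = [], 0
    for p in begin_scores:
        out.append(flat[i:i + len(p)])
        i += len(p)
    return out

scores = [[2.0, 1.0], [0.5, 0.5, 0.5]]   # begin scores for K = 2 passages
per_passage = passage_level(scores)      # each inner list sums to 1
across = multi_passage_level(scores)     # all entries together sum to 1
```

The same construction applies to the end scores $s_e$.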

Since there are usually multiple plausible answer mentions in open-domain QA, during training it is typical to maximize either the marginal log-likelihood (MML) of all correct spans (Karpukhin et al., 2020) or the log-likelihood of the most likely correct span (HardEM) (Min et al., 2019). During inference, the prediction is made based on the candidate answer string score, obtained as $P_{a}(y) = \sum_{(i,j)\in \mathcal{Y}}P_{s}(i,j)$ , where $\mathcal{Y}$ is the set of spans corresponding to the answer string $y$ .

# 2.1.2 Improvement Method

In addition to better text representations from Clark et al. (2020), we consider two methods for improving the training of the extractive reader.

Multi-objective for Weakly-supervised QA The multi-objective formulation is introduced in Cheng et al. (2020) for improving weakly supervised document-level QA. Different from Cheng et al. (2020), where only MML is considered for the multi-objective formulation, we find that combining HardEM with MML is more effective for open-domain QA based on our experiments (§4.1). Specifically, we combine a multi-passage HardEM loss with $K$ passage-level MML losses over a batch of $K$ passages:

$$
\mathcal{L}_{\mathrm{EXT}} = \log \max_{(i,j)} P_{s}^{M}(i,j) + \frac{1}{K} \sum_{k} \log \sum_{(i^{k}, j^{k})} P_{s}^{P}(i^{k}, j^{k}), \tag{1}
$$

where $P_{s}^{M}$ and $P_{s}^{P}$ are the multi-passage level and passage level span probabilities, respectively.
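Equation (1) combines a HardEM term over multi-passage span probabilities with per-passage MML terms. A minimal sketch that takes span probabilities as plain dictionaries keyed by span identifiers (an illustration of the objective's arithmetic, not the authors' training code; gradients and batching are omitted):

```python
import math

def hardem_term(multi_passage_probs, gold_spans):
    # log of the most likely correct span, under multi-passage normalization.
    return math.log(max(multi_passage_probs[s] for s in gold_spans))

def mml_term(passage_probs, gold_spans):
    # marginal log-likelihood of all correct spans within one passage.
    return math.log(sum(passage_probs[s] for s in gold_spans))

def l_ext(multi_passage_probs, per_passage_probs, gold_spans_per_passage):
    # Eq. (1): HardEM at the multi-passage level plus the average of K
    # passage-level MML losses (toy case: every passage has a gold span).
    gold_all = [s for spans in gold_spans_per_passage for s in spans]
    K = len(per_passage_probs)
    mml = sum(mml_term(p, g)
              for p, g in zip(per_passage_probs, gold_spans_per_passage)) / K
    return hardem_term(multi_passage_probs, gold_all) + mml
```

The objective is maximized during training; a loss would be its negation.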

Posterior Differential Regularization Due to the noisy supervision in open-domain QA (Chen et al., 2017), we investigate posterior differential regularization (PDR) (Cheng et al., 2021) to improve the robustness of the extractive reader. Different from Cheng et al. (2021), where only the clean supervision setting is considered, in this work we apply PDR to the weakly supervised open-domain QA scenario. Since it is computationally expensive to enumerate all possible spans, we apply two separate regularization terms to the begin and end probabilities at the multi-passage level, respectively:

$$
\mathcal{L}_{\mathrm{PDR}} = D\left(P_{b}(i) \,\|\, P_{b}^{\prime}(i)\right) + D\left(P_{e}(j) \,\|\, P_{e}^{\prime}(j)\right), \tag{2}
$$

where $D(\cdot \|\cdot)$ is the squared Hellinger distance, and $P_b^{\prime}, P_e^{\prime}$ are the probabilities of start and end positions computed with additive input noise on the token embeddings. Specifically, we sample noise vectors $\epsilon_{1},\ldots ,\epsilon_{T}$ from $\mathcal{N}(0,c^2 I)$ and add them to the token embeddings to form the noisy input $\mathbf{v}_1 + \epsilon_1,\dots ,\mathbf{v}_T + \epsilon_T$ , where $c$ is fixed to 1e-3 throughout our experiments.
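The PDR penalty in (2) reduces to a squared Hellinger distance between the clean and noise-perturbed position distributions. A sketch with hypothetical distributions; in the actual model, the noisy distribution comes from re-running the encoder on perturbed token embeddings:

```python
import math
import random

def squared_hellinger(p, q):
    # D(P || P') = 1/2 * sum_i (sqrt(p_i) - sqrt(q_i))^2
    return 0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(p, q))

def gaussian_noise(dim, c=1e-3):
    # epsilon ~ N(0, c^2 I), added element-wise to one token embedding.
    return [random.gauss(0.0, c) for _ in range(dim)]

p_clean = [0.7, 0.2, 0.1]   # P_b over begin positions (illustrative)
p_noisy = [0.6, 0.3, 0.1]   # P_b' after input perturbation (illustrative)
penalty = squared_hellinger(p_clean, p_noisy)
```

The same penalty is computed for the end-position distributions and the two are summed.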

Based on this, the overall training objective for the extractive reader is

$$
\mathcal{L}^{1} = \mathcal{L}_{\mathrm{EXT}} + \gamma \mathcal{L}_{\mathrm{PDR}}, \tag{3}
$$

where $\gamma$ is a scalar regularization hyperparameter.

# 2.2 UnitedQA-G

Here, we first formally define the setup of the generative reader for open-domain QA in §2.2.1 and then present our improvements in §2.2.2.

# 2.2.1 Generative Reader

Given a question $\mathbf{q}$ and a set of $K$ retrieved passages $\mathbf{p}_1, \ldots, \mathbf{p}_K$ , the encoder encodes each $(\mathbf{q}, \mathbf{p}_k)$ pair independently and produces a contextualized representation for each token, $\mathbf{h}_i^k \in \mathbb{R}^d$ for the $i$ -th token of the $k$ -th pair. The decoder then attends over the concatenation of the representations of all the retrieved passages and generates the answer string.

Let $\mathbf{x}$ denote the input consisting of the question and all retrieved passages, $\mathbf{x} = ((\mathbf{q},\mathbf{p}_1),\dots,(\mathbf{q},\mathbf{p}_K))$ , and $\mathbf{y}$ the answer string with tokens $(y_{1},\ldots ,y_{N})$ . The generative reader is trained to maximize a sequence-to-sequence objective for a given $(\mathbf{x},\mathbf{y})$ :

$$
\mathcal{L}(\mathbf{x}, \mathbf{y}; \theta) = \sum_{i=1}^{N} \log P_{\theta}\left(y_{i} \mid \mathbf{x}, y_{1:i-1}\right), \tag{4}
$$

where $\theta$ denotes the model parameters. During inference, greedy decoding is used to produce the answer.
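In the Fusion-in-Decoder setup, each (question, passage) pair is formatted as its own encoder input before the decoder attends over the concatenation. A sketch of the input formatting and the token-level objective of (4); the prompt template and the toy per-token probabilities are assumptions for illustration, not the exact T5 preprocessing:

```python
import math

def build_fid_inputs(question, passages):
    # One encoder input per (question, passage) pair, FID-style.
    return [f"question: {question} context: {p}" for p in passages]

def seq2seq_nll(token_probs):
    # Negative of eq. (4): -sum_i log P(y_i | x, y_{1:i-1}).
    return -sum(math.log(p) for p in token_probs)

inputs = build_fid_inputs("who wrote Hamlet?",
                          ["Hamlet is a tragedy.", "Shakespeare wrote it."])
loss = seq2seq_nll([0.9, 0.8])  # toy probabilities of the gold answer tokens
```

Minimizing this negative log-likelihood is equivalent to maximizing the objective in (4).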

# 2.2.2 Improvement Method

Decoder Attention Bias The decoder in the T5 transformer model uses a cross-attention mechanism to compute attention scores between the decoded answer tokens and all the retrieved passage tokens. Specifically, let $\mathbf{y}_i\in \mathbb{R}^d$ be the query vector of the $i$ -th decoded token $^1$ , and $\mathbf{m}_j^k\in \mathbb{R}^d$ be the key vector of the $j$ -th token in $(\mathbf{q},\mathbf{p}_k)$ . The multi-head cross-attention score $\mathbf{s}_{i,j}^{k}$ in T5 (Raffel et al., 2020) is calculated as

$$
\mathbf{s}_{i,j}^{k} = \operatorname{MultiHeadAtt}\left(\mathbf{y}_{i}, \mathbf{m}_{j}^{k}\right) \in \mathbb{R}^{|\mathrm{Head}|}, \tag{5}
$$

where $|\mathrm{Head}|$ is the number of attention heads. However, (5) does not expose the relevance ranking of the retrieved passages to the reader. To add this relevance feature to the attention block, we revise (5) by incorporating an attention bias:

$$
\mathbf{s}_{i,j}^{k} = \operatorname{MultiHeadAtt}\left(\mathbf{y}_{i}, \mathbf{m}_{j}^{k}\right) + \mathbf{b}_{k}, \tag{6}
$$

where $\mathbf{b}_k\in \mathbb{R}^{|\mathrm{Head}|}$ is a trainable attention bias vector shared by all the tokens in the $k$ -th retrieved passage. In our experiments, the maximum number of retrieved passages is set to 100 by default; thus, the decoder attention bias introduces an additional $100 \times |\mathrm{Head}|$ parameters per layer.
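The effect of (6) is simply to add one learned scalar per head and per passage rank to every cross-attention score for that passage's tokens. A single-head sketch with hypothetical raw scores (not T5 internals):

```python
def add_passage_bias(raw_scores, bias):
    # raw_scores[k][j]: attention score from one decoder token to the j-th
    # token of the k-th retrieved passage; bias[k] is the learned b_k for
    # a single attention head.
    return [[s + bias[k] for s in passage]
            for k, passage in enumerate(raw_scores)]

scores = [[0.2, 0.1], [0.3, 0.0]]            # two passages, two tokens each
biased = add_passage_bias(scores, [1.0, -1.0])  # boost rank-1, demote rank-2
```

Because the bias is indexed by retrieval rank, the decoder can learn to favor tokens from higher-ranked passages.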

<table><tr><td>Dataset</td><td>Train</td><td>Dev</td><td>Test</td></tr><tr><td>NQ</td><td>79168</td><td>8757</td><td>3610</td></tr><tr><td>TriviaQA</td><td>78785</td><td>8837</td><td>11313</td></tr><tr><td>EfficientQA</td><td>-</td><td>1800</td><td>-</td></tr></table>

Table 1: Number of questions in each QA dataset.

Adversarial Training Adversarial training creates adversarial examples by adding small perturbations to the embedding layer. Assume the word(-piece) embedding layer is parameterized by a matrix $\mathbf{V} \in \mathbb{R}^{|V| \times d}$ , where $|V|$ is the vocabulary size and $d$ is the embedding dimension. The adversarial embedding matrix $\hat{\mathbf{V}}$ can be obtained by

$$
g_{\mathbf{V}} = -\nabla_{\mathbf{V}} \mathcal{L}(\mathbf{x}, \mathbf{y}; \theta), \tag{7}
$$

$$
\hat{\mathbf{V}} = \mathbf{V} + \operatorname{SG}\left(\epsilon g_{\mathbf{V}} / \|g_{\mathbf{V}}\|_{2}\right), \tag{8}
$$

where $\mathrm{SG}(\cdot)$ is the stop-gradient operation. We replace the original $\mathbf{V}$ in the model parameters $\theta$ with the adversarial embedding matrix $\hat{\mathbf{V}}$ , obtaining $\hat{\theta}$ . The adversarial loss is then calculated as

$$
\mathcal{L}_{\mathrm{AT}}(\mathbf{x}, \mathbf{y}; \theta) = \mathcal{L}(\mathbf{x}, \mathbf{y}; \hat{\theta}). \tag{9}
$$

Therefore, the overall training objective of the generative reader is

$$
\mathcal{L}^{2} = \alpha \mathcal{L}(\mathbf{x}, \mathbf{y}; \theta) + \beta \mathcal{L}_{\mathrm{AT}}(\mathbf{x}, \mathbf{y}; \theta), \tag{10}
$$

where $\alpha = 0.5$ and $\beta = 0.5$ in all of our experiments.

# 2.3 UnitedQA System

During inference, the UnitedQA system combines outputs from both extractive and generative models for a given question. Since the output spaces of extractive and generative models differ, we use a simple linear interpolation over the best predictions from each model<sup>2</sup>. Denote the predicted strings from $M$ extractive and $N$ generative models as $y_1^E, \ldots, y_M^E$ and $y_1^G, \ldots, y_N^G$ , respectively. The hybrid prediction $y^*$ is obtained by

$$
y^{*} = \underset{y \in \mathcal{Y}}{\operatorname{argmax}} \; \tau \sum_{m=1}^{M} \mathbf{1}\left(y, y_{m}^{E}\right) + \delta \sum_{n=1}^{N} \mathbf{1}\left(y, y_{n}^{G}\right), \tag{11}
$$

where $\mathcal{Y}$ is the set of all predicted strings, $\mathbf{1}(y,y^{\prime})$ is an indicator function, and $\tau = 0.6$ , $\delta = 0.4$ .
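The re-ranking rule in (11) is a weighted vote over the best predictions of the individual readers; ties in the exact string match are what the indicator function counts. A direct sketch:

```python
def hybrid_answer(extractive_preds, generative_preds, tau=0.6, delta=0.4):
    # argmax over candidate strings of the weighted match counts in eq. (11).
    candidates = set(extractive_preds) | set(generative_preds)
    def score(y):
        return (tau * sum(p == y for p in extractive_preds)
                + delta * sum(p == y for p in generative_preds))
    return max(candidates, key=score)

# Two of three extractive readers say "Paris", both generative say "Lyon":
print(hybrid_answer(["Paris", "Paris", "Lyon"], ["Lyon", "Lyon"]))  # Lyon
```

With $\tau = 0.6 > \delta = 0.4$, a single extractive vote outweighs a single generative vote, but two agreeing generative readers can still overturn it, as in the example above (Lyon scores $0.6 + 2 \times 0.4 = 1.4$ versus $1.2$ for Paris).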

# 3 Experiments

# 3.1 Experiment Setup

We use two representative QA datasets and adopt the same training/dev/testing splits as in previous work (Lee et al., 2019; Karpukhin et al., 2020). Both datasets (see Table 1 for statistics) have been heavily studied in recent work (Lee et al., 2019; Min et al., 2019; Karpukhin et al., 2020; Guu et al., 2020). We follow the standard evaluation protocol and use exact match (EM) as the evaluation metric.

<table><tr><td>Model</td><td>Reader Type</td><td>Reader Size (M)</td><td>NQ</td><td>TriviaQA</td></tr><tr><td>REALM (Guu et al., 2020)</td><td>Extractive</td><td>110</td><td>40.4</td><td>N/A</td></tr><tr><td>RAG (Lewis et al., 2020)</td><td>Generative</td><td>400</td><td>44.5</td><td>56.1</td></tr><tr><td>DPR (Karpukhin et al., 2020)</td><td>Extractive</td><td>110</td><td>41.5</td><td>57.9</td></tr><tr><td>T5-FID<sub>base</sub> (Izacard and Grave, 2021)</td><td>Generative</td><td>220</td><td>48.2</td><td>65.0</td></tr><tr><td>T5-FID<sub>large</sub> (Izacard and Grave, 2021)</td><td>Generative</td><td>770</td><td>51.4</td><td>67.6</td></tr><tr><td>UnitedQA-E<sub>base</sub> (Ours)</td><td>Extractive</td><td>110</td><td>47.7</td><td>66.3</td></tr><tr><td>UnitedQA-E<sub>large</sub> (Ours)</td><td>Extractive</td><td>330</td><td>51.8</td><td>68.9</td></tr><tr><td>UnitedQA-G<sub>large</sub> (Ours)</td><td>Generative</td><td>770</td><td>52.3</td><td>68.6</td></tr><tr><td>UnitedQA-E<sub>large</sub>++ (Ours)</td><td>Ensemble</td><td>3x330</td><td>52.4</td><td>69.6</td></tr><tr><td>UnitedQA-G<sub>large</sub>++ (Ours)</td><td>Ensemble</td><td>3x770</td><td>53.3</td><td>69.2</td></tr><tr><td>UnitedQA (Ours)</td><td>Hybrid</td><td>2x770+330</td><td>54.7</td><td>70.5</td></tr></table>

Table 2: Comparison to state-of-the-art models on the test sets of NaturalQuestions (NQ) and TriviaQA. Exact match score is used for evaluation.

NaturalQuestions (Kwiatkowski et al., 2019) is composed of questions issued by real users to Google Search, each with answers identified by human annotators in Wikipedia. The open-domain version of NaturalQuestions (Lee et al., 2019) only considers questions with short answers, i.e. answers with fewer than 5 tokens. The questions in NaturalQuestions are considered more information-seeking, since the askers did not know the answer beforehand. In addition, we use another evaluation set, the dev set recently introduced by the EfficientQA competition (Min et al., 2021), which is constructed in the same way as the original NaturalQuestions dataset.

TriviaQA (Joshi et al., 2017) contains trivia question-answer pairs scraped from the web. Different from NaturalQuestions, the questions here are written with known answers in mind. Specifically, the unfiltered set has been used for developing open-domain QA models.

Implementation details For a fair comparison, we use the same retrieval module as Karpukhin et al. (2020) for NaturalQuestions and TriviaQA to mitigate the impact of retrieval differences. Specifically, we use DPR (single) for NaturalQuestions and BM25+DPR (multi) for TriviaQA because of their best end-to-end performance (Karpukhin et al., 2020). For all the experiments, we use 8 and 16 V100-32GB GPUs for base and large model training, respectively. We train our models with the Adam optimizer and a linear scheduler with a warmup ratio of 0.1. The extractive models are trained for up to 8 epochs with a learning rate of $2\mathrm{e}{-5}$ and a batch size of 16 passages per question. The generative models are trained for up to 10 epochs with a learning rate of $1\mathrm{e}{-4}$ , a batch size of 64, and 100 retrieved passages per question. We select $\gamma$ from $\{4,8\}$ . After the best configuration is selected on the dev set, we run our best models 3 times independently with different random seeds and report the median performance on the test set. We also report ensemble results, which are based on the linear interpolation over answer predictions from the 3 models.

# 3.2 Main results

Single Model Results: We first compare our models to two recent models, REALM (Guu et al., 2020) and RAG (Lewis et al., 2020), which are first pre-trained with different retrieval-augmented objectives and then fine-tuned for open-domain QA. In addition, we include as baselines DPR (Karpukhin et al., 2020) and T5-FID (Izacard and Grave, 2021), both of which are based on the same retriever as ours. As shown in Table 2, both our extractive and generative models achieve new state-of-the-art results on both datasets. Compared with the recent state-of-the-art extractive model (DPR), our base model yields pronounced $15\%$ relative improvements on both NaturalQuestions (+6.2 absolute) and TriviaQA (+8.4 absolute). More importantly, UnitedQA-E$_{\mathrm{base}}$ achieves comparable or even better performance than generative models of larger size, i.e. RAG and T5-FID$_{\mathrm{base}}$. This highlights the importance of proper training strategies for open-domain QA models.

Hybrid Model Results: To evaluate the advantage of the hybrid of extractive and generative models (UnitedQA), we include two homogeneous ensemble baselines, one consisting of only extractive readers (UnitedQA-E++) and the other of only generative readers (UnitedQA-G++). For the homogeneous ensembles, the three-way majority prediction is used. For the hybrid of extractive and generative readers, we select a three-model combination from the set of three generative and three extractive models based on the dev set. We observe that combining predictions from two generative models and one extractive model results in the best hybrid model for both datasets. As expected, all ensemble models show an improvement over their single-model counterparts. However, the two homogeneous ensemble baselines, UnitedQA-E++ and UnitedQA-G++, only provide marginal gains over the corresponding best single models. The significant improvement brought by our proposed hybrid approach indicates the benefit of combining extractive and generative readers for open-domain QA.

Discussion: Although the proposed hybrid approach is highly effective for open-domain QA, the improved performance comes with increased computational cost: the best combination requires approximately three times the computation of a single generative model. It would therefore be interesting to explore more efficient hybrid methods, such as effective parameter-sharing strategies or unified formulations. Another interesting future direction is to explore customized compression approaches for reducing the model size of the retriever and reader, separately or jointly, through pruning (Han et al., 2016), quantization (Hubara et al., 2018), and knowledge distillation (Hinton et al., 2015). In particular, given that the hybrid model is more effective, a student model may learn more effectively from a hybrid teacher model via knowledge distillation for open-domain QA.
<table><tr><td>Model</td><td>NQ</td><td>TriviaQA</td></tr><tr><td>(Cheng et al., 2020) +PDR</td><td>43.3</td><td>60.1</td></tr><tr><td>BERTbase</td><td>44.2</td><td>62.2</td></tr><tr><td>-Multi-objective</td><td>43.5</td><td>61.3</td></tr><tr><td>-PDR</td><td>41.8</td><td>60.2</td></tr><tr><td>-Multi-objective & PDR</td><td>40.6</td><td>58.5</td></tr><tr><td>UnitedQA-Ebase</td><td>46.0</td><td>65.4</td></tr><tr><td>-Multi-objective</td><td>45.2</td><td>64.3</td></tr><tr><td>-PDR</td><td>43.1</td><td>63.8</td></tr><tr><td>-Multi-objective & PDR</td><td>42.5</td><td>61.2</td></tr></table>
Table 3: Ablation experiments of the extractive model on the dev sets of NaturalQuestions (NQ) and TriviaQA. Exact match score is reported. The top and bottom models are built on BERTbase and ELECTRAbase, respectively.
# 4 Analysis
In this section, we first carry out an ablation study of the improvements to the extractive and generative models. We then take a deeper look at the differences between the two models.
# 4.1 Ablation Study
In Table 3, we present ablation experiments on the effectiveness of different textual representations and methods for improving the extractive model UnitedQA-Ebase. Here, we focus on base models, i.e. BERTbase and ELECTRAbase. Note that the row UnitedQA-Ebase is the corresponding base model reported in Table 2. Compared with the MML-based multi-objective (Cheng et al., 2020), we find that a new multi-objective with HardEM at the multi-passage level and MML at the passage level is more effective for open-domain QA. In addition to the multi-objective training, there is a noticeable improvement brought by the regularization method (PDR), which indicates the importance of proper regularization for learning with noisy supervision. Last but not least, the large improvement of ELECTRA over BERT indicates the importance of deriving better text representations for weakly supervised NLP problems. For UnitedQA-G, we present an ablation study of the decoder attention bias component and the adversarial training mechanism in Table 4. Both techniques contribute clear improvements over T5-FID, with more pronounced gains brought by adversarial training.
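The combined objective can be illustrated in miniature: per-passage MML marginalizes over candidate answer spans, while HardEM across passages keeps only the most likely passage. This is a simplified sketch over raw log-probabilities (function names are ours), not the paper's actual training code:

```python
import math

def mml_loss(span_logps):
    """MML over candidate spans in one passage: -log sum_i exp(logp_i),
    computed stably via the log-sum-exp trick."""
    m = max(span_logps)
    return -(m + math.log(sum(math.exp(lp - m) for lp in span_logps)))

def hardem_loss(passage_losses):
    """HardEM across passages: keep only the best (lowest-loss) passage."""
    return min(passage_losses)

# Per-passage MML over span log-probs, then HardEM across the passages.
passages = [[-0.5, -2.0], [-3.0, -3.5]]
loss = hardem_loss([mml_loss(spans) for spans in passages])
```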
<table><tr><td>Model</td><td>NQ</td><td>TriviaQA</td></tr><tr><td>T5-FIDlarge</td><td>51.4</td><td>67.6</td></tr><tr><td>UnitedQA-Glarge</td><td>52.3</td><td>68.6</td></tr><tr><td>-Adv Training</td><td>52.0</td><td>68.2</td></tr><tr><td>-Attention Bias</td><td>51.8</td><td>68.1</td></tr></table>
Table 4: Ablation experiments of the generative model on the test sets of NaturalQuestions (NQ) and TriviaQA. Exact match score is reported.
<table><tr><td></td><td></td><td>Top-20</td><td>Top-100</td><td>Δ</td></tr><tr><td rowspan="3">NQ</td><td>Retrieval</td><td>78.4</td><td>85.4</td><td>+9%</td></tr><tr><td>United-E</td><td>49.8</td><td>51.8</td><td>+4%</td></tr><tr><td>United-G</td><td>49.3</td><td>52.3</td><td>+6%</td></tr><tr><td rowspan="3">TriviaQA</td><td>Retrieval</td><td>79.9</td><td>84.4</td><td>+6%</td></tr><tr><td>United-E</td><td>67.1</td><td>68.9</td><td>+3%</td></tr><tr><td>United-G</td><td>65.4</td><td>68.6</td><td>+5%</td></tr></table>
Table 5: Retrieval top- $k$ accuracy and end-to-end QA exact match scores on the test sets of NaturalQuestions (NQ) and TriviaQA. United-E and United-G stand for our extractive and generative models respectively.
# 4.2 Impact of Retrieval Accuracy
Here, we vary the number of retrieved passages during inference and report the end-to-end QA exact match scores of UnitedQA-E and UnitedQA-G along with the corresponding top- $k$ retrieval accuracy. The results are summarized in Table 5. As expected, when the number of retrieved passages increases, both the top- $k$ retrieval accuracy and the end-to-end QA performance improve. However, there is a noticeable gap between the improvement from retrieving more passages (i.e., recall) and the corresponding improvement in end-to-end QA performance, especially for the extractive reader. This is likely caused by the additional noise introduced along with the improved retrieval recall. Specifically, only about half of the retriever improvement is effectively utilized by the extractive model, while the generative model benefits more from retrieving more passages. This suggests that, by concatenating all passages in vector space, the generative model is more effective at de-noising than the extractive model.
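For concreteness, top- $k$ retrieval accuracy and the relative gains in the Δ column can be computed as follows (a sketch with hypothetical names; the paper's evaluation scripts may differ, e.g. in answer normalization):

```python
def topk_accuracy(retrieved, gold_answers, k):
    """Fraction of questions whose top-k retrieved passages contain a gold
    answer string. `retrieved[i]` is a ranked list of passage texts and
    `gold_answers[i]` a list of acceptable answer strings for question i."""
    hits = 0
    for passages, answers in zip(retrieved, gold_answers):
        if any(ans in p for p in passages[:k] for ans in answers):
            hits += 1
    return hits / len(retrieved)

def relative_gain(a, b):
    """Relative change from score a to score b, as reported in the Δ column."""
    return (b - a) / a

# NQ retrieval in Table 5: 78.4 (top-20) -> 85.4 (top-100) is roughly +9%.
delta = relative_gain(78.4, 85.4)
```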
# 4.3 Breakdown Evaluation
Following Lewis et al. (2021), we carry out a breakdown evaluation of model performance over the NaturalQuestions and TriviaQA test sets. Given their superior performance, we again only consider our improved extractive and generative models, i.e. UnitedQA-Elarge and UnitedQA-G respectively. The evaluation is summarized in Table 6. In comparison to their corresponding overall performance, both the extractive and generative models achieve much better results on the "Overlap" categories (i.e. "Question Overlap" and "Answer Overlap") for both NaturalQuestions and TriviaQA, which indicates that both models perform well at question and answer memorization. In contrast, there is a pronounced performance drop for both models on the "Answer Overlap Only" category, where a certain amount of relevance inference capability is required to succeed. Lastly, both the extractive and generative models suffer a significant performance degradation on the "No Overlap" category, which evaluates model generalization. Nevertheless, the extractive model demonstrates better QA generalization on NaturalQuestions, achieving a higher "No Overlap" score (37.6 vs. 34.0), while the two models are comparable on TriviaQA.
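The overlap buckets of Lewis et al. (2021) can be approximated with a simple membership test. This sketch uses exact string matching against the training set, whereas the original analysis relies on normalized and partly manually annotated overlap:

```python
def overlap_category(question, answer, train_questions, train_answers):
    """Assign a test item to a simplified version of the overlap buckets:
    question overlap takes precedence, then answer-only overlap, else none."""
    if question in train_questions:
        return "question_overlap"
    if answer in train_answers:
        return "answer_overlap_only"
    return "no_overlap"
```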
# 4.4 Error Analysis
Here, we analyze the prediction errors made by the extractive and generative models based on the automatic evaluation. For this study, we use the EfficientQA dev set (Min et al., 2021), which is constructed in the same way as the original NaturalQuestions dataset. Specifically, we group prediction errors into three categories: 1) common errors made by both the extractive and generative models, 2) errors made only by the extractive model, and 3) errors made only by the generative model. In the following, we first carry out a manual inspection of the common errors. Then, we compare the prediction errors made by the extractive and generative models, respectively.
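The three error groups can be computed by a straightforward partition of the evaluation set. The sketch below uses hypothetical names and assumes a single gold answer per question (the real evaluation accepts multiple gold strings):

```python
def partition_errors(gold, ext_preds, gen_preds):
    """Split question ids into common / extractive-only / generative-only
    errors, given dicts mapping question id -> answer string."""
    common, ext_only, gen_only = [], [], []
    for qid, answer in gold.items():
        e_wrong = ext_preds[qid] != answer
        g_wrong = gen_preds[qid] != answer
        if e_wrong and g_wrong:
            common.append(qid)
        elif e_wrong:
            ext_only.append(qid)
        elif g_wrong:
            gen_only.append(qid)
    return common, ext_only, gen_only
```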
First of all, 29% of the consensus predictions, i.e. those made by both the extractive and generative models, are judged incorrect by the automatic evaluation. Based on 30 randomly selected examples, we find that around 30% of those predictions are actually valid answers, as shown in the top part of Table 7. In addition to predictions that are answers at a different granularity or semantically equivalent ones, some of these prediction errors are likely caused by ambiguity in the questions. In the example given in Table 7, depending on the intended specificity, the model prediction is also a valid answer. This highlights a limitation of the current evaluation metric, which underestimates the capabilities of existing open-domain QA systems. As shown in the bottom part of Table 7, most of the representative errors are due to confusion between related concepts, entities, or events that are frequently mentioned together with the corresponding gold answers.

<table><tr><td>Dataset</td><td>Model</td><td>Total</td><td>Question Overlap</td><td>No Question Overlap</td><td>Answer Overlap</td><td>Answer Overlap Only</td><td>No Overlap</td></tr><tr><td rowspan="2">NQ</td><td>UnitedQA-G</td><td>52.3</td><td>72.2</td><td>40.5</td><td>62.7</td><td>45.4</td><td>34.0</td></tr><tr><td>UnitedQA-E</td><td>51.8</td><td>69.4</td><td>41.5</td><td>60.1</td><td>45.1</td><td>37.6</td></tr><tr><td rowspan="2">TriviaQA</td><td>UnitedQA-G</td><td>68.6</td><td>88.4</td><td>62.5</td><td>78.1</td><td>69.6</td><td>44.5</td></tr><tr><td>UnitedQA-E</td><td>68.9</td><td>89.3</td><td>62.7</td><td>78.6</td><td>70.6</td><td>44.3</td></tr></table>

Table 6: Breakdown evaluation on NaturalQuestions (NQ) and TriviaQA based on test splits defined in (Lewis et al., 2021). Exact match scores are reported. UnitedQA-E and UnitedQA-G denote our extractive and generative models respectively.

<table><tr><td colspan="2">Valid Answers</td></tr><tr><td>Different granularity</td><td>Q: When was harry potter and the deathly hallows part 2 movie released<br>Prediction: 2011 / Gold: 15 July 2011</td></tr><tr><td>Semantically equivalent</td><td>Q: minimum age limit for chief justice of india<br>Prediction: 65 / Gold: 65 years</td></tr><tr><td>Ambiguous question</td><td>Q: who won her first tennis grand slam in 2018<br>Prediction: Caroline Wozniacki / Gold: Simona Halep</td></tr><tr><td colspan="2">Wrong Answers</td></tr><tr><td>Part as whole error</td><td>Q: the official U.S. poverty line is based on the cost of what<br>Prediction: food / Gold: ICP purchasing power</td></tr><tr><td>Entity confusion</td><td>Q: actor who played tommy in terms of endearment<br>Prediction: Jeff Daniels / Gold: Troy Bishop</td></tr><tr><td>Event confusion</td><td>Q: when did the saskatchewan roughriders last won the grey cup<br>Prediction: 2007 / Gold: 2013</td></tr></table>

Table 7: Examples of prediction errors as judged by the automatic evaluation.
Next, all questions from the dev set are categorized by WH question word, i.e. what, which, when, who, how, and where. We then report, for both the extractive and generative models, the relative performance change of each WH category over the corresponding overall prediction accuracy in Figure 2. First, both the extractive and generative models achieve their best performance on entity-related who questions, which is likely the result of the high ratio of samples of this type seen during training. In contrast, the answers to what questions can play a much richer syntactic role in context, making it more difficult for both the extractive and generative models to perform well. Interestingly, the generative model exhibits strength in temporal reasoning, whereas the extractive model does not. This difference suggests that it is worth exploring better temporal modeling strategies to improve the extractive model in the future.
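The per-category relative accuracies of Figure 2 can be computed as follows (a sketch with hypothetical names; it classifies a question by its leading WH word, which is a simplifying assumption):

```python
def wh_relative_accuracy(questions, correct):
    """Relative change of each WH-category accuracy vs. overall accuracy.

    `questions` is a list of question strings; `correct` holds 1/0 flags
    indicating whether the model answered each question correctly.
    """
    wh_words = ("what", "which", "when", "who", "how", "where")
    overall = sum(correct) / len(correct)
    out = {}
    for wh in wh_words:
        idx = [i for i, q in enumerate(questions) if q.lower().startswith(wh)]
        if idx:
            acc = sum(correct[i] for i in idx) / len(idx)
            out[wh] = (acc - overall) / overall
    return out
```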
# 5 Related Work
Open-domain QA Open-domain QA requires a system to answer questions based on evidence retrieved from a large corpus such as Wikipedia (Voorhees, 2000; Chen et al., 2017). Recent progress has been made towards improving evidence retrieval through both sparse vector models like TF-IDF or BM25 (Chen et al., 2017; Min et al., 2019), and dense vector models based on BERT (Lee et al., 2019; Karpukhin et al., 2020; Guu et al., 2020; Qu et al., 2021). Generally, the dense representations complement the sparse vector methods for passage retrieval as they can potentially give high similarity to semantically related text pairs, even without exact lexical overlap. Unlike most work focusing on a pipeline model, Lee et al. (2019) propose a pre-training objective for jointly training both the retrieval encoder and reader. It is further extended by Guu et al. (2020) with a dynamic update of the passage index during training. Instead, in this work, we focus on a hybrid reader approach for open-domain QA. By simply combining answer predictions from extractive and generative models, our UnitedQA achieves significant improvements over state-of-the-art models.

Figure 2: Relative accuracy of different $WH$ questions. The relative accuracy is the relative change of a $WH$ category accuracy to the overall model accuracy.
Reading Comprehension with Noisy Labels There has been a line of work on improving distantly-supervised reading comprehension models by developing learning methods and model architectures that can better use noisy labels. Most of them focus on document-level QA, where all paragraphs share the same document context. Clark and Gardner (2018) propose a paragraph-pair ranking objective for learning with multiple paragraphs so that the model can distinguish relevant paragraphs from irrelevant ones. In (Lin et al., 2018), a coarse-to-fine model is proposed to handle label noise by aggregating information from relevant paragraphs and then extracting answers from selected ones. Min et al. (2019) propose a hard EM learning scheme where only passage-level loss is considered for document-level QA. More recently, different probabilistic assumptions with corresponding training and inference methods are examined in (Cheng et al., 2020), again for document-level QA with distant supervision. In our work, we further extend the multi-objective formulation proposed in (Cheng et al., 2020) with the hard EM learning of (Min et al., 2019) for enhancing extractive open-domain QA, where the input passages are given by a retrieval model and are typically from different documents.
# 6 Conclusion
In this study, we propose a hybrid model for open-domain QA, called UnitedQA, which combines the strengths of extractive and generative readers. We demonstrate the effectiveness of UnitedQA on two popular open-domain QA benchmarks, NaturalQuestions and TriviaQA. Our results show that the proposed UnitedQA model significantly outperforms single extractive and generative models as well as their corresponding homogeneous ensembles, and sets a new state of the art on both benchmarks. We also perform a comprehensive empirical study to investigate the relative contributions of the different components of our model and the techniques we use to improve the readers.
For future work, it would be interesting to explore model compression approaches for reducing the model size of retriever and reader separately or jointly through pruning, quantization, and knowledge distillation.
# Acknowledgments
We would like to thank the anonymous reviewers for valuable suggestions, Yuning Mao for valuable discussions and comments, and Microsoft Research Technology Engineering team for computing support.
# References
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870-1879. Association for Computational Linguistics.

Danqi Chen and Wen-tau Yih. 2020. Open-domain question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts, pages 34-37, Online. Association for Computational Linguistics.

Hao Cheng, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2020. Probabilistic assumptions matter: Improved models for distantly-supervised document-level question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5657-5667, Online. Association for Computational Linguistics.

Hao Cheng, Xiaodong Liu, Lis Pereira, Yaoliang Yu, and Jianfeng Gao. 2021. Posterior differential regularization with f-divergence for improving model robustness. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1078-1089, Online. Association for Computational Linguistics.

Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehension. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 845-855. Association for Computational Linguistics.

Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pre-training text encoders as discriminators rather than generators. In International Conference on Learning Representations (ICLR).

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. Retrieval augmented language model pre-training. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 3929-3938. PMLR.

Song Han, Huizi Mao, and William J. Dally. 2016. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings.

Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop.

Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. 2018. Quantized neural networks: Training neural networks with low precision weights and activations. Journal of Machine Learning Research, 18(187):1-30.

Gautier Izacard and Edouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874-880, Online. Association for Computational Linguistics.

Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Tuo Zhao. 2020. SMART: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2177-2190, Online. Association for Computational Linguistics.

Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601-1611, Vancouver, Canada. Association for Computational Linguistics.

Ying Ju, Fubang Zhao, Shijie Chen, Bowen Zheng, Xuefeng Yang, and Yunfeng Liu. 2019. Technical report on conversational question answering.

Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781, Online. Association for Computational Linguistics.

Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453-466.

Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086-6096. Association for Computational Linguistics.

Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems, volume 33, pages 9459-9474. Curran Associates, Inc.

Patrick Lewis, Pontus Stenetorp, and Sebastian Riedel. 2021. Question and answer test-train overlap in open-domain question answering datasets. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1000-1008, Online. Association for Computational Linguistics.

Yankai Lin, Haozhe Ji, Zhiyuan Liu, and Maosong Sun. 2018. Denoising distantly supervised open-domain question answering. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1736-1745.

Sewon Min, Jordan Boyd-Graber, Chris Alberti, Danqi Chen, Eunsol Choi, Michael Collins, Kelvin Guu, Hannaneh Hajishirzi, Kenton Lee, Jennimaria Palomaki, Colin Raffel, Adam Roberts, Tom Kwiatkowski, Patrick Lewis, Yuxiang Wu, Heinrich Küttler, Linqing Liu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel, Sohee Yang, Minjoon Seo, Gautier Izacard, Fabio Petroni, Lucas Hosseini, Nicola De Cao, Edouard Grave, Ikuya Yamada, Sonse Shimaoka, Masatoshi Suzuki, Shumpei Miyawaki, Shun Sato, Ryo Takahashi, Jun Suzuki, Martin Fajcik, Martin Docekal, Karel Ondrej, Pavel Smrz, Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao, Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, and Wen-tau Yih. 2021. NeurIPS 2020 EfficientQA competition: Systems, analyses and lessons learned.

Sewon Min, Danqi Chen, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2019. A discrete hard EM approach for weakly supervised question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2851-2864, Hong Kong, China. Association for Computational Linguistics.

Lis Pereira, Xiaodong Liu, Hao Cheng, Hoifung Poon, Jianfeng Gao, and Ichiro Kobayashi. 2021. Targeted adversarial training for natural language understanding. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5385-5393, Online. Association for Computational Linguistics.

Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, and Haifeng Wang. 2021. RocketQA: An optimized training approach to dense passage retrieval for open-domain question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5835-5847, Online. Association for Computational Linguistics.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.

Ellen Voorhees. 2000. The TREC-8 question answering track report.
data/2021/2101_00xxx/2101.00178/layout.json
CHANGED
The diff for this file is too large to render.
See raw diff
data/2021/2101_00xxx/2101.00180/e6c1938e-fd7a-49b4-ac0c-6863dca02ce5_content_list.json
CHANGED
@@ -1,3 +1,1280 @@
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "Transformer based Automatic COVID-19 Fake News Detection System",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
230,
|
| 8 |
+
140,
|
| 9 |
+
774,
|
| 10 |
+
186
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Sunil Gundapu and Radhika Mamidi",
|
| 17 |
+
"bbox": [
|
| 18 |
+
364,
|
| 19 |
+
212,
|
| 20 |
+
635,
|
| 21 |
+
228
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "International Institute of Information Technology, Hyderabad sunil.g@research.iit.ac.in, radhika.mamidi@iit.ac.in",
|
| 28 |
+
"bbox": [
|
| 29 |
+
290,
|
| 30 |
+
239,
|
| 31 |
+
712,
|
| 32 |
+
268
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "Abstract. Recent rapid technological advancements in online social networks such as Twitter have led to a great incline in spreading false information and fake news. Misinformation is especially prevalent in the ongoing coronavirus disease (COVID-19) pandemic, leading to individuals accepting bogus and potentially deleterious claims and articles. Quick detection of fake news can reduce the spread of panic and confusion among the public. For our analysis in this paper, we report a methodology to analyze the reliability of information shared on social media pertaining to the COVID-19 pandemic. Our best approach is based on an ensemble of three transformer models (BERT, ALBERT, and XLNET) to detecting fake news. This model was trained and evaluated in the context of the ConstraintAI 2021 shared task \"COVID19 Fake News Detection in English\" [1]. Our system obtained 0.9855 f1-score on testset and ranked 5th among 160 teams.",
|
| 39 |
+
"bbox": [
|
| 40 |
+
261,
|
| 41 |
+
306,
|
| 42 |
+
743,
|
| 43 |
+
501
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "Keywords: pandemic-19, fake news, deep learning, transformer models",
|
| 50 |
+
"bbox": [
|
| 51 |
+
261,
|
| 52 |
+
513,
|
| 53 |
+
715,
|
| 54 |
+
529
|
| 55 |
+
],
|
| 56 |
+
"page_idx": 0
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"text": "1 Introduction",
|
| 61 |
+
"text_level": 1,
|
| 62 |
+
"bbox": [
|
| 63 |
+
215,
|
| 64 |
+
556,
|
| 65 |
+
375,
|
| 66 |
+
571
|
| 67 |
+
],
|
| 68 |
+
"page_idx": 0
|
| 69 |
+
},
|
| 70 |
+
{
|
| 71 |
+
"type": "text",
|
| 72 |
+
"text": "The COVID-19 pandemic is considered the global public health crisis of the whole world and the biggest problem people faced after World War II. COVID-19, a contagious disease caused by a coronavirus, has caused more than 75 million confirmed cases and 1.7 million deaths across the world till 2020 December<sup>1</sup>. Unfortunately, the misinformation about COVID-19 has encouraged the growing of the disease and chaos among people. During the Munich Security Council held on February 15, 2020, World Health Organization (WHO) Director-General, Tedros Adhanom Ghebreyesus [2] stated that the world was in a war to fight not only a pandemic, but also an infodemic. So we should address the challenge of fake news detection to stop the spreading of COVID-19 misinformation.",
|
| 73 |
+
"bbox": [
|
| 74 |
+
212,
|
| 75 |
+
588,
|
| 76 |
+
787,
|
| 77 |
+
739
|
| 78 |
+
],
|
| 79 |
+
"page_idx": 0
|
| 80 |
+
},
|
| 81 |
+
{
|
| 82 |
+
"type": "text",
|
| 83 |
+
"text": "Since the global pandemic impacts the people, there is a broader public finding information about the COVID-19, whose safety is intimidated by adversarial agents invested in spreading fake news for economic and political reasons. Besides, due to medical and public health issues, it is also hard to be totally valid and factual, leading to differences that worsen with fake news. This difficulty",
|
| 84 |
+
"bbox": [
|
| 85 |
+
212,
|
| 86 |
+
739,
|
| 87 |
+
787,
|
| 88 |
+
816
|
| 89 |
+
],
|
| 90 |
+
"page_idx": 0
|
| 91 |
+
},
|
| 92 |
+
{
|
| 93 |
+
"type": "aside_text",
|
| 94 |
+
"text": "arXiv:2101.00180v3 [cs.CL] 21 Jan 2021",
|
| 95 |
+
"bbox": [
|
| 96 |
+
22,
|
| 97 |
+
272,
|
| 98 |
+
57,
|
| 99 |
+
708
|
| 100 |
+
],
|
| 101 |
+
"page_idx": 0
|
| 102 |
+
},
|
| 103 |
+
{
|
| 104 |
+
"type": "page_footnote",
|
| 105 |
+
"text": "<sup>1</sup> https://www.worldometers.info/coronavirus/",
|
| 106 |
+
"bbox": [
|
| 107 |
+
217,
|
| 108 |
+
823,
|
| 109 |
+
535,
|
| 110 |
+
840
|
| 111 |
+
],
|
| 112 |
+
"page_idx": 0
|
| 113 |
+
},
|
| 114 |
+
{
|
| 115 |
+
"type": "text",
|
| 116 |
+
"text": "is compounded by the quick advancement of knowledge about the disease. As researchers gain more knowledge about the virus, claims that looked right may turn out to be false, and vice versa. Detecting this spread of COVID-19 associated fake news, thus, has become a pivotal problem, gaining notable attention from government and global health organizations (WHO, 2020), online social networks (TechCrunch, 2020), and news organizations (BBC, 2020; CNN, 2020; New York Times, 2020).",
|
| 117 |
+
"bbox": [
|
| 118 |
+
212,
|
| 119 |
+
146,
|
| 120 |
+
787,
|
| 121 |
+
252
|
| 122 |
+
],
|
| 123 |
+
"page_idx": 1
|
| 124 |
+
},
|
| 125 |
+
{
|
| 126 |
+
"type": "text",
|
| 127 |
+
"text": "In response to the present disinformation, this paper looks at developing an efficient fake news detection architecture with respect to COVID-19. Initially, we started with developing machine learning (ML) algorithms with Term Frequency and Inverse Document Frequency (TF-IDF) feature vectors to detect misinformation on the provided dataset. These supervised TF-IDF methods are still relevant for many classification tasks and performed pretty well for fake news detection. We developed an effective ensemble model integrated with three transformer models for detecting fake news on the social media platforms. This resulted in higher accuracy and a more generalized model.",
|
| 128 |
+
"bbox": [
|
| 129 |
+
212,
|
| 130 |
+
252,
|
| 131 |
+
787,
|
| 132 |
+
387
|
| 133 |
+
],
|
| 134 |
+
"page_idx": 1
|
| 135 |
+
},
|
| 136 |
+
{
|
| 137 |
+
"type": "text",
|
| 138 |
+
"text": "The rest of this paper is organized as follows, Section II presents some prior works related to fake news, and its spread, on social media platforms. In Section III, we describe the dataset provided in the Constraint AI-2021 shared task. Section IV presents implemented models and framework for misinformation detection. Section V provides the discussions on the results. Finally we conclude this paper in Section VI.",
|
| 139 |
+
"bbox": [
|
| 140 |
+
212,
|
| 141 |
+
388,
|
| 142 |
+
787,
|
| 143 |
+
479
|
| 144 |
+
],
|
| 145 |
+
"page_idx": 1
|
| 146 |
+
},
|
| 147 |
+
{
|
| 148 |
+
"type": "text",
|
| 149 |
+
"text": "2 Related Work",
|
| 150 |
+
"text_level": 1,
|
| 151 |
+
"bbox": [
|
| 152 |
+
215,
|
| 153 |
+
500,
|
| 154 |
+
387,
|
| 155 |
+
515
|
| 156 |
+
],
|
| 157 |
+
"page_idx": 1
|
| 158 |
+
},
|
| 159 |
+
{
|
| 160 |
+
"type": "text",
|
| 161 |
+
"text": "Fake News Detection: Fake news can be defined as inaccurate and misleading information that is growing knowingly or unknowingly [3]. Recognizing the spread of false information such as rumors, fake news, propaganda, hoaxes, spear phishing, and conspiracy theories is an essential task for natural language processing [4]. Gartner's [5] research studies explained that most people in advanced economies would believe more fake information than truthful information by 2022.",
|
| 162 |
+
"bbox": [
|
| 163 |
+
212,
|
| 164 |
+
529,
|
| 165 |
+
787,
|
| 166 |
+
633
|
| 167 |
+
],
|
| 168 |
+
"page_idx": 1
|
| 169 |
+
},
|
| 170 |
+
{
|
| 171 |
+
"type": "text",
|
| 172 |
+
"text": "To date, so many automated misinformation detection architectures have been developed. Rohit et al. [6] provided an extensive survey to detect fake news on various online social networks. Ghorbani et al. [7] presented an inclusive overview of the recent studies related to misinformation. Furthermore, they described the impact of misleading information, shown state-of-the-art fake news detection systems, and explored the disinformation detection datasets. The majority of the fake news detection models developed using supervised machine learning algorithms to classify the data as misleading or not [8]. This supervised classification is concluded by comparing the user input text with some already created corpora containing genuine and misleading information [9].",
|
| 173 |
+
"bbox": [
|
| 174 |
+
212,
|
| 175 |
+
635,
|
| 176 |
+
787,
|
| 177 |
+
786
|
| 178 |
+
],
|
| 179 |
+
"page_idx": 1
|
| 180 |
+
},
|
| 181 |
+
{
|
| 182 |
+
"type": "text",
|
| 183 |
+
"text": "Aswini et al. [10] proposed a deep learning architecture with various word embeddings for Fake News Challenge (FCN-1) dataset<sup>2</sup>. They developed the",
|
| 184 |
+
"bbox": [
|
| 185 |
+
212,
|
| 186 |
+
786,
|
| 187 |
+
787,
|
| 188 |
+
816
|
| 189 |
+
],
|
| 190 |
+
"page_idx": 1
|
| 191 |
+
},
|
| 192 |
+
{
|
| 193 |
+
"type": "page_number",
|
| 194 |
+
"text": "2",
|
| 195 |
+
"bbox": [
|
| 196 |
+
217,
|
| 197 |
+
114,
|
| 198 |
+
228,
|
| 199 |
+
126
|
| 200 |
+
],
|
| 201 |
+
"page_idx": 1
|
| 202 |
+
},
|
| 203 |
+
{
|
| 204 |
+
"type": "header",
|
| 205 |
+
"text": "Gundapu and Mamidi",
|
| 206 |
+
"bbox": [
|
| 207 |
+
271,
|
| 208 |
+
114,
|
| 209 |
+
421,
|
| 210 |
+
128
|
| 211 |
+
],
|
| 212 |
+
"page_idx": 1
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"type": "page_footnote",
|
| 216 |
+
"text": "2 http://www.fakenewschallenge.org/",
|
| 217 |
+
"bbox": [
|
| 218 |
+
217,
|
| 219 |
+
823,
|
| 220 |
+
472,
|
| 221 |
+
840
|
| 222 |
+
],
|
| 223 |
+
"page_idx": 1
|
| 224 |
+
},
|
| 225 |
+
{
|
| 226 |
+
"type": "text",
|
| 227 |
+
"text": "architecture to accurately predict the stance between a given pair of news headlines and the corresponding article/body. On the same FCN-1 dataset, Sean et al. [11] developed an average weighted model of TalosCNN and TalosTree called TalosComb. TalosCNN is a convolutional neural network with pre-trained word2vec embeddings, and TalosTree is a gradient-boosted decision tree model with SVD, word count, TF-IDF. By analyzing the relationship between the news headline and the corresponding article, Heejung et al. [12] designed the Bidirectional Encoder Representations from Transformers model (BERT) to detect misleading news articles.",
|
| 228 |
+
"bbox": [
|
| 229 |
+
212,
|
| 230 |
+
146,
|
| 231 |
+
787,
|
| 232 |
+
282
|
| 233 |
+
],
|
| 234 |
+
"page_idx": 2
|
| 235 |
+
},
|
| 236 |
+
{
|
| 237 |
+
"type": "text",
|
| 238 |
+
"text": "COVID-19: In the case of COVID-19 fake news, a large number of misleading contents remain online on social media platforms. NLP researchers have been working on developing algorithms for the detection of online COVID-19 related disinformation. To develop any algorithm, we require a corpus. So members of the NLP community created the various fake news datasets: FakeCovid [13], ReCOVery [14], CoAID [15], and CMU-MisCOVID19 [16]. Yichuan Li et al. [17] developed multi-dimensional and multilingual MM-COVID corpora, which covers six languages. Mabrook et al. [18] created a large Twitter dataset related to COVID-19 misinformation. And authors developed an ensemble-stacking model with six machine learning algorithms on the created dataset for detecting misinformation.",
|
| 239 |
+
"bbox": [
|
| 240 |
+
212,
|
| 241 |
+
309,
|
| 242 |
+
787,
|
| 243 |
+
474
|
| 244 |
+
],
|
| 245 |
+
"page_idx": 2
|
| 246 |
+
},
|
| 247 |
+
{
|
| 248 |
+
"type": "text",
|
| 249 |
+
"text": "Elhadad et al. [22] constructed a voting ensemble machine learning classifier for fake news detection that uses seven feature extraction techniques and ten machine learning models. Tamanna et al. [20] used the COVIDLIES dataset to detect the misinformation by retrieving the misconceptions relevant to the Twitter posts. For COVID-19 fake news detection and fact-checking, Rutvik et al. [19] proposed a two-stage transformer model. The first model retrieves the most relevant facts about COVID-19 by using a novel fact-checking algorithm, and the second model, by computing the textual entailment, verifies the level of truth. Adapting all these classical and hybrid related work techniques, we developed a COVID-19 fake news detection system in this paper.",
|
| 250 |
+
"bbox": [
|
| 251 |
+
212,
|
| 252 |
+
476,
|
| 253 |
+
787,
|
| 254 |
+
628
|
| 255 |
+
],
|
| 256 |
+
"page_idx": 2
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"type": "text",
|
| 260 |
+
"text": "3 Dataset Description",
|
| 261 |
+
"text_level": 1,
|
| 262 |
+
"bbox": [
|
| 263 |
+
214,
|
| 264 |
+
655,
|
| 265 |
+
444,
|
| 266 |
+
672
|
| 267 |
+
],
|
| 268 |
+
"page_idx": 2
|
| 269 |
+
},
|
| 270 |
+
{
|
| 271 |
+
"type": "text",
|
| 272 |
+
"text": "The ConstraintAI $^{21}$ shared task organizers developed a COVID-19 fake news detection in English dataset [21] containing 10,700 data points collected from various online social networks such as Twitter, Facebook, and Instagram, etc. From the total dataset, 6,420 data points are reserved for training, 2,140 data points are used for hyperparameter tuning as a part of the validation phase, and the remaining 2,140 social media posts are kept aside for testing. Each dataset except the test set contains social media data points and their corresponding labels, either real or fake.",
|
| 273 |
+
"bbox": [
|
| 274 |
+
212,
|
| 275 |
+
691,
|
| 276 |
+
787,
|
| 277 |
+
811
|
| 278 |
+
],
|
| 279 |
+
"page_idx": 2
|
| 280 |
+
},
|
| 281 |
+
{
|
| 282 |
+
"type": "header",
|
| 283 |
+
"text": "Fake News Detection",
|
| 284 |
+
"bbox": [
|
| 285 |
+
589,
|
| 286 |
+
114,
|
| 287 |
+
730,
|
| 288 |
+
127
|
| 289 |
+
],
|
| 290 |
+
"page_idx": 2
|
| 291 |
+
},
|
| 292 |
+
{
|
| 293 |
+
"type": "page_number",
|
| 294 |
+
"text": "3",
|
| 295 |
+
"bbox": [
|
| 296 |
+
774,
|
| 297 |
+
116,
|
| 298 |
+
784,
|
| 299 |
+
126
|
| 300 |
+
],
|
| 301 |
+
"page_idx": 2
|
| 302 |
+
},
|
| 303 |
+
{
|
| 304 |
+
"type": "page_footnote",
|
| 305 |
+
"text": "3 https://constraint-shared-task-2021.github.io/",
|
| 306 |
+
"bbox": [
|
| 307 |
+
217,
|
| 308 |
+
823,
|
| 309 |
+
542,
|
| 310 |
+
840
|
| 311 |
+
],
|
| 312 |
+
"page_idx": 2
|
| 313 |
+
},
|
| 314 |
+
{
|
| 315 |
+
"type": "table",
|
| 316 |
+
"img_path": "images/d3cde1602775781cd8231a7797fa3ba1db8c27bd52fa659b82bea7c505f6250d.jpg",
|
| 317 |
+
"table_caption": [],
|
| 318 |
+
"table_footnote": [
|
| 319 |
+
"(a) Dataset Statistics"
|
| 320 |
+
],
|
| 321 |
+
"table_body": "<table><tr><td>Corpus</td><td>Real</td><td>Fake</td></tr><tr><td>Train</td><td>3360</td><td>3060</td></tr><tr><td>Valid</td><td>1120</td><td>1020</td></tr><tr><td>Test</td><td>1120</td><td>1020</td></tr></table>",
|
| 322 |
+
"bbox": [
|
| 323 |
+
228,
|
| 324 |
+
155,
|
| 325 |
+
403,
|
| 326 |
+
215
|
| 327 |
+
],
|
| 328 |
+
"page_idx": 3
|
| 329 |
+
},
|
| 330 |
+
{
|
| 331 |
+
"type": "table",
|
| 332 |
+
"img_path": "images/c3c9156cca2eeb2e893c8e8c6e8253c65e3401215641cb1eddb77c8068146b6d.jpg",
|
| 333 |
+
"table_caption": [],
|
| 334 |
+
"table_footnote": [
|
| 335 |
+
"(b) Label-wise example"
|
| 336 |
+
],
|
| 337 |
+
"table_body": "<table><tr><td>Tweet</td><td>Label</td></tr><tr><td>CDC Recommends Mothers Stop Breastfeeding To Boost Vaccine Efficacy</td><td>fake</td></tr><tr><td>1000 COVID-19 testing labs in India: ICMR</td><td>real</td></tr></table>",
|
| 338 |
+
"bbox": [
|
| 339 |
+
431,
|
| 340 |
+
155,
|
| 341 |
+
781,
|
| 342 |
+
215
|
| 343 |
+
],
|
| 344 |
+
"page_idx": 3
|
| 345 |
+
},
|
| 346 |
+
{
|
| 347 |
+
"type": "text",
|
| 348 |
+
"text": "Table 1: Fake news dataset information",
|
| 349 |
+
"bbox": [
|
| 350 |
+
359,
|
| 351 |
+
233,
|
| 352 |
+
643,
|
| 353 |
+
246
|
| 354 |
+
],
|
| 355 |
+
"page_idx": 3
|
| 356 |
+
},
|
| 357 |
+
{
|
| 358 |
+
"type": "text",
|
| 359 |
+
"text": "Table 1 shows the corpus size and label distribution, and if we observe, the labels in each dataset are all roughly balanced. Table 2 shows some examples from the COVID-19 fake news detection in the English dataset. We illustrate the most occurring word cloud of the real and fake data points after removing the stop words in Figures 1(a) and 1(b). In Figure 1(a), we can see unique words in real-labeled data points which don't often occur in Figure 1(b), like \"covid19\", \"discharged\", \"confirmed\", \"testing\", \"indiafightscorona\", and \"indiawin\", etc.; meanwhile, from Figure 1(b), we can find unique words frequently appearing in the fake articles, which include \"coronavirus\", \"kill\", \"muslim\", \"hydroxychloroquine\", \"china\", and \"facebook post\", but don't frequently appear in the true labeled data points. These frequent textual words can give important information to differentiate the true data points from fake ones.",
|
| 360 |
+
"bbox": [
|
| 361 |
+
212,
|
| 362 |
+
287,
|
| 363 |
+
787,
|
| 364 |
+
470
|
| 365 |
+
],
|
| 366 |
+
"page_idx": 3
|
| 367 |
+
},
|
| 368 |
+
{
|
| 369 |
+
"type": "image",
|
| 370 |
+
"img_path": "images/01516eae682541f1dd926f119670d79630e91f6f628a0c94fdf65efed67d8b56.jpg",
|
| 371 |
+
"image_caption": [
|
| 372 |
+
"(a) Positive word cloud",
|
| 373 |
+
"Fig. 1: Illustration of frequent word cloud"
|
| 374 |
+
],
|
| 375 |
+
"image_footnote": [],
|
| 376 |
+
"bbox": [
|
| 377 |
+
246,
|
| 378 |
+
503,
|
| 379 |
+
485,
|
| 380 |
+
686
|
| 381 |
+
],
|
| 382 |
+
"page_idx": 3
|
| 383 |
+
},
|
| 384 |
+
{
|
| 385 |
+
"type": "image",
|
| 386 |
+
"img_path": "images/c0bfcd572b50f918e8eabcc524622c681b327ab0b9e12b585d010c67652bb061.jpg",
|
| 387 |
+
"image_caption": [
|
| 388 |
+
"(b) Negative word cloud"
|
| 389 |
+
],
|
| 390 |
+
"image_footnote": [],
|
| 391 |
+
"bbox": [
|
| 392 |
+
513,
|
| 393 |
+
503,
|
| 394 |
+
750,
|
| 395 |
+
686
|
| 396 |
+
],
|
| 397 |
+
"page_idx": 3
|
| 398 |
+
},
|
| 399 |
+
{
|
| 400 |
+
"type": "text",
|
| 401 |
+
"text": "4 Methodology",
|
| 402 |
+
"text_level": 1,
|
| 403 |
+
"bbox": [
|
| 404 |
+
215,
|
| 405 |
+
781,
|
| 406 |
+
380,
|
| 407 |
+
799
|
| 408 |
+
],
|
| 409 |
+
"page_idx": 3
|
| 410 |
+
},
|
| 411 |
+
{
|
| 412 |
+
"type": "text",
|
| 413 |
+
"text": "In this part, we present our transformer based ensemble model that is trained and tuned on the datasets which reported in the previous section. We compare our",
|
| 414 |
+
"bbox": [
|
| 415 |
+
212,
|
| 416 |
+
809,
|
| 417 |
+
785,
|
| 418 |
+
839
|
| 419 |
+
],
|
| 420 |
+
"page_idx": 3
|
| 421 |
+
},
|
| 422 |
+
{
|
| 423 |
+
"type": "page_number",
|
| 424 |
+
"text": "4",
|
| 425 |
+
"bbox": [
|
| 426 |
+
217,
|
| 427 |
+
114,
|
| 428 |
+
228,
|
| 429 |
+
126
|
| 430 |
+
],
|
| 431 |
+
"page_idx": 3
|
| 432 |
+
},
|
| 433 |
+
{
|
| 434 |
+
"type": "header",
|
| 435 |
+
"text": "Gundapu and Mamidi",
|
| 436 |
+
"bbox": [
|
| 437 |
+
271,
|
| 438 |
+
114,
|
| 439 |
+
421,
|
| 440 |
+
128
|
| 441 |
+
],
|
| 442 |
+
"page_idx": 3
|
| 443 |
+
},
|
| 444 |
+
{
|
| 445 |
+
"type": "text",
|
| 446 |
+
"text": "approach with various machine learning (ML) and deep learning (DL) models with different word embeddings. The full code of system architecture can be found at GitHub<sup>4</sup>.",
|
| 447 |
+
"bbox": [
|
| 448 |
+
212,
|
| 449 |
+
146,
|
| 450 |
+
782,
|
| 451 |
+
191
|
| 452 |
+
],
|
| 453 |
+
"page_idx": 4
|
| 454 |
+
},
|
| 455 |
+
{
|
| 456 |
+
"type": "text",
|
| 457 |
+
"text": "4.1 Data Preprocessing",
|
| 458 |
+
"text_level": 1,
|
| 459 |
+
"bbox": [
|
| 460 |
+
214,
|
| 461 |
+
210,
|
| 462 |
+
423,
|
| 463 |
+
226
|
| 464 |
+
],
|
| 465 |
+
"page_idx": 4
|
| 466 |
+
},
|
| 467 |
+
{
|
| 468 |
+
"type": "text",
|
| 469 |
+
"text": "The main aim of this part is to use the NLP techniques to preprocess the input tweet data and prepare for the next step to extract the proper features. In Figure 2, we shown the detailed data preprocessing pipeline with examples.",
|
| 470 |
+
"bbox": [
|
| 471 |
+
212,
|
| 472 |
+
232,
|
| 473 |
+
782,
|
| 474 |
+
279
|
| 475 |
+
],
|
| 476 |
+
"page_idx": 4
|
| 477 |
+
},
|
| 478 |
+
{
|
| 479 |
+
"type": "image",
|
| 480 |
+
"img_path": "images/a50faaa9d5f22c20dfe2069231fc60024eaa05967fe01f234c5447c8f1d7e286.jpg",
|
| 481 |
+
"image_caption": [
|
| 482 |
+
"Fig. 2: Data preprocessing pipeline"
|
| 483 |
+
],
|
| 484 |
+
"image_footnote": [],
|
| 485 |
+
"bbox": [
|
| 486 |
+
222,
|
| 487 |
+
310,
|
| 488 |
+
782,
|
| 489 |
+
421
|
| 490 |
+
],
|
| 491 |
+
"page_idx": 4
|
| 492 |
+
},
|
| 493 |
+
{
|
| 494 |
+
"type": "text",
|
| 495 |
+
"text": "In the preprocessing step, we will forward the tokenized tweet through the pipeline to eliminate the noise in the fake news dataset by remove or normalize the unnecessary tokens. The preprocessing pipeline includes the following subparts:",
|
| 496 |
+
"bbox": [
|
| 497 |
+
212,
|
| 498 |
+
498,
|
| 499 |
+
782,
|
| 500 |
+
559
|
| 501 |
+
],
|
| 502 |
+
"page_idx": 4
|
| 503 |
+
},
|
| 504 |
+
{
|
| 505 |
+
"type": "list",
|
| 506 |
+
"sub_type": "text",
|
| 507 |
+
"list_items": [
|
| 508 |
+
"1. Emoticon Conversion: In this step, we converted the each emoticon in the tweet to text. Example: $\\rightarrow$ Face with medical mask emoji",
|
| 509 |
+
"2. Handling of Hashtags: We identified the hashtag tokens by seeing pound (#) sign and splitted these based on digits or capital letters. Example: #IndiaFightsCorona → IndiaFightsCorona",
|
| 510 |
+
"3. Stemming: We removed the inflectional morphemes like \"ed\", \"est\", \"s\", and \"ing\" from their token stem. Ex: confirmed $\\rightarrow$ \"confirm\" + \"-ed\"",
|
| 511 |
+
"4. Text cleaning: To remove the irrelevant data we used this step. Removed punctuation marks, digits and, non-ASCII glyphs from the tweet."
|
| 512 |
+
],
|
| 513 |
+
"bbox": [
|
| 514 |
+
220,
|
| 515 |
+
565,
|
| 516 |
+
782,
|
| 517 |
+
702
|
| 518 |
+
],
|
| 519 |
+
"page_idx": 4
|
| 520 |
+
},
|
| 521 |
+
{
|
| 522 |
+
"type": "text",
|
| 523 |
+
"text": "4.2 Supervised Machine Learning Models",
|
| 524 |
+
"text_level": 1,
|
| 525 |
+
"bbox": [
|
| 526 |
+
214,
|
| 527 |
+
720,
|
| 528 |
+
571,
|
| 529 |
+
736
|
| 530 |
+
],
|
| 531 |
+
"page_idx": 4
|
| 532 |
+
},
|
| 533 |
+
{
|
| 534 |
+
"type": "text",
|
| 535 |
+
"text": "To build the finest system for fake news detection, we started our investigations with traditional NLP approaches like Linear Regression (LR), Support Vector MAchines (SVM), Passive Agressive Classifier (PAC), XGBoost, and Multi-Layer Perceptron (MLP). We study the results of above mentioned supervised models with the combination of three types of word vectors:",
|
| 536 |
+
"bbox": [
|
| 537 |
+
212,
|
| 538 |
+
742,
|
| 539 |
+
782,
|
| 540 |
+
818
|
| 541 |
+
],
|
| 542 |
+
"page_idx": 4
|
| 543 |
+
},
|
| 544 |
+
{
|
| 545 |
+
"type": "header",
|
| 546 |
+
"text": "Fake News Detection",
|
| 547 |
+
"bbox": [
|
| 548 |
+
588,
|
| 549 |
+
114,
|
| 550 |
+
730,
|
| 551 |
+
127
|
| 552 |
+
],
|
| 553 |
+
"page_idx": 4
|
| 554 |
+
},
|
| 555 |
+
{
|
| 556 |
+
"type": "page_number",
|
| 557 |
+
"text": "5",
|
| 558 |
+
"bbox": [
|
| 559 |
+
774,
|
| 560 |
+
116,
|
| 561 |
+
784,
|
| 562 |
+
126
|
| 563 |
+
],
|
| 564 |
+
"page_idx": 4
|
| 565 |
+
},
|
| 566 |
+
{
|
| 567 |
+
"type": "page_footnote",
|
| 568 |
+
"text": "4 https://github.com/SunilGundapu/Covid-19-fake-news-detection",
|
| 569 |
+
"bbox": [
|
| 570 |
+
217,
|
| 571 |
+
824,
|
| 572 |
+
663,
|
| 573 |
+
839
|
| 574 |
+
],
|
| 575 |
+
"page_idx": 4
|
| 576 |
+
},
|
| 577 |
+
{
|
| 578 |
+
"type": "list",
|
| 579 |
+
"sub_type": "text",
|
| 580 |
+
"list_items": [
|
| 581 |
+
"1. Word-level, n-gram level, and character level TF-IDF vectors with the feature matrix size of 100000.",
|
| 582 |
+
"2. English Glove [23] word embeddings with the dimension of 300.",
|
| 583 |
+
"3. TF-IDF weighted averaging with Glove embeddings. We described below the fake news vector construction."
|
| 584 |
+
],
|
| 585 |
+
"bbox": [
|
| 586 |
+
222,
|
| 587 |
+
146,
|
| 588 |
+
784,
|
| 589 |
+
220
|
| 590 |
+
],
|
| 591 |
+
"page_idx": 5
|
| 592 |
+
},
|
| 593 |
+
{
|
| 594 |
+
"type": "equation",
|
| 595 |
+
"text": "\n$$\nT w e e t _ {v e c t o r} = \\frac {\\sum_ {i = 1} ^ {N} \\mathbf {t f - i d f} (t o k e n _ {i}) \\times \\mathbf {G l o v e} (t o k e n _ {i})}{\\mathbf {N}} \\tag {1}\n$$\n",
|
| 596 |
+
"text_format": "latex",
|
| 597 |
+
"bbox": [
|
| 598 |
+
321,
|
| 599 |
+
246,
|
| 600 |
+
784,
|
| 601 |
+
294
|
| 602 |
+
],
|
| 603 |
+
"page_idx": 5
|
| 604 |
+
},
|
| 605 |
+
{
|
| 606 |
+
"type": "text",
|
| 607 |
+
"text": "In the above formula, $\\mathrm{N}$ is the total number of words in the input fake news tweet, and $\\text{token}_i$ is the $i^{th}$ token in the input text. After analyzing the results, TF-IDF weighted averaging gave better results than the standard TF-IDF.",
|
| 608 |
+
"bbox": [
|
| 609 |
+
214,
|
| 610 |
+
299,
|
| 611 |
+
784,
|
| 612 |
+
345
|
| 613 |
+
],
|
| 614 |
+
"page_idx": 5
|
| 615 |
+
},
|
| 616 |
+
{
|
| 617 |
+
"type": "text",
|
| 618 |
+
"text": "4.3 Deep Learning Models",
|
| 619 |
+
"text_level": 1,
|
| 620 |
+
"bbox": [
|
| 621 |
+
215,
|
| 622 |
+
367,
|
| 623 |
+
449,
|
| 624 |
+
383
|
| 625 |
+
],
|
| 626 |
+
"page_idx": 5
|
| 627 |
+
},
|
| 628 |
+
{
|
| 629 |
+
"type": "text",
|
| 630 |
+
"text": "Supervised machine learning algorithms performed very well on the provided dataset. In this section, we experiment with deep learning models that give better results than traditional classification algorithms.",
|
| 631 |
+
"bbox": [
|
| 632 |
+
212,
|
| 633 |
+
393,
|
| 634 |
+
784,
|
| 635 |
+
439
|
| 636 |
+
],
|
| 637 |
+
"page_idx": 5
|
| 638 |
+
},
|
| 639 |
+
{
|
| 640 |
+
"type": "text",
|
| 641 |
+
"text": "LSTM: We used Long Short-Term Memory (LSTM) [24] architecture with two different pre-trained word embeddings Glove and Fasttext [25]. LSTM is a type of Recurrent Neural Network (RNN) that can solve long term dependency problem, and it is a well-suited model for sequence classification.",
|
| 642 |
+
"bbox": [
|
| 643 |
+
212,
|
| 644 |
+
460,
|
| 645 |
+
784,
|
| 646 |
+
521
|
| 647 |
+
],
|
| 648 |
+
"page_idx": 5
|
| 649 |
+
},
|
| 650 |
+
{
|
| 651 |
+
"type": "text",
|
| 652 |
+
"text": "We converted the input data points into word vectors by using pre-trained word embeddings. These word vectors are passed as input to the LSTM layer. We stacked up two LSTM layers one after another with the dropout of 0.25. The size of LSTM is 128, and the last time step output is treated as input data point representation. The final time step's outcome is passed as an input to a dense layer for fake news detection.",
|
| 653 |
+
"bbox": [
|
| 654 |
+
212,
|
| 655 |
+
521,
|
| 656 |
+
784,
|
| 657 |
+
612
|
| 658 |
+
],
|
| 659 |
+
"page_idx": 5
|
| 660 |
+
},
|
| 661 |
+
{
|
| 662 |
+
"type": "text",
|
| 663 |
+
"text": "BiLSTM with Attention: Sometimes not all the tokens in the input text contribute equally to the representation of input text. So we advantage word attention [26] mechanism to catch the tokens' prominent influence on the input data point. We built this attention mechanism on top of BiLSTM layers.",
|
| 664 |
+
"bbox": [
|
| 665 |
+
212,
|
| 666 |
+
635,
|
| 667 |
+
784,
|
| 668 |
+
696
|
| 669 |
+
],
|
| 670 |
+
"page_idx": 5
|
| 671 |
+
},
|
| 672 |
+
{
|
| 673 |
+
"type": "text",
|
| 674 |
+
"text": "The sequence of word vector is passed through a BiLSTM layer, which contains one forward and backward LSTM layer. Attention mechanism applied to the output of BiLSTM layer, which produces a dense vector. This dense vector is forwarded to a fully connected network.",
|
| 675 |
+
"bbox": [
|
| 676 |
+
212,
|
| 677 |
+
696,
|
| 678 |
+
784,
|
| 679 |
+
756
|
| 680 |
+
],
|
| 681 |
+
"page_idx": 5
|
| 682 |
+
},
|
| 683 |
+
{
|
| 684 |
+
"type": "text",
|
| 685 |
+
"text": "CNN: We explored a Convolution Neural Network (CNN) [27] model for misinformation detection. The model consists of an embedding layer, a convolution layer with 3 convolutions, a max-pooling layer, and a fully connected network. In the embedding layer, the input texts are converted into $n \\times d$ sequence matrix,",
|
| 686 |
+
"bbox": [
|
| 687 |
+
212,
|
| 688 |
+
779,
|
| 689 |
+
784,
|
| 690 |
+
839
|
| 691 |
+
],
|
| 692 |
+
"page_idx": 5
|
| 693 |
+
},
|
| 694 |
+
{
|
| 695 |
+
"type": "page_number",
|
| 696 |
+
"text": "6",
|
| 697 |
+
"bbox": [
|
| 698 |
+
217,
|
| 699 |
+
114,
|
| 700 |
+
228,
|
| 701 |
+
126
|
| 702 |
+
],
|
| 703 |
+
"page_idx": 5
|
| 704 |
+
},
|
| 705 |
+
{
|
| 706 |
+
"type": "header",
|
| 707 |
+
"text": "Gundapu and Mamidi",
|
| 708 |
+
"bbox": [
|
| 709 |
+
271,
|
| 710 |
+
114,
|
| 711 |
+
421,
|
| 712 |
+
128
|
| 713 |
+
],
|
| 714 |
+
"page_idx": 5
|
| 715 |
+
},
|
| 716 |
+
{
|
| 717 |
+
"type": "text",
|
| 718 |
+
"text": "where $n$ is the length of the input data point and $d$ is the length of the word embedding dimension. In the convolution layer, fed the sequence matrix through three 1D convolutions of kernel sizes 3, 4, and 5. And each convolutions filter size is 128. The convolution layer's output is max pooled over time and concatenated to get the input datapoint representations in the max-pooling layer. The output of the max-pooling layer is passed to a fully connected network with a softmax output layer.",
|
| 719 |
+
"bbox": [
|
| 720 |
+
212,
|
| 721 |
+
146,
|
| 722 |
+
787,
|
| 723 |
+
252
|
| 724 |
+
],
|
| 725 |
+
"page_idx": 6
|
| 726 |
+
},
|
| 727 |
+
{
|
| 728 |
+
"type": "text",
|
| 729 |
+
"text": "CNN + BiLSTM: A CNN and BiLSTM architecture is an ensemble of CNN and bidirectional LSTM models with Fasttext/Glove word embeddings. In this architecture, the CNN extracts the maximum amount of features/information from the input text using convolution layers. The output of CNN becomes the input to BiLSTM, which keeps the data in chronological order in both directions.",
|
| 730 |
+
"bbox": [
|
| 731 |
+
212,
|
| 732 |
+
273,
|
| 733 |
+
787,
|
| 734 |
+
349
|
| 735 |
+
],
|
| 736 |
+
"page_idx": 6
|
| 737 |
+
},
|
| 738 |
+
{
|
| 739 |
+
"type": "text",
|
| 740 |
+
"text": "The sequence of word vectors are forwarded through a convolution of kernel size 3 with filter size 128. The output of convolution is passed through a BiLSTM. The outcome of BiLSTM is max-pooled over time and followed by one dense layer and a softmax layer.",
|
| 741 |
+
"bbox": [
|
| 742 |
+
212,
|
| 743 |
+
351,
|
| 744 |
+
787,
|
| 745 |
+
411
|
| 746 |
+
],
|
| 747 |
+
"page_idx": 6
|
| 748 |
+
},
|
| 749 |
+
{
|
| 750 |
+
"type": "text",
|
| 751 |
+
"text": "4.4 Transformer Models",
|
| 752 |
+
"text_level": 1,
|
| 753 |
+
"bbox": [
|
| 754 |
+
215,
|
| 755 |
+
434,
|
| 756 |
+
429,
|
| 757 |
+
448
|
| 758 |
+
],
|
| 759 |
+
"page_idx": 6
|
| 760 |
+
},
|
| 761 |
+
{
|
| 762 |
+
"type": "text",
|
| 763 |
+
"text": "This section explored individual and ensembling of the three transformer models BERT, ALBERT, and XLNet. These models have outperformed the other ML and DL algorithms. We implemented these models using HuggingFace<sup>5</sup> is a PyTorch transformer library. And the hyperparameters of the three models are described in Table 1.",
|
| 764 |
+
"bbox": [
|
| 765 |
+
212,
|
| 766 |
+
459,
|
| 767 |
+
787,
|
| 768 |
+
534
|
| 769 |
+
],
|
| 770 |
+
"page_idx": 6
|
| 771 |
+
},
|
| 772 |
+
{
|
| 773 |
+
"type": "table",
|
| 774 |
+
"img_path": "images/19b7755f79accc664ddb9f6729abf034336bc41dcb7dff1fd4e4f9af80548cee.jpg",
|
| 775 |
+
"table_caption": [],
|
| 776 |
+
"table_footnote": [],
|
| 777 |
+
"table_body": "<table><tr><td>Model</td><td>Learning Rate</td><td>Batch Size</td><td>Optimizer</td><td>Max Length</td><td>Type</td></tr><tr><td>BERT</td><td>2e-5</td><td>16</td><td>Adam</td><td>128</td><td>BERT-Base</td></tr><tr><td>XLNet</td><td>2e-5</td><td>16</td><td>Adam</td><td>128</td><td>XLNetLarge</td></tr><tr><td>ALBERT</td><td>2e-5</td><td>32</td><td>Adam</td><td>128</td><td>ALBERT-Xlarge</td></tr></table>",
|
| 778 |
+
"bbox": [
|
| 779 |
+
217,
|
| 780 |
+
556,
|
| 781 |
+
785,
|
| 782 |
+
646
|
| 783 |
+
],
|
| 784 |
+
"page_idx": 6
|
| 785 |
+
},
|
| 786 |
+
{
|
| 787 |
+
"type": "text",
|
| 788 |
+
"text": "Table 2: Hyperparameters of transformer models",
|
| 789 |
+
"bbox": [
|
| 790 |
+
323,
|
| 791 |
+
656,
|
| 792 |
+
678,
|
| 793 |
+
672
|
| 794 |
+
],
|
| 795 |
+
"page_idx": 6
|
| 796 |
+
},
|
| 797 |
+
{
|
| 798 |
+
"type": "text",
|
| 799 |
+
"text": "BERT: Bidirectional Encoder Representations from Transformers (henceforth, BERT) [28] is a transformer model developed to pre-train deep bidirectional representations from unseen data. This model developed by combining two robust concepts: (i) It's a deep transformer model so that it can process lengthy sentences effectively by using attention mechanism, and (ii) It's a bidirectional",
|
| 800 |
+
"bbox": [
|
| 801 |
+
212,
|
| 802 |
+
739,
|
| 803 |
+
787,
|
| 804 |
+
816
|
| 805 |
+
],
|
| 806 |
+
"page_idx": 6
|
| 807 |
+
},
|
| 808 |
+
{
|
| 809 |
+
"type": "header",
|
| 810 |
+
"text": "Fake News Detection",
|
| 811 |
+
"bbox": [
|
| 812 |
+
588,
|
| 813 |
+
114,
|
| 814 |
+
730,
|
| 815 |
+
127
|
| 816 |
+
],
|
| 817 |
+
"page_idx": 6
|
| 818 |
+
},
|
| 819 |
+
{
|
| 820 |
+
"type": "page_number",
|
| 821 |
+
"text": "7",
|
| 822 |
+
"bbox": [
|
| 823 |
+
774,
|
| 824 |
+
114,
|
| 825 |
+
784,
|
| 826 |
+
126
|
| 827 |
+
],
|
| 828 |
+
"page_idx": 6
|
| 829 |
+
},
|
| 830 |
+
{
|
| 831 |
+
"type": "page_footnote",
|
| 832 |
+
"text": "5 https://huggingface.co/transformers/",
|
| 833 |
+
"bbox": [
|
| 834 |
+
217,
|
| 835 |
+
824,
|
| 836 |
+
482,
|
| 837 |
+
840
|
| 838 |
+
],
|
| 839 |
+
"page_idx": 6
|
| 840 |
+
},
|
| 841 |
+
{
|
| 842 |
+
"type": "text",
|
| 843 |
+
"text": "network, so it takes into account the entire text passage to comprehend the meaning of each token.",
|
| 844 |
+
"bbox": [
|
| 845 |
+
212,
|
| 846 |
+
146,
|
| 847 |
+
782,
|
| 848 |
+
175
|
| 849 |
+
],
|
| 850 |
+
"page_idx": 7
|
| 851 |
+
},
|
| 852 |
+
{
|
| 853 |
+
"type": "text",
|
| 854 |
+
"text": "BERT implementation has two steps; one is pre-training and another fin-tuning. In the first step, the model is trained on unseen data over various pretraining problems using a dataset in a particular language or in increases data with multiple languages. In the second step, all the initialized parameters are fine-tuned using the labeled data from certain tasks.",
|
| 855 |
+
"bbox": [
|
| 856 |
+
212,
|
| 857 |
+
176,
|
| 858 |
+
784,
|
| 859 |
+
251
|
| 860 |
+
],
|
| 861 |
+
"page_idx": 7
|
| 862 |
+
},
|
| 863 |
+
{
|
| 864 |
+
"type": "text",
|
| 865 |
+
"text": "We fine-tuned the pre-trained BERT (Base) model for our COVID-19 fake news detection task. BERT base model contains the 12 layers of encoder blocks and 12 bidirectional self-attention heads by considering the sequence of 512 tokens and emitting the representations of a sequence of hidden vectors. We added one additional output layer on top of the BERT model to calculate the conditional probability over the output classes, either fake or real. See FIGURE 1 for the fine-tuned model of BERT.",
|
| 866 |
+
"bbox": [
|
| 867 |
+
212,
|
| 868 |
+
252,
|
| 869 |
+
785,
|
| 870 |
+
356
|
| 871 |
+
],
|
| 872 |
+
"page_idx": 7
|
| 873 |
+
},
|
| 874 |
+
{
|
| 875 |
+
"type": "image",
|
| 876 |
+
"img_path": "images/ca4c033b834382e3fb655dfaa0fa2ec7eea1fac153c069fa2712b7b0bf666452.jpg",
|
| 877 |
+
"image_caption": [
|
| 878 |
+
"Fig. 3: BERT model architecture"
|
| 879 |
+
],
|
| 880 |
+
"image_footnote": [],
|
| 881 |
+
"bbox": [
|
| 882 |
+
292,
|
| 883 |
+
395,
|
| 884 |
+
709,
|
| 885 |
+
608
|
| 886 |
+
],
|
| 887 |
+
"page_idx": 7
|
| 888 |
+
},
|
| 889 |
+
{
|
| 890 |
+
"type": "text",
|
| 891 |
+
"text": "XLNet: XLNet is an enhanced version of BERT. To understand the language context deeper, XLNet [29] uses Transformer-XL [30] as a feature engineering model, which alone is an adoption upon the native Transformer. This Transformer XL model integrates the two components Recurrence Mechanism and Relative Positional Encoding (RPE) to the Transformer used in BERT to handle the long-term dependencies for texts that are longer than the maximum allowed input length. Recurrence Mechanism will give context between two sequences at specific segments and RPE, which carries similarity information between two tokens.",
|
| 892 |
+
"bbox": [
|
| 893 |
+
212,
|
| 894 |
+
703,
|
| 895 |
+
787,
|
| 896 |
+
838
|
| 897 |
+
],
|
| 898 |
+
"page_idx": 7
|
| 899 |
+
},
|
| 900 |
+
{
|
| 901 |
+
"type": "page_number",
|
| 902 |
+
"text": "8",
|
| 903 |
+
"bbox": [
|
| 904 |
+
217,
|
| 905 |
+
114,
|
| 906 |
+
228,
|
| 907 |
+
126
|
| 908 |
+
],
|
| 909 |
+
"page_idx": 7
|
| 910 |
+
},
|
| 911 |
+
{
|
| 912 |
+
"type": "header",
|
| 913 |
+
"text": "Gundapu and Mamidi",
|
| 914 |
+
"bbox": [
|
| 915 |
+
271,
|
| 916 |
+
114,
|
| 917 |
+
421,
|
| 918 |
+
128
|
| 919 |
+
],
|
| 920 |
+
"page_idx": 7
|
| 921 |
+
},
|
| 922 |
+
{
|
| 923 |
+
"type": "text",
|
| 924 |
+
"text": "The XLNet model has been trained on a huge dataset using the permutation language modeling. This technique is one of the main differences between BERT and XLNet, and it uses permutations to generate data from the forward and backward directions at the same time. We used the pre-trained XLNet model from Hugging Face, then fine-tuned the model with a maximum length of 128 to update the pre-trained model to fit our fake news detection dataset.",
|
| 925 |
+
"bbox": [
|
| 926 |
+
212,
|
| 927 |
+
146,
|
| 928 |
+
787,
|
| 929 |
+
238
|
| 930 |
+
],
|
| 931 |
+
"page_idx": 8
|
| 932 |
+
},
|
| 933 |
+
{
|
| 934 |
+
"type": "text",
|
| 935 |
+
"text": "ALBERT: Modern language models increasing the model size and quantity of parameters when pre-training natural language representations. They often give better improvements in many downstream tasks, but in some cases, they become harder due to memory limitation and longer hours of training. To address these problems, a self-supervised learning model ALBERT (A Lite BERT) [31] often uses parameter reduction techniques to increase model speed and lower memory consumption. We used the A Lite BERT model for our misinformation detection problem, which achieves better performance than DL models.",
|
| 936 |
+
"bbox": [
|
| 937 |
+
212,
|
| 938 |
+
262,
|
| 939 |
+
789,
|
| 940 |
+
383
|
| 941 |
+
],
|
| 942 |
+
"page_idx": 8
|
| 943 |
+
},
|
| 944 |
+
{
|
| 945 |
+
"type": "text",
|
| 946 |
+
"text": "Ensemble Model: We ensembled the three transformer models BERT, ALBERT, and XLNet for better prediction. See Figure 4 for the ensemble model. Our ensemble model computes an average of all softmax values from these three transformer models after extracting the softmax probabilities from each model. This model relatively better than other models.",
|
| 947 |
+
"bbox": [
|
| 948 |
+
212,
|
| 949 |
+
407,
|
| 950 |
+
787,
|
| 951 |
+
484
|
| 952 |
+
],
|
| 953 |
+
"page_idx": 8
|
| 954 |
+
},
|
| 955 |
+
{
|
| 956 |
+
"type": "image",
|
| 957 |
+
"img_path": "images/a5ebbcdea098609620199e340bcda76ac9b613ac1d09ee9750d7b58429ff70a4.jpg",
|
| 958 |
+
"image_caption": [
|
| 959 |
+
"Fig. 4: Transformer based ensemble model architecture"
|
| 960 |
+
],
|
| 961 |
+
"image_footnote": [],
|
| 962 |
+
"bbox": [
|
| 963 |
+
290,
|
| 964 |
+
523,
|
| 965 |
+
712,
|
| 966 |
+
619
|
| 967 |
+
],
|
| 968 |
+
"page_idx": 8
|
| 969 |
+
},
|
| 970 |
+
{
|
| 971 |
+
"type": "text",
|
| 972 |
+
"text": "5 Results and Discussion",
|
| 973 |
+
"text_level": 1,
|
| 974 |
+
"bbox": [
|
| 975 |
+
215,
|
| 976 |
+
715,
|
| 977 |
+
475,
|
| 978 |
+
732
|
| 979 |
+
],
|
| 980 |
+
"page_idx": 8
|
| 981 |
+
},
|
| 982 |
+
{
|
| 983 |
+
"type": "text",
|
| 984 |
+
"text": "In this section, we compared the performance of various machine learning, deep learning, and transformer-based models using several evaluation metrics like precision, recall, weighted f1-score and accuracy. The results of the various experiments on the test set are reported in Table 3. The results clearly showing that Transformer based models are considerably better than other machine and deep learning models for our COVID-19 misinformation detection task. And while",
|
| 985 |
+
"bbox": [
|
| 986 |
+
212,
|
| 987 |
+
750,
|
| 988 |
+
787,
|
| 989 |
+
840
|
| 990 |
+
],
|
| 991 |
+
"page_idx": 8
|
| 992 |
+
},
|
| 993 |
+
{
|
| 994 |
+
"type": "header",
|
| 995 |
+
"text": "Fake News Detection",
|
| 996 |
+
"bbox": [
|
| 997 |
+
588,
|
| 998 |
+
114,
|
| 999 |
+
730,
|
| 1000 |
+
127
|
| 1001 |
+
],
|
| 1002 |
+
"page_idx": 8
|
| 1003 |
+
},
|
| 1004 |
+
{
|
| 1005 |
+
"type": "page_number",
|
| 1006 |
+
"text": "9",
|
| 1007 |
+
"bbox": [
|
| 1008 |
+
774,
|
| 1009 |
+
116,
|
| 1010 |
+
784,
|
| 1011 |
+
126
|
| 1012 |
+
],
|
| 1013 |
+
"page_idx": 8
|
| 1014 |
+
},
|
| 1015 |
+
{
|
| 1016 |
+
"type": "table",
|
| 1017 |
+
"img_path": "images/fa27fc6e444efd1e1f7ef24dd940b30ec4ad51d080235af072eaead66f191941.jpg",
|
| 1018 |
+
"table_caption": [],
|
| 1019 |
+
"table_footnote": [],
|
| 1020 |
+
"table_body": "<table><tr><td>Model Type</td><td>Model</td><td>Precision</td><td>Recall</td><td>Accuracy</td><td>F1-Score</td></tr><tr><td rowspan=\"3\">ML \nModels</td><td>SVM</td><td>0.9640</td><td>0.9640</td><td>0.964013</td><td>0.964037</td></tr><tr><td>PAC</td><td>0.9673</td><td>0.9673</td><td>0.967285</td><td>0.967289</td></tr><tr><td>MLP</td><td>0.9645</td><td>0.9645</td><td>0.964494</td><td>0.964485</td></tr><tr><td rowspan=\"4\">Deep Learning \nModels</td><td>LSTM with FastText</td><td>0.9682</td><td>0.9682</td><td>0.9682203</td><td>0.968224</td></tr><tr><td>CNN with FastText</td><td>0.9698</td><td>0.9698</td><td>0.969802</td><td>0.969819</td></tr><tr><td>LSTM + CNN</td><td>0.9762</td><td>0.9762</td><td>0.976163</td><td>0.976168</td></tr><tr><td>BiLSTM + Attention</td><td>0.9790</td><td>0.9785</td><td>0.978524</td><td>0.978504</td></tr><tr><td rowspan=\"4\">Transformer \nModels</td><td>BERT</td><td>0.9813</td><td>0.9813</td><td>0.981306</td><td>0.981308</td></tr><tr><td>ALBERT</td><td>0.9781</td><td>0.9781</td><td>0.978031</td><td>0.978037</td></tr><tr><td>XLNet</td><td>0.9787</td><td>0.9789</td><td>0.978596</td><td>0.978592</td></tr><tr><td>Ensemble Model</td><td>0.9855</td><td>0.9855</td><td>0.985512</td><td>0.985514</td></tr></table>",
|
| 1021 |
+
"bbox": [
|
| 1022 |
+
222,
|
| 1023 |
+
142,
|
| 1024 |
+
781,
|
| 1025 |
+
316
|
| 1026 |
+
],
|
| 1027 |
+
"page_idx": 9
|
| 1028 |
+
},
|
| 1029 |
+
{
|
| 1030 |
+
"type": "text",
|
| 1031 |
+
"text": "doing experiments, we observed that few models good at retrieving prominent features while other models have the best classification performance.",
|
| 1032 |
+
"bbox": [
|
| 1033 |
+
212,
|
| 1034 |
+
383,
|
| 1035 |
+
782,
|
| 1036 |
+
412
|
| 1037 |
+
],
|
| 1038 |
+
"page_idx": 9
|
| 1039 |
+
},
|
| 1040 |
+
{
|
| 1041 |
+
"type": "text",
|
| 1042 |
+
"text": "Classical machine learning models with various TF-IDF feature vectors gave the approximate baseline model results. We observe that the TF-IDF weighted average performed better than the normal TF-IDF vectors. Bi-directional LSTM with attention mechanism f1-score approximate very close to transformer models. The BERT, XLNet, and ALBERT demonstrate better performance than deep learning models. An ensemble of the transformer-based model produces the best F1 score of 0.9855 on the test set. Our transformer based model ranked 5th among 160 teams.",
|
| 1043 |
+
"bbox": [
|
| 1044 |
+
212,
|
| 1045 |
+
414,
|
| 1046 |
+
784,
|
| 1047 |
+
534
|
| 1048 |
+
],
|
| 1049 |
+
"page_idx": 9
|
| 1050 |
+
},
|
| 1051 |
+
{
|
| 1052 |
+
"type": "table",
|
| 1053 |
+
"img_path": "images/a1754104020ef4b7a2ee818018d3c01e85ca17c523c0a09dd9c4acfb5e7135e5.jpg",
|
| 1054 |
+
"table_caption": [
|
| 1055 |
+
"Table 3: Comparison of various fake news detection models on testset"
|
| 1056 |
+
],
|
| 1057 |
+
"table_footnote": [],
|
| 1058 |
+
"table_body": "<table><tr><td>Test Sample</td><td>BERT</td><td>ALBERT</td><td>XLNet</td><td>Ensemble</td></tr><tr><td>#BillGates is shocked that America's pandemic response is among the worst in the world.</td><td>✓</td><td>✗</td><td>✗</td><td>✓</td></tr><tr><td>We will all come out stronger from this</td><td>✗</td><td>✗</td><td>✓</td><td>✓</td></tr><tr><td>#COVID #pandemic. Just #StaySafeStayHealthy</td><td></td><td></td><td></td><td></td></tr></table>",
|
| 1059 |
+
"bbox": [
|
| 1060 |
+
217,
|
| 1061 |
+
561,
|
| 1062 |
+
785,
|
| 1063 |
+
636
|
| 1064 |
+
],
|
| 1065 |
+
"page_idx": 9
|
| 1066 |
+
},
|
| 1067 |
+
{
|
| 1068 |
+
"type": "text",
|
| 1069 |
+
"text": "Table 4: Misclassified samples from testset",
|
| 1070 |
+
"bbox": [
|
| 1071 |
+
346,
|
| 1072 |
+
645,
|
| 1073 |
+
655,
|
| 1074 |
+
660
|
| 1075 |
+
],
|
| 1076 |
+
"page_idx": 9
|
| 1077 |
+
},
|
| 1078 |
+
{
|
| 1079 |
+
"type": "text",
|
| 1080 |
+
"text": "In some problems, ensembling of four transformer models is very difficult, and sometimes this approach will not perform well. But if we observe the results of individual transformer models on our dataset are very close, meaning that any transformer model can be used for our fake news detection task. This is the major reason behind the ensembling of transformer models.",
|
| 1081 |
+
"bbox": [
|
| 1082 |
+
212,
|
| 1083 |
+
703,
|
| 1084 |
+
784,
|
| 1085 |
+
779
|
| 1086 |
+
],
|
| 1087 |
+
"page_idx": 9
|
| 1088 |
+
},
|
| 1089 |
+
{
|
| 1090 |
+
"type": "text",
|
| 1091 |
+
"text": "In Table 4, we showed the two misclassified test samples. The first test sample actual label is \"real\", but only BERT and ensemble models are predicted correctly, remaining two models wrongly predicted. And the second sample true label is \"fake\", but XLNet and ensemble predicted correctly, remaining two mod-",
|
| 1092 |
+
"bbox": [
|
| 1093 |
+
212,
|
| 1094 |
+
779,
|
| 1095 |
+
785,
|
| 1096 |
+
839
|
| 1097 |
+
],
|
| 1098 |
+
"page_idx": 9
|
| 1099 |
+
},
|
| 1100 |
+
{
|
| 1101 |
+
"type": "page_number",
|
| 1102 |
+
"text": "10",
|
| 1103 |
+
"bbox": [
|
| 1104 |
+
217,
|
| 1105 |
+
114,
|
| 1106 |
+
235,
|
| 1107 |
+
126
|
| 1108 |
+
],
|
| 1109 |
+
"page_idx": 9
|
| 1110 |
+
},
|
| 1111 |
+
{
|
| 1112 |
+
"type": "header",
|
| 1113 |
+
"text": "Gundapu and Mamidi",
|
| 1114 |
+
"bbox": [
|
| 1115 |
+
271,
|
| 1116 |
+
114,
|
| 1117 |
+
419,
|
| 1118 |
+
128
|
| 1119 |
+
],
|
| 1120 |
+
"page_idx": 9
|
| 1121 |
+
},
|
| 1122 |
+
{
|
| 1123 |
+
"type": "text",
|
| 1124 |
+
"text": "els wrongly predicted. However, the ensemble model is correctly predicted in both cases because we are averaging the BERT, ALBERT, and XLNet softmax probabilities. This is a principal observation to ensemble the transformer models.",
|
| 1125 |
+
"bbox": [
|
| 1126 |
+
212,
|
| 1127 |
+
146,
|
| 1128 |
+
787,
|
| 1129 |
+
191
|
| 1130 |
+
],
|
| 1131 |
+
"page_idx": 10
|
| 1132 |
+
},
|
| 1133 |
+
{
|
| 1134 |
+
"type": "text",
|
| 1135 |
+
"text": "6 Conclusion",
|
| 1136 |
+
"text_level": 1,
|
| 1137 |
+
"bbox": [
|
| 1138 |
+
215,
|
| 1139 |
+
214,
|
| 1140 |
+
359,
|
| 1141 |
+
231
|
| 1142 |
+
],
|
| 1143 |
+
"page_idx": 10
|
| 1144 |
+
},
|
| 1145 |
+
{
|
| 1146 |
+
"type": "text",
|
| 1147 |
+
"text": "In this paper, we presented various algorithms to combat the global infodemic, but transformer-based algorithms performed better than others. And we submitted these models to the Shared Task of COVID-19 fake news detection for English, ConstraintAI-2021 workshop.",
|
| 1148 |
+
"bbox": [
|
| 1149 |
+
212,
|
| 1150 |
+
246,
|
| 1151 |
+
785,
|
| 1152 |
+
306
|
| 1153 |
+
],
|
| 1154 |
+
"page_idx": 10
|
| 1155 |
+
},
|
| 1156 |
+
{
|
| 1157 |
+
"type": "text",
|
| 1158 |
+
"text": "Fake news is a progressively significant and tricky problem to solve, particularly in an unanticipated situation like the COVID-19 epidemic. Leveraging state-of-the-art classical and advanced NLP models can help address the problem of COVID-19 fake news detection and other global health emergencies. We intend to explore other contextualized embeddings like FLAIR, ELMo, etc., for a better fake news detecting system in future works.",
|
| 1159 |
+
"bbox": [
|
| 1160 |
+
212,
|
| 1161 |
+
306,
|
| 1162 |
+
785,
|
| 1163 |
+
398
|
| 1164 |
+
],
|
| 1165 |
+
"page_idx": 10
|
| 1166 |
+
},
|
| 1167 |
+
{
|
| 1168 |
+
"type": "text",
|
| 1169 |
+
"text": "References",
|
| 1170 |
+
"text_level": 1,
|
| 1171 |
+
"bbox": [
|
| 1172 |
+
215,
|
| 1173 |
+
420,
|
| 1174 |
+
323,
|
| 1175 |
+
436
|
| 1176 |
+
],
|
| 1177 |
+
"page_idx": 10
|
| 1178 |
+
},
|
| 1179 |
+
{
|
| 1180 |
+
"type": "list",
|
| 1181 |
+
"sub_type": "ref_text",
|
| 1182 |
+
"list_items": [
|
| 1183 |
+
"1. Patwa, P., Bhardwaj, M., Guptha, V., Kumari, G., Sharma, S., PYKL, S., Das, A., Ekbal A., Akhtar, S., Chakraborty.: Overview of CONSTRAINTN 2021 Shared Tasks: Detecting English COVID-19 Fake News and Hindi Hostile Posts. (2021). In: Proceedings of the First Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situation (CONSTRAINT). Springer.",
|
| 1184 |
+
"2. Datta, R., Yadav, K., Singh, A., Datta, K., Bansal, A.: The infodemics of COVID-19 amongst healthcare professionals in india. Med. J. Armed Forces India, vol. 76, no. 3, pp. 276-283, Jul. 2020.",
|
| 1185 |
+
"3. Chen, X., Sin, S.J.: 'Misinformation? What of it?' Motivations and individual differences in misinformation sharing on social media. In: ASIST (2013)",
|
| 1186 |
+
"4. Thorne, J., Vlachos, A.: Automated Fact Checking: Task formulations, methods and future directions. In: COLING (2018)",
|
| 1187 |
+
"5. Titcomb, J., Carson, J.: www.telegraph.co.uk. Fake news: What exactly is it - and how can you spot it?",
|
| 1188 |
+
"6. Kaliyar, R., Singh, N.: Misinformation Detection on Online Social Media-A Survey. (2019). 1-6. 10.1109/ICCCNT45670.2019.8944587.",
|
| 1189 |
+
"7. Zhang, X., Ghorbani, A.: An overview of online fake news: Characterization, detection, and discussion. (2020). Inf. Process. Manag., 57, 102025.",
|
| 1190 |
+
"8. Khan, J.Y., Khondaker, M.T., Iqbal, A., Afroz, S.: A Benchmark Study on Machine Learning Methods for Fake News Detection. (2019). ArXiv, abs/1905.04749.",
|
| 1191 |
+
"9. Elhadad, M., Li, K.F., Gebali, F.: A Novel Approach for Selecting Hybrid Features from Online News Textual Metadata for Fake News Detection. In: Proc. 3PGCIC, Antwerp, Belgium, 2019, pp. 914-925.",
|
| 1192 |
+
"10. Thota, A., Tilak, P., Ahluwalia, S., Lohia, N.: Fake News Detection: A Deep Learning Approach. (2018). SMU Data Science Review: Vol. 1: No. 3, Article 10.",
|
| 1193 |
+
"11. Sean, B., Doug, S., Yuxi, P: Talos Targets Disinformation with Fake News Challenge Victory. (2017). Available online: https://blog.talosintelligence.com/2017/06/talos-fake-news-challenge.html"
|
| 1194 |
+
],
|
| 1195 |
+
"bbox": [
|
| 1196 |
+
217,
|
| 1197 |
+
450,
|
| 1198 |
+
785,
|
| 1199 |
+
840
|
| 1200 |
+
],
|
| 1201 |
+
"page_idx": 10
|
| 1202 |
+
},
|
| 1203 |
+
{
|
| 1204 |
+
"type": "header",
|
| 1205 |
+
"text": "Fake News Detection",
|
| 1206 |
+
"bbox": [
|
| 1207 |
+
589,
|
| 1208 |
+
114,
|
| 1209 |
+
730,
|
| 1210 |
+
127
|
| 1211 |
+
],
|
| 1212 |
+
"page_idx": 10
|
| 1213 |
+
},
|
| 1214 |
+
{
|
| 1215 |
+
"type": "page_number",
|
| 1216 |
+
"text": "11",
|
| 1217 |
+
"bbox": [
|
| 1218 |
+
767,
|
| 1219 |
+
116,
|
| 1220 |
+
782,
|
| 1221 |
+
126
|
| 1222 |
+
],
|
| 1223 |
+
"page_idx": 10
|
| 1224 |
+
},
|
| 1225 |
+
{
|
| 1226 |
+
"type": "list",
|
| 1227 |
+
"sub_type": "ref_text",
|
| 1228 |
+
"list_items": [
|
| 1229 |
+
"12. Jwa, H., Oh, D., Park, K., Kang, J., Lim, H.: exBAKE: Automatic Fake News Detection Model Based on Bidirectional Encoder Representations from Transformers (BERT). (2019). Applied Sciences, 9, 4062.",
|
| 1230 |
+
"13. Shahi, G.K., Nandini, D.: FakeCovid - A Multilingual Cross-domain Fact Check News Dataset for COVID-19. (2020). ArXiv, abs/2006.11343.",
|
| 1231 |
+
"14. Zhou, X., Mulay, A., Ferrara, E., Zafarani, R.: ReCOVery: A Multimodal Repository for COVID-19 News Credibility Research. (2020). In: Proceedings of the 29th ACM International Conference on Information & Knowledge Management.",
|
| 1232 |
+
"15. Cui, L., Lee, D.: CoAID: COVID-19 Healthcare Misinformation Dataset. (2020). ArXiv, abs/2006.00885.",
|
| 1233 |
+
"16. Memon, S.A., Carley, K.M.: Characterizing COVID-19 Misinformation Communities Using a Novel Twitter Dataset. (2020). ArXiv, abs/2008.00791.",
|
| 1234 |
+
"17. Li, Y., Jiang, B., Shu, K., Liu, H.: MM-COVID: A Multilingual and Multi-modal Data Repository for Combating COVID-19 Disinformation. (2020). ArXiv, abs/2011.04088.",
|
| 1235 |
+
"18. Al-Rakhami, M.S., Al-Amri, A.M.: Lies Kill, Facts Save: Detecting COVID-19 Misinformation in Twitter. (2020). IEEE Access, 8, 155961-155970.",
|
| 1236 |
+
"19. Vijjali, R., Potluri, P., Kumar, S., Teki, S.: Two Stage Transformer Model for COVID-19 Fake News Detection and Fact Checking. (2020).",
|
| 1237 |
+
"20. Hossain, T., RobertL.Logan, I., Ugarte, A., Matsubara, Y., Young, S.,Singh, S.: COVIDLies: Detecting COVID-19 Misinformation on Social Media. (2020). NLP4COVID@EMNLP.",
|
| 1238 |
+
"21. Patwa, P., Sharma, S., Pykl, S., Guptha, V., Kumari, G., Akhtar, M.S., Ekbal, A., Das, A., Chakraborty, T. (2020). Fighting an Infodemic: COVID-19 Fake News Dataset. ArXiv, abs/2011.03327.",
|
| 1239 |
+
"22. Elhadad, M.K., Li, K., Gebali, F.: Detecting Misleading Information on COVID-19. (2020). IEEE Access, 8, 165201-165215.",
|
| 1240 |
+
"23. Pennington, J., Socher, R., Manning, C.D.: Glove: Global Vectors for Word Representation. (2014). In: EMNLP.",
|
| 1241 |
+
"24. Hochreiter, S., Schmidhuber, J.: Long Short-Term Memory. (1997). Neural Computation, 9, 1735-1780.",
|
| 1242 |
+
"25. Bojanowski, P., Grave, E., Joulin, A., Mikolov, T.: Enriching Word Vectors with Subword Information. (2017). Transactions of the Association for Computational Linguistics, 5, 135-146.",
|
| 1243 |
+
"26. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is All you Need. (2017). NIPS.",
|
| 1244 |
+
"27. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. (2015). Nature, 521(7553), pp.436-444.",
|
| 1245 |
+
"28. Devlin, J., Chang, M., Lee, K., Toutanova, K.: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. (2019). NAACL-HLT.",
|
| 1246 |
+
"29. Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R., Le, Q.V.: XL-Net: Generalized Autoregressive Pretraining for Language Understanding. (2019). NeurIPS.",
|
| 1247 |
+
"30. Dai, Z., Yang, Z., Yang, Y., Carbonell, J., Le, Q.V., Salakhutdinov, R.: Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. (2019). ACL.",
|
| 1248 |
+
"31. Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., Soricut, R.: ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. (2020). ArXiv, abs/1909.11942."
|
| 1249 |
+
],
|
| 1250 |
+
"bbox": [
|
| 1251 |
+
215,
|
| 1252 |
+
147,
|
| 1253 |
+
784,
|
| 1254 |
+
825
|
| 1255 |
+
],
|
| 1256 |
+
"page_idx": 11
|
| 1257 |
+
},
|
| 1258 |
+
{
|
| 1259 |
+
"type": "page_number",
|
| 1260 |
+
"text": "12",
|
| 1261 |
+
"bbox": [
|
| 1262 |
+
217,
|
| 1263 |
+
114,
|
| 1264 |
+
235,
|
| 1265 |
+
126
|
| 1266 |
+
],
|
| 1267 |
+
"page_idx": 11
|
| 1268 |
+
},
|
| 1269 |
+
{
|
| 1270 |
+
"type": "header",
|
| 1271 |
+
"text": "Gundapu and Mamidi",
|
| 1272 |
+
"bbox": [
|
| 1273 |
+
271,
|
| 1274 |
+
114,
|
| 1275 |
+
419,
|
| 1276 |
+
128
|
| 1277 |
+
],
|
| 1278 |
+
"page_idx": 11
|
| 1279 |
+
}
|
| 1280 |
+
]
|
data/2021/2101_00xxx/2101.00180/e6c1938e-fd7a-49b4-ac0c-6863dca02ce5_model.json
CHANGED
|
@@ -1,3 +1,1720 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
[
|
| 2 |
+
[
|
| 3 |
+
{
|
| 4 |
+
"type": "aside_text",
|
| 5 |
+
"bbox": [
|
| 6 |
+
0.023,
|
| 7 |
+
0.273,
|
| 8 |
+
0.058,
|
| 9 |
+
0.709
|
| 10 |
+
],
|
| 11 |
+
"angle": 270,
|
| 12 |
+
"content": "arXiv:2101.00180v3 [cs.CL] 21 Jan 2021"
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "title",
|
| 16 |
+
"bbox": [
|
| 17 |
+
0.231,
|
| 18 |
+
0.141,
|
| 19 |
+
0.776,
|
| 20 |
+
0.187
|
| 21 |
+
],
|
| 22 |
+
"angle": 0,
|
| 23 |
+
"content": "Transformer based Automatic COVID-19 Fake News Detection System"
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"bbox": [
|
| 28 |
+
0.366,
|
| 29 |
+
0.213,
|
| 30 |
+
0.637,
|
| 31 |
+
0.229
|
| 32 |
+
],
|
| 33 |
+
"angle": 0,
|
| 34 |
+
"content": "Sunil Gundapu and Radhika Mamidi"
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"bbox": [
|
| 39 |
+
0.292,
|
| 40 |
+
0.241,
|
| 41 |
+
0.713,
|
| 42 |
+
0.269
|
| 43 |
+
],
|
| 44 |
+
"angle": 0,
|
| 45 |
+
"content": "International Institute of Information Technology, Hyderabad sunil.g@research.iit.ac.in, radhika.mamidi@iit.ac.in"
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"bbox": [
|
| 50 |
+
0.262,
|
| 51 |
+
0.308,
|
| 52 |
+
0.744,
|
| 53 |
+
0.502
|
| 54 |
+
],
|
| 55 |
+
"angle": 0,
|
| 56 |
+
"content": "Abstract. Recent rapid technological advancements in online social networks such as Twitter have led to a great incline in spreading false information and fake news. Misinformation is especially prevalent in the ongoing coronavirus disease (COVID-19) pandemic, leading to individuals accepting bogus and potentially deleterious claims and articles. Quick detection of fake news can reduce the spread of panic and confusion among the public. For our analysis in this paper, we report a methodology to analyze the reliability of information shared on social media pertaining to the COVID-19 pandemic. Our best approach is based on an ensemble of three transformer models (BERT, ALBERT, and XLNET) to detecting fake news. This model was trained and evaluated in the context of the ConstraintAI 2021 shared task \"COVID19 Fake News Detection in English\" [1]. Our system obtained 0.9855 f1-score on testset and ranked 5th among 160 teams."
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"bbox": [
|
| 61 |
+
0.263,
|
| 62 |
+
0.515,
|
| 63 |
+
0.717,
|
| 64 |
+
0.53
|
| 65 |
+
],
|
| 66 |
+
"angle": 0,
|
| 67 |
+
"content": "Keywords: pandemic-19, fake news, deep learning, transformer models"
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "title",
|
| 71 |
+
"bbox": [
|
| 72 |
+
0.217,
|
| 73 |
+
0.557,
|
| 74 |
+
0.377,
|
| 75 |
+
0.573
|
| 76 |
+
],
|
| 77 |
+
"angle": 0,
|
| 78 |
+
"content": "1 Introduction"
|
| 79 |
+
},
|
| 80 |
+
{
|
| 81 |
+
"type": "text",
|
| 82 |
+
"bbox": [
|
| 83 |
+
0.214,
|
| 84 |
+
0.589,
|
| 85 |
+
0.788,
|
| 86 |
+
0.74
|
| 87 |
+
],
|
| 88 |
+
"angle": 0,
|
| 89 |
+
"content": "The COVID-19 pandemic is considered the global public health crisis of the whole world and the biggest problem people faced after World War II. COVID-19, a contagious disease caused by a coronavirus, has caused more than 75 million confirmed cases and 1.7 million deaths across the world till 2020 December<sup>1</sup>. Unfortunately, the misinformation about COVID-19 has encouraged the growing of the disease and chaos among people. During the Munich Security Council held on February 15, 2020, World Health Organization (WHO) Director-General, Tedros Adhanom Ghebreyesus [2] stated that the world was in a war to fight not only a pandemic, but also an infodemic. So we should address the challenge of fake news detection to stop the spreading of COVID-19 misinformation."
|
| 90 |
+
},
|
| 91 |
+
{
|
| 92 |
+
"type": "text",
|
| 93 |
+
"bbox": [
|
| 94 |
+
0.214,
|
| 95 |
+
0.741,
|
| 96 |
+
0.788,
|
| 97 |
+
0.817
|
| 98 |
+
],
|
| 99 |
+
"angle": 0,
|
| 100 |
+
"content": "Since the global pandemic impacts the people, there is a broader public finding information about the COVID-19, whose safety is intimidated by adversarial agents invested in spreading fake news for economic and political reasons. Besides, due to medical and public health issues, it is also hard to be totally valid and factual, leading to differences that worsen with fake news. This difficulty"
|
| 101 |
+
},
|
| 102 |
+
{
|
| 103 |
+
"type": "page_footnote",
|
| 104 |
+
"bbox": [
|
| 105 |
+
0.218,
|
| 106 |
+
0.824,
|
| 107 |
+
0.537,
|
| 108 |
+
0.841
|
| 109 |
+
],
|
| 110 |
+
"angle": 0,
|
| 111 |
+
"content": "<sup>1</sup> https://www.worldometers.info/coronavirus/"
|
| 112 |
+
}
|
| 113 |
+
],
|
| 114 |
+
[
|
| 115 |
+
{
|
| 116 |
+
"type": "page_number",
|
| 117 |
+
"bbox": [
|
| 118 |
+
0.218,
|
| 119 |
+
0.116,
|
| 120 |
+
0.23,
|
| 121 |
+
0.127
|
| 122 |
+
],
|
| 123 |
+
"angle": 0,
|
| 124 |
+
"content": "2"
|
| 125 |
+
},
|
| 126 |
+
{
|
| 127 |
+
"type": "header",
|
| 128 |
+
"bbox": [
|
| 129 |
+
0.272,
|
| 130 |
+
0.115,
|
| 131 |
+
0.422,
|
| 132 |
+
0.129
|
| 133 |
+
],
|
| 134 |
+
"angle": 0,
|
| 135 |
+
"content": "Gundapu and Mamidi"
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"type": "text",
|
| 139 |
+
"bbox": [
|
| 140 |
+
0.214,
|
| 141 |
+
0.147,
|
| 142 |
+
0.788,
|
| 143 |
+
0.253
|
| 144 |
+
],
|
| 145 |
+
"angle": 0,
|
| 146 |
+
"content": "is compounded by the quick advancement of knowledge about the disease. As researchers gain more knowledge about the virus, claims that looked right may turn out to be false, and vice versa. Detecting this spread of COVID-19 associated fake news, thus, has become a pivotal problem, gaining notable attention from government and global health organizations (WHO, 2020), online social networks (TechCrunch, 2020), and news organizations (BBC, 2020; CNN, 2020; New York Times, 2020)."
|
| 147 |
+
},
|
| 148 |
+
{
|
| 149 |
+
"type": "text",
|
| 150 |
+
"bbox": [
|
| 151 |
+
0.214,
|
| 152 |
+
0.253,
|
| 153 |
+
0.788,
|
| 154 |
+
0.388
|
| 155 |
+
],
|
| 156 |
+
"angle": 0,
|
| 157 |
+
"content": "In response to the present disinformation, this paper looks at developing an efficient fake news detection architecture with respect to COVID-19. Initially, we started with developing machine learning (ML) algorithms with Term Frequency and Inverse Document Frequency (TF-IDF) feature vectors to detect misinformation on the provided dataset. These supervised TF-IDF methods are still relevant for many classification tasks and performed pretty well for fake news detection. We developed an effective ensemble model integrated with three transformer models for detecting fake news on the social media platforms. This resulted in higher accuracy and a more generalized model."
|
| 158 |
+
},
|
| 159 |
+
{
|
| 160 |
+
"type": "text",
|
| 161 |
+
"bbox": [
|
| 162 |
+
0.214,
|
| 163 |
+
0.389,
|
| 164 |
+
0.788,
|
| 165 |
+
0.48
|
| 166 |
+
],
|
| 167 |
+
"angle": 0,
|
| 168 |
+
"content": "The rest of this paper is organized as follows, Section II presents some prior works related to fake news, and its spread, on social media platforms. In Section III, we describe the dataset provided in the Constraint AI-2021 shared task. Section IV presents implemented models and framework for misinformation detection. Section V provides the discussions on the results. Finally we conclude this paper in Section VI."
|
| 169 |
+
},
|
| 170 |
+
{
|
| 171 |
+
"type": "title",
|
| 172 |
+
"bbox": [
|
| 173 |
+
0.216,
|
| 174 |
+
0.501,
|
| 175 |
+
0.388,
|
| 176 |
+
0.516
|
| 177 |
+
],
|
| 178 |
+
"angle": 0,
|
| 179 |
+
"content": "2 Related Work"
|
| 180 |
+
},
|
| 181 |
+
{
|
| 182 |
+
"type": "text",
|
| 183 |
+
"bbox": [
|
| 184 |
+
0.214,
|
| 185 |
+
0.53,
|
| 186 |
+
0.788,
|
| 187 |
+
0.635
|
| 188 |
+
],
|
| 189 |
+
"angle": 0,
|
| 190 |
+
"content": "Fake News Detection: Fake news can be defined as inaccurate and misleading information that is growing knowingly or unknowingly [3]. Recognizing the spread of false information such as rumors, fake news, propaganda, hoaxes, spear phishing, and conspiracy theories is an essential task for natural language processing [4]. Gartner's [5] research studies explained that most people in advanced economies would believe more fake information than truthful information by 2022."
|
| 191 |
+
},
|
| 192 |
+
{
|
| 193 |
+
"type": "text",
|
| 194 |
+
"bbox": [
|
| 195 |
+
0.214,
|
| 196 |
+
0.636,
|
| 197 |
+
0.788,
|
| 198 |
+
0.787
|
| 199 |
+
],
|
| 200 |
+
"angle": 0,
|
| 201 |
+
"content": "To date, so many automated misinformation detection architectures have been developed. Rohit et al. [6] provided an extensive survey to detect fake news on various online social networks. Ghorbani et al. [7] presented an inclusive overview of the recent studies related to misinformation. Furthermore, they described the impact of misleading information, shown state-of-the-art fake news detection systems, and explored the disinformation detection datasets. The majority of the fake news detection models developed using supervised machine learning algorithms to classify the data as misleading or not [8]. This supervised classification is concluded by comparing the user input text with some already created corpora containing genuine and misleading information [9]."
|
| 202 |
+
},
|
| 203 |
+
{
|
| 204 |
+
"type": "text",
|
| 205 |
+
"bbox": [
|
| 206 |
+
0.214,
|
| 207 |
+
0.787,
|
| 208 |
+
0.788,
|
| 209 |
+
0.818
|
| 210 |
+
],
|
| 211 |
+
"angle": 0,
|
| 212 |
+
"content": "Aswini et al. [10] proposed a deep learning architecture with various word embeddings for Fake News Challenge (FCN-1) dataset<sup>2</sup>. They developed the"
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"type": "page_footnote",
|
| 216 |
+
"bbox": [
|
| 217 |
+
0.218,
|
| 218 |
+
0.824,
|
| 219 |
+
0.473,
|
| 220 |
+
0.841
|
| 221 |
+
],
|
| 222 |
+
"angle": 0,
|
| 223 |
+
"content": "2 http://www.fakenewschallenge.org/"
|
| 224 |
+
}
|
| 225 |
+
],
|
| 226 |
+
[
|
| 227 |
+
{
|
| 228 |
+
"type": "header",
|
| 229 |
+
"bbox": [
|
| 230 |
+
0.59,
|
| 231 |
+
0.115,
|
| 232 |
+
0.732,
|
| 233 |
+
0.128
|
| 234 |
+
],
|
| 235 |
+
"angle": 0,
|
| 236 |
+
"content": "Fake News Detection"
|
| 237 |
+
},
|
| 238 |
+
{
|
| 239 |
+
"type": "page_number",
|
| 240 |
+
"bbox": [
|
| 241 |
+
0.775,
|
| 242 |
+
0.117,
|
| 243 |
+
0.785,
|
| 244 |
+
0.127
|
| 245 |
+
],
|
| 246 |
+
"angle": 0,
|
| 247 |
+
"content": "3"
|
| 248 |
+
},
|
| 249 |
+
{
|
| 250 |
+
"type": "text",
|
| 251 |
+
"bbox": [
|
| 252 |
+
0.214,
|
| 253 |
+
0.147,
|
| 254 |
+
0.788,
|
| 255 |
+
0.284
|
| 256 |
+
],
|
| 257 |
+
"angle": 0,
|
| 258 |
+
"content": "architecture to accurately predict the stance between a given pair of news headlines and the corresponding article/body. On the same FCN-1 dataset, Sean et al. [11] developed an average weighted model of TalosCNN and TalosTree called TalosComb. TalosCNN is a convolutional neural network with pre-trained word2vec embeddings, and TalosTree is a gradient-boosted decision tree model with SVD, word count, TF-IDF. By analyzing the relationship between the news headline and the corresponding article, Heejung et al. [12] designed the Bidirectional Encoder Representations from Transformers model (BERT) to detect misleading news articles."
|
| 259 |
+
},
|
| 260 |
+
{
|
| 261 |
+
"type": "text",
|
| 262 |
+
"bbox": [
|
| 263 |
+
0.214,
|
| 264 |
+
0.31,
|
| 265 |
+
0.788,
|
| 266 |
+
0.475
|
| 267 |
+
],
|
| 268 |
+
"angle": 0,
|
| 269 |
+
"content": "COVID-19: In the case of COVID-19 fake news, a large number of misleading contents remain online on social media platforms. NLP researchers have been working on developing algorithms for the detection of online COVID-19 related disinformation. To develop any algorithm, we require a corpus. So members of the NLP community created the various fake news datasets: FakeCovid [13], ReCOVery [14], CoAID [15], and CMU-MisCOVID19 [16]. Yichuan Li et al. [17] developed multi-dimensional and multilingual MM-COVID corpora, which covers six languages. Mabrook et al. [18] created a large Twitter dataset related to COVID-19 misinformation. And authors developed an ensemble-stacking model with six machine learning algorithms on the created dataset for detecting misinformation."
|
| 270 |
+
},
|
| 271 |
+
{
|
| 272 |
+
"type": "text",
|
| 273 |
+
"bbox": [
|
| 274 |
+
0.214,
|
| 275 |
+
0.477,
|
| 276 |
+
0.788,
|
| 277 |
+
0.629
|
| 278 |
+
],
|
| 279 |
+
"angle": 0,
|
| 280 |
+
"content": "Elhadad et al. [22] constructed a voting ensemble machine learning classifier for fake news detection that uses seven feature extraction techniques and ten machine learning models. Tamanna et al. [20] used the COVIDLIES dataset to detect the misinformation by retrieving the misconceptions relevant to the Twitter posts. For COVID-19 fake news detection and fact-checking, Rutvik et al. [19] proposed a two-stage transformer model. The first model retrieves the most relevant facts about COVID-19 by using a novel fact-checking algorithm, and the second model, by computing the textual entailment, verifies the level of truth. Adapting all these classical and hybrid related work techniques, we developed a COVID-19 fake news detection system in this paper."
|
| 281 |
+
},
|
| 282 |
+
{
|
| 283 |
+
"type": "title",
|
| 284 |
+
"bbox": [
|
| 285 |
+
0.215,
|
| 286 |
+
0.656,
|
| 287 |
+
0.446,
|
| 288 |
+
0.673
|
| 289 |
+
],
|
| 290 |
+
"angle": 0,
|
| 291 |
+
"content": "3 Dataset Description"
|
| 292 |
+
},
|
| 293 |
+
{
|
| 294 |
+
"type": "text",
|
| 295 |
+
"bbox": [
|
| 296 |
+
0.214,
|
| 297 |
+
0.692,
|
| 298 |
+
0.788,
|
| 299 |
+
0.813
|
| 300 |
+
],
|
| 301 |
+
"angle": 0,
|
| 302 |
+
"content": "The ConstraintAI\\(^{21}\\) shared task organizers developed a COVID-19 fake news detection in English dataset [21] containing 10,700 data points collected from various online social networks such as Twitter, Facebook, and Instagram, etc. From the total dataset, 6,420 data points are reserved for training, 2,140 data points are used for hyperparameter tuning as a part of the validation phase, and the remaining 2,140 social media posts are kept aside for testing. Each dataset except the test set contains social media data points and their corresponding labels, either real or fake."
|
| 303 |
+
},
|
| 304 |
+
{
|
| 305 |
+
"type": "page_footnote",
|
| 306 |
+
"bbox": [
|
| 307 |
+
0.218,
|
| 308 |
+
0.824,
|
| 309 |
+
0.543,
|
| 310 |
+
0.841
|
| 311 |
+
],
|
| 312 |
+
"angle": 0,
|
| 313 |
+
"content": "3 https://constraint-shared-task-2021.github.io/"
|
| 314 |
+
}
|
| 315 |
+
],
|
| 316 |
+
[
|
| 317 |
+
{
|
| 318 |
+
"type": "page_number",
|
| 319 |
+
"bbox": [
|
| 320 |
+
0.218,
|
| 321 |
+
0.116,
|
| 322 |
+
0.23,
|
| 323 |
+
0.127
|
| 324 |
+
],
|
| 325 |
+
"angle": 0,
|
| 326 |
+
"content": "4"
|
| 327 |
+
},
|
| 328 |
+
{
|
| 329 |
+
"type": "header",
|
| 330 |
+
"bbox": [
|
| 331 |
+
0.272,
|
| 332 |
+
0.115,
|
| 333 |
+
0.422,
|
| 334 |
+
0.129
|
| 335 |
+
],
|
| 336 |
+
"angle": 0,
|
| 337 |
+
"content": "Gundapu and Mamidi"
|
| 338 |
+
},
|
| 339 |
+
{
|
| 340 |
+
"type": "table",
|
| 341 |
+
"bbox": [
|
| 342 |
+
0.229,
|
| 343 |
+
0.156,
|
| 344 |
+
0.404,
|
| 345 |
+
0.217
|
| 346 |
+
],
|
| 347 |
+
"angle": 0,
|
| 348 |
+
"content": "<table><tr><td>Corpus</td><td>Real</td><td>Fake</td></tr><tr><td>Train</td><td>3360</td><td>3060</td></tr><tr><td>Valid</td><td>1120</td><td>1020</td></tr><tr><td>Test</td><td>1120</td><td>1020</td></tr></table>"
|
| 349 |
+
},
|
| 350 |
+
{
|
| 351 |
+
"type": "table_footnote",
|
| 352 |
+
"bbox": [
|
| 353 |
+
0.241,
|
| 354 |
+
0.22,
|
| 355 |
+
0.384,
|
| 356 |
+
0.233
|
| 357 |
+
],
|
| 358 |
+
"angle": 0,
|
| 359 |
+
"content": "(a) Dataset Statistics"
|
| 360 |
+
},
|
| 361 |
+
{
|
| 362 |
+
"type": "table",
|
| 363 |
+
"bbox": [
|
| 364 |
+
0.432,
|
| 365 |
+
0.156,
|
| 366 |
+
0.782,
|
| 367 |
+
0.217
|
| 368 |
+
],
|
| 369 |
+
"angle": 0,
|
| 370 |
+
"content": "<table><tr><td>Tweet</td><td>Label</td></tr><tr><td>CDC Recommends Mothers Stop Breastfeeding To Boost Vaccine Efficacy</td><td>fake</td></tr><tr><td>1000 COVID-19 testing labs in India: ICMR</td><td>real</td></tr></table>"
|
| 371 |
+
},
|
| 372 |
+
{
|
| 373 |
+
"type": "table_footnote",
|
| 374 |
+
"bbox": [
|
| 375 |
+
0.526,
|
| 376 |
+
0.22,
|
| 377 |
+
0.682,
|
| 378 |
+
0.234
|
| 379 |
+
],
|
| 380 |
+
"angle": 0,
|
| 381 |
+
"content": "(b) Label-wise example"
|
| 382 |
+
},
|
| 383 |
+
{
|
| 384 |
+
"type": "table_caption",
|
| 385 |
+
"bbox": [
|
| 386 |
+
0.36,
|
| 387 |
+
0.234,
|
| 388 |
+
0.644,
|
| 389 |
+
0.247
|
| 390 |
+
],
|
| 391 |
+
"angle": 0,
|
| 392 |
+
"content": "Table 1: Fake news dataset information"
|
| 393 |
+
},
|
| 394 |
+
{
|
| 395 |
+
"type": "text",
|
| 396 |
+
"bbox": [
|
| 397 |
+
0.214,
|
| 398 |
+
0.288,
|
| 399 |
+
0.788,
|
| 400 |
+
0.471
|
| 401 |
+
],
|
| 402 |
+
"angle": 0,
|
| 403 |
+
"content": "Table 1 shows the corpus size and label distribution, and if we observe, the labels in each dataset are all roughly balanced. Table 2 shows some examples from the COVID-19 fake news detection in the English dataset. We illustrate the most occurring word cloud of the real and fake data points after removing the stop words in Figures 1(a) and 1(b). In Figure 1(a), we can see unique words in real-labeled data points which don't often occur in Figure 1(b), like \"covid19\", \"discharged\", \"confirmed\", \"testing\", \"indiafightscorona\", and \"indiawin\", etc.; meanwhile, from Figure 1(b), we can find unique words frequently appearing in the fake articles, which include \"coronavirus\", \"kill\", \"muslim\", \"hydroxychloroquine\", \"china\", and \"facebook post\", but don't frequently appear in the true labeled data points. These frequent textual words can give important information to differentiate the true data points from fake ones."
|
| 404 |
+
},
|
| 405 |
+
{
|
| 406 |
+
"type": "image",
|
| 407 |
+
"bbox": [
|
| 408 |
+
0.248,
|
| 409 |
+
0.505,
|
| 410 |
+
0.486,
|
| 411 |
+
0.688
|
| 412 |
+
],
|
| 413 |
+
"angle": 0,
|
| 414 |
+
"content": null
|
| 415 |
+
},
|
| 416 |
+
{
|
| 417 |
+
"type": "image_caption",
|
| 418 |
+
"bbox": [
|
| 419 |
+
0.291,
|
| 420 |
+
0.691,
|
| 421 |
+
0.446,
|
| 422 |
+
0.705
|
| 423 |
+
],
|
| 424 |
+
"angle": 0,
|
| 425 |
+
"content": "(a) Positive word cloud"
|
| 426 |
+
},
|
| 427 |
+
{
|
| 428 |
+
"type": "image",
|
| 429 |
+
"bbox": [
|
| 430 |
+
0.514,
|
| 431 |
+
0.505,
|
| 432 |
+
0.75,
|
| 433 |
+
0.688
|
| 434 |
+
],
|
| 435 |
+
"angle": 0,
|
| 436 |
+
"content": null
|
| 437 |
+
},
|
| 438 |
+
{
|
| 439 |
+
"type": "image_caption",
|
| 440 |
+
"bbox": [
|
| 441 |
+
0.554,
|
| 442 |
+
0.691,
|
| 443 |
+
0.716,
|
| 444 |
+
0.706
|
| 445 |
+
],
|
| 446 |
+
"angle": 0,
|
| 447 |
+
"content": "(b) Negative word cloud"
|
| 448 |
+
},
|
| 449 |
+
{
|
| 450 |
+
"type": "image_caption",
|
| 451 |
+
"bbox": [
|
| 452 |
+
0.351,
|
| 453 |
+
0.717,
|
| 454 |
+
0.651,
|
| 455 |
+
0.733
|
| 456 |
+
],
|
| 457 |
+
"angle": 0,
|
| 458 |
+
"content": "Fig. 1: Illustration of frequent word cloud"
|
| 459 |
+
},
|
| 460 |
+
{
|
| 461 |
+
"type": "title",
|
| 462 |
+
"bbox": [
|
| 463 |
+
0.216,
|
| 464 |
+
0.782,
|
| 465 |
+
0.381,
|
| 466 |
+
0.8
|
| 467 |
+
],
|
| 468 |
+
"angle": 0,
|
| 469 |
+
"content": "4 Methodology"
|
| 470 |
+
},
|
| 471 |
+
{
|
| 472 |
+
"type": "text",
|
| 473 |
+
"bbox": [
|
| 474 |
+
0.214,
|
| 475 |
+
0.81,
|
| 476 |
+
0.787,
|
| 477 |
+
0.84
|
| 478 |
+
],
|
| 479 |
+
"angle": 0,
|
| 480 |
+
"content": "In this part, we present our transformer based ensemble model that is trained and tuned on the datasets which reported in the previous section. We compare our"
|
| 481 |
+
}
|
| 482 |
+
],
|
| 483 |
+
[
|
| 484 |
+
{
|
| 485 |
+
"type": "header",
|
| 486 |
+
"bbox": [
|
| 487 |
+
0.589,
|
| 488 |
+
0.115,
|
| 489 |
+
0.732,
|
| 490 |
+
0.128
|
| 491 |
+
],
|
| 492 |
+
"angle": 0,
|
| 493 |
+
"content": "Fake News Detection"
|
| 494 |
+
},
|
| 495 |
+
{
|
| 496 |
+
"type": "page_number",
|
| 497 |
+
"bbox": [
|
| 498 |
+
0.775,
|
| 499 |
+
0.117,
|
| 500 |
+
0.785,
|
| 501 |
+
0.127
|
| 502 |
+
],
|
| 503 |
+
"angle": 0,
|
| 504 |
+
"content": "5"
|
| 505 |
+
},
|
| 506 |
+
{
|
| 507 |
+
"type": "text",
|
| 508 |
+
"bbox": [
|
| 509 |
+
0.214,
|
| 510 |
+
0.147,
|
| 511 |
+
0.784,
|
| 512 |
+
0.193
|
| 513 |
+
],
|
| 514 |
+
"angle": 0,
|
| 515 |
+
"content": "approach with various machine learning (ML) and deep learning (DL) models with different word embeddings. The full code of system architecture can be found at GitHub<sup>4</sup>."
|
| 516 |
+
},
|
| 517 |
+
{
|
| 518 |
+
"type": "title",
|
| 519 |
+
"bbox": [
|
| 520 |
+
0.215,
|
| 521 |
+
0.211,
|
| 522 |
+
0.424,
|
| 523 |
+
0.227
|
| 524 |
+
],
|
| 525 |
+
"angle": 0,
|
| 526 |
+
"content": "4.1 Data Preprocessing"
|
| 527 |
+
},
|
| 528 |
+
{
|
| 529 |
+
"type": "text",
|
| 530 |
+
"bbox": [
|
| 531 |
+
0.214,
|
| 532 |
+
0.233,
|
| 533 |
+
0.784,
|
| 534 |
+
0.28
|
| 535 |
+
],
|
| 536 |
+
"angle": 0,
|
| 537 |
+
"content": "The main aim of this part is to use the NLP techniques to preprocess the input tweet data and prepare for the next step to extract the proper features. In Figure 2, we shown the detailed data preprocessing pipeline with examples."
|
| 538 |
+
},
|
| 539 |
+
{
|
| 540 |
+
"type": "image",
|
| 541 |
+
"bbox": [
|
| 542 |
+
0.223,
|
| 543 |
+
0.311,
|
| 544 |
+
0.784,
|
| 545 |
+
0.422
|
| 546 |
+
],
|
| 547 |
+
"angle": 0,
|
| 548 |
+
"content": null
|
| 549 |
+
},
|
| 550 |
+
{
|
| 551 |
+
"type": "image_caption",
|
| 552 |
+
"bbox": [
|
| 553 |
+
0.375,
|
| 554 |
+
0.432,
|
| 555 |
+
0.626,
|
| 556 |
+
0.447
|
| 557 |
+
],
|
| 558 |
+
"angle": 0,
|
| 559 |
+
"content": "Fig. 2: Data preprocessing pipeline"
|
| 560 |
+
},
|
| 561 |
+
{
|
| 562 |
+
"type": "text",
|
| 563 |
+
"bbox": [
|
| 564 |
+
0.214,
|
| 565 |
+
0.499,
|
| 566 |
+
0.784,
|
| 567 |
+
0.56
|
| 568 |
+
],
|
| 569 |
+
"angle": 0,
|
| 570 |
+
"content": "In the preprocessing step, we will forward the tokenized tweet through the pipeline to eliminate the noise in the fake news dataset by remove or normalize the unnecessary tokens. The preprocessing pipeline includes the following subparts:"
|
| 571 |
+
},
|
| 572 |
+
{
|
| 573 |
+
"type": "text",
|
| 574 |
+
"bbox": [
|
| 575 |
+
0.222,
|
| 576 |
+
0.566,
|
| 577 |
+
0.784,
|
| 578 |
+
0.598
|
| 579 |
+
],
|
| 580 |
+
"angle": 0,
|
| 581 |
+
"content": "1. Emoticon Conversion: In this step, we converted the each emoticon in the tweet to text. Example: \\(\\rightarrow\\) Face with medical mask emoji"
|
| 582 |
+
},
|
| 583 |
+
{
|
| 584 |
+
"type": "text",
|
| 585 |
+
"bbox": [
|
| 586 |
+
0.222,
|
| 587 |
+
0.599,
|
| 588 |
+
0.784,
|
| 589 |
+
0.643
|
| 590 |
+
],
|
| 591 |
+
"angle": 0,
|
| 592 |
+
"content": "2. Handling of Hashtags: We identified the hashtag tokens by seeing pound (#) sign and splitted these based on digits or capital letters. Example: #IndiaFightsCorona → IndiaFightsCorona"
|
| 593 |
+
},
|
| 594 |
+
{
|
| 595 |
+
"type": "text",
|
| 596 |
+
"bbox": [
|
| 597 |
+
0.222,
|
| 598 |
+
0.643,
|
| 599 |
+
0.784,
|
| 600 |
+
0.673
|
| 601 |
+
],
|
| 602 |
+
"angle": 0,
|
| 603 |
+
"content": "3. Stemming: We removed the inflectional morphemes like \"ed\", \"est\", \"s\", and \"ing\" from their token stem. Ex: confirmed \\(\\rightarrow\\) \"confirm\" + \"-ed\""
|
| 604 |
+
},
|
| 605 |
+
{
|
| 606 |
+
"type": "text",
|
| 607 |
+
"bbox": [
|
| 608 |
+
0.222,
|
| 609 |
+
0.673,
|
| 610 |
+
0.784,
|
| 611 |
+
0.703
|
| 612 |
+
],
|
| 613 |
+
"angle": 0,
|
| 614 |
+
"content": "4. Text cleaning: To remove the irrelevant data we used this step. Removed punctuation marks, digits and, non-ASCII glyphs from the tweet."
|
| 615 |
+
},
|
| 616 |
+
{
|
| 617 |
+
"type": "list",
|
| 618 |
+
"bbox": [
|
| 619 |
+
0.222,
|
| 620 |
+
0.566,
|
| 621 |
+
0.784,
|
| 622 |
+
0.703
|
| 623 |
+
],
|
| 624 |
+
"angle": 0,
|
| 625 |
+
"content": null
|
| 626 |
+
},
|
| 627 |
+
{
|
| 628 |
+
"type": "title",
|
| 629 |
+
"bbox": [
|
| 630 |
+
0.215,
|
| 631 |
+
0.722,
|
| 632 |
+
0.572,
|
| 633 |
+
0.737
|
| 634 |
+
],
|
| 635 |
+
"angle": 0,
|
| 636 |
+
"content": "4.2 Supervised Machine Learning Models"
|
| 637 |
+
},
|
| 638 |
+
{
|
| 639 |
+
"type": "text",
|
| 640 |
+
"bbox": [
|
| 641 |
+
0.214,
|
| 642 |
+
0.743,
|
| 643 |
+
0.784,
|
| 644 |
+
0.819
|
| 645 |
+
],
|
| 646 |
+
"angle": 0,
|
| 647 |
+
"content": "To build the finest system for fake news detection, we started our investigations with traditional NLP approaches like Linear Regression (LR), Support Vector MAchines (SVM), Passive Agressive Classifier (PAC), XGBoost, and Multi-Layer Perceptron (MLP). We study the results of above mentioned supervised models with the combination of three types of word vectors:"
|
| 648 |
+
},
|
| 649 |
+
{
|
| 650 |
+
"type": "page_footnote",
|
| 651 |
+
"bbox": [
|
| 652 |
+
0.218,
|
| 653 |
+
0.825,
|
| 654 |
+
0.664,
|
| 655 |
+
0.84
|
| 656 |
+
],
|
| 657 |
+
"angle": 0,
|
| 658 |
+
"content": "4 https://github.com/SunilGundapu/Covid-19-fake-news-detection"
|
| 659 |
+
}
|
| 660 |
+
],
|
| 661 |
+
[
|
| 662 |
+
{
|
| 663 |
+
"type": "page_number",
|
| 664 |
+
"bbox": [
|
| 665 |
+
0.218,
|
| 666 |
+
0.116,
|
| 667 |
+
0.23,
|
| 668 |
+
0.127
|
| 669 |
+
],
|
| 670 |
+
"angle": 0,
|
| 671 |
+
"content": "6"
|
| 672 |
+
},
|
| 673 |
+
{
|
| 674 |
+
"type": "header",
|
| 675 |
+
"bbox": [
|
| 676 |
+
0.272,
|
| 677 |
+
0.115,
|
| 678 |
+
0.422,
|
| 679 |
+
0.129
|
| 680 |
+
],
|
| 681 |
+
"angle": 0,
|
| 682 |
+
"content": "Gundapu and Mamidi"
|
| 683 |
+
},
|
| 684 |
+
{
|
| 685 |
+
"type": "text",
|
| 686 |
+
"bbox": [
|
| 687 |
+
0.223,
|
| 688 |
+
0.147,
|
| 689 |
+
0.784,
|
| 690 |
+
0.175
|
| 691 |
+
],
|
| 692 |
+
"angle": 0,
|
| 693 |
+
"content": "1. Word-level, n-gram level, and character level TF-IDF vectors with the feature matrix size of 100000."
|
| 694 |
+
},
|
| 695 |
+
{
|
| 696 |
+
"type": "text",
|
| 697 |
+
"bbox": [
|
| 698 |
+
0.223,
|
| 699 |
+
0.177,
|
| 700 |
+
0.701,
|
| 701 |
+
0.192
|
| 702 |
+
],
|
| 703 |
+
"angle": 0,
|
| 704 |
+
"content": "2. English Glove [23] word embeddings with the dimension of 300."
|
| 705 |
+
},
|
| 706 |
+
{
|
| 707 |
+
"type": "text",
|
| 708 |
+
"bbox": [
|
| 709 |
+
0.223,
|
| 710 |
+
0.192,
|
| 711 |
+
0.785,
|
| 712 |
+
0.222
|
| 713 |
+
],
|
| 714 |
+
"angle": 0,
|
| 715 |
+
"content": "3. TF-IDF weighted averaging with Glove embeddings. We described below the fake news vector construction."
|
| 716 |
+
},
|
| 717 |
+
{
|
| 718 |
+
"type": "list",
|
| 719 |
+
"bbox": [
|
| 720 |
+
0.223,
|
| 721 |
+
0.147,
|
| 722 |
+
0.785,
|
| 723 |
+
0.222
|
| 724 |
+
],
|
| 725 |
+
"angle": 0,
|
| 726 |
+
"content": null
|
| 727 |
+
},
|
| 728 |
+
{
|
| 729 |
+
"type": "equation",
|
| 730 |
+
"bbox": [
|
| 731 |
+
0.323,
|
| 732 |
+
0.247,
|
| 733 |
+
0.785,
|
| 734 |
+
0.295
|
| 735 |
+
],
|
| 736 |
+
"angle": 0,
|
| 737 |
+
"content": "\\[\nT w e e t _ {v e c t o r} = \\frac {\\sum_ {i = 1} ^ {N} \\mathbf {t f - i d f} (t o k e n _ {i}) \\times \\mathbf {G l o v e} (t o k e n _ {i})}{\\mathbf {N}} \\tag {1}\n\\]"
|
| 738 |
+
},
|
| 739 |
+
{
|
| 740 |
+
"type": "text",
|
| 741 |
+
"bbox": [
|
| 742 |
+
0.215,
|
| 743 |
+
0.3,
|
| 744 |
+
0.785,
|
| 745 |
+
0.346
|
| 746 |
+
],
|
| 747 |
+
"angle": 0,
|
| 748 |
+
"content": "In the above formula, \\( \\mathrm{N} \\) is the total number of words in the input fake news tweet, and \\( \\text{token}_i \\) is the \\( i^{th} \\) token in the input text. After analyzing the results, TF-IDF weighted averaging gave better results than the standard TF-IDF."
|
| 749 |
+
},
|
| 750 |
+
{
|
| 751 |
+
"type": "title",
|
| 752 |
+
"bbox": [
|
| 753 |
+
0.216,
|
| 754 |
+
0.368,
|
| 755 |
+
0.45,
|
| 756 |
+
0.384
|
| 757 |
+
],
|
| 758 |
+
"angle": 0,
|
| 759 |
+
"content": "4.3 Deep Learning Models"
|
| 760 |
+
},
|
| 761 |
+
{
|
| 762 |
+
"type": "text",
|
| 763 |
+
"bbox": [
|
| 764 |
+
0.214,
|
| 765 |
+
0.394,
|
| 766 |
+
0.785,
|
| 767 |
+
0.44
|
| 768 |
+
],
|
| 769 |
+
"angle": 0,
|
| 770 |
+
"content": "Supervised machine learning algorithms performed very well on the provided dataset. In this section, we experiment with deep learning models that give better results than traditional classification algorithms."
|
| 771 |
+
},
|
| 772 |
+
{
|
| 773 |
+
"type": "text",
|
| 774 |
+
"bbox": [
|
| 775 |
+
0.214,
|
| 776 |
+
0.461,
|
| 777 |
+
0.785,
|
| 778 |
+
0.522
|
| 779 |
+
],
|
| 780 |
+
"angle": 0,
|
| 781 |
+
"content": "LSTM: We used Long Short-Term Memory (LSTM) [24] architecture with two different pre-trained word embeddings Glove and Fasttext [25]. LSTM is a type of Recurrent Neural Network (RNN) that can solve long term dependency problem, and it is a well-suited model for sequence classification."
|
| 782 |
+
},
|
| 783 |
+
{
|
| 784 |
+
"type": "text",
|
| 785 |
+
"bbox": [
|
| 786 |
+
0.214,
|
| 787 |
+
0.522,
|
| 788 |
+
0.785,
|
| 789 |
+
0.613
|
| 790 |
+
],
|
| 791 |
+
"angle": 0,
|
| 792 |
+
"content": "We converted the input data points into word vectors by using pre-trained word embeddings. These word vectors are passed as input to the LSTM layer. We stacked up two LSTM layers one after another with the dropout of 0.25. The size of LSTM is 128, and the last time step output is treated as input data point representation. The final time step's outcome is passed as an input to a dense layer for fake news detection."
|
| 793 |
+
},
|
| 794 |
+
{
|
| 795 |
+
"type": "text",
|
| 796 |
+
"bbox": [
|
| 797 |
+
0.214,
|
| 798 |
+
0.636,
|
| 799 |
+
0.785,
|
| 800 |
+
0.697
|
| 801 |
+
],
|
| 802 |
+
"angle": 0,
|
| 803 |
+
"content": "BiLSTM with Attention: Sometimes not all the tokens in the input text contribute equally to the representation of input text. So we advantage word attention [26] mechanism to catch the tokens' prominent influence on the input data point. We built this attention mechanism on top of BiLSTM layers."
|
| 804 |
+
},
|
| 805 |
+
{
|
| 806 |
+
"type": "text",
|
| 807 |
+
"bbox": [
|
| 808 |
+
0.214,
|
| 809 |
+
0.697,
|
| 810 |
+
0.785,
|
| 811 |
+
0.757
|
| 812 |
+
],
|
| 813 |
+
"angle": 0,
|
| 814 |
+
"content": "The sequence of word vectors is passed through a BiLSTM layer, which contains one forward and one backward LSTM layer. The attention mechanism is applied to the output of the BiLSTM layer, producing a dense vector. This dense vector is forwarded to a fully connected network."
|
| 815 |
+
},
|
| 816 |
+
{
|
| 817 |
+
"type": "text",
|
| 818 |
+
"bbox": [
|
| 819 |
+
0.214,
|
| 820 |
+
0.78,
|
| 821 |
+
0.785,
|
| 822 |
+
0.84
|
| 823 |
+
],
|
| 824 |
+
"angle": 0,
|
| 825 |
+
"content": "CNN: We explored a Convolutional Neural Network (CNN) [27] model for misinformation detection. The model consists of an embedding layer, a convolution layer with three convolutions, a max-pooling layer, and a fully connected network. In the embedding layer, the input texts are converted into an \\( n \\times d \\) sequence matrix,"
|
| 826 |
+
}
|
| 827 |
+
],
|
| 828 |
+
[
|
| 829 |
+
{
|
| 830 |
+
"type": "header",
|
| 831 |
+
"bbox": [
|
| 832 |
+
0.589,
|
| 833 |
+
0.115,
|
| 834 |
+
0.732,
|
| 835 |
+
0.128
|
| 836 |
+
],
|
| 837 |
+
"angle": 0,
|
| 838 |
+
"content": "Fake News Detection"
|
| 839 |
+
},
|
| 840 |
+
{
|
| 841 |
+
"type": "page_number",
|
| 842 |
+
"bbox": [
|
| 843 |
+
0.775,
|
| 844 |
+
0.116,
|
| 845 |
+
0.785,
|
| 846 |
+
0.127
|
| 847 |
+
],
|
| 848 |
+
"angle": 0,
|
| 849 |
+
"content": "7"
|
| 850 |
+
},
|
| 851 |
+
{
|
| 852 |
+
"type": "text",
|
| 853 |
+
"bbox": [
|
| 854 |
+
0.214,
|
| 855 |
+
0.147,
|
| 856 |
+
0.788,
|
| 857 |
+
0.253
|
| 858 |
+
],
|
| 859 |
+
"angle": 0,
|
| 860 |
+
"content": "where \\( n \\) is the length of the input data point and \\( d \\) is the word embedding dimension. In the convolution layer, the sequence matrix is fed through three 1D convolutions with kernel sizes 3, 4, and 5, each with a filter size of 128. In the max-pooling layer, each convolution's output is max-pooled over time and the results are concatenated to obtain the input data point representation. The output of the max-pooling layer is passed to a fully connected network with a softmax output layer."
|
| 861 |
+
},
|
| 862 |
+
{
|
| 863 |
+
"type": "text",
|
| 864 |
+
"bbox": [
|
| 865 |
+
0.214,
|
| 866 |
+
0.275,
|
| 867 |
+
0.788,
|
| 868 |
+
0.351
|
| 869 |
+
],
|
| 870 |
+
"angle": 0,
|
| 871 |
+
"content": "CNN + BiLSTM: The CNN + BiLSTM architecture is an ensemble of CNN and bidirectional LSTM models with FastText/GloVe word embeddings. In this architecture, the CNN extracts the maximum amount of features/information from the input text using convolution layers. The output of the CNN becomes the input to the BiLSTM, which processes the data in chronological order in both directions."
|
| 872 |
+
},
|
| 873 |
+
{
|
| 874 |
+
"type": "text",
|
| 875 |
+
"bbox": [
|
| 876 |
+
0.214,
|
| 877 |
+
0.352,
|
| 878 |
+
0.788,
|
| 879 |
+
0.412
|
| 880 |
+
],
|
| 881 |
+
"angle": 0,
|
| 882 |
+
"content": "The sequence of word vectors is forwarded through a convolution of kernel size 3 with a filter size of 128. The output of the convolution is passed through a BiLSTM. The outcome of the BiLSTM is max-pooled over time and followed by one dense layer and a softmax layer."
|
| 883 |
+
},
|
| 884 |
+
{
|
| 885 |
+
"type": "title",
|
| 886 |
+
"bbox": [
|
| 887 |
+
0.216,
|
| 888 |
+
0.435,
|
| 889 |
+
0.431,
|
| 890 |
+
0.449
|
| 891 |
+
],
|
| 892 |
+
"angle": 0,
|
| 893 |
+
"content": "4.4 Transformer Models"
|
| 894 |
+
},
|
| 895 |
+
{
|
| 896 |
+
"type": "text",
|
| 897 |
+
"bbox": [
|
| 898 |
+
0.214,
|
| 899 |
+
0.46,
|
| 900 |
+
0.788,
|
| 901 |
+
0.535
|
| 902 |
+
],
|
| 903 |
+
"angle": 0,
|
| 904 |
+
"content": "In this section, we explored the three transformer models BERT, ALBERT, and XLNet, both individually and as an ensemble. These models outperformed the other ML and DL algorithms. We implemented them using HuggingFace<sup>5</sup>, a PyTorch transformer library. The hyperparameters of the three models are described in Table 2."
|
| 905 |
+
},
|
| 906 |
+
{
|
| 907 |
+
"type": "table",
|
| 908 |
+
"bbox": [
|
| 909 |
+
0.218,
|
| 910 |
+
0.558,
|
| 911 |
+
0.787,
|
| 912 |
+
0.647
|
| 913 |
+
],
|
| 914 |
+
"angle": 0,
|
| 915 |
+
"content": "<table><tr><td>Model</td><td>Learning Rate</td><td>Batch Size</td><td>Optimizer</td><td>Max Length</td><td>Type</td></tr><tr><td>BERT</td><td>2e-5</td><td>16</td><td>Adam</td><td>128</td><td>BERT-Base</td></tr><tr><td>XLNet</td><td>2e-5</td><td>16</td><td>Adam</td><td>128</td><td>XLNetLarge</td></tr><tr><td>ALBERT</td><td>2e-5</td><td>32</td><td>Adam</td><td>128</td><td>ALBERT-Xlarge</td></tr></table>"
|
| 916 |
+
},
|
| 917 |
+
{
|
| 918 |
+
"type": "table_caption",
|
| 919 |
+
"bbox": [
|
| 920 |
+
0.325,
|
| 921 |
+
0.657,
|
| 922 |
+
0.679,
|
| 923 |
+
0.673
|
| 924 |
+
],
|
| 925 |
+
"angle": 0,
|
| 926 |
+
"content": "Table 2: Hyperparameters of transformer models"
|
| 927 |
+
},
|
| 928 |
+
{
|
| 929 |
+
"type": "text",
|
| 930 |
+
"bbox": [
|
| 931 |
+
0.214,
|
| 932 |
+
0.74,
|
| 933 |
+
0.788,
|
| 934 |
+
0.817
|
| 935 |
+
],
|
| 936 |
+
"angle": 0,
|
| 937 |
+
"content": "BERT: Bidirectional Encoder Representations from Transformers (henceforth, BERT) [28] is a transformer model developed to pre-train deep bidirectional representations from unlabeled data. The model combines two robust concepts: (i) it is a deep transformer model, so it can process lengthy sentences effectively by using the attention mechanism, and (ii) it is a bidirectional"
|
| 938 |
+
},
|
| 939 |
+
{
|
| 940 |
+
"type": "page_footnote",
|
| 941 |
+
"bbox": [
|
| 942 |
+
0.218,
|
| 943 |
+
0.825,
|
| 944 |
+
0.483,
|
| 945 |
+
0.841
|
| 946 |
+
],
|
| 947 |
+
"angle": 0,
|
| 948 |
+
"content": "5 https://huggingface.co/transformers/"
|
| 949 |
+
}
|
| 950 |
+
],
|
| 951 |
+
[
|
| 952 |
+
{
|
| 953 |
+
"type": "page_number",
|
| 954 |
+
"bbox": [
|
| 955 |
+
0.218,
|
| 956 |
+
0.116,
|
| 957 |
+
0.23,
|
| 958 |
+
0.127
|
| 959 |
+
],
|
| 960 |
+
"angle": 0,
|
| 961 |
+
"content": "8"
|
| 962 |
+
},
|
| 963 |
+
{
|
| 964 |
+
"type": "header",
|
| 965 |
+
"bbox": [
|
| 966 |
+
0.272,
|
| 967 |
+
0.115,
|
| 968 |
+
0.422,
|
| 969 |
+
0.129
|
| 970 |
+
],
|
| 971 |
+
"angle": 0,
|
| 972 |
+
"content": "Gundapu and Mamidi"
|
| 973 |
+
},
|
| 974 |
+
{
|
| 975 |
+
"type": "text",
|
| 976 |
+
"bbox": [
|
| 977 |
+
0.214,
|
| 978 |
+
0.147,
|
| 979 |
+
0.784,
|
| 980 |
+
0.176
|
| 981 |
+
],
|
| 982 |
+
"angle": 0,
|
| 983 |
+
"content": "network, so it takes into account the entire text passage to comprehend the meaning of each token."
|
| 984 |
+
},
|
| 985 |
+
{
|
| 986 |
+
"type": "text",
|
| 987 |
+
"bbox": [
|
| 988 |
+
0.214,
|
| 989 |
+
0.178,
|
| 990 |
+
0.785,
|
| 991 |
+
0.252
|
| 992 |
+
],
|
| 993 |
+
"angle": 0,
|
| 994 |
+
"content": "BERT implementation has two steps: pre-training and fine-tuning. In the first step, the model is trained on unlabeled data over various pre-training tasks, using a dataset in a particular language or data in multiple languages. In the second step, all the initialized parameters are fine-tuned using labeled data from specific tasks."
|
| 995 |
+
},
|
| 996 |
+
{
|
| 997 |
+
"type": "text",
|
| 998 |
+
"bbox": [
|
| 999 |
+
0.214,
|
| 1000 |
+
0.253,
|
| 1001 |
+
0.787,
|
| 1002 |
+
0.357
|
| 1003 |
+
],
|
| 1004 |
+
"angle": 0,
|
| 1005 |
+
"content": "We fine-tuned the pre-trained BERT (Base) model for our COVID-19 fake news detection task. The BERT base model contains 12 layers of encoder blocks with 12 bidirectional self-attention heads, takes sequences of up to 512 tokens, and emits a sequence of hidden-vector representations. We added one additional output layer on top of the BERT model to calculate the conditional probability over the output classes, either fake or real. See Figure 3 for the fine-tuned BERT model."
|
| 1006 |
+
},
|
| 1007 |
+
{
|
| 1008 |
+
"type": "image",
|
| 1009 |
+
"bbox": [
|
| 1010 |
+
0.293,
|
| 1011 |
+
0.396,
|
| 1012 |
+
0.71,
|
| 1013 |
+
0.609
|
| 1014 |
+
],
|
| 1015 |
+
"angle": 0,
|
| 1016 |
+
"content": null
|
| 1017 |
+
},
|
| 1018 |
+
{
|
| 1019 |
+
"type": "image_caption",
|
| 1020 |
+
"bbox": [
|
| 1021 |
+
0.382,
|
| 1022 |
+
0.62,
|
| 1023 |
+
0.621,
|
| 1024 |
+
0.636
|
| 1025 |
+
],
|
| 1026 |
+
"angle": 0,
|
| 1027 |
+
"content": "Fig. 3: BERT model architecture"
|
| 1028 |
+
},
|
| 1029 |
+
{
|
| 1030 |
+
"type": "text",
|
| 1031 |
+
"bbox": [
|
| 1032 |
+
0.214,
|
| 1033 |
+
0.704,
|
| 1034 |
+
0.788,
|
| 1035 |
+
0.839
|
| 1036 |
+
],
|
| 1037 |
+
"angle": 0,
|
| 1038 |
+
"content": "XLNet: XLNet is an enhanced version of BERT. To understand the language context more deeply, XLNet [29] uses Transformer-XL [30] as a feature engineering model, which is itself an improvement upon the native Transformer. The Transformer-XL model adds two components, a recurrence mechanism and relative positional encoding (RPE), to the Transformer used in BERT to handle long-term dependencies in texts that are longer than the maximum allowed input length. The recurrence mechanism provides context between two sequences at specific segments, and RPE carries similarity information between two tokens."
|
| 1039 |
+
}
|
| 1040 |
+
],
|
| 1041 |
+
[
|
| 1042 |
+
{
|
| 1043 |
+
"type": "header",
|
| 1044 |
+
"bbox": [
|
| 1045 |
+
0.589,
|
| 1046 |
+
0.115,
|
| 1047 |
+
0.732,
|
| 1048 |
+
0.128
|
| 1049 |
+
],
|
| 1050 |
+
"angle": 0,
|
| 1051 |
+
"content": "Fake News Detection"
|
| 1052 |
+
},
|
| 1053 |
+
{
|
| 1054 |
+
"type": "page_number",
|
| 1055 |
+
"bbox": [
|
| 1056 |
+
0.775,
|
| 1057 |
+
0.117,
|
| 1058 |
+
0.785,
|
| 1059 |
+
0.127
|
| 1060 |
+
],
|
| 1061 |
+
"angle": 0,
|
| 1062 |
+
"content": "9"
|
| 1063 |
+
},
|
| 1064 |
+
{
|
| 1065 |
+
"type": "text",
|
| 1066 |
+
"bbox": [
|
| 1067 |
+
0.214,
|
| 1068 |
+
0.147,
|
| 1069 |
+
0.788,
|
| 1070 |
+
0.239
|
| 1071 |
+
],
|
| 1072 |
+
"angle": 0,
|
| 1073 |
+
"content": "The XLNet model has been trained on a huge dataset using permutation language modeling. This technique is one of the main differences between BERT and XLNet; it uses permutations to generate data from the forward and backward directions at the same time. We used the pre-trained XLNet model from Hugging Face, then fine-tuned it with a maximum sequence length of 128 to fit our fake news detection dataset."
|
| 1074 |
+
},
|
| 1075 |
+
{
|
| 1076 |
+
"type": "text",
|
| 1077 |
+
"bbox": [
|
| 1078 |
+
0.214,
|
| 1079 |
+
0.263,
|
| 1080 |
+
0.79,
|
| 1081 |
+
0.385
|
| 1082 |
+
],
|
| 1083 |
+
"angle": 0,
|
| 1084 |
+
"content": "ALBERT: Modern language models increase the model size and number of parameters when pre-training natural language representations. They often give better results on many downstream tasks, but in some cases they become harder to train due to memory limitations and longer training times. To address these problems, the self-supervised learning model ALBERT (A Lite BERT) [31] uses parameter-reduction techniques to increase model speed and lower memory consumption. We used the ALBERT model for our misinformation detection problem, where it achieves better performance than the DL models."
|
| 1085 |
+
},
|
| 1086 |
+
{
|
| 1087 |
+
"type": "text",
|
| 1088 |
+
"bbox": [
|
| 1089 |
+
0.214,
|
| 1090 |
+
0.408,
|
| 1091 |
+
0.788,
|
| 1092 |
+
0.485
|
| 1093 |
+
],
|
| 1094 |
+
"angle": 0,
|
| 1095 |
+
"content": "Ensemble Model: We ensembled the three transformer models BERT, ALBERT, and XLNet for better prediction. See Figure 4 for the ensemble model. Our ensemble extracts the softmax probabilities from each of the three transformer models and computes their average. This model performs relatively better than the other models."
|
| 1096 |
+
},
|
| 1097 |
+
{
|
| 1098 |
+
"type": "image",
|
| 1099 |
+
"bbox": [
|
| 1100 |
+
0.292,
|
| 1101 |
+
0.524,
|
| 1102 |
+
0.713,
|
| 1103 |
+
0.62
|
| 1104 |
+
],
|
| 1105 |
+
"angle": 0,
|
| 1106 |
+
"content": null
|
| 1107 |
+
},
|
| 1108 |
+
{
|
| 1109 |
+
"type": "image_caption",
|
| 1110 |
+
"bbox": [
|
| 1111 |
+
0.304,
|
| 1112 |
+
0.628,
|
| 1113 |
+
0.699,
|
| 1114 |
+
0.644
|
| 1115 |
+
],
|
| 1116 |
+
"angle": 0,
|
| 1117 |
+
"content": "Fig. 4: Transformer based ensemble model architecture"
|
| 1118 |
+
},
|
| 1119 |
+
{
|
| 1120 |
+
"type": "title",
|
| 1121 |
+
"bbox": [
|
| 1122 |
+
0.216,
|
| 1123 |
+
0.716,
|
| 1124 |
+
0.476,
|
| 1125 |
+
0.733
|
| 1126 |
+
],
|
| 1127 |
+
"angle": 0,
|
| 1128 |
+
"content": "5 Results and Discussion"
|
| 1129 |
+
},
|
| 1130 |
+
{
|
| 1131 |
+
"type": "text",
|
| 1132 |
+
"bbox": [
|
| 1133 |
+
0.214,
|
| 1134 |
+
0.75,
|
| 1135 |
+
0.788,
|
| 1136 |
+
0.841
|
| 1137 |
+
],
|
| 1138 |
+
"angle": 0,
|
| 1139 |
+
"content": "In this section, we compare the performance of various machine learning, deep learning, and transformer-based models using several evaluation metrics: precision, recall, weighted F1-score, and accuracy. The results of the various experiments on the test set are reported in Table 3. The results clearly show that transformer-based models are considerably better than the other machine and deep learning models for our COVID-19 misinformation detection task. While"
|
| 1140 |
+
}
|
| 1141 |
+
],
|
| 1142 |
+
[
|
| 1143 |
+
{
|
| 1144 |
+
"type": "page_number",
|
| 1145 |
+
"bbox": [
|
| 1146 |
+
0.218,
|
| 1147 |
+
0.116,
|
| 1148 |
+
0.236,
|
| 1149 |
+
0.127
|
| 1150 |
+
],
|
| 1151 |
+
"angle": 0,
|
| 1152 |
+
"content": "10"
|
| 1153 |
+
},
|
| 1154 |
+
{
|
| 1155 |
+
"type": "header",
|
| 1156 |
+
"bbox": [
|
| 1157 |
+
0.272,
|
| 1158 |
+
0.115,
|
| 1159 |
+
0.421,
|
| 1160 |
+
0.129
|
| 1161 |
+
],
|
| 1162 |
+
"angle": 0,
|
| 1163 |
+
"content": "Gundapu and Mamidi"
|
| 1164 |
+
},
|
| 1165 |
+
{
|
| 1166 |
+
"type": "table",
|
| 1167 |
+
"bbox": [
|
| 1168 |
+
0.223,
|
| 1169 |
+
0.143,
|
| 1170 |
+
0.782,
|
| 1171 |
+
0.317
|
| 1172 |
+
],
|
| 1173 |
+
"angle": 0,
|
| 1174 |
+
"content": "<table><tr><td>Model Type</td><td>Model</td><td>Precision</td><td>Recall</td><td>Accuracy</td><td>F1-Score</td></tr><tr><td rowspan=\"3\">ML \nModels</td><td>SVM</td><td>0.9640</td><td>0.9640</td><td>0.964013</td><td>0.964037</td></tr><tr><td>PAC</td><td>0.9673</td><td>0.9673</td><td>0.967285</td><td>0.967289</td></tr><tr><td>MLP</td><td>0.9645</td><td>0.9645</td><td>0.964494</td><td>0.964485</td></tr><tr><td rowspan=\"4\">Deep Learning \nModels</td><td>LSTM with FastText</td><td>0.9682</td><td>0.9682</td><td>0.9682203</td><td>0.968224</td></tr><tr><td>CNN with FastText</td><td>0.9698</td><td>0.9698</td><td>0.969802</td><td>0.969819</td></tr><tr><td>LSTM + CNN</td><td>0.9762</td><td>0.9762</td><td>0.976163</td><td>0.976168</td></tr><tr><td>BiLSTM + Attention</td><td>0.9790</td><td>0.9785</td><td>0.978524</td><td>0.978504</td></tr><tr><td rowspan=\"4\">Transformer \nModels</td><td>BERT</td><td>0.9813</td><td>0.9813</td><td>0.981306</td><td>0.981308</td></tr><tr><td>ALBERT</td><td>0.9781</td><td>0.9781</td><td>0.978031</td><td>0.978037</td></tr><tr><td>XLNet</td><td>0.9787</td><td>0.9789</td><td>0.978596</td><td>0.978592</td></tr><tr><td>Ensemble Model</td><td>0.9855</td><td>0.9855</td><td>0.985512</td><td>0.985514</td></tr></table>"
|
| 1175 |
+
},
|
| 1176 |
+
{
|
| 1177 |
+
"type": "table_caption",
|
| 1178 |
+
"bbox": [
|
| 1179 |
+
0.248,
|
| 1180 |
+
0.326,
|
| 1181 |
+
0.754,
|
| 1182 |
+
0.341
|
| 1183 |
+
],
|
| 1184 |
+
"angle": 0,
|
| 1185 |
+
"content": "Table 3: Comparison of various fake news detection models on the test set"
|
| 1186 |
+
},
|
| 1187 |
+
{
|
| 1188 |
+
"type": "text",
|
| 1189 |
+
"bbox": [
|
| 1190 |
+
0.214,
|
| 1191 |
+
0.384,
|
| 1192 |
+
0.784,
|
| 1193 |
+
0.414
|
| 1194 |
+
],
|
| 1195 |
+
"angle": 0,
|
| 1196 |
+
"content": "doing the experiments, we observed that some models are good at retrieving prominent features while other models have the best classification performance."
|
| 1197 |
+
},
|
| 1198 |
+
{
|
| 1199 |
+
"type": "text",
|
| 1200 |
+
"bbox": [
|
| 1201 |
+
0.214,
|
| 1202 |
+
0.415,
|
| 1203 |
+
0.785,
|
| 1204 |
+
0.535
|
| 1205 |
+
],
|
| 1206 |
+
"angle": 0,
|
| 1207 |
+
"content": "Classical machine learning models with various TF-IDF feature vectors gave approximate baseline results. We observe that the TF-IDF weighted average performed better than the plain TF-IDF vectors. The F1-score of the bidirectional LSTM with the attention mechanism is very close to that of the transformer models. BERT, XLNet, and ALBERT demonstrate better performance than the deep learning models. An ensemble of the transformer-based models produces the best F1-score of 0.9855 on the test set. Our transformer-based model ranked 5th among 160 teams."
|
| 1208 |
+
},
|
| 1209 |
+
{
|
| 1210 |
+
"type": "table",
|
| 1211 |
+
"bbox": [
|
| 1212 |
+
0.218,
|
| 1213 |
+
0.562,
|
| 1214 |
+
0.787,
|
| 1215 |
+
0.637
|
| 1216 |
+
],
|
| 1217 |
+
"angle": 0,
|
| 1218 |
+
"content": "<table><tr><td>Test Sample</td><td>BERT</td><td>ALBERT</td><td>XLNet</td><td>Ensemble</td></tr><tr><td>#BillGates is shocked that America's pandemic response is among the worst in the world.</td><td>✓</td><td>✗</td><td>✗</td><td>✓</td></tr><tr><td>We will all come out stronger from this</td><td>✗</td><td>✗</td><td>✓</td><td>✓</td></tr><tr><td>#COVID #pandemic. Just #StaySafeStayHealthy</td><td></td><td></td><td></td><td></td></tr></table>"
|
| 1219 |
+
},
|
| 1220 |
+
{
|
| 1221 |
+
"type": "table_caption",
|
| 1222 |
+
"bbox": [
|
| 1223 |
+
0.347,
|
| 1224 |
+
0.646,
|
| 1225 |
+
0.656,
|
| 1226 |
+
0.661
|
| 1227 |
+
],
|
| 1228 |
+
"angle": 0,
|
| 1229 |
+
"content": "Table 4: Misclassified samples from the test set"
|
| 1230 |
+
},
|
| 1231 |
+
{
|
| 1232 |
+
"type": "text",
|
| 1233 |
+
"bbox": [
|
| 1234 |
+
0.214,
|
| 1235 |
+
0.704,
|
| 1236 |
+
0.785,
|
| 1237 |
+
0.78
|
| 1238 |
+
],
|
| 1239 |
+
"angle": 0,
|
| 1240 |
+
"content": "In some problems, ensembling transformer models is very difficult, and sometimes this approach does not perform well. But the results of the individual transformer models on our dataset are very close, meaning that any of the transformer models could be used for our fake news detection task. This is the major reason behind ensembling the transformer models."
|
| 1241 |
+
},
|
| 1242 |
+
{
|
| 1243 |
+
"type": "text",
|
| 1244 |
+
"bbox": [
|
| 1245 |
+
0.214,
|
| 1246 |
+
0.78,
|
| 1247 |
+
0.787,
|
| 1248 |
+
0.84
|
| 1249 |
+
],
|
| 1250 |
+
"angle": 0,
|
| 1251 |
+
"content": "In Table 4, we show two misclassified test samples. The first test sample's actual label is \"real\", but only the BERT and ensemble models predicted it correctly; the remaining two models predicted it wrongly. The second sample's true label is \"fake\", but XLNet and the ensemble predicted it correctly; the remaining two mod-"
|
| 1252 |
+
}
|
| 1253 |
+
],
|
| 1254 |
+
[
|
| 1255 |
+
{
|
| 1256 |
+
"type": "header",
|
| 1257 |
+
"bbox": [
|
| 1258 |
+
0.59,
|
| 1259 |
+
0.115,
|
| 1260 |
+
0.732,
|
| 1261 |
+
0.128
|
| 1262 |
+
],
|
| 1263 |
+
"angle": 0,
|
| 1264 |
+
"content": "Fake News Detection"
|
| 1265 |
+
},
|
| 1266 |
+
{
|
| 1267 |
+
"type": "page_number",
|
| 1268 |
+
"bbox": [
|
| 1269 |
+
0.769,
|
| 1270 |
+
0.117,
|
| 1271 |
+
0.784,
|
| 1272 |
+
0.127
|
| 1273 |
+
],
|
| 1274 |
+
"angle": 0,
|
| 1275 |
+
"content": "11"
|
| 1276 |
+
},
|
| 1277 |
+
{
|
| 1278 |
+
"type": "text",
|
| 1279 |
+
"bbox": [
|
| 1280 |
+
0.214,
|
| 1281 |
+
0.147,
|
| 1282 |
+
0.788,
|
| 1283 |
+
0.193
|
| 1284 |
+
],
|
| 1285 |
+
"angle": 0,
|
| 1286 |
+
"content": "els predicted it wrongly. However, the ensemble model predicted correctly in both cases because we average the BERT, ALBERT, and XLNet softmax probabilities. This is a principal reason to ensemble the transformer models."
|
| 1287 |
+
},
|
| 1288 |
+
{
|
| 1289 |
+
"type": "title",
|
| 1290 |
+
"bbox": [
|
| 1291 |
+
0.216,
|
| 1292 |
+
0.215,
|
| 1293 |
+
0.36,
|
| 1294 |
+
0.232
|
| 1295 |
+
],
|
| 1296 |
+
"angle": 0,
|
| 1297 |
+
"content": "6 Conclusion"
|
| 1298 |
+
},
|
| 1299 |
+
{
|
| 1300 |
+
"type": "text",
|
| 1301 |
+
"bbox": [
|
| 1302 |
+
0.214,
|
| 1303 |
+
0.247,
|
| 1304 |
+
0.786,
|
| 1305 |
+
0.307
|
| 1306 |
+
],
|
| 1307 |
+
"angle": 0,
|
| 1308 |
+
"content": "In this paper, we presented various algorithms to combat the global infodemic, of which the transformer-based algorithms performed best. We submitted these models to the shared task on COVID-19 fake news detection for English at the ConstraintAI-2021 workshop."
|
| 1309 |
+
},
|
| 1310 |
+
{
|
| 1311 |
+
"type": "text",
|
| 1312 |
+
"bbox": [
|
| 1313 |
+
0.214,
|
| 1314 |
+
0.308,
|
| 1315 |
+
0.787,
|
| 1316 |
+
0.399
|
| 1317 |
+
],
|
| 1318 |
+
"angle": 0,
|
| 1319 |
+
"content": "Fake news is an increasingly significant and tricky problem to solve, particularly in an unanticipated situation like the COVID-19 epidemic. Leveraging state-of-the-art classical and advanced NLP models can help address the problem of COVID-19 fake news detection and other global health emergencies. In future work, we intend to explore other contextualized embeddings such as FLAIR and ELMo for a better fake news detection system."
|
| 1320 |
+
},
|
| 1321 |
+
{
|
| 1322 |
+
"type": "title",
|
| 1323 |
+
"bbox": [
|
| 1324 |
+
0.216,
|
| 1325 |
+
0.421,
|
| 1326 |
+
0.325,
|
| 1327 |
+
0.437
|
| 1328 |
+
],
|
| 1329 |
+
"angle": 0,
|
| 1330 |
+
"content": "References"
|
| 1331 |
+
},
|
| 1332 |
+
{
|
| 1333 |
+
"type": "ref_text",
|
| 1334 |
+
"bbox": [
|
| 1335 |
+
0.223,
|
| 1336 |
+
0.452,
|
| 1337 |
+
0.787,
|
| 1338 |
+
0.522
|
| 1339 |
+
],
|
| 1340 |
+
"angle": 0,
|
| 1341 |
+
"content": "1. Patwa, P., Bhardwaj, M., Guptha, V., Kumari, G., Sharma, S., PYKL, S., Das, A., Ekbal, A., Akhtar, S., Chakraborty, T.: Overview of CONSTRAINT 2021 Shared Tasks: Detecting English COVID-19 Fake News and Hindi Hostile Posts. (2021). In: Proceedings of the First Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situation (CONSTRAINT). Springer."
|
| 1342 |
+
},
|
| 1343 |
+
{
|
| 1344 |
+
"type": "ref_text",
|
| 1345 |
+
"bbox": [
|
| 1346 |
+
0.223,
|
| 1347 |
+
0.523,
|
| 1348 |
+
0.787,
|
| 1349 |
+
0.563
|
| 1350 |
+
],
|
| 1351 |
+
"angle": 0,
|
| 1352 |
+
"content": "2. Datta, R., Yadav, K., Singh, A., Datta, K., Bansal, A.: The infodemics of COVID-19 amongst healthcare professionals in india. Med. J. Armed Forces India, vol. 76, no. 3, pp. 276-283, Jul. 2020."
|
| 1353 |
+
},
|
| 1354 |
+
{
|
| 1355 |
+
"type": "ref_text",
|
| 1356 |
+
"bbox": [
|
| 1357 |
+
0.223,
|
| 1358 |
+
0.564,
|
| 1359 |
+
0.787,
|
| 1360 |
+
0.592
|
| 1361 |
+
],
|
| 1362 |
+
"angle": 0,
|
| 1363 |
+
"content": "3. Chen, X., Sin, S.J.: 'Misinformation? What of it?' Motivations and individual differences in misinformation sharing on social media. In: ASIST (2013)"
|
| 1364 |
+
},
|
| 1365 |
+
{
|
| 1366 |
+
"type": "ref_text",
|
| 1367 |
+
"bbox": [
|
| 1368 |
+
0.223,
|
| 1369 |
+
0.593,
|
| 1370 |
+
0.787,
|
| 1371 |
+
0.619
|
| 1372 |
+
],
|
| 1373 |
+
"angle": 0,
|
| 1374 |
+
"content": "4. Thorne, J., Vlachos, A.: Automated Fact Checking: Task formulations, methods and future directions. In: COLING (2018)"
|
| 1375 |
+
},
|
| 1376 |
+
{
|
| 1377 |
+
"type": "ref_text",
|
| 1378 |
+
"bbox": [
|
| 1379 |
+
0.223,
|
| 1380 |
+
0.62,
|
| 1381 |
+
0.787,
|
| 1382 |
+
0.646
|
| 1383 |
+
],
|
| 1384 |
+
"angle": 0,
|
| 1385 |
+
"content": "5. Titcomb, J., Carson, J.: Fake news: What exactly is it - and how can you spot it? www.telegraph.co.uk."
|
| 1386 |
+
},
|
| 1387 |
+
{
|
| 1388 |
+
"type": "ref_text",
|
| 1389 |
+
"bbox": [
|
| 1390 |
+
0.223,
|
| 1391 |
+
0.647,
|
| 1392 |
+
0.787,
|
| 1393 |
+
0.674
|
| 1394 |
+
],
|
| 1395 |
+
"angle": 0,
|
| 1396 |
+
"content": "6. Kaliyar, R., Singh, N.: Misinformation Detection on Online Social Media-A Survey. (2019). 1-6. 10.1109/ICCCNT45670.2019.8944587."
|
| 1397 |
+
},
|
| 1398 |
+
{
|
| 1399 |
+
"type": "ref_text",
|
| 1400 |
+
"bbox": [
|
| 1401 |
+
0.223,
|
| 1402 |
+
0.675,
|
| 1403 |
+
0.787,
|
| 1404 |
+
0.702
|
| 1405 |
+
],
|
| 1406 |
+
"angle": 0,
|
| 1407 |
+
"content": "7. Zhang, X., Ghorbani, A.: An overview of online fake news: Characterization, detection, and discussion. (2020). Inf. Process. Manag., 57, 102025."
|
| 1408 |
+
},
|
| 1409 |
+
{
|
| 1410 |
+
"type": "ref_text",
|
| 1411 |
+
"bbox": [
|
| 1412 |
+
0.223,
|
| 1413 |
+
0.703,
|
| 1414 |
+
0.787,
|
| 1415 |
+
0.73
|
| 1416 |
+
],
|
| 1417 |
+
"angle": 0,
|
| 1418 |
+
"content": "8. Khan, J.Y., Khondaker, M.T., Iqbal, A., Afroz, S.: A Benchmark Study on Machine Learning Methods for Fake News Detection. (2019). ArXiv, abs/1905.04749."
|
| 1419 |
+
},
|
| 1420 |
+
{
|
| 1421 |
+
"type": "ref_text",
|
| 1422 |
+
"bbox": [
|
| 1423 |
+
0.223,
|
| 1424 |
+
0.731,
|
| 1425 |
+
0.787,
|
| 1426 |
+
0.771
|
| 1427 |
+
],
|
| 1428 |
+
"angle": 0,
|
| 1429 |
+
"content": "9. Elhadad, M., Li, K.F., Gebali, F.: A Novel Approach for Selecting Hybrid Features from Online News Textual Metadata for Fake News Detection. In: Proc. 3PGCIC, Antwerp, Belgium, 2019, pp. 914-925."
|
| 1430 |
+
},
|
| 1431 |
+
{
|
| 1432 |
+
"type": "ref_text",
|
| 1433 |
+
"bbox": [
|
| 1434 |
+
0.218,
|
| 1435 |
+
0.772,
|
| 1436 |
+
0.787,
|
| 1437 |
+
0.798
|
| 1438 |
+
],
|
| 1439 |
+
"angle": 0,
|
| 1440 |
+
"content": "10. Thota, A., Tilak, P., Ahluwalia, S., Lohia, N.: Fake News Detection: A Deep Learning Approach. (2018). SMU Data Science Review: Vol. 1: No. 3, Article 10."
|
| 1441 |
+
},
|
| 1442 |
+
{
|
| 1443 |
+
"type": "ref_text",
|
| 1444 |
+
"bbox": [
|
| 1445 |
+
0.218,
|
| 1446 |
+
0.799,
|
| 1447 |
+
0.787,
|
| 1448 |
+
0.841
|
| 1449 |
+
],
|
| 1450 |
+
"angle": 0,
|
| 1451 |
+
"content": "11. Sean, B., Doug, S., Yuxi, P: Talos Targets Disinformation with Fake News Challenge Victory. (2017). Available online: https://blog.talosintelligence.com/2017/06/talos-fake-news-challenge.html"
|
| 1452 |
+
},
|
| 1453 |
+
{
|
| 1454 |
+
"type": "list",
|
| 1455 |
+
"bbox": [
|
| 1456 |
+
0.218,
|
| 1457 |
+
0.452,
|
| 1458 |
+
0.787,
|
| 1459 |
+
0.841
|
| 1460 |
+
],
|
| 1461 |
+
"angle": 0,
|
| 1462 |
+
"content": null
|
| 1463 |
+
}
|
| 1464 |
+
],
|
| 1465 |
+
[
|
| 1466 |
+
{
|
| 1467 |
+
"type": "page_number",
|
| 1468 |
+
"bbox": [
|
| 1469 |
+
0.218,
|
| 1470 |
+
0.116,
|
| 1471 |
+
0.236,
|
| 1472 |
+
0.127
|
| 1473 |
+
],
|
| 1474 |
+
"angle": 0,
|
| 1475 |
+
"content": "12"
|
| 1476 |
+
},
|
| 1477 |
+
{
|
| 1478 |
+
"type": "header",
|
| 1479 |
+
"bbox": [
|
| 1480 |
+
0.272,
|
| 1481 |
+
0.115,
|
| 1482 |
+
0.421,
|
| 1483 |
+
0.129
|
| 1484 |
+
],
|
| 1485 |
+
"angle": 0,
|
| 1486 |
+
"content": "Gundapu and Mamidi"
|
| 1487 |
+
},
|
| 1488 |
+
{
|
| 1489 |
+
"type": "ref_text",
|
| 1490 |
+
"bbox": [
|
| 1491 |
+
0.218,
|
| 1492 |
+
0.148,
|
| 1493 |
+
0.785,
|
| 1494 |
+
0.189
|
| 1495 |
+
],
|
| 1496 |
+
"angle": 0,
|
| 1497 |
+
"content": "12. Jwa, H., Oh, D., Park, K., Kang, J., Lim, H.: exBAKE: Automatic Fake News Detection Model Based on Bidirectional Encoder Representations from Transformers (BERT). (2019). Applied Sciences, 9, 4062."
|
| 1498 |
+
},
|
| 1499 |
+
{
|
| 1500 |
+
"type": "ref_text",
|
| 1501 |
+
"bbox": [
|
| 1502 |
+
0.218,
|
| 1503 |
+
0.19,
|
| 1504 |
+
0.785,
|
| 1505 |
+
0.217
|
| 1506 |
+
],
|
| 1507 |
+
"angle": 0,
|
| 1508 |
+
"content": "13. Shahi, G.K., Nandini, D.: FakeCovid - A Multilingual Cross-domain Fact Check News Dataset for COVID-19. (2020). ArXiv, abs/2006.11343."
|
| 1509 |
+
},
|
| 1510 |
+
{
|
| 1511 |
+
"type": "ref_text",
|
| 1512 |
+
"bbox": [
|
| 1513 |
+
0.218,
|
| 1514 |
+
0.218,
|
| 1515 |
+
0.785,
|
| 1516 |
+
0.259
|
| 1517 |
+
],
|
| 1518 |
+
"angle": 0,
|
| 1519 |
+
"content": "14. Zhou, X., Mulay, A., Ferrara, E., Zafarani, R.: ReCOVery: A Multimodal Repository for COVID-19 News Credibility Research. (2020). In: Proceedings of the 29th ACM International Conference on Information & Knowledge Management."
|
| 1520 |
+
},
|
| 1521 |
+
{
|
| 1522 |
+
"type": "ref_text",
|
| 1523 |
+
"bbox": [
|
| 1524 |
+
0.218,
|
| 1525 |
+
0.26,
|
| 1526 |
+
0.785,
|
| 1527 |
+
0.285
|
| 1528 |
+
],
|
| 1529 |
+
"angle": 0,
|
| 1530 |
+
"content": "15. Cui, L., Lee, D.: CoAID: COVID-19 Healthcare Misinformation Dataset. (2020). ArXiv, abs/2006.00885."
|
| 1531 |
+
},
|
| 1532 |
+
{
|
| 1533 |
+
"type": "ref_text",
|
| 1534 |
+
"bbox": [
|
| 1535 |
+
0.217,
|
| 1536 |
+
0.287,
|
| 1537 |
+
0.785,
|
| 1538 |
+
0.314
|
| 1539 |
+
],
|
| 1540 |
+
"angle": 0,
|
| 1541 |
+
"content": "16. Memon, S.A., Carley, K.M.: Characterizing COVID-19 Misinformation Communities Using a Novel Twitter Dataset. (2020). ArXiv, abs/2008.00791."
|
| 1542 |
+
},
|
| 1543 |
+
{
|
| 1544 |
+
"type": "ref_text",
|
| 1545 |
+
"bbox": [
|
| 1546 |
+
0.217,
|
| 1547 |
+
0.315,
|
| 1548 |
+
0.785,
|
| 1549 |
+
0.354
|
| 1550 |
+
],
|
| 1551 |
+
"angle": 0,
|
| 1552 |
+
"content": "17. Li, Y., Jiang, B., Shu, K., Liu, H.: MM-COVID: A Multilingual and Multi-modal Data Repository for Combating COVID-19 Disinformation. (2020). ArXiv, abs/2011.04088."
|
| 1553 |
+
},
|
| 1554 |
+
{
|
| 1555 |
+
"type": "ref_text",
|
| 1556 |
+
"bbox": [
|
| 1557 |
+
0.217,
|
| 1558 |
+
0.355,
|
| 1559 |
+
0.785,
|
| 1560 |
+
0.383
|
| 1561 |
+
],
|
| 1562 |
+
"angle": 0,
|
| 1563 |
+
"content": "18. Al-Rakhami, M.S., Al-Amri, A.M.: Lies Kill, Facts Save: Detecting COVID-19 Misinformation in Twitter. (2020). IEEE Access, 8, 155961-155970."
|
| 1564 |
+
},
|
| 1565 |
+
{
|
| 1566 |
+
"type": "ref_text",
|
| 1567 |
+
"bbox": [
|
| 1568 |
+
0.217,
|
| 1569 |
+
0.384,
|
| 1570 |
+
0.785,
|
| 1571 |
+
0.411
|
| 1572 |
+
],
|
| 1573 |
+
"angle": 0,
|
| 1574 |
+
"content": "19. Vijjali, R., Potluri, P., Kumar, S., Teki, S.: Two Stage Transformer Model for COVID-19 Fake News Detection and Fact Checking. (2020)."
|
| 1575 |
+
},
|
| 1576 |
+
{
|
| 1577 |
+
"type": "ref_text",
|
| 1578 |
+
"bbox": [
|
| 1579 |
+
0.217,
|
| 1580 |
+
0.412,
|
| 1581 |
+
0.785,
|
| 1582 |
+
0.451
|
| 1583 |
+
],
|
| 1584 |
+
"angle": 0,
|
| 1585 |
+
"content": "20. Hossain, T., RobertL.Logan, I., Ugarte, A., Matsubara, Y., Young, S.,Singh, S.: COVIDLies: Detecting COVID-19 Misinformation on Social Media. (2020). NLP4COVID@EMNLP."
|
| 1586 |
+
},
|
| 1587 |
+
{
|
| 1588 |
+
"type": "ref_text",
|
| 1589 |
+
"bbox": [
|
| 1590 |
+
0.217,
|
| 1591 |
+
0.453,
|
| 1592 |
+
0.785,
|
| 1593 |
+
0.493
|
| 1594 |
+
],
|
| 1595 |
+
"angle": 0,
|
| 1596 |
+
"content": "21. Patwa, P., Sharma, S., Pykl, S., Guptha, V., Kumari, G., Akhtar, M.S., Ekbal, A., Das, A., Chakraborty, T.: Fighting an Infodemic: COVID-19 Fake News Dataset. (2020). ArXiv, abs/2011.03327."
|
| 1597 |
+
},
|
| 1598 |
+
{
|
| 1599 |
+
"type": "ref_text",
|
| 1600 |
+
"bbox": [
|
| 1601 |
+
0.217,
|
| 1602 |
+
0.494,
|
| 1603 |
+
0.785,
|
| 1604 |
+
0.522
|
| 1605 |
+
],
|
| 1606 |
+
"angle": 0,
|
| 1607 |
+
"content": "22. Elhadad, M.K., Li, K., Gebali, F.: Detecting Misleading Information on COVID-19. (2020). IEEE Access, 8, 165201-165215."
|
| 1608 |
+
},
|
| 1609 |
+
{
|
| 1610 |
+
"type": "ref_text",
|
| 1611 |
+
"bbox": [
|
| 1612 |
+
0.217,
|
| 1613 |
+
0.522,
|
| 1614 |
+
0.785,
|
| 1615 |
+
0.549
|
| 1616 |
+
],
|
| 1617 |
+
"angle": 0,
|
| 1618 |
+
"content": "23. Pennington, J., Socher, R., Manning, C.D.: Glove: Global Vectors for Word Representation. (2014). In: EMNLP."
|
| 1619 |
+
},
|
| 1620 |
+
{
|
| 1621 |
+
"type": "ref_text",
|
| 1622 |
+
"bbox": [
|
| 1623 |
+
0.217,
|
| 1624 |
+
0.55,
|
| 1625 |
+
0.785,
|
| 1626 |
+
0.577
|
| 1627 |
+
],
|
| 1628 |
+
"angle": 0,
|
| 1629 |
+
"content": "24. Hochreiter, S., Schmidhuber, J.: Long Short-Term Memory. (1997). Neural Computation, 9, 1735-1780."
|
| 1630 |
+
},
|
| 1631 |
+
{
|
| 1632 |
+
"type": "ref_text",
|
| 1633 |
+
"bbox": [
|
| 1634 |
+
0.217,
|
| 1635 |
+
0.578,
|
| 1636 |
+
0.785,
|
| 1637 |
+
0.618
|
| 1638 |
+
],
|
| 1639 |
+
"angle": 0,
|
| 1640 |
+
"content": "25. Bojanowski, P., Grave, E., Joulin, A., Mikolov, T.: Enriching Word Vectors with Subword Information. (2017). Transactions of the Association for Computational Linguistics, 5, 135-146."
|
| 1641 |
+
},
|
| 1642 |
+
{
|
| 1643 |
+
"type": "ref_text",
|
| 1644 |
+
"bbox": [
|
| 1645 |
+
0.217,
|
| 1646 |
+
0.619,
|
| 1647 |
+
0.785,
|
| 1648 |
+
0.646
|
| 1649 |
+
],
|
| 1650 |
+
"angle": 0,
|
| 1651 |
+
"content": "26. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is All you Need. (2017). NIPS."
|
| 1652 |
+
},
|
| 1653 |
+
{
|
| 1654 |
+
"type": "ref_text",
|
| 1655 |
+
"bbox": [
|
| 1656 |
+
0.217,
|
| 1657 |
+
0.647,
|
| 1658 |
+
0.785,
|
| 1659 |
+
0.673
|
| 1660 |
+
],
|
| 1661 |
+
"angle": 0,
|
| 1662 |
+
"content": "27. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. (2015). Nature, 521(7553), pp.436-444."
|
| 1663 |
+
},
|
| 1664 |
+
{
|
| 1665 |
+
"type": "ref_text",
|
| 1666 |
+
"bbox": [
|
| 1667 |
+
0.217,
|
| 1668 |
+
0.674,
|
| 1669 |
+
0.785,
|
| 1670 |
+
0.702
|
| 1671 |
+
],
|
| 1672 |
+
"angle": 0,
|
| 1673 |
+
"content": "28. Devlin, J., Chang, M., Lee, K., Toutanova, K.: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. (2019). NAACL-HLT."
|
| 1674 |
+
},
|
| 1675 |
+
{
|
| 1676 |
+
"type": "ref_text",
|
| 1677 |
+
"bbox": [
|
| 1678 |
+
0.217,
|
| 1679 |
+
0.702,
|
| 1680 |
+
0.785,
|
| 1681 |
+
0.741
|
| 1682 |
+
],
|
| 1683 |
+
"angle": 0,
|
| 1684 |
+
"content": "29. Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R., Le, Q.V.: XL-Net: Generalized Autoregressive Pretraining for Language Understanding. (2019). NeurIPS."
|
| 1685 |
+
},
|
| 1686 |
+
{
|
| 1687 |
+
"type": "ref_text",
|
| 1688 |
+
"bbox": [
|
| 1689 |
+
0.217,
|
| 1690 |
+
0.742,
|
| 1691 |
+
0.785,
|
| 1692 |
+
0.784
|
| 1693 |
+
],
|
| 1694 |
+
"angle": 0,
|
| 1695 |
+
"content": "30. Dai, Z., Yang, Z., Yang, Y., Carbonell, J., Le, Q.V., Salakhutdinov, R.: Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. (2019). ACL."
|
| 1696 |
+
},
|
| 1697 |
+
{
|
| 1698 |
+
"type": "ref_text",
|
| 1699 |
+
"bbox": [
|
| 1700 |
+
0.217,
|
| 1701 |
+
0.785,
|
| 1702 |
+
0.785,
|
| 1703 |
+
0.827
|
| 1704 |
+
],
|
| 1705 |
+
"angle": 0,
|
| 1706 |
+
"content": "31. Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., Soricut, R.: ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. (2020). ArXiv, abs/1909.11942."
|
| 1707 |
+
},
|
| 1708 |
+
{
|
| 1709 |
+
"type": "list",
|
| 1710 |
+
"bbox": [
|
| 1711 |
+
0.217,
|
| 1712 |
+
0.148,
|
| 1713 |
+
0.785,
|
| 1714 |
+
0.827
|
| 1715 |
+
],
|
| 1716 |
+
"angle": 0,
|
| 1717 |
+
"content": null
|
| 1718 |
+
}
|
| 1719 |
+
]
|
| 1720 |
+
]
|
data/2021/2101_00xxx/2101.00180/full.md
CHANGED
# Transformer based Automatic COVID-19 Fake News Detection System

Sunil Gundapu and Radhika Mamidi

International Institute of Information Technology, Hyderabad
sunil.g@research.iiit.ac.in, radhika.mamidi@iiit.ac.in

Abstract. Recent rapid technological advancements in online social networks such as Twitter have led to a sharp rise in the spread of false information and fake news. Misinformation is especially prevalent in the ongoing coronavirus disease (COVID-19) pandemic, leading individuals to accept bogus and potentially deleterious claims and articles. Quick detection of fake news can reduce the spread of panic and confusion among the public. In this paper, we report a methodology to analyze the reliability of information shared on social media pertaining to the COVID-19 pandemic. Our best approach is based on an ensemble of three transformer models (BERT, ALBERT, and XLNet) for detecting fake news. This model was trained and evaluated in the context of the ConstraintAI-2021 shared task "COVID19 Fake News Detection in English" [1]. Our system obtained an F1 score of 0.9855 on the test set and ranked 5th among 160 teams.

Keywords: COVID-19, fake news, deep learning, transformer models

# 1 Introduction

The COVID-19 pandemic is considered the worst global public health crisis and the biggest problem people have faced since World War II. COVID-19, a contagious disease caused by a coronavirus, had caused more than 75 million confirmed cases and 1.7 million deaths across the world by December 2020<sup>1</sup>. Unfortunately, misinformation about COVID-19 has fueled the spread of the disease and chaos among people. At the Munich Security Conference held on February 15, 2020, World Health Organization (WHO) Director-General Tedros Adhanom Ghebreyesus [2] stated that the world was fighting not only a pandemic, but also an infodemic. We should therefore address the challenge of fake news detection to stop the spread of COVID-19 misinformation.

Since the global pandemic affects everyone, a broad public is searching for information about COVID-19, and their safety is threatened by adversarial agents invested in spreading fake news for economic and political reasons. Moreover, on medical and public health issues it is hard for any claim to be completely valid and factual, leading to discrepancies that fake news makes worse. This difficulty is compounded by the quick advancement of knowledge about the disease: as researchers learn more about the virus, claims that looked right may turn out to be false, and vice versa. Detecting the spread of COVID-19 associated fake news has thus become a pivotal problem, gaining notable attention from government and global health organizations (WHO, 2020), online social networks (TechCrunch, 2020), and news organizations (BBC, 2020; CNN, 2020; New York Times, 2020).

In response to this disinformation, this paper develops an efficient fake news detection architecture for COVID-19. We initially developed machine learning (ML) algorithms with Term Frequency-Inverse Document Frequency (TF-IDF) feature vectors to detect misinformation on the provided dataset. These supervised TF-IDF methods are still relevant for many classification tasks and performed quite well for fake news detection. We then developed an effective ensemble model that integrates three transformer models for detecting fake news on social media platforms, resulting in higher accuracy and a more generalized model.

The rest of this paper is organized as follows: Section II presents prior work related to fake news and its spread on social media platforms. Section III describes the dataset provided in the ConstraintAI-2021 shared task. Section IV presents the implemented models and framework for misinformation detection. Section V discusses the results, and Section VI concludes the paper.

# 2 Related Work

Fake News Detection: Fake news can be defined as inaccurate and misleading information that is spread knowingly or unknowingly [3]. Recognizing the spread of false information such as rumors, fake news, propaganda, hoaxes, spear phishing, and conspiracy theories is an essential task for natural language processing [4]. Gartner's research [5] predicted that by 2022, most people in advanced economies will consume more false information than truthful information.

To date, many automated misinformation detection architectures have been developed. Rohit et al. [6] provided an extensive survey of fake news detection on various online social networks. Ghorbani et al. [7] presented an inclusive overview of recent studies related to misinformation; they described the impact of misleading information, reviewed state-of-the-art fake news detection systems, and explored disinformation detection datasets. The majority of fake news detection models are developed using supervised machine learning algorithms to classify data as misleading or not [8]. This supervised classification is performed by comparing the user's input text with already created corpora containing genuine and misleading information [9].

Aswini et al. [10] proposed a deep learning architecture with various word embeddings for the Fake News Challenge (FNC-1) dataset<sup>2</sup>. They developed the architecture to accurately predict the stance between a given pair of news headline and the corresponding article/body. On the same FNC-1 dataset, Sean et al. [11] developed TalosComb, an average-weighted combination of TalosCNN and TalosTree: TalosCNN is a convolutional neural network with pre-trained word2vec embeddings, and TalosTree is a gradient-boosted decision tree model with SVD, word count, and TF-IDF features. By analyzing the relationship between a news headline and the corresponding article, Heejung et al. [12] designed a Bidirectional Encoder Representations from Transformers (BERT) model to detect misleading news articles.

COVID-19: In the case of COVID-19 fake news, a large amount of misleading content remains online on social media platforms. NLP researchers have been working on algorithms for detecting online COVID-19 related disinformation. Any such algorithm requires a corpus, so members of the NLP community created various fake news datasets: FakeCovid [13], ReCOVery [14], CoAID [15], and CMU-MisCOVID19 [16]. Yichuan Li et al. [17] developed the multi-dimensional and multilingual MM-COVID corpus, which covers six languages. Mabrook et al. [18] created a large Twitter dataset related to COVID-19 misinformation, and developed an ensemble-stacking model with six machine learning algorithms on it for detecting misinformation.

Elhadad et al. [22] constructed a voting ensemble machine learning classifier for fake news detection that uses seven feature extraction techniques and ten machine learning models. Tamanna et al. [20] used the COVIDLies dataset to detect misinformation by retrieving the misconceptions relevant to Twitter posts. For COVID-19 fake news detection and fact-checking, Rutvik et al. [19] proposed a two-stage transformer model: the first stage retrieves the most relevant facts about COVID-19 using a novel fact-checking algorithm, and the second stage verifies the level of truth by computing textual entailment. Adapting these classical and hybrid techniques from related work, we developed the COVID-19 fake news detection system presented in this paper.

# 3 Dataset Description

The ConstraintAI-2021 shared task organizers developed the COVID-19 Fake News Detection in English dataset [21], containing 10,700 data points collected from various online social networks such as Twitter, Facebook, and Instagram. Of the total dataset, 6,420 data points are reserved for training, 2,140 data points are used for hyperparameter tuning in the validation phase, and the remaining 2,140 social media posts are kept aside for testing. Each split except the test set contains social media data points and their corresponding labels, either real or fake.

<table><tr><td>Corpus</td><td>Real</td><td>Fake</td></tr><tr><td>Train</td><td>3360</td><td>3060</td></tr><tr><td>Valid</td><td>1120</td><td>1020</td></tr><tr><td>Test</td><td>1120</td><td>1020</td></tr></table>

(a) Dataset statistics

<table><tr><td>Tweet</td><td>Label</td></tr><tr><td>CDC Recommends Mothers Stop Breastfeeding To Boost Vaccine Efficacy</td><td>fake</td></tr><tr><td>1000 COVID-19 testing labs in India: ICMR</td><td>real</td></tr></table>

(b) Label-wise examples

Table 1: Fake news dataset information

Table 1(a) shows the corpus size and label distribution; the labels in each split are roughly balanced. Table 1(b) shows some examples from the COVID-19 fake news detection dataset. Figures 1(a) and 1(b) illustrate word clouds of the most frequent words in the real and fake data points after stop-word removal. In Figure 1(a), we can see words unique to real-labeled data points that rarely occur in Figure 1(b), such as "covid19", "discharged", "confirmed", "testing", "indiafightscorona", and "indiawin". Meanwhile, Figure 1(b) shows words that frequently appear in fake articles but not in real-labeled data points, including "coronavirus", "kill", "muslim", "hydroxychloroquine", "china", and "facebook post". These frequent words provide important signals for differentiating real data points from fake ones.


(a) Positive word cloud


(b) Negative word cloud

Fig. 1: Illustration of frequent word clouds

# 4 Methodology

In this section, we present our transformer-based ensemble model, trained and tuned on the dataset described in the previous section, and compare it with various machine learning (ML) and deep learning (DL) models using different word embeddings. The full code of the system architecture can be found on GitHub<sup>4</sup>.

# 4.1 Data Preprocessing

The main aim of this step is to preprocess the input tweets with NLP techniques and prepare them for feature extraction. Figure 2 shows the detailed data preprocessing pipeline with examples.


Fig. 2: Data preprocessing pipeline

In the preprocessing step, we forward each tokenized tweet through the pipeline to eliminate noise in the fake news dataset by removing or normalizing unnecessary tokens. The pipeline includes the following sub-steps:

1. Emoticon conversion: We converted each emoticon in the tweet to text. Example: the face-with-medical-mask emoji $\rightarrow$ "face with medical mask"
2. Handling of hashtags: We identified hashtag tokens by the pound (#) sign and split them on digits or capital letters. Example: #IndiaFightsCorona → India Fights Corona
3. Stemming: We removed inflectional morphemes like "ed", "est", "s", and "ing" from the token stem. Example: confirmed $\rightarrow$ "confirm" + "-ed"
4. Text cleaning: This step removes irrelevant data: punctuation marks, digits, and non-ASCII glyphs.
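The pipeline above can be sketched in plain Python. This is a hypothetical reconstruction, not the authors' code; the stemming step is omitted here and would typically use, e.g., NLTK's PorterStemmer.

```python
import re

# Toy emoji-to-text table; a real system would cover the full emoji set.
EMOTICON_MAP = {"\U0001F637": "face with medical mask"}

def convert_emoticons(text: str) -> str:
    # Step 1: replace each emoji with its textual name.
    for emoji, name in EMOTICON_MAP.items():
        text = text.replace(emoji, name)
    return text

def split_hashtags(text: str) -> str:
    # Step 2: "#IndiaFightsCorona" -> "India Fights Corona"
    # (split the tag body on capital letters or digit runs).
    def expand(match):
        tag = match.group(1)
        return " ".join(re.findall(r"[A-Z]+[a-z]*|\d+|[a-z]+", tag))
    return re.sub(r"#(\w+)", expand, text)

def clean_text(text: str) -> str:
    # Step 4: drop non-ASCII glyphs, punctuation, and digits.
    text = re.sub(r"[^\x00-\x7F]", " ", text)
    text = re.sub(r"[^A-Za-z\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

def preprocess(tweet: str) -> str:
    return clean_text(split_hashtags(convert_emoticons(tweet)))
```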

# 4.2 Supervised Machine Learning Models

To build the best-performing system for fake news detection, we started our investigation with traditional NLP approaches: Logistic Regression (LR), Support Vector Machines (SVM), Passive Aggressive Classifier (PAC), XGBoost, and Multi-Layer Perceptron (MLP). We studied the results of the above-mentioned supervised models in combination with three types of word vectors:

1. Word-level, n-gram-level, and character-level TF-IDF vectors with a feature matrix size of 100,000.
2. English GloVe [23] word embeddings of dimension 300.
3. TF-IDF weighted averaging of GloVe embeddings. The tweet vector construction is described below.

$$
\mathrm{Tweet}_{vector} = \frac{\sum_{i=1}^{N} \mathrm{tf\text{-}idf}(token_{i}) \times \mathrm{GloVe}(token_{i})}{N} \tag{1}
$$

In the above formula, $N$ is the total number of words in the input tweet, and $token_i$ is the $i^{th}$ token in the input text. In our experiments, TF-IDF weighted averaging gave better results than the standard TF-IDF vectors.
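Eq. (1) can be sketched as follows; the toy corpus and the 3-dimensional stand-in for the 300-d pre-trained GloVe table are assumptions for illustration only.

```python
import math
from collections import Counter

# Stand-in for 300-d pre-trained GloVe embeddings (toy 3-d vectors).
glove = {
    "covid": [0.1, 0.2, 0.3],
    "testing": [0.4, 0.1, 0.0],
    "labs": [0.2, 0.2, 0.2],
}

# Tiny tokenized corpus used only to compute document frequencies.
corpus = [["covid", "testing"], ["covid", "labs"], ["testing", "labs", "covid"]]

def idf(token):
    df = sum(token in doc for doc in corpus)
    return math.log(len(corpus) / (1 + df)) + 1.0  # smoothed IDF

def tweet_vector(tokens, dim=3):
    # TF-IDF-weighted average of GloVe vectors, divided by N as in Eq. (1).
    tf = Counter(tokens)
    vec = [0.0] * dim
    for tok, count in tf.items():
        weight = (count / len(tokens)) * idf(tok)
        for k, g in enumerate(glove.get(tok, [0.0] * dim)):
            vec[k] += weight * g
    return [v / len(tokens) for v in vec]
```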

# 4.3 Deep Learning Models

The supervised machine learning algorithms performed very well on the provided dataset. In this section, we experiment with deep learning models, which give better results than the traditional classification algorithms.

LSTM: We used the Long Short-Term Memory (LSTM) [24] architecture with two different pre-trained word embeddings, GloVe and FastText [25]. LSTM is a type of Recurrent Neural Network (RNN) that can handle the long-term dependency problem and is well suited for sequence classification.

We converted the input data points into word vectors using pre-trained word embeddings and passed these vectors as input to the LSTM layers. We stacked two LSTM layers of hidden size 128, one after another, with a dropout of 0.25. The output of the final time step is treated as the representation of the input data point and is passed to a dense layer for fake news detection.
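The stacked-LSTM classifier above can be sketched in PyTorch as follows, a minimal sketch under the assumptions stated in the text (two layers, hidden size 128, dropout 0.25, 300-d embedding inputs); the class name is hypothetical.

```python
import torch
import torch.nn as nn

class FakeNewsLSTM(nn.Module):
    def __init__(self, emb_dim=300, hidden=128, classes=2):
        super().__init__()
        # Two stacked LSTM layers with dropout 0.25 between them.
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=2,
                            dropout=0.25, batch_first=True)
        self.fc = nn.Linear(hidden, classes)

    def forward(self, x):                 # x: (batch, seq_len, emb_dim)
        out, _ = self.lstm(x)             # (batch, seq_len, hidden)
        return self.fc(out[:, -1, :])     # last time step -> class logits

model = FakeNewsLSTM()
logits = model(torch.randn(4, 20, 300))   # 4 tweets, 20 tokens each
```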

BiLSTM with Attention: Not all tokens in the input text contribute equally to its representation. We therefore leverage a word attention [26] mechanism, built on top of BiLSTM layers, to capture each token's influence on the representation of the input data point.

The sequence of word vectors is passed through a BiLSTM layer, which contains one forward and one backward LSTM. The attention mechanism is applied to the output of the BiLSTM layer, producing a dense vector that is forwarded to a fully connected network.
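A minimal PyTorch sketch of this mechanism follows; the additive scoring layer is an assumption in the spirit of [26], not the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLSTMAttention(nn.Module):
    def __init__(self, emb_dim=300, hidden=128, classes=2):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)  # scalar score per time step
        self.fc = nn.Linear(2 * hidden, classes)

    def forward(self, x):                       # x: (batch, seq, emb_dim)
        h, _ = self.bilstm(x)                   # (batch, seq, 2*hidden)
        alpha = F.softmax(self.score(h), dim=1) # attention weights over tokens
        context = (alpha * h).sum(dim=1)        # weighted sum -> dense vector
        return self.fc(context)                 # class logits

logits = BiLSTMAttention()(torch.randn(4, 20, 300))
```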

CNN: We explored a Convolutional Neural Network (CNN) [27] model for misinformation detection. The model consists of an embedding layer, a convolution layer with three convolutions, a max-pooling layer, and a fully connected network. In the embedding layer, the input texts are converted into an $n \times d$ sequence matrix, where $n$ is the length of the input data point and $d$ is the word embedding dimension. In the convolution layer, the sequence matrix is fed through three 1D convolutions with kernel sizes 3, 4, and 5, each with 128 filters. The convolution outputs are max-pooled over time and concatenated in the max-pooling layer to obtain the input data point representation, which is passed to a fully connected network with a softmax output layer.
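The convolution branch can be sketched in PyTorch as follows; the kernel sizes and filter counts follow the text, while the class name is hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FakeNewsCNN(nn.Module):
    def __init__(self, emb_dim=300, filters=128, classes=2):
        super().__init__()
        # Three 1-D convolutions with kernel sizes 3, 4, 5 and 128 filters each.
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, filters, k) for k in (3, 4, 5)])
        self.fc = nn.Linear(3 * filters, classes)

    def forward(self, x):                      # x: (batch, seq, emb_dim)
        x = x.transpose(1, 2)                  # Conv1d expects (batch, emb, seq)
        # Max-pool each convolution output over time, then concatenate.
        pooled = [F.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))  # logits over {real, fake}

logits = FakeNewsCNN()(torch.randn(4, 20, 300))
```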

CNN + BiLSTM: The CNN + BiLSTM architecture is a combination of the CNN and bidirectional LSTM models with FastText/GloVe word embeddings. In this architecture, the CNN extracts the maximum amount of features from the input text using convolution layers, and its output becomes the input to the BiLSTM, which processes the data in chronological order in both directions.

The sequence of word vectors is forwarded through a convolution of kernel size 3 with 128 filters. The output of the convolution is passed through a BiLSTM, whose outcome is max-pooled over time and followed by one dense layer and a softmax layer.

# 4.4 Transformer Models

This section explores the three transformer models BERT, ALBERT, and XLNet, both individually and as an ensemble. These models outperformed the other ML and DL algorithms. We implemented them using HuggingFace<sup>5</sup>, a PyTorch transformer library. The hyperparameters of the three models are described in Table 2.

<table><tr><td>Model</td><td>Learning Rate</td><td>Batch Size</td><td>Optimizer</td><td>Max Length</td><td>Type</td></tr><tr><td>BERT</td><td>2e-5</td><td>16</td><td>Adam</td><td>128</td><td>BERT-Base</td></tr><tr><td>XLNet</td><td>2e-5</td><td>16</td><td>Adam</td><td>128</td><td>XLNet-Large</td></tr><tr><td>ALBERT</td><td>2e-5</td><td>32</td><td>Adam</td><td>128</td><td>ALBERT-xlarge</td></tr></table>

Table 2: Hyperparameters of transformer models

BERT: Bidirectional Encoder Representations from Transformers (henceforth, BERT) [28] is a transformer model developed to pre-train deep bidirectional representations from unlabeled data. It combines two robust ideas: (i) it is a deep transformer model, so it can process lengthy sentences effectively using the attention mechanism, and (ii) it is a bidirectional network, so it takes the entire text passage into account to comprehend the meaning of each token.

BERT implementation has two steps: pre-training and fine-tuning. In the first step, the model is trained on unlabeled data over various pre-training tasks, using a corpus in a particular language or a combined corpus in multiple languages. In the second step, all parameters are initialized from the pre-trained model and fine-tuned using labeled data from the downstream task.

We fine-tuned the pre-trained BERT-Base model for our COVID-19 fake news detection task. The BERT-Base model contains 12 encoder blocks and 12 bidirectional self-attention heads; it consumes a sequence of up to 512 tokens and emits a sequence of hidden vector representations. We added one output layer on top of the BERT model to compute the conditional probability over the output classes, either fake or real. See Figure 3 for the fine-tuned BERT model.


Fig. 3: BERT model architecture
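A minimal, runnable sketch of this fine-tuning setup is shown below. To keep it self-contained, a toy encoder stands in for the real pre-trained BERT-Base from HuggingFace (so no weights need downloading); the class and variable names are hypothetical.

```python
import torch
import torch.nn as nn

class FakeNewsBert(nn.Module):
    def __init__(self, encoder, hidden=768, classes=2):
        super().__init__()
        self.encoder = encoder            # pre-trained BERT-Base in the paper
        self.classifier = nn.Linear(hidden, classes)  # added output layer

    def forward(self, token_embs):
        hidden_states = self.encoder(token_embs)  # (batch, seq, 768)
        cls_vec = hidden_states[:, 0, :]          # [CLS] token representation
        return self.classifier(cls_vec)           # logits over {real, fake}

# Toy stand-in encoder so the sketch runs without downloading weights.
toy_encoder = nn.Sequential(nn.Linear(768, 768), nn.Tanh())
model = FakeNewsBert(toy_encoder)
logits = model(torch.randn(2, 128, 768))  # 2 tweets, max length 128
```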

XLNet: XLNet [29] is an enhanced version of BERT. To understand language context more deeply, XLNet uses Transformer-XL [30] as its feature-extraction backbone, which is itself an improvement over the original Transformer. Transformer-XL adds two components to the Transformer used in BERT, a recurrence mechanism and relative positional encoding (RPE), to handle long-term dependencies in texts longer than the maximum allowed input length. The recurrence mechanism provides context between consecutive segments, and RPE carries relative position information between two tokens.

The XLNet model was trained on a huge dataset using permutation language modeling. This technique is one of the main differences between BERT and XLNet: it uses permutations of the factorization order so that the context of each token is drawn from both the forward and backward directions at the same time. We used the pre-trained XLNet model from HuggingFace and fine-tuned it with a maximum sequence length of 128 to fit our fake news detection dataset.

ALBERT: Modern language models keep increasing the model size and number of parameters when pre-training natural language representations. This often yields improvements on many downstream tasks, but in some cases it becomes impractical due to memory limitations and long training times. To address these problems, the self-supervised model ALBERT (A Lite BERT) [31] uses parameter-reduction techniques to increase training speed and lower memory consumption. We used ALBERT for our misinformation detection problem, where it achieved better performance than the DL models.

Ensemble Model: We ensembled the three transformer models BERT, ALBERT, and XLNet for better prediction (see Figure 4). After extracting the softmax probabilities from each model, the ensemble computes the average of the three softmax distributions. This model performs relatively better than the individual models.


Fig. 4: Transformer based ensemble model architecture
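The averaging rule can be sketched in plain Python; the probability values below are toy numbers, not actual model outputs.

```python
def ensemble_predict(prob_lists):
    # prob_lists: one [p_real, p_fake] softmax pair per model.
    n = len(prob_lists)
    avg = [sum(p[c] for p in prob_lists) / n
           for c in range(len(prob_lists[0]))]
    # Arg-max over the averaged distribution picks the final label.
    return ("real", "fake")[avg.index(max(avg))], avg

# Hypothetical BERT, ALBERT, and XLNet softmax outputs for one tweet.
label, avg = ensemble_predict([[0.55, 0.45], [0.40, 0.60], [0.62, 0.38]])
```

Averaging lets one confident, correct model outvote two weakly wrong ones, which is exactly the behavior observed in the misclassified samples of Table 4.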

# 5 Results and Discussion

In this section, we compare the performance of the various machine learning, deep learning, and transformer-based models using several evaluation metrics: precision, recall, weighted F1 score, and accuracy. The results of the various experiments on the test set are reported in Table 3. They clearly show that transformer-based models are considerably better than the other machine learning and deep learning models for our COVID-19 misinformation detection task. While doing the experiments, we also observed that some models are better at retrieving prominent features, while others achieve the best classification performance.

<table><tr><td>Model Type</td><td>Model</td><td>Precision</td><td>Recall</td><td>Accuracy</td><td>F1-Score</td></tr><tr><td rowspan="3">ML Models</td><td>SVM</td><td>0.9640</td><td>0.9640</td><td>0.964013</td><td>0.964037</td></tr><tr><td>PAC</td><td>0.9673</td><td>0.9673</td><td>0.967285</td><td>0.967289</td></tr><tr><td>MLP</td><td>0.9645</td><td>0.9645</td><td>0.964494</td><td>0.964485</td></tr><tr><td rowspan="4">Deep Learning Models</td><td>LSTM with FastText</td><td>0.9682</td><td>0.9682</td><td>0.968220</td><td>0.968224</td></tr><tr><td>CNN with FastText</td><td>0.9698</td><td>0.9698</td><td>0.969802</td><td>0.969819</td></tr><tr><td>LSTM + CNN</td><td>0.9762</td><td>0.9762</td><td>0.976163</td><td>0.976168</td></tr><tr><td>BiLSTM + Attention</td><td>0.9790</td><td>0.9785</td><td>0.978524</td><td>0.978504</td></tr><tr><td rowspan="4">Transformer Models</td><td>BERT</td><td>0.9813</td><td>0.9813</td><td>0.981306</td><td>0.981308</td></tr><tr><td>ALBERT</td><td>0.9781</td><td>0.9781</td><td>0.978031</td><td>0.978037</td></tr><tr><td>XLNet</td><td>0.9787</td><td>0.9789</td><td>0.978596</td><td>0.978592</td></tr><tr><td>Ensemble Model</td><td>0.9855</td><td>0.9855</td><td>0.985512</td><td>0.985514</td></tr></table>

Table 3: Comparison of various fake news detection models on the test set

Classical machine learning models with various TF-IDF feature vectors provide approximate baseline results. We observe that the TF-IDF weighted average performed better than the plain TF-IDF vectors. The F1 score of the bidirectional LSTM with the attention mechanism is very close to that of the transformer models. BERT, XLNet, and ALBERT demonstrate better performance than the deep learning models. The ensemble of the transformer models produces the best F1 score of 0.9855 on the test set; this transformer-based model ranked 5th among 160 teams.

<table><tr><td>Test Sample</td><td>BERT</td><td>ALBERT</td><td>XLNet</td><td>Ensemble</td></tr><tr><td>#BillGates is shocked that America's pandemic response is among the worst in the world.</td><td>✓</td><td>✗</td><td>✗</td><td>✓</td></tr><tr><td>We will all come out stronger from this #COVID #pandemic. Just #StaySafeStayHealthy</td><td>✗</td><td>✗</td><td>✓</td><td>✓</td></tr></table>

Table 4: Misclassified samples from the test set

For some problems, ensembling transformer models is very difficult, and sometimes this approach does not perform well. However, the results of the individual transformer models on our dataset are very close, meaning that any of them could be used for our fake news detection task. This is the major reason behind ensembling the transformer models.

Table 4 shows two misclassified test samples. The first sample's true label is "real", but only BERT predicted it correctly; the other two models predicted it wrongly. The second sample's true label is "fake"; here only XLNet predicted correctly, and the remaining two models predicted wrongly. Nevertheless, the ensemble model is correct in both cases because we average the BERT, ALBERT, and XLNet softmax probabilities. This is a principal observation motivating the ensembling of transformer models.

# 6 Conclusion

In this paper, we presented various algorithms to combat the global infodemic; the transformer-based algorithms performed better than the others. We submitted these models to the shared task on COVID-19 fake news detection for English at the ConstraintAI-2021 workshop.

Fake news is an increasingly significant and tricky problem to solve, particularly in an unanticipated situation like the COVID-19 pandemic. Leveraging state-of-the-art classical and advanced NLP models can help address COVID-19 fake news detection and other global health emergencies. In future work, we intend to explore other contextualized embeddings such as FLAIR and ELMo for a better fake news detection system.

# References

1. Patwa, P., Bhardwaj, M., Guptha, V., Kumari, G., Sharma, S., PYKL, S., Das, A., Ekbal, A., Akhtar, M.S., Chakraborty, T.: Overview of CONSTRAINT 2021 Shared Tasks: Detecting English COVID-19 Fake News and Hindi Hostile Posts. (2021). In: Proceedings of the First Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situation (CONSTRAINT). Springer.
2. Datta, R., Yadav, K., Singh, A., Datta, K., Bansal, A.: The infodemics of COVID-19 amongst healthcare professionals in India. (2020). Med. J. Armed Forces India, 76(3), 276-283.
3. Chen, X., Sin, S.J.: 'Misinformation? What of it?' Motivations and individual differences in misinformation sharing on social media. (2013). In: ASIST.
4. Thorne, J., Vlachos, A.: Automated Fact Checking: Task formulations, methods and future directions. (2018). In: COLING.
5. Titcomb, J., Carson, J.: Fake news: What exactly is it - and how can you spot it? www.telegraph.co.uk.
6. Kaliyar, R., Singh, N.: Misinformation Detection on Online Social Media - A Survey. (2019). pp. 1-6. 10.1109/ICCCNT45670.2019.8944587.
7. Zhang, X., Ghorbani, A.: An overview of online fake news: Characterization, detection, and discussion. (2020). Inf. Process. Manag., 57, 102025.
8. Khan, J.Y., Khondaker, M.T., Iqbal, A., Afroz, S.: A Benchmark Study on Machine Learning Methods for Fake News Detection. (2019). ArXiv, abs/1905.04749.
9. Elhadad, M., Li, K.F., Gebali, F.: A Novel Approach for Selecting Hybrid Features from Online News Textual Metadata for Fake News Detection. (2019). In: Proc. 3PGCIC, Antwerp, Belgium, pp. 914-925.
10. Thota, A., Tilak, P., Ahluwalia, S., Lohia, N.: Fake News Detection: A Deep Learning Approach. (2018). SMU Data Science Review, 1(3), Article 10.
11. Sean, B., Doug, S., Yuxi, P.: Talos Targets Disinformation with Fake News Challenge Victory. (2017). Available online: https://blog.talosintelligence.com/2017/06/talos-fake-news-challenge.html
12. Jwa, H., Oh, D., Park, K., Kang, J., Lim, H.: exBAKE: Automatic Fake News Detection Model Based on Bidirectional Encoder Representations from Transformers (BERT). (2019). Applied Sciences, 9, 4062.
13. Shahi, G.K., Nandini, D.: FakeCovid - A Multilingual Cross-domain Fact Check News Dataset for COVID-19. (2020). ArXiv, abs/2006.11343.
14. Zhou, X., Mulay, A., Ferrara, E., Zafarani, R.: ReCOVery: A Multimodal Repository for COVID-19 News Credibility Research. (2020). In: Proceedings of the 29th ACM International Conference on Information & Knowledge Management.
15. Cui, L., Lee, D.: CoAID: COVID-19 Healthcare Misinformation Dataset. (2020). ArXiv, abs/2006.00885.
16. Memon, S.A., Carley, K.M.: Characterizing COVID-19 Misinformation Communities Using a Novel Twitter Dataset. (2020). ArXiv, abs/2008.00791.
17. Li, Y., Jiang, B., Shu, K., Liu, H.: MM-COVID: A Multilingual and Multi-modal Data Repository for Combating COVID-19 Disinformation. (2020). ArXiv, abs/2011.04088.
18. Al-Rakhami, M.S., Al-Amri, A.M.: Lies Kill, Facts Save: Detecting COVID-19 Misinformation in Twitter. (2020). IEEE Access, 8, 155961-155970.
19. Vijjali, R., Potluri, P., Kumar, S., Teki, S.: Two Stage Transformer Model for COVID-19 Fake News Detection and Fact Checking. (2020).
20. Hossain, T., Logan IV, R.L., Ugarte, A., Matsubara, Y., Young, S., Singh, S.: COVIDLies: Detecting COVID-19 Misinformation on Social Media. (2020). NLP4COVID@EMNLP.
21. Patwa, P., Sharma, S., Pykl, S., Guptha, V., Kumari, G., Akhtar, M.S., Ekbal, A., Das, A., Chakraborty, T.: Fighting an Infodemic: COVID-19 Fake News Dataset. (2020). ArXiv, abs/2011.03327.
22. Elhadad, M.K., Li, K., Gebali, F.: Detecting Misleading Information on COVID-19. (2020). IEEE Access, 8, 165201-165215.
23. Pennington, J., Socher, R., Manning, C.D.: GloVe: Global Vectors for Word Representation. (2014). In: EMNLP.
|
| 201 |
+
24. Hochreiter, S., Schmidhuber, J.: Long Short-Term Memory. (1997). Neural Computation, 9, 1735-1780.
|
| 202 |
+
25. Bojanowski, P., Grave, E., Joulin, A., Mikolov, T.: Enriching Word Vectors with Subword Information. (2017). Transactions of the Association for Computational Linguistics, 5, 135-146.
|
| 203 |
+
26. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is All you Need. (2017). NIPS.
|
| 204 |
+
27. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. (2015). Nature, 521(7553), pp.436-444.
|
| 205 |
+
28. Devlin, J., Chang, M., Lee, K., Toutanova, K.: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. (2019). NAACL-HLT.
|
| 206 |
+
29. Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R., Le, Q.V.: XL-Net: Generalized Autoregressive Pretraining for Language Understanding. (2019). NeurIPS.
|
| 207 |
+
30. Dai, Z., Yang, Z., Yang, Y., Carbonell, J., Le, Q.V., Salakhutdinov, R.: Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. (2019). ACL.
|
| 208 |
+
31. Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., Soricut, R.: ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. (2020). ArXiv, abs/1909.11942.
|
data/2021/2101_00xxx/2101.00180/layout.json CHANGED
The diff for this file is too large to render. See raw diff.

data/2021/2101_00xxx/2101.00190/d5499c11-780e-4ee3-a8b8-c44413fe322b_content_list.json CHANGED
The diff for this file is too large to render. See raw diff.

data/2021/2101_00xxx/2101.00190/d5499c11-780e-4ee3-a8b8-c44413fe322b_model.json CHANGED
The diff for this file is too large to render. See raw diff.

data/2021/2101_00xxx/2101.00190/full.md CHANGED
@@ -1,3 +1,385 @@

# Prefix-Tuning: Optimizing Continuous Prompts for Generation

Xiang Lisa Li
Stanford University
xlisali@stanford.edu

Percy Liang
Stanford University
pliang@cs.stanford.edu

# Abstract

Fine-tuning is the de facto way to leverage large pretrained language models to perform downstream tasks. However, it modifies all the language model parameters and therefore necessitates storing a full copy for each task. In this paper, we propose prefix-tuning, a lightweight alternative to fine-tuning for natural language generation tasks, which keeps language model parameters frozen, but optimizes a small continuous task-specific vector (called the prefix). Prefix-tuning draws inspiration from prompting, allowing subsequent tokens to attend to this prefix as if it were "virtual tokens". We apply prefix-tuning to GPT-2 for table-to-text generation and to BART for summarization. We find that by learning only $0.1\%$ of the parameters, prefix-tuning obtains comparable performance in the full data setting, outperforms fine-tuning in low-data settings, and extrapolates better to examples with topics unseen during training.

# 1 Introduction

Fine-tuning is the prevalent paradigm for using large pretrained language models (LMs) (Radford et al., 2019; Devlin et al., 2019) to perform downstream tasks (e.g., summarization), but it requires updating and storing all the parameters of the LM. Consequently, to build and deploy NLP systems that rely on large pretrained LMs, one currently needs to store a modified copy of the LM parameters for each task. This can be prohibitively expensive, given the large size of current LMs; for example, GPT-2 has 774M parameters (Radford et al., 2019) and GPT-3 has 175B parameters (Brown et al., 2020).

A natural approach to this problem is lightweight fine-tuning, which freezes most of the pretrained parameters and augments the model with small trainable modules. For example, adapter-tuning (Rebuffi et al., 2017; Houlsby et al., 2019) inserts additional task-specific layers between the layers of pretrained language models. Adapter-tuning has promising performance on natural language understanding and generation benchmarks, attaining comparable performance with fine-tuning while adding only around $2-4\%$ task-specific parameters (Houlsby et al., 2019; Lin et al., 2020).

Figure 1: Fine-tuning (top) updates all Transformer parameters (the red Transformer box) and requires storing a full model copy for each task. We propose prefix-tuning (bottom), which freezes the Transformer parameters and only optimizes the prefix (the red prefix blocks). Consequently, we only need to store the prefix for each task, making prefix-tuning modular and space-efficient. Note that each vertical block denotes Transformer activations at one time step.

On the extreme end, GPT-3 (Brown et al., 2020) can be deployed without any task-specific tuning. Instead, users prepend a natural language task instruction (e.g., TL;DR for summarization) and a few examples to the task input, then generate the output from the LM. This approach is known as in-context learning or prompting.

In this paper, we propose prefix-tuning, a lightweight alternative to fine-tuning for natural language generation (NLG) tasks, inspired by prompting. Consider the task of generating a textual description of a data table, as shown in Figure 1, where the task input is a linearized table (e.g., "name: Starbucks | type: coffee shop") and the output is a textual description (e.g., "Starbucks serves coffee"). Prefix-tuning prepends a sequence of continuous task-specific vectors to the input, which we call a prefix, depicted by red blocks in Figure 1 (bottom). For subsequent tokens, the Transformer can attend to the prefix as if it were a sequence of "virtual tokens", but unlike prompting, the prefix consists entirely of free parameters which do not correspond to real tokens. In contrast to fine-tuning in Figure 1 (top), which updates all Transformer parameters and thus requires storing a tuned copy of the model for each task, prefix-tuning only optimizes the prefix. Consequently, we only need to store one copy of the large Transformer and a learned task-specific prefix, yielding a very small overhead for each additional task (e.g., 250K parameters for table-to-text).
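
The ~250K figure can be sanity-checked with quick back-of-the-envelope arithmetic. This is a sketch under assumptions: GPT-2 Medium with 24 Transformer layers and hidden size 1024, and the default prefix length of 10.

```python
# Rough parameter count for one task-specific prefix (an assumption-laden
# sketch, not the paper's exact accounting).
n_layers = 24    # GPT-2 Medium Transformer layers (assumed)
hidden = 1024    # GPT-2 Medium hidden size (assumed)
prefix_len = 10  # default prefix length

# h_i concatenates the activations of all layers at one time step,
# so dim(h_i) is roughly n_layers * hidden.
prefix_params = prefix_len * n_layers * hidden
assert prefix_params == 245_760  # ~250K, consistent with the figure quoted above
```

Compare this to storing a full tuned copy of a 774M-parameter model per task.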

In contrast to fine-tuning, prefix-tuning is modular: we train an upstream prefix which steers a downstream LM, which remains unmodified. Thus, a single LM can support many tasks at once. In the context of personalization where the tasks correspond to different users (Shokri and Shmatikov, 2015; McMahan et al., 2016), we could have a separate prefix for each user trained only on that user's data, thereby avoiding data cross-contamination. Moreover, the prefix-based architecture enables us to even process examples from multiple users/tasks in a single batch, something that is not possible with other lightweight fine-tuning approaches.

We evaluate prefix-tuning on table-to-text generation using GPT-2 and abstractive summarization using BART. In terms of storage, prefix-tuning stores $1000\mathrm{x}$ fewer parameters than fine-tuning. In terms of performance when trained on full datasets, prefix-tuning and fine-tuning are comparable for table-to-text (§6.1), while prefix-tuning suffers a small degradation for summarization (§6.2). In low-data settings, prefix-tuning on average outperforms fine-tuning on both tasks (§6.3). Prefix-tuning also extrapolates better to tables (for table-to-text) and articles (for summarization) with unseen topics (§6.4).

# 2 Related Work

Fine-tuning for natural language generation. Current state-of-the-art systems for natural language generation are based on fine-tuning pretrained LMs. For table-to-text generation, Kale (2020) fine-tunes a sequence-to-sequence model (T5; Raffel et al., 2020). For extractive and abstractive summarization, researchers fine-tune masked language models (e.g., BERT; Devlin et al., 2019) and encoder-decoder models (e.g., BART; Lewis et al., 2020) respectively (Zhong et al., 2020; Liu and Lapata, 2019; Raffel et al., 2020). For other conditional NLG tasks such as machine translation and dialogue generation, fine-tuning is also the prevalent paradigm (Zhang et al., 2020c; Stickland et al., 2020; Zhu et al., 2020; Liu et al., 2020). In this paper, we focus on table-to-text using GPT-2 and summarization using BART, but prefix-tuning can be applied to other generation tasks and pretrained models.

Lightweight fine-tuning. Lightweight fine-tuning freezes most of the pretrained parameters and modifies the pretrained model with small trainable modules. The key challenge is to identify high-performing architectures of the modules and the subset of pretrained parameters to tune. One line of research considers removing parameters: some model weights are ablated away by training a binary mask over model parameters (Zhao et al., 2020; Radiya-Dixit and Wang, 2020). Another line of research considers inserting parameters. For example, Zhang et al. (2020a) trains a "side" network that is fused with the pretrained model via summation; adapter-tuning inserts task-specific layers (adapters) between each layer of the pretrained LM (Houlsby et al., 2019; Lin et al., 2020; Rebuffi et al., 2017; Pfeiffer et al., 2020). Compared to this line of work, which tunes around $3.6\%$ of the LM parameters, our method obtains a further 30x reduction in task-specific parameters, tuning only $0.1\%$ while maintaining comparable performance.

Prompting. Prompting means prepending instructions and a few examples to the task input and generating the output from the LM. GPT-3 (Brown et al., 2020) uses manually designed prompts to adapt its generation for different tasks, and this framework is termed in-context learning. However, since Transformers can only condition on a bounded-length context (e.g., 2048 tokens for GPT-3), in-context learning is unable to fully exploit training sets longer than the context window. Sun and Lai (2020) also prompt by keywords to control for sentiment or topic of the generated sentence. In natural language understanding tasks, prompt engineering has been explored in prior works for models like BERT and RoBERTa (Liu et al., 2019; Jiang et al., 2020; Schick and Schütze, 2020). For example, AutoPrompt (Shin et al., 2020) searches for a sequence of discrete trigger words and concatenates it with each input to elicit sentiment or factual knowledge from a masked LM. In contrast with AutoPrompt, our method optimizes continuous prefixes, which are more expressive (§7.2); moreover, we focus on language generation tasks.

Continuous vectors have been used to steer language models; for example, Subramani et al. (2020) showed that a pretrained LSTM language model can reconstruct arbitrary sentences by optimizing a continuous vector for each sentence, making the vector input-specific. In contrast, prefix-tuning optimizes a task-specific prefix that applies to all instances of that task. As a result, unlike the previous work whose application is limited to sentence reconstruction, prefix-tuning can be applied to NLG tasks.

Controllable generation. Controllable generation aims to steer a pretrained language model to match a sentence-level attribute (e.g., positive sentiment or topic on sports). Such control can happen at training time: Keskar et al. (2019) pretrains the language model (CTRL) to condition on metadata such as keywords or URLs. Additionally, the control can happen at decoding time, by weighted decoding (GeDi, Krause et al., 2020) or iteratively updating the past activations (PPLM, Dathathri et al., 2020). However, there is no straightforward way to apply these controllable generation techniques to enforce fine-grained control over generated contents, as demanded by tasks like table-to-text and summarization.

# 3 Problem Statement

Consider a conditional generation task where the input is a context $x$ and the output $y$ is a sequence of tokens. We focus on two tasks, shown in Figure 2 (right): In table-to-text, $x$ corresponds to a linearized data table and $y$ is a textual description; in summarization, $x$ is an article and $y$ is a short summary.

# 3.1 Autoregressive LM

Assume we have an autoregressive language model $p_{\phi}(y \mid x)$ based on the Transformer (Vaswani et al., 2017) architecture (e.g., GPT-2; Radford et al., 2019) and parametrized by $\phi$. As shown in Figure 2 (top), let $z = [x; y]$ be the concatenation of $x$ and $y$; let $\mathsf{X}_{\mathrm{idx}}$ denote the sequence of indices that corresponds to $x$, and $\mathsf{Y}_{\mathrm{idx}}$ denote the same for $y$.

The activation at time step $i$ is $h_i \in \mathbb{R}^d$, where $h_i = [h_i^{(1)};\dots ;h_i^{(n)}]$ is a concatenation of all activation layers at this time step, and $h_i^{(j)}$ is the activation of the $j$-th Transformer layer at time step $i$.<sup>1</sup>

The autoregressive Transformer model computes $h_i$ as a function of $z_i$ and the past activations in its left context, as follows:

$$
h_i = \operatorname{LM}_{\phi}(z_i, h_{<i}), \tag{1}
$$

where the last layer of $h_i$ is used to compute the distribution for the next token: $p_{\phi}(z_{i+1} \mid h_{\leq i}) = \text{softmax}(W_{\phi} h_i^{(n)})$ and $W_{\phi}$ is a pretrained matrix that maps $h_i^{(n)}$ to logits over the vocabulary.
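
As a toy illustration of this next-token computation, here is a minimal sketch with made-up dimensions; the random `W` and `h_top` stand in for the pretrained matrix $W_{\phi}$ and the top-layer activation $h_i^{(n)}$.

```python
import numpy as np

# Toy next-token distribution: p(z_{i+1} | h_{<=i}) = softmax(W_phi h_i^(n)).
# Vocabulary and hidden sizes are illustrative, not GPT-2's.
rng = np.random.default_rng(0)
d, vocab = 8, 20
W = rng.normal(size=(vocab, d))   # stands in for the pretrained W_phi
h_top = rng.normal(size=d)        # stands in for h_i^(n)

logits = W @ h_top
probs = np.exp(logits - logits.max())  # subtract max for numerical stability
probs /= probs.sum()                   # softmax over the vocabulary
assert abs(probs.sum() - 1.0) < 1e-9
```

Only $W_{\phi}$'s output row count ties the activation to the vocabulary; the rest of the model lives inside the recurrence of equation (1).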

# 3.2 Encoder-Decoder Architecture

We can also use an encoder-decoder architecture (e.g., BART; Lewis et al., 2020) to model $p_{\phi}(y \mid x)$, where $x$ is encoded by the bidirectional encoder, and the decoder predicts $y$ autoregressively (conditioned on the encoded $x$ and its left context). We use the same indexing and activation notation, as shown in Figure 2 (bottom). $h_i$ for all $i \in \mathsf{X}_{\mathrm{idx}}$ is computed by the bidirectional Transformer encoder; $h_i$ for all $i \in \mathsf{Y}_{\mathrm{idx}}$ is computed by the autoregressive decoder using the same equation (1).

# 3.3 Method: Fine-tuning

In the fine-tuning framework, we initialize with the pretrained parameters $\phi$. Here $p_{\phi}$ is a trainable language model distribution and we perform gradient updates on the following log-likelihood objective:

$$
\max_{\phi} \log p_{\phi}(y \mid x) = \sum_{i \in \mathsf{Y}_{\mathrm{idx}}} \log p_{\phi}(z_i \mid h_{<i}). \tag{2}
$$

# 4 Prefix-Tuning

We propose prefix-tuning as an alternative to fine-tuning for conditional generation tasks. We first provide intuition in §4.1 before defining our method formally in §4.2.

Figure 2: An annotated example of prefix-tuning using an autoregressive LM (top) and an encoder-decoder model (bottom). The prefix activations $\forall i\in \mathsf{P}_{\mathrm{idx}}, h_i$ are drawn from a trainable matrix $P_{\theta}$. The remaining activations are computed by the Transformer.

# 4.1 Intuition

Based on intuition from prompting, we believe that having a proper context can steer the LM without changing its parameters. For example, if we want the LM to generate a word (e.g., Obama), we can prepend its common collocation as context (e.g., Barack), and the LM will assign much higher probability to the desired word. Extending this intuition beyond generating a single word or sentence, we want to find a context that steers the LM to solve an NLG task. Intuitively, the context can influence the encoding of $x$ by guiding what to extract from $x$; and can influence the generation of $y$ by steering the next token distribution. However, it's non-obvious whether such a context exists. Natural language task instructions (e.g., "summarize the following table in one sentence") might guide an expert annotator to solve the task, but fail for most pretrained LMs. Data-driven optimization over the discrete instructions might help, but discrete optimization is computationally challenging.

Instead of optimizing over discrete tokens, we can optimize the instruction as continuous word embeddings, whose effects will be propagated upward to all Transformer activation layers and rightward to subsequent tokens. This is strictly more expressive than a discrete prompt which requires matching the embedding of a real word. Meanwhile, this is less expressive than intervening all layers of the activations (§7.2), which avoids long-range dependencies and includes more tunable parameters. Prefix-tuning, therefore, optimizes all layers of the prefix.

# 4.2 Method

Prefix-tuning prepends a prefix for an autoregressive LM to obtain $z = \left[\mathrm{PREFIX}; x; y\right]$, or prepends prefixes for both encoder and decoder to obtain $z = \left[\mathrm{PREFIX}; x; \mathrm{PREFIX}'; y\right]$, as shown in Figure 2. Here, $\mathsf{P}_{\mathrm{idx}}$ denotes the sequence of prefix indices, and we use $|\mathsf{P}_{\mathrm{idx}}|$ to denote the length of the prefix.
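
The indexing can be made concrete with a toy example. All lengths below are illustrative, not the paper's settings.

```python
# Toy position indexing for the autoregressive case z = [PREFIX; x; y]:
# the prefix, input, and output index sets partition the sequence positions.
prefix_len, x_len, y_len = 2, 3, 2

P_idx = list(range(prefix_len))
X_idx = list(range(prefix_len, prefix_len + x_len))
Y_idx = list(range(prefix_len + x_len, prefix_len + x_len + y_len))

assert (P_idx, X_idx, Y_idx) == ([0, 1], [2, 3, 4], [5, 6])
```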

We follow the recurrence relation in equation (1), except that the activations of the prefix positions are free parameters. Prefix-tuning initializes a trainable matrix $P_{\theta}$ (parametrized by $\theta$) of dimension $|\mathsf{P}_{\mathrm{idx}}| \times \dim(h_i)$ to store the prefix parameters.

$$
h_i = \begin{cases} P_{\theta}[i,:], & \text{if } i \in \mathsf{P}_{\mathrm{idx}}, \\ \operatorname{LM}_{\phi}(z_i, h_{<i}), & \text{otherwise}. \end{cases} \tag{3}
$$

The training objective is the same as equation (2), but the set of trainable parameters changes: the language model parameters $\phi$ are fixed and the prefix parameters $\theta$ are the only trainable parameters.

Here, $h_i$ (for all $i$) is a function of the trainable $P_{\theta}$. When $i \in \mathsf{P}_{\mathrm{idx}}$, this is clear because $h_i$ copies directly from $P_{\theta}$. When $i \notin \mathsf{P}_{\mathrm{idx}}$, $h_i$ still depends on $P_{\theta}$, because the prefix activations are always in the left context and will therefore affect any activations to its right.

# 4.3 Parametrization of $P_{\theta}$

Empirically, directly updating the $P_{\theta}$ parameters leads to unstable optimization and a slight drop in performance. So we reparametrize the matrix $P_{\theta}[i,:] = \mathrm{MLP}_{\theta}(P_{\theta}'[i,:])$ by a smaller matrix $(P_{\theta}')$ composed with a large feedforward neural network $(\mathrm{MLP}_{\theta})$. Note that $P_{\theta}$ and $P_{\theta}'$ have the same number of rows (i.e., the prefix length) but a different number of columns.<sup>4</sup> Once training is complete, these reparametrization parameters can be dropped, and only the prefix $(P_{\theta})$ needs to be saved.
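
A sketch of this reparametrization with illustrative sizes (the matrix widths and the two-layer tanh MLP are assumptions, not the paper's settings):

```python
import numpy as np

# P_theta[i,:] = MLP_theta(P'_theta[i,:]): a small trainable matrix is
# expanded row-wise to the full prefix dimension by a feedforward net.
rng = np.random.default_rng(0)
prefix_len, small_dim, full_dim = 10, 16, 64

P_small = rng.normal(size=(prefix_len, small_dim))  # trainable P'_theta
W1 = rng.normal(size=(small_dim, 32))               # MLP_theta, layer 1
W2 = rng.normal(size=(32, full_dim))                # MLP_theta, layer 2

P = np.tanh(P_small @ W1) @ W2  # same row count (prefix length), wider columns
assert P.shape == (prefix_len, full_dim)
# After training, only P needs to be saved; P_small, W1, W2 can be dropped.
```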

# 5 Experimental Setup

# 5.1 Datasets and Metrics

We evaluate on three standard neural generation datasets for the table-to-text task: E2E (Novikova et al., 2017), WebNLG (Gardent et al., 2017), and DART (Radev et al., 2020). The datasets are ordered by increasing complexity and size. E2E only has 1 domain (i.e. restaurant reviews); WebNLG has 14 domains, and DART is open-domain, using open-domain tables from Wikipedia.

The E2E dataset contains approximately 50K examples with 8 distinct fields; it contains multiple test references for one source table, and the average output length is 22.9. We use the official evaluation script, which reports BLEU (Papineni et al., 2002), NIST (Belz and Reiter, 2006), METEOR (Lavie and Agarwal, 2007), ROUGE-L (Lin, 2004), and CIDEr (Vedantam et al., 2015).

The WebNLG (Gardent et al., 2017) dataset consists of 22K examples, and the input $x$ is a sequence of (subject, property, object) triples. The average output length is 22.5. In the training and validation splits, the input describes entities from 9 distinct DBpedia categories (e.g., Monument). The test split consists of two parts: the first half contains DB categories seen in training data, and the second half contains 5 unseen categories. These unseen categories are used to evaluate extrapolation. We use the official evaluation script, which reports BLEU, METEOR and TER (Snover et al., 2006).

DART (Radev et al., 2020) is an open-domain table-to-text dataset, with similar input format (entity-relation-entity triples) as WebNLG. The average output length is 21.6. It consists of 82K examples from WikiSQL, WikiTableQuestions, E2E, and WebNLG and applies some manual or automated conversion. We use the official evaluation script and report BLEU, METEOR, TER, MoverScore (Zhao et al., 2019), BERTScore (Zhang et al., 2020b) and BLEURT (Sellam et al., 2020).

For the summarization task, we use the XSUM (Narayan et al., 2018) dataset, which is an abstractive summarization dataset on news articles. There are 225K examples. The average length of the articles is 431 words and the average length of the summaries is 23.3. We report ROUGE-1, ROUGE-2 and ROUGE-L.

# 5.2 Methods

For table-to-text generation, we compare prefix-tuning with three other methods: fine-tuning (FINE-TUNE), fine-tuning only the top 2 layers (FT-TOP2), and adapter-tuning (ADAPTER). We also report the current state-of-the-art results on these datasets: On E2E, Shen et al. (2019) uses a pragmatically informed model without pretraining. On WebNLG, Kale (2020) fine-tunes T5-large. On DART, no official models trained on this dataset version are released. For summarization, we compare against fine-tuning BART (Lewis et al., 2020).

# 5.3 Architectures and Hyperparameters

For table-to-text, we use GPT-2<sub>MEDIUM</sub> and GPT-2<sub>LARGE</sub>; the source tables are linearized. For summarization, we use BART<sub>LARGE</sub>, and the source articles are truncated to 512 BPE tokens.

Our implementation is based on the Hugging Face Transformer models (Wolf et al., 2020). At training time, we use the AdamW optimizer (Loshchilov and Hutter, 2019) and a linear learning rate scheduler, as suggested by the Hugging Face default setup. The hyperparameters we tune include the number of epochs, batch size, learning rate, and prefix length. Hyperparameter details are in the appendix. A default setting trains for 10 epochs, using a batch size of 5, a learning rate of $5 \cdot 10^{-5}$ and a prefix length of 10. The table-to-text models are trained on TITAN Xp or GeForce GTX TITAN X machines. Prefix-tuning takes 0.2 hours per epoch to train on 22K examples, whereas fine-tuning takes around 0.3 hours. The summarization models are trained on Tesla V100 machines, taking 1.25h per epoch on the XSUM dataset.
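
The essential property of this training setup is that the optimizer only ever updates the prefix while the LM copy stays byte-identical. A dependency-free sketch (a real implementation would register only the prefix parameters with PyTorch's AdamW; the names and the plain SGD step here are illustrative):

```python
# Frozen LM vs. trainable prefix: only the latter is passed to the optimizer.
lm_params = {"W_phi": [1.0, 2.0]}         # pretrained, frozen, shared by all tasks
prefix_params = {"P_theta": [0.0, 0.0]}   # task-specific, trainable

def sgd_step(params, grads, lr=5e-5):
    # Stand-in for one optimizer update (the paper uses AdamW).
    for name in params:
        params[name] = [p - lr * g for p, g in zip(params[name], grads[name])]

sgd_step(prefix_params, {"P_theta": [1.0, -1.0]})
assert lm_params == {"W_phi": [1.0, 2.0]}  # the shared LM is untouched
assert prefix_params["P_theta"] != [0.0, 0.0]
```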

At decoding time, for the three table-to-text datasets, we use beam search with a beam size of 5. For summarization, we use a beam size of 6 and length normalization of 0.8. Decoding takes 1.2 seconds per sentence (without batching) for table-to-text, and 2.6 seconds per batch (using a batch size of 10) for summarization.

# 6 Main Results

# 6.1 Table-to-text Generation

We find that by adding only $0.1\%$ task-specific parameters, prefix-tuning is effective in table-to-text generation, outperforming other lightweight baselines (ADAPTER and FT-TOP2) and achieving comparable performance with fine-tuning. This trend holds across all three datasets: E2E, WebNLG, and DART.

For a fair comparison, we match the number of parameters for prefix-tuning and adapter-tuning to be $0.1\%$. Table 1 shows that prefix-tuning is significantly better than ADAPTER $(0.1\%)$, attaining 4.1 BLEU improvement per dataset on average. Even when we compare with fine-tuning $(100\%)$ and adapter-tuning $(3.0\%)$, which update significantly more parameters than prefix-tuning, prefix-tuning still achieves results comparable or better than those two systems. This demonstrates that prefix-tuning is more Pareto efficient than adapter-tuning, significantly reducing parameters while improving generation quality.

Additionally, attaining good performance on DART suggests that prefix-tuning can generalize to tables with diverse domains and a large pool of relations. We will delve deeper into extrapolation performance (i.e. generalization to unseen categories or topics) in §6.4.

Overall, prefix-tuning is an effective and space-efficient method to adapt GPT-2 to table-to-text generation. The learned prefix is expressive enough to steer GPT-2 in order to correctly extract contents from an unnatural format and generate a textual description. Prefix-tuning also scales well from GPT-2<sub>MEDIUM</sub> to GPT-2<sub>LARGE</sub>, suggesting it has the potential to scale to even larger models with a similar architecture, like GPT-3.

# 6.2 Summarization

As shown in Table 2, with $2\%$ parameters, prefix-tuning obtains slightly lower performance than fine-tuning (36.05 vs. 37.25 in ROUGE-L). With only $0.1\%$ parameters, prefix-tuning underperforms full fine-tuning (35.05 vs. 37.25). There are several differences between XSUM and the three table-to-text datasets which could account for why prefix-tuning has a comparative advantage in table-to-text: (1) XSUM contains 4x more examples than the three table-to-text datasets on average; (2) the input articles are 17x longer than the linearized table input of the table-to-text datasets on average; (3) summarization might be more complex than table-to-text because it requires reading comprehension and identifying key contents from an article.

# 6.3 Low-data Setting

Based on the results from table-to-text (§6.1) and summarization (§6.2), we observe that prefix-tuning has a comparative advantage when the number of training examples is smaller. To construct low-data settings, we subsample the full dataset (E2E for table-to-text and XSUM for summarization) to obtain small datasets of size {50, 100, 200, 500}. For each size, we sample 5 different datasets and average over 2 training random seeds. Thus, we average over 10 models to get an estimate for each low-data setting.<sup>11</sup>
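
The subsampling protocol above can be sketched as follows (the dataset and the sampling seeds are illustrative; only the 4 sizes and the 5 subsamples x 2 seeds = 10 runs per size come from the text):

```python
import random

def low_data_runs(full_dataset, sizes=(50, 100, 200, 500)):
    """Enumerate the (size, subsample, seed) grid for the low-data setting."""
    runs = []
    for size in sizes:
        for subsample_id in range(5):                 # 5 subsampled datasets
            subset = random.Random(subsample_id).sample(full_dataset, size)
            for seed in range(2):                     # 2 training seeds each
                runs.append((size, subsample_id, seed, subset))
    return runs

runs = low_data_runs(list(range(50_000)))  # E2E has ~50K examples
assert len(runs) == 4 * 5 * 2              # 10 models averaged per data size
```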

Figure 3 (right) shows that prefix-tuning outperforms fine-tuning in low-data regimes by 2.9 BLEU on average, in addition to requiring many fewer parameters, but the gap narrows as the dataset size increases.

Qualitatively, Figure 3 (left) shows 8 examples generated by both prefix-tuning and fine-tuning models trained on different data levels. While both methods tend to undergenerate (missing table contents) in low-data regimes, prefix-tuning tends to be more faithful than fine-tuning. For example, fine-tuning (100, 200)<sup>12</sup> falsely claims a low customer rating while the true rating is average, whereas prefix-tuning (100, 200) generates a description that is faithful to the table.
# 6.4 Extrapolation
We now investigate extrapolation performance to unseen topics for both table-to-text and summarization. In order to construct an extrapolation setting, we split the existing datasets so that training and test cover different topics. For table-to-text, the
<table><tr><td rowspan="3"></td><td rowspan="3">BLEU</td><td rowspan="3">NIST</td><td rowspan="3">MET</td><td rowspan="3">R-L</td><td rowspan="3">CIDEr</td><td colspan="9">WebNLG</td><td colspan="6">DART</td></tr><tr><td colspan="3">BLEU</td><td colspan="3">MET</td><td colspan="3">TER↓</td><td rowspan="2">BLEU</td><td rowspan="2">MET</td><td rowspan="2">TER↓</td><td rowspan="2">Mover</td><td rowspan="2">BERT</td><td rowspan="2">BLEURT</td></tr><tr><td>S</td><td>U</td><td>A</td><td>S</td><td>U</td><td>A</td><td>S</td><td>U</td><td>A</td></tr><tr><td colspan="21">GPT-2 MEDIUM</td></tr><tr><td>FINE-TUNE</td><td>68.2</td><td>8.62</td><td>46.2</td><td>71.0</td><td>2.47</td><td>64.2</td><td>27.7</td><td>46.5</td><td>0.45</td><td>0.30</td><td>0.38</td><td>0.33</td><td>0.76</td><td>0.53</td><td>46.2</td><td>0.39</td><td>0.46</td><td>0.50</td><td>0.94</td><td>0.39</td></tr><tr><td>FT-TOP2</td><td>68.1</td><td>8.59</td><td>46.0</td><td>70.8</td><td>2.41</td><td>53.6</td><td>18.9</td><td>36.0</td><td>0.38</td><td>0.23</td><td>0.31</td><td>0.49</td><td>0.99</td><td>0.72</td><td>41.0</td><td>0.34</td><td>0.56</td><td>0.43</td><td>0.93</td><td>0.21</td></tr><tr><td>ADAPTER(3%)</td><td>68.9</td><td>8.71</td><td>46.1</td><td>71.3</td><td>2.47</td><td>60.4</td><td>48.3</td><td>54.9</td><td>0.43</td><td>0.38</td><td>0.41</td><td>0.35</td><td>0.45</td><td>0.39</td><td>45.2</td><td>0.38</td><td>0.46</td><td>0.50</td><td>0.94</td><td>0.39</td></tr><tr><td>ADAPTER(0.1%)</td><td>66.3</td><td>8.41</td><td>45.0</td><td>69.8</td><td>2.40</td><td>54.5</td><td>45.1</td><td>50.2</td><td>0.39</td><td>0.36</td><td>0.38</td><td>0.40</td><td>0.46</td><td>0.43</td><td>42.4</td><td>0.36</td><td>0.48</td><td>0.47</td><td>0.94</td><td>0.33</td></tr><tr><td>PREFIX(0.1%)</td><td>69.7</td><td>8.81</td><td>46.1</td><td>71.4</td><td>2.49</td><td>62.9</td><td>45.6</td><td>55.1</td><td>0.44</td><td>0.38</td><td>0.41</td><td>0.35</td><td>0.49</td><td>0.41</td><td>46.4</td><td>0.38</td><td>0.46</td><td>0.50</td><td>0.94</td><td>0.39</td></tr><tr><td colspan="21">GPT-2 LARGE</td></tr><tr><td>FINE-TUNE</td><td>68.5</td><td>8.78</td><td>46.0</td><td>69.9</td><td>2.45</td><td>65.3</td><td>43.1</td><td>55.5</td><td>0.46</td><td>0.38</td><td>0.42</td><td>0.33</td><td>0.53</td><td>0.42</td><td>47.0</td><td>0.39</td><td>0.46</td><td>0.51</td><td>0.94</td><td>0.40</td></tr><tr><td>PREFIX</td><td>70.3</td><td>8.85</td><td>46.2</td><td>71.7</td><td>2.47</td><td>63.4</td><td>47.7</td><td>56.3</td><td>0.45</td><td>0.39</td><td>0.42</td><td>0.34</td><td>0.48</td><td>0.40</td><td>46.7</td><td>0.39</td><td>0.45</td><td>0.51</td><td>0.94</td><td>0.40</td></tr><tr><td>SOTA</td><td>68.6</td><td>8.70</td><td>45.3</td><td>70.8</td><td>2.37</td><td>63.9</td><td>52.8</td><td>57.1</td><td>0.46</td><td>0.41</td><td>0.44</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr></table>
Table 1: Metrics (higher is better, except for TER) for table-to-text generation on E2E (left), WebNLG (middle), and DART (right). With only $0.1\%$ parameters, prefix-tuning outperforms the other lightweight baselines and achieves performance comparable to fine-tuning. The best score is boldfaced for both GPT-2 MEDIUM and GPT-2 LARGE.
<table><tr><td>Source</td><td>name: The Eagle | type: coffee shop | food: Chinese | price: cheap | customer rating: average | area: riverside | family friendly: no | near: Burger King</td></tr><tr><td>Prefix (50)</td><td>The Eagle is a cheap Chinese coffee shop located near Burger King.</td></tr><tr><td>Prefix (100)</td><td>The Eagle is a cheap coffee shop located in the riverside near Burger King. It has average customer ratings.</td></tr><tr><td>Prefix (200)</td><td>The Eagle is a cheap Chinese coffee shop located in the riverside area near Burger King. It has average customer ratings.</td></tr><tr><td>Prefix (500)</td><td>The Eagle is a coffee shop that serves Chinese food. It is located in the riverside area near Burger King. It has an average customer rating and is not family friendly.</td></tr><tr><td>FT (50)</td><td>The Eagle coffee shop is located in the riverside area near Burger King.</td></tr><tr><td>FT (100)</td><td>The Eagle is a cheap coffee shop near Burger King in the riverside area. It has a low customer rating and is not family friendly.</td></tr><tr><td>FT (200)</td><td>The Eagle is a cheap Chinese coffee shop with a low customer rating. It is located near Burger King in the riverside area.</td></tr><tr><td>FT (500)</td><td>The Eagle is a cheap Chinese coffee shop with average customer ratings. It is located in the riverside area near Burger King.</td></tr></table>
Figure 3: (Left) qualitative examples in low-data settings. (Right) prefix-tuning (orange) outperforms fine-tuning (blue) in low-data regimes, in addition to requiring many fewer parameters. The top two plots correspond to summarization, measured by ROUGE-1 and ROUGE-2. The bottom two plots correspond to table-to-text, measured by BLEU and ROUGE-L. The x-axis is the training size and the y-axis is the evaluation metric (higher is better).
<table><tr><td></td><td>R-1 ↑</td><td>R-2 ↑</td><td>R-L ↑</td></tr><tr><td>FINE-TUNE(Lewis et al., 2020)</td><td>45.14</td><td>22.27</td><td>37.25</td></tr><tr><td>PREFIX(2%)</td><td>43.80</td><td>20.93</td><td>36.05</td></tr><tr><td>PREFIX(0.1%)</td><td>42.92</td><td>20.03</td><td>35.05</td></tr></table>
Table 2: Metrics for summarization on XSUM. Prefix-tuning slightly underperforms fine-tuning.
<table><tr><td></td><td colspan="3">news-to-sports</td><td colspan="3">within-news</td></tr><tr><td></td><td>R-1↑</td><td>R-2↑</td><td>R-L↑</td><td>R-1↑</td><td>R-2↑</td><td>R-L↑</td></tr><tr><td>FINE-TUNE</td><td>38.15</td><td>15.51</td><td>30.26</td><td>39.20</td><td>16.35</td><td>31.15</td></tr><tr><td>PREFIX</td><td>39.23</td><td>16.74</td><td>31.51</td><td>39.41</td><td>16.87</td><td>31.47</td></tr></table>
Table 3: Extrapolation performance on XSUM. Prefix-tuning outperforms fine-tuning on both news-to-sports and within-news splits.
WebNLG dataset is labeled with table topics. There are 9 categories that appear in training and dev, denoted SEEN, and 5 categories that appear only at test time, denoted UNSEEN. We evaluate extrapolation by training on the SEEN categories and testing on the UNSEEN categories. For summarization, we construct two extrapolation data splits<sup>13</sup>: in news-to-sports, we train on news articles and test on sports articles; in within-news, we train on {world, UK, business} news and test on the remaining news categories (e.g., health, technology).
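Constructing such a split amounts to partitioning examples by topic label so that train and test topics are disjoint; a minimal sketch (topic names and records here are illustrative, not the actual dataset):

```python
def topic_split(examples, train_topics):
    """Split a list of (topic, text) pairs so that training and test
    cover disjoint topics, as in the news-to-sports setting."""
    train = [ex for ex in examples if ex[0] in train_topics]
    test = [ex for ex in examples if ex[0] not in train_topics]
    return train, test

corpus = [("news", "a1"), ("news", "a2"), ("sports", "b1"), ("health", "c1")]
train, test = topic_split(corpus, train_topics={"news"})
```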
On both table-to-text and summarization, prefix-tuning has better extrapolation than fine-tuning under all metrics, as shown in Table 3 and the 'U' columns of Table 1 (middle).
We also find that adapter-tuning achieves good extrapolation performance, comparable with prefix-tuning, as shown in Table 1. This shared trend suggests that preserving LM parameters indeed has a positive impact on extrapolation. However, the reason for such gains is an open question, which we discuss further in §8.

Figure 4: Prefix length vs. performance on summarization (left) and table-to-text (right). Performance increases as the prefix length increases up to a threshold (200 for summarization and 10 for table-to-text), after which a slight performance drop occurs. Each plot reports two metrics (on two vertical axes).
# 7 Intrinsic Evaluation
We compare different variants of prefix-tuning. §7.1 studies the impact of the prefix length. §7.2 studies tuning only the embedding layer, which is more akin to tuning a discrete prompt. §7.3 compares prefixing and infixing, which inserts trainable activations between $x$ and $y$ . §7.4 studies the impact of various prefix initialization strategies.
# 7.1 Prefix Length
A longer prefix means more trainable parameters, and therefore more expressive power. Figure 4 shows that performance increases as the prefix length increases up to a threshold (200 for summarization, 10 for table-to-text) and then a slight performance drop occurs.[14]
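To make the length-to-parameter trade-off concrete: each prefix position stores one key and one value vector per layer, so the trainable activation count grows linearly in the prefix length. A back-of-the-envelope sketch, assuming GPT-2 MEDIUM-like dimensions (24 layers, hidden size 1024) and ignoring the reparameterization MLP used during training:

```python
def prefix_param_count(prefix_len, n_layers=24, d_model=1024):
    # one key and one value vector per layer, per prefix position
    return prefix_len * n_layers * 2 * d_model

total_gpt2_medium = 345_000_000  # approximate full model size (assumption)
for L in (1, 10, 200):
    frac = prefix_param_count(L) / total_gpt2_medium
    print(f"L={L:>3}: {prefix_param_count(L):>10,} activations ({100 * frac:.3f}% of the LM)")
```

Under these assumed dimensions, even the 200-token prefix used for summarization stays below 3% of the LM's size.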
Empirically, longer prefixes have a negligible impact on inference speed, because attention computation over the entire prefix is parallelized on GPUs.
# 7.2 Full vs Embedding-only
Recall that in §4.1 we discussed the option of optimizing the continuous embeddings of the "virtual tokens." We instantiate that idea and call it the embedding-only ablation: the word embeddings are free parameters, and the upper activation layers are computed by the Transformer. Table 4 (top) shows that performance drops significantly, suggesting that tuning only the embedding layer is not sufficiently expressive.
The embedding-only ablation upper bounds the performance of discrete prompt optimization (Shin et al., 2020), because discrete prompting restricts the embedding layer to exactly match the embedding of a real word. Consequently, we have this chain of increasing expressive power: discrete prompting $<$ embedding-only ablation $<$ prefix-tuning.

<table><tr><td></td><td>BLEU</td><td>NIST</td><td>MET</td><td>ROUGE</td><td>CIDEr</td></tr><tr><td>PREFIX</td><td>69.7</td><td>8.81</td><td>46.1</td><td>71.4</td><td>2.49</td></tr><tr><td colspan="6">Embedding-only: EMB-{PrefixLength}</td></tr><tr><td>EMB-1</td><td>48.1</td><td>3.33</td><td>32.1</td><td>60.2</td><td>1.10</td></tr><tr><td>EMB-10</td><td>62.2</td><td>6.70</td><td>38.6</td><td>66.4</td><td>1.75</td></tr><tr><td>EMB-20</td><td>61.9</td><td>7.11</td><td>39.3</td><td>65.6</td><td>1.85</td></tr><tr><td colspan="6">Infix-tuning: INFIX-{PrefixLength}</td></tr><tr><td>INFIX-1</td><td>67.9</td><td>8.63</td><td>45.8</td><td>69.4</td><td>2.42</td></tr><tr><td>INFIX-10</td><td>67.2</td><td>8.48</td><td>45.8</td><td>69.9</td><td>2.40</td></tr><tr><td>INFIX-20</td><td>66.7</td><td>8.47</td><td>45.8</td><td>70.0</td><td>2.42</td></tr></table>

Table 4: Intrinsic evaluation of Embedding-only (§7.2) and Infixing (§7.3). Both the embedding-only ablation and infix-tuning underperform full prefix-tuning.

Figure 5: Initializing the prefix with activations of real words significantly outperforms random initialization in low-data settings.
# 7.3 Prefixing vs Infixing
We also investigate how the trainable activations' position in the sequence affects performance. In prefix-tuning, we place them at the beginning [PREFIX; $x;y$ ]. We can also place the trainable activations between $x$ and $y$ (i.e. $[x;\mathrm{INFIX};y]$ ) and call this infix-tuning. Table 4 (bottom) shows that infix-tuning slightly underperforms prefix-tuning. We believe this is because prefix-tuning can affect the activations of $x$ and $y$ whereas infix-tuning can only influence the activations of $y$ .
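The causality argument can be illustrated with a toy stand-in for causal self-attention, in which each position simply averages itself and everything to its left (an analogy, not Transformer attention): prepended trainable values change the activations at $x$'s positions, while infixed values cannot.

```python
def causal_avg(values):
    """Toy stand-in for causal self-attention: each position attends
    (uniformly) to itself and everything to its left."""
    return [sum(values[: i + 1]) / (i + 1) for i in range(len(values))]

x, y, trainable = [1.0, 2.0], [3.0], [10.0]

prefix_acts = causal_avg(trainable + x + y)  # [PREFIX; x; y]
infix_acts = causal_avg(x + trainable + y)   # [x; INFIX; y]
plain_acts = causal_avg(x + y)               # no trainable values at all

# With a prefix, the activations at x's positions change; with an infix,
# x's activations are identical to the untuned ones (by causality), so the
# trainable values can only influence y.
x_under_prefix = prefix_acts[1:3]
x_under_infix = infix_acts[0:2]
```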
# 7.4 Initialization
We find that how the prefix is initialized has a large impact in low-data settings. Random initialization leads to low performance with high variance. Initializing the prefix with activations of real words significantly improves generation, as shown in Figure 5. In particular, initializing with task-relevant words such as "summarization" and "table-to-text" obtains slightly better performance than task-irrelevant words such as "elephant" and "divide", but using real words is still better than random.
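Schematically, the contrast between the two initializations looks as follows (the "LM activation" here is a hard-coded stand-in vector, not output from a real model):

```python
import random

def random_init(prefix_len, dim, seed=0):
    # baseline: sample each prefix activation from a uniform (0, 1) distribution
    rng = random.Random(seed)
    return [[rng.uniform(0, 1) for _ in range(dim)] for _ in range(prefix_len)]

def word_init(word_activation, prefix_len):
    """Copy the frozen LM's activation for a chosen real word (e.g.
    "summarization") to every prefix position, then treat it as trainable."""
    return [list(word_activation) for _ in range(prefix_len)]

# Stand-in for an activation the frozen LM computes on a real word.
summarization_act = [0.2, -1.3, 0.7]
prefix = word_init(summarization_act, prefix_len=4)
rand_prefix = random_init(prefix_len=4, dim=3)
```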
Since we initialize the prefix with activations of real words computed by the LM, this initialization strategy is concordant with preserving the pretrained LM as much as possible.
# 8 Discussion
In this section, we discuss several favorable properties of prefix-tuning and some open problems.
# 8.1 Personalization
As we note in §1, prefix-tuning is advantageous when a large number of tasks need to be trained independently. One practical setting is user privacy (Shokri and Shmatikov, 2015; McMahan et al., 2016). To preserve user privacy, each user's data must be kept separate and a personalized model trained independently for each user; consequently, each user can be regarded as an independent task. With millions of users, prefix-tuning scales to this setting and maintains modularity, enabling flexible addition or deletion of users by adding or deleting their prefixes without cross-contamination.
# 8.2 Batching Across Users
Under the same personalization setting, prefix-tuning allows batching queries from different users even though they are backed by different prefixes. When multiple users query a cloud GPU device with their inputs, it is computationally efficient to place these users in the same batch. Because prefix-tuning keeps the shared LM intact, batching requires only the simple step of prepending each personalized prefix to the corresponding user input; all remaining computation is unchanged. In contrast, we cannot batch across users in adapter-tuning, which inserts personalized adapters between shared Transformer layers.
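A toy sketch of this batching scheme, with integers standing in for activations and a sum standing in for the frozen LM (all names and shapes are illustrative):

```python
def lm_step(sequence):
    """Stand-in for one forward pass of the single, shared, frozen LM."""
    return sum(sequence)

user_prefixes = {"alice": [101], "bob": [202]}  # per-user trained prefixes

def add_user(user, prefix):
    # modular personalization: adding or deleting a user touches only its prefix
    user_prefixes[user] = prefix

def batch_queries(queries):
    """Batch different users' queries: prepend each user's own prefix,
    then run the shared LM over the whole batch at once."""
    batch = [user_prefixes[user] + tokens for user, tokens in queries]
    return [lm_step(seq) for seq in batch]

outputs = batch_queries([("alice", [1, 2]), ("bob", [3])])
```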
# 8.3 Inductive Bias of Prefix-tuning
Recall that fine-tuning updates all pretrained parameters, whereas prefix-tuning and adapter-tuning preserve them. Since language models are pretrained on general-purpose corpora, preserving the LM parameters might help generalization to domains unseen during training. In concordance with this intuition, we observe that both prefix-tuning and adapter-tuning show significant performance gains in extrapolation settings (§6.4); however, the reason for such gains is an open question.
While prefix-tuning and adapter-tuning both freeze the pretrained parameters, they tune different sets of parameters to affect the activation layers of the Transformer. Recall that prefix-tuning keeps the LM intact and uses the prefix and the pretrained attention blocks to affect the subsequent activations; adapter-tuning inserts trainable modules between LM layers, which directly add residual vectors to the activations. Moreover, we observe that prefix-tuning requires vastly fewer parameters compared to adapter-tuning while maintaining comparable performance. We think this gain in parameter efficiency is because prefix-tuning keeps the pretrained LM intact as much as possible, and therefore exploits the LM more than adapter-tuning.
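For a rough sense of the parameter gap: an adapter adds a down-projection and an up-projection at each insertion point, whereas the prefix stores fixed-size activations. A back-of-the-envelope comparison under assumed dimensions (24 layers, hidden size 1024, bottleneck 256, two adapters per layer; actual adapter configurations vary across papers, so treat these numbers as illustrative):

```python
def adapter_params(n_layers=24, d=1024, bottleneck=256, per_layer=2):
    # each adapter: a down-projection d -> b and an up-projection b -> d
    # (biases and layer norms omitted for simplicity)
    return n_layers * per_layer * (2 * d * bottleneck)

def prefix_params(prefix_len=10, n_layers=24, d=1024):
    # one key and one value vector per layer, per prefix position
    return prefix_len * n_layers * 2 * d

ratio = adapter_params() / prefix_params()
```

Under these assumptions a length-10 prefix is over 50x smaller than the adapter stack, in line with the parameter-efficiency gap observed above.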
Concurrent work by Aghajanyan et al. (2020) uses intrinsic dimension to show that there exists a low-dimensional reparameterization that is as effective for fine-tuning as the full parameter space. This explains why good accuracy on downstream tasks can be obtained by updating only a small number of parameters. Our work echoes this finding by showing that good generation performance can be attained by updating a very small prefix.
# 9 Conclusion
We have proposed prefix-tuning, a lightweight alternative to fine-tuning that prepends a trainable continuous prefix for NLG tasks. We find that, despite learning $1000\mathrm{x}$ fewer parameters than fine-tuning, prefix-tuning maintains comparable performance in the full-data setting and outperforms fine-tuning in both low-data and extrapolation settings.
# References
Armen Aghajanyan, Luke Zettlemoyer, and Sonal Gupta. 2020. Intrinsic dimensionality explains the effectiveness of language model fine-tuning.
Anja Belz and Ehud Reiter. 2006. Comparing automatic and human evaluation of NLG systems. In 11th Conference of the European Chapter of the Association for Computational Linguistics, Trento, Italy. Association for Computational Linguistics.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners.
Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2020. Plug and play language models: A simple approach to controlled text generation. In International Conference on Learning Representations.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The WebNLG challenge: Generating text from RDF data. In Proceedings of the 10th International Conference on Natural Language Generation, pages 124-133, Santiago de Compostela, Spain. Association for Computational Linguistics.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 2790-2799, Long Beach, California, USA. PMLR.
Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know? Transactions of the Association for Computational Linguistics, 8:423-438.
Mihir Kale. 2020. Text-to-text pre-training for data-to-text tasks.
N. Keskar, B. McCann, L. R. Varshney, Caiming Xiong, and R. Socher. 2019. Ctrl: A conditional transformer language model for controllable generation. ArXiv, abs/1909.05858.
Ben Krause, Akhilesh Deepak Gotmare, Bryan McCann, Nitish Shirish Keskar, Shafiq Joty, Richard Socher, and Nazneen Fatema Rajani. 2020. GeDi: Generative Discriminator Guided Sequence Generation. arXiv preprint arXiv:2009.06367.
Alon Lavie and Abhaya Agarwal. 2007. METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments. In Proceedings of the Second Workshop on Statistical Machine Translation, StatMT '07, pages 228-231, Stroudsburg, PA, USA. Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.
Zhaojiang Lin, Andrea Madotto, and Pascale Fung. 2020. Exploring versatile generative language model via parameter-efficient transfer learning. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 441-459, Online. Association for Computational Linguistics.
Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3730-3740, Hong Kong, China. Association for Computational Linguistics.
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Conference on Learning Representations.
H. Brendan McMahan, Eider Moore, Daniel Ramage, and Blaise Agüera y Arcas. 2016. Federated learning of deep networks using model averaging. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS) 2017, abs/1602.05629.
Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium.
Jekaterina Novikova, Ondrej Dusek, and Verena Rieser. 2017. The E2E dataset: New challenges for end-to-end generation. CoRR, abs/1706.09254.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, pages 311-318, Stroudsburg, PA, USA. Association for Computational Linguistics.
Jonas Pfeiffer, Aishwarya Kamath, Andreas Rückle, Kyunghyun Cho, and Iryna Gurevych. 2020. Adapterfusion: Non-destructive task composition for transfer learning.
Dragomir Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, Chiachun Hsieh, Nazneen Fatema Rajani, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, Yangxiaokang Liu, Nadia Irwanto, Jessica Pan, Faiaz Rahman, Ahmad Zaidi, Murori Mutuma, Yasin Tarabar, Ankit Gupta, Tao Yu, Yi Chern Tan, Xi Victoria Lin, Caiming Xiong, and Richard Socher. 2020. Dart: Open-domain structured data record to text generation.
A. Radford, Jeffrey Wu, R. Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Evani Radiya-Dixit and Xin Wang. 2020. How fine can fine-tuning be? learning efficient language models. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, volume 108 of Proceedings of Machine Learning Research, pages 2435-2443, Online. PMLR.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. 2017. Learning multiple visual domains with residual adapters. In Advances in Neural Information Processing Systems, volume 30, pages 506-516. Curran Associates, Inc.
Timo Schick and Hinrich Schütze. 2020. Exploiting cloze questions for few shot text classification and natural language inference.
Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881-7892, Online. Association for Computational Linguistics.
Sheng Shen, Daniel Fried, Jacob Andreas, and Dan Klein. 2019. Pragmatically informative text generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4060-4067, Minneapolis, Minnesota. Association for Computational Linguistics.
Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, and Sameer Singh. 2020. Autoprompt: Eliciting knowledge from language models with automatically generated prompts.
Reza Shokri and Vitaly Shmatikov. 2015. Privacy-preserving deep learning. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, CCS '15, page 1310-1321, New York, NY, USA. Association for Computing Machinery.
Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and Ralph Weischedel. 2006. A study of translation error rate with targeted human annotation. In Proceedings of the Association for Machine Translation in the Americas (AMTA 2006).
Asa Cooper Stickland, Xian Li, and Marjan Ghazvininejad. 2020. Recipes for adapting pre-trained monolingual and multilingual models to machine translation.
Nishant Subramani, Samuel R. Bowman, and Kyunghyun Cho. 2020. Can unconditional language models recover arbitrary sentences?
Fan-Keng Sun and Cheng-I Lai. 2020. Conditioned natural language generation using only unconditioned language model: An exploration.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30, pages 5998-6008. Curran Associates, Inc.
Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In CVPR, pages 4566-4575. IEEE Computer Society.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
Jeffrey O Zhang, Alexander Sax, Amir Zamir, Leonidas Guibas, and Jitendra Malik. 2020a. Sidetuning: A baseline for network adaptation via additive side networks.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020b. BERTScore: Evaluating text generation with bert. In International Conference on Learning Representations.
Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020c. DIALOGPT: Large-scale generative pre-training for conversational response generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 270-278, Online. Association for Computational Linguistics.
Mengjie Zhao, Tao Lin, Fei Mi, Martin Jaggi, and Hinrich Schütze. 2020. Masking as an efficient alternative to finetuning for pretrained language models.
Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M. Meyer, and Steffen Eger. 2019. MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 563-578, Hong Kong, China. Association for Computational Linguistics.
Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, and Xuanjing Huang. 2020. Extractive summarization as text matching. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6197-6208, Online. Association for Computational Linguistics.
Jinhua Zhu, Yingce Xia, Lijun Wu, Di He, Tao Qin, Wengang Zhou, Houqiang Li, and Tieyan Liu. 2020. Incorporating bert into neural machine translation. In International Conference on Learning Representations.
<table><tr><td></td><td>learning rate</td><td># epoch</td><td>batch size</td><td>prefix length</td></tr><tr><td colspan="5">Prefix:</td></tr><tr><td>E2E</td><td>8e-05</td><td>5</td><td>10</td><td>5</td></tr><tr><td>WebNLG</td><td>5e-05</td><td>5</td><td>5</td><td>5</td></tr><tr><td>DART</td><td>5e-05</td><td>10</td><td>5</td><td>10</td></tr><tr><td>XSUM</td><td>5e-05</td><td>30</td><td>14</td><td>100</td></tr><tr><td colspan="5">Adapter:</td></tr><tr><td>E2E (3%)</td><td>5e-05</td><td>5</td><td>5</td><td>-</td></tr><tr><td>E2E (0.1%)</td><td>8e-05</td><td>10</td><td>5</td><td></td></tr><tr><td>WebNLG (3%)</td><td>5e-05</td><td>5</td><td>5</td><td>-</td></tr><tr><td>WebNLG (0.1%)</td><td>5e-05</td><td>10</td><td>5</td><td>-</td></tr><tr><td>DART (3%)</td><td>5e-05</td><td>5</td><td>5</td><td>-</td></tr><tr><td>DART (0.1%)</td><td>8e-05</td><td>5</td><td>5</td><td>-</td></tr><tr><td colspan="5">Fine-tune:</td></tr><tr><td>E2E</td><td>5e-05</td><td>5</td><td>10</td><td>-</td></tr><tr><td>WebNLG</td><td>1e-05</td><td>10</td><td>6</td><td>-</td></tr><tr><td>DART</td><td>1e-05</td><td>10</td><td>6</td><td>-</td></tr><tr><td colspan="5">FT-top2:</td></tr><tr><td>E2E</td><td>5e-05</td><td>5</td><td>10</td><td>-</td></tr><tr><td>WebNLG</td><td>5e-05</td><td>10</td><td>9</td><td>-</td></tr><tr><td>DART</td><td>5e-05</td><td>5</td><td>5</td><td>-</td></tr></table>
Table 5: Hyperparameter settings for our method and baseline methods.
# A Supplementary Material
# A.1 Hyperparameters
In Table 5, we report the hyperparameters used to train the models documented in the experiment section.
# A.2 Additional Results for Low-data Settings
Figure 6 supplements the low-data performance curves in Figure 3 by plotting the relationship between training size and generation metrics for both prefix-tuning and fine-tuning.
# A.3 Additional Results for the Initialization Experiment
Figure 7 supplements Figure 3 by plotting additional metrics for our initialization technique (§7.4). It validates that random initialization (from a uniform (0, 1) distribution) significantly underperforms initializing with real words; additionally, initializing with task-relevant words (e.g., "summarization" and "table-to-text") attains slightly better generation scores than initializing with task-irrelevant words (e.g., "elephant" and "banana").
# A.4 Qualitative Examples for Extrapolation
Table 6 contains qualitative examples from both seen and unseen categories in WebNLG. We find that for unseen categories, both prefix-tuning and fine-tuning tend to undergenerate (the generated output does not cover the full table contents) or generate untruthfully (the generated output is inconsistent with
|
| 358 |
+
|
| 359 |
+
table contents). In particular, prefix-tuning tends to undergenerate, whereas fine-tuning tends to generate untruthfully. For seen categories, both perform fairly well in terms of coverage and truthfulness.
|
| 360 |
+
|
| 361 |
+

|
| 362 |
+
|
| 363 |
+

|
| 364 |
+
|
| 365 |
+

|
| 366 |
+
|
| 367 |
+

|
| 368 |
+
Figure 6: Prefix-tuning (orange) outperforms fine-tuning (blue) in low-data regimes in addition to requiring many fewer parameters. The top three plots correspond to summarization, measured by ROUGE-1, ROUGE-2, and ROUGE-L. The bottom three plots correspond to table-to-text, measured by NIST, METEOR, and CIDEr. The x-axis is the training size and the y-axis is the evaluation metric (higher is better).
|
| 369 |
+
|
| 370 |
+

|
| 371 |
+
|
| 372 |
+

|
| 373 |
+
|
| 374 |
+

|
| 375 |
+
|
| 376 |
+

|
| 377 |
+
|
| 378 |
+

|
| 379 |
+
Figure 7: Initializing the prefix with activations of real words significantly outperforms random initialization, in a low-data setting with 100 training examples.
|
| 380 |
+
|
| 381 |
+

|
| 382 |
+
|
| 383 |
+
<table><tr><td>Source [Unseen, Athelete]</td><td>(Al Kharaitiyat SC, club, Alaa Abdul-Zahra), (Al Khor, ground, Al Kharaitiyat SC), (Shabab Al-Ordon Club, club, Alaa Abdul-Zahra) (Amar Osim, manager, Al Kharaitiyat SC)</td></tr><tr><td>Prefix-tuning</td><td>Al Kharaitiyat SC are managed by Amar Osim and play at their ground at Al Khor. Al Kharaitiyat SC are also the club for which Alaa Abdul-Zahra is a player.</td></tr><tr><td>Fine-tuning</td><td>Alaa Abdul-Zahra plays for Al-Kharaitiyat SC and Shabab Al-Ordon Club. He also plays for Al-Khor and manages Al-Kharaitiyat SC.</td></tr><tr><td>Reference</td><td>Alaa Abdul Zahr plays for Al Kharaitiyat SC which is located at Al Khor and managed by Amar Osim. The Shabab Al-Ordon club is associated with Alaa Abdul-Zahr.</td></tr><tr><td>Source [Unseen, Transportation]</td><td>(Genoa, location, Costa Crociere), (AIDA Cruises, operator, AIDAstella), (Costa Crociere, owner, AIDAstella)</td></tr><tr><td>Prefix-tuning</td><td>AID Astella is operated by Aida Cruises and is owned by the Costa Rican tourist resort of Genoa.</td></tr><tr><td>Fine-tuning</td><td>AID Astella, operated by AIDA-Cruises, is located in Genoa and is owned by the Costa Rican government.</td></tr><tr><td>Reference</td><td>Costa Crociere is the owner of the AIDAstella and are based in Genoa. The operator of AIDAstella is AIDA Cruises.</td></tr><tr><td>Source [Unseen, Politician]</td><td>(Euro, currency, Netherlands), (Stellendam, birthPlace, Ab Klink), (Netherlands, nationality, Ab Klink)</td></tr><tr><td>Prefix-tuning</td><td>Ab Klink was born in Stellendam and is a national of the Netherlands where the currency is the Euro.</td></tr><tr><td>Fine-tuning</td><td>Ab Klink is a national of the Netherlands where the currency is the Euro. 
He was born in Stellendam.</td></tr><tr><td>Reference</td><td>Ab Klink was born in Stellendam in the Netherlands, where the national currency is the euro.</td></tr><tr><td>Source [Unseen, Politician]</td><td>(Robert E, Lee, commander, Battle of Salem Church), (American Civil War, isPartOfMilitaryConflict, Battle of Salem Church), (Battle of Salem Church, battles, Aaron S. Daggett)</td></tr><tr><td>Prefix-tuning</td><td>Robert E. Lee was the commander of the Battle of Salem Church which was part of the military conflict in the American Civil war.</td></tr><tr><td>Fine-tuning</td><td>The Battle of Salem Church is part of the American Civil War and was commanded by Robert E. Lee.</td></tr><tr><td>Reference</td><td>Robert E Lee was a commander in the Battle of Salem Church, which was one of the military conflicts in the American Civil War. Aaron S Daggett fought in the same battle.</td></tr><tr><td>Source [Unseen, Artist]</td><td>(Christian alternative rock, musicSubgenre, Alternative rock), (Alternative rock, genre, Andrew White (musician))</td></tr><tr><td>Prefix-tuning</td><td>Andrew White is a Christian alternative rock musician.</td></tr><tr><td>Fine-tuning</td><td>Andrew White, a Christian alternative rocker, performs.</td></tr><tr><td>Reference</td><td>The musician Andrew White's genre is alternative rock, the genre which has the sub genre Christian alternative rock.</td></tr><tr><td>Source [Unseen, Artist]</td><td>(Hip hop music, genre, Allen Forrest), (solo singer, background, Allen Forrest)</td></tr><tr><td>Prefix-tuning</td><td>Allen Forrest is a solo singer.</td></tr><tr><td>Fine-tuning</td><td>Born in</td></tr><tr><td>Reference</td><td>Allen Forrest is a solo singer whose genre is Hip Hop music.</td></tr><tr><td>Source [Seen, ComicsCharacter]</td><td>(Americans, nationality, Ducan Rouleau), (Ducan Rouleau, creator, Baymax), (Alan Tudyk, starring, Big Hero 6 (film)), (Steven T Segle, creator, Baymax), (Big Hero 6 (film), serries, 
Baymax)</td></tr><tr><td>Prefix-tuning</td><td>Baymax is a character in Big Hero 6 which stars Alan Tudyk. He was created by Steven T. Seagle and the American, Duncan Rouleau.</td></tr><tr><td>Fine-tuning</td><td>Alan Tudyk stars in the film Big Hero 6 in which Baymax is a character created by Steven T. Seagle and the American, Duncan Rouleau.</td></tr><tr><td>Reference</td><td>Baymax is a character who appeared in Big Hero 6 starring Alan Tudyk. It was created by Steven T Seagle and the American, Duncan Rouleau.</td></tr><tr><td>Source [Seen, City]</td><td>(Washington, D.C., capital, United States), (White Americans, ethnicGroup, United States), (United States, country, New Jersey), (New York City, largest City, United States), (New Jersey, isPartOf, Atlantic City)</td></tr><tr><td>Prefix-tuning</td><td>Washington D.C. is the capital of the United States where the largest city is New York City and the White Americans are an ethnic group. Atlantic City, New Jersey is also part of the United States.</td></tr><tr><td>Fine-tuning</td><td>Atlantic City, New Jersey is part of New Jersey in the United States. The capital city is Washington D.C. and one of the ethnic groups is White Americans.</td></tr><tr><td>Reference</td><td>New York City (NYC) is the largest U.S. city. Atlantic City, New Jersey are also part of the United States with its capital as Washington, DC and home to White Americans.</td></tr></table>
|
| 384 |
+
|
| 385 |
+
Table 6: Qualitative examples from WebNLG. The first 6 examples are from the unseen categories, labeled next to the source; the last two examples are from the seen categories. For unseen categories, both prefix-tuning and fine-tuning tend to undergenerate (the generated output does not cover the full table contents) or generate untruthfully (the generated output is inconsistent with the table contents). In particular, prefix-tuning tends to undergenerate more often than generate untruthfully, whereas fine-tuning tends to generate untruthfully. For seen categories, both perform fairly well in terms of coverage and truthfulness.
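The coverage failure mode described in the caption (output not mentioning all table contents) can be made concrete with a crude heuristic. This is a hypothetical illustration, not the evaluation used in the paper: `covers_table` simply counts which table values appear verbatim in the output.

```python
def covers_table(output: str, table_values: list[str]) -> float:
    """Crude coverage heuristic: fraction of table values that appear
    verbatim (case-insensitively) in the generated output. An
    undergenerating model scores well below 1.0 on this check."""
    text = output.lower()
    hits = sum(value.lower() in text for value in table_values)
    return hits / len(table_values) if table_values else 0.0
```

For the unseen-category "Politician" example, an output mentioning "Ab Klink" and "Stellendam" but not "Euro" would score 2/3 under this heuristic.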
|
data/2021/2101_00xxx/2101.00190/layout.json
CHANGED
|
The diff for this file is too large to render.
See raw diff
|
|
|
data/2021/2101_00xxx/2101.00204/2a8c286c-88d1-4769-975d-642826d5ce5d_content_list.json
CHANGED
|
@@ -1,3 +1,1374 @@
|
|
| 1 |
-
|
| 2 |
-
|
| 3 |
-
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "BanglaBERT: Language Model Pretraining and Benchmarks for Low-Resource Language Understanding Evaluation in Bangla",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
164,
|
| 8 |
+
85,
|
| 9 |
+
831,
|
| 10 |
+
124
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Abhik Bhattacharjee $^{1*}$ , Tahmid Hasan $^{1*}$ , Wasi Uddin Ahmad $^{2†}$ , Kazi Samin $^{1}$ , Md Saiful Islam $^{3}$ , Anindya Iqbal $^{1}$ , M. Sohel Rahman $^{1}$ , Rifat Shahriyar $^{1}$",
|
| 17 |
+
"bbox": [
|
| 18 |
+
174,
|
| 19 |
+
137,
|
| 20 |
+
833,
|
| 21 |
+
172
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "Bangladesh University of Engineering and Technology (BUET)<sup>1</sup>, AWS AI Labs<sup>2</sup>, University of Rochester<sup>3</sup>",
|
| 28 |
+
"bbox": [
|
| 29 |
+
235,
|
| 30 |
+
174,
|
| 31 |
+
764,
|
| 32 |
+
208
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "abhik@ra.cse.buet.ac.bd, {tahmidhasan, rifat} @cse.buet.ac.bd",
|
| 39 |
+
"bbox": [
|
| 40 |
+
149,
|
| 41 |
+
212,
|
| 42 |
+
850,
|
| 43 |
+
227
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "Abstract",
|
| 50 |
+
"text_level": 1,
|
| 51 |
+
"bbox": [
|
| 52 |
+
260,
|
| 53 |
+
252,
|
| 54 |
+
339,
|
| 55 |
+
267
|
| 56 |
+
],
|
| 57 |
+
"page_idx": 0
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"type": "text",
|
| 61 |
+
"text": "In this work, we introduce BanglaBERT, a BERT-based Natural Language Understanding (NLU) model pretrained in Bangla, a widely spoken yet low-resource language in the NLP literature. To pretrain BanglaBERT, we collect 27.5 GB of Bangla pretraining data (dubbed 'Bangla2B+') by crawling 110 popular Bangla sites. We introduce two downstream task datasets on natural language inference and question answering and benchmark on four diverse NLU tasks covering text classification, sequence labeling, and span prediction. In the process, we bring them under the first-ever Bangla Language Understanding Benchmark (BLUB). BanglaBERT achieves state-of-the-art results outperforming multilingual and monolingual models. We are making the models, datasets, and a leaderboard publicly available at https://github.com/csebuetnlp/banglabert to advance Bangla NLP.",
|
| 62 |
+
"bbox": [
|
| 63 |
+
141,
|
| 64 |
+
279,
|
| 65 |
+
460,
|
| 66 |
+
577
|
| 67 |
+
],
|
| 68 |
+
"page_idx": 0
|
| 69 |
+
},
|
| 70 |
+
{
|
| 71 |
+
"type": "text",
|
| 72 |
+
"text": "1 Introduction",
|
| 73 |
+
"text_level": 1,
|
| 74 |
+
"bbox": [
|
| 75 |
+
114,
|
| 76 |
+
589,
|
| 77 |
+
258,
|
| 78 |
+
604
|
| 79 |
+
],
|
| 80 |
+
"page_idx": 0
|
| 81 |
+
},
|
| 82 |
+
{
|
| 83 |
+
"type": "text",
|
| 84 |
+
"text": "Despite being the sixth most spoken language in the world with over 300 million native speakers constituting $4\\%$ of the world's total population, $^{1}$ Bangla is considered a resource-scarce language. Joshi et al. (2020b) categorized Bangla in the language group that lacks efforts in labeled data collection and relies on self-supervised pretraining (Devlin et al., 2019; Radford et al., 2019; Liu et al., 2019) to boost the natural language understanding (NLU) task performances. To date, the Bangla language has been continuing to rely on fine-tuning multilingual pretrained language models (PLMs) (Ashrafi et al., 2020; Das et al., 2021; Islam et al., 2021). However, since multilingual PLMs cover a wide range of languages (Conneau and Lample, 2019; Conneau et al., 2020), they are large (have",
|
| 85 |
+
"bbox": [
|
| 86 |
+
112,
|
| 87 |
+
614,
|
| 88 |
+
489,
|
| 89 |
+
872
|
| 90 |
+
],
|
| 91 |
+
"page_idx": 0
|
| 92 |
+
},
|
| 93 |
+
{
|
| 94 |
+
"type": "text",
|
| 95 |
+
    "text": "hundreds of millions of parameters) and require substantial computational resources for fine-tuning. They also tend to show degraded performance for low-resource languages (Wu and Dredze, 2020) on downstream NLU tasks. Motivated by the triumph of language-specific models (Martin et al. (2020); Polignano et al. (2019); Canete et al. (2020); Antoun et al. (2020), inter alia) over multilingual models in many other languages, in this work, we present BanglaBERT – a BERT-based (Devlin et al., 2019) Bangla NLU model pretrained on 27.5 GB of data (which we name 'Bangla2B+') that we meticulously crawled from 110 popular Bangla websites to facilitate NLU applications in Bangla. Since most of the downstream task datasets for NLP applications are in the English language, to facilitate zero-shot transfer learning between English and Bangla, we additionally pretrain a model in both languages; we name the model BanglishBERT.",
|
| 96 |
+
"bbox": [
|
| 97 |
+
507,
|
| 98 |
+
252,
|
| 99 |
+
884,
|
| 100 |
+
558
|
| 101 |
+
],
|
| 102 |
+
"page_idx": 0
|
| 103 |
+
},
|
| 104 |
+
{
|
| 105 |
+
"type": "text",
|
| 106 |
+
"text": "We also introduce two datasets on Bangla Natural Language Inference (NLI) and Question Answering (QA), tasks previously unexplored in Bangla, and evaluate both pretrained models on four diverse downstream tasks on sentiment classification, NLI, named entity recognition, and QA. We bring these tasks together to establish the first-ever Bangla Language Understanding Benchmark (BLUB). We compare widely used multilingual models to BanglaBERT using BLUB and find that both models excel on all the tasks.",
|
| 107 |
+
"bbox": [
|
| 108 |
+
507,
|
| 109 |
+
560,
|
| 110 |
+
885,
|
| 111 |
+
736
|
| 112 |
+
],
|
| 113 |
+
"page_idx": 0
|
| 114 |
+
},
|
| 115 |
+
{
|
| 116 |
+
"type": "text",
|
| 117 |
+
"text": "We summarize our contributions as follows:",
|
| 118 |
+
"text_level": 1,
|
| 119 |
+
"bbox": [
|
| 120 |
+
526,
|
| 121 |
+
740,
|
| 122 |
+
855,
|
| 123 |
+
755
|
| 124 |
+
],
|
| 125 |
+
"page_idx": 0
|
| 126 |
+
},
|
| 127 |
+
{
|
| 128 |
+
"type": "list",
|
| 129 |
+
"sub_type": "text",
|
| 130 |
+
"list_items": [
|
| 131 |
+
"1. We present two new pretrained models: BanglaBERT and BanglishBERT, and introduce new Bangla NLI and QA datasets.",
|
| 132 |
+
"2. We introduce the Bangla Language Understanding Benchmark (BLUB) and show that, in the supervised setting, BanglaBERT outperforms mBERT and XLM-R (base) by 6.8 and 4.3 BLUB scores, while in zero-shot crosslingual transfer, BanglishBERT outperforms them by 15.8 and 10.8, respectively."
|
| 133 |
+
],
|
| 134 |
+
"bbox": [
|
| 135 |
+
522,
|
| 136 |
+
758,
|
| 137 |
+
884,
|
| 138 |
+
917
|
| 139 |
+
],
|
| 140 |
+
"page_idx": 0
|
| 141 |
+
},
|
| 142 |
+
{
|
| 143 |
+
"type": "aside_text",
|
| 144 |
+
"text": "arXiv:2101.00204v4 [cs.CL] 10 May 2022",
|
| 145 |
+
"bbox": [
|
| 146 |
+
21,
|
| 147 |
+
300,
|
| 148 |
+
60,
|
| 149 |
+
724
|
| 150 |
+
],
|
| 151 |
+
"page_idx": 0
|
| 152 |
+
},
|
| 153 |
+
{
|
| 154 |
+
"type": "page_footnote",
|
| 155 |
+
"text": "*These authors contributed equally to this work.",
|
| 156 |
+
"bbox": [
|
| 157 |
+
139,
|
| 158 |
+
879,
|
| 159 |
+
436,
|
| 160 |
+
892
|
| 161 |
+
],
|
| 162 |
+
"page_idx": 0
|
| 163 |
+
},
|
| 164 |
+
{
|
| 165 |
+
"type": "page_footnote",
|
| 166 |
+
"text": "$\\dagger$ Work done while at UCLA.",
|
| 167 |
+
"bbox": [
|
| 168 |
+
139,
|
| 169 |
+
892,
|
| 170 |
+
319,
|
| 171 |
+
904
|
| 172 |
+
],
|
| 173 |
+
"page_idx": 0
|
| 174 |
+
},
|
| 175 |
+
{
|
| 176 |
+
"type": "page_footnote",
|
| 177 |
+
"text": "1https://w.wiki/Psq",
|
| 178 |
+
"bbox": [
|
| 179 |
+
137,
|
| 180 |
+
904,
|
| 181 |
+
310,
|
| 182 |
+
917
|
| 183 |
+
],
|
| 184 |
+
"page_idx": 0
|
| 185 |
+
},
|
| 186 |
+
{
|
| 187 |
+
"type": "text",
|
| 188 |
+
"text": "3. We provide the code, models, and a leaderboard to spur future research on Bangla NLU.",
|
| 189 |
+
"bbox": [
|
| 190 |
+
127,
|
| 191 |
+
84,
|
| 192 |
+
489,
|
| 193 |
+
116
|
| 194 |
+
],
|
| 195 |
+
"page_idx": 1
|
| 196 |
+
},
|
| 197 |
+
{
|
| 198 |
+
"type": "text",
|
| 199 |
+
"text": "2 BanglaBERT",
|
| 200 |
+
"text_level": 1,
|
| 201 |
+
"bbox": [
|
| 202 |
+
112,
|
| 203 |
+
128,
|
| 204 |
+
265,
|
| 205 |
+
143
|
| 206 |
+
],
|
| 207 |
+
"page_idx": 1
|
| 208 |
+
},
|
| 209 |
+
{
|
| 210 |
+
"type": "text",
|
| 211 |
+
"text": "2.1 Pretraining Data",
|
| 212 |
+
"text_level": 1,
|
| 213 |
+
"bbox": [
|
| 214 |
+
112,
|
| 215 |
+
154,
|
| 216 |
+
294,
|
| 217 |
+
168
|
| 218 |
+
],
|
| 219 |
+
"page_idx": 1
|
| 220 |
+
},
|
| 221 |
+
{
|
| 222 |
+
"type": "text",
|
| 223 |
+
"text": "A high volume of good quality text data is a prerequisite for pretraining large language models. For instance, BERT (Devlin et al., 2019) is pretrained on the English Wikipedia and the Books corpus (Zhu et al., 2015) containing 3.3 billion tokens. Subsequent works like RoBERTa (Liu et al., 2019) and XLNet (Yang et al., 2019) used more extensive web-crawled data with heavy filtering and cleaning.",
|
| 224 |
+
"bbox": [
|
| 225 |
+
112,
|
| 226 |
+
174,
|
| 227 |
+
487,
|
| 228 |
+
303
|
| 229 |
+
],
|
| 230 |
+
"page_idx": 1
|
| 231 |
+
},
|
| 232 |
+
{
|
| 233 |
+
"type": "text",
|
| 234 |
+
"text": "Bangla is a rather resource-constrained language in the web domain; for example, the Bangla Wikipedia dump from July 2021 is only $650\\mathrm{MB}$ , two orders of magnitudes smaller than the English Wikipedia. As a result, we had to crawl the web extensively to collect our pretraining data. We selected 110 Bangla websites by their Amazon Alexa rankings<sup>2</sup> and the volume and quality of extractable texts by inspecting each website. The contents included encyclopedias, news, blogs, e-books, stories, social media/forums, etc.<sup>3</sup> The amount of data totaled around 35 GB.",
|
| 235 |
+
"bbox": [
|
| 236 |
+
112,
|
| 237 |
+
304,
|
| 238 |
+
487,
|
| 239 |
+
495
|
| 240 |
+
],
|
| 241 |
+
"page_idx": 1
|
| 242 |
+
},
|
| 243 |
+
{
|
| 244 |
+
"type": "text",
|
| 245 |
+
"text": "There are noisy sources of Bangla data dumps, a couple of prominent ones being OSCAR (Suárez et al., 2019) and CCNet (Wenzek et al., 2020). They contained many offensive texts; we found them infeasible to clean thoroughly. Fearing their potentially harmful impacts (Luccioni and Viviano, 2021), we opted not to use them. We further discuss ethical considerations at the end of the paper.",
|
| 246 |
+
"bbox": [
|
| 247 |
+
112,
|
| 248 |
+
497,
|
| 249 |
+
487,
|
| 250 |
+
626
|
| 251 |
+
],
|
| 252 |
+
"page_idx": 1
|
| 253 |
+
},
|
| 254 |
+
{
|
| 255 |
+
"type": "text",
|
| 256 |
+
"text": "2.2 Pre-processing",
|
| 257 |
+
"text_level": 1,
|
| 258 |
+
"bbox": [
|
| 259 |
+
112,
|
| 260 |
+
637,
|
| 261 |
+
278,
|
| 262 |
+
653
|
| 263 |
+
],
|
| 264 |
+
"page_idx": 1
|
| 265 |
+
},
|
| 266 |
+
{
|
| 267 |
+
"type": "text",
|
| 268 |
+
    "text": "We performed thorough deduplication on the pretraining data, removed non-textual contents (e.g., HTML/JavaScript tags), and filtered out non-Bangla pages using a language classifier (Joulin et al., 2017). After the processing, the dataset was reduced to $27.5\mathrm{GB}$ in size containing $5.25\mathrm{M}$ documents having 306.66 words on average.",
|
| 269 |
+
"bbox": [
|
| 270 |
+
112,
|
| 271 |
+
657,
|
| 272 |
+
487,
|
| 273 |
+
769
|
| 274 |
+
],
|
| 275 |
+
"page_idx": 1
|
| 276 |
+
},
|
| 277 |
+
{
|
| 278 |
+
"type": "text",
|
| 279 |
+
"text": "We trained a Wordpiece (Wu et al., 2016) vocabulary of $32k$ subword tokens on the resulting corpus with a 400 character alphabet, kept larger than the native Bangla alphabet to capture code-switching (Poplack, 1980) and allow romanized Bangla contents for better generalization. We limited the length of a training sample to 512 tokens",
|
| 280 |
+
"bbox": [
|
| 281 |
+
112,
|
| 282 |
+
770,
|
| 283 |
+
487,
|
| 284 |
+
883
|
| 285 |
+
],
|
| 286 |
+
"page_idx": 1
|
| 287 |
+
},
|
| 288 |
+
{
|
| 289 |
+
"type": "text",
|
| 290 |
+
"text": "and did not cross document boundaries (Liu et al., 2019) while creating a data point. After tokenization, we had 7.18M samples with an average length of 304.14 tokens and containing 2.18B tokens in total; hence we named the dataset 'Bangla2B+'.",
|
| 291 |
+
"bbox": [
|
| 292 |
+
507,
|
| 293 |
+
84,
|
| 294 |
+
882,
|
| 295 |
+
164
|
| 296 |
+
],
|
| 297 |
+
"page_idx": 1
|
| 298 |
+
},
|
| 299 |
+
{
|
| 300 |
+
"type": "text",
|
| 301 |
+
"text": "2.3 Pretraining Objective",
|
| 302 |
+
"text_level": 1,
|
| 303 |
+
"bbox": [
|
| 304 |
+
507,
|
| 305 |
+
189,
|
| 306 |
+
726,
|
| 307 |
+
206
|
| 308 |
+
],
|
| 309 |
+
"page_idx": 1
|
| 310 |
+
},
|
| 311 |
+
{
|
| 312 |
+
"type": "text",
|
| 313 |
+
"text": "Self-supervised pretraining objectives leverage unlabeled data. For example, BERT (Devlin et al., 2019) was pretrained with masked language modeling (MLM) and next sentence prediction (NSP). Several works built on top of this, e.g., RoBERTa (Liu et al., 2019) removed NSP and pretrained with longer sequences, SpanBERT (Joshi et al., 2020a) masked contiguous spans of tokens, while works like XLNet (Yang et al., 2019) introduced objectives like factorized language modeling.",
|
| 314 |
+
"bbox": [
|
| 315 |
+
507,
|
| 316 |
+
218,
|
| 317 |
+
884,
|
| 318 |
+
379
|
| 319 |
+
],
|
| 320 |
+
"page_idx": 1
|
| 321 |
+
},
|
| 322 |
+
{
|
| 323 |
+
"type": "text",
|
| 324 |
+
"text": "We pretrained BanglaBERT using ELECTRA (Clark et al., 2020b), pretrained with the Replaced Token Detection (RTD) objective, where a generator and a discriminator model are trained jointly. The generator is fed as input a sequence with a portion of the tokens masked (15% in our case) and is asked to predict them using the rest of the input (i.e., standard MLM). The masked tokens are then replaced by tokens sampled from the generator's output distribution for the corresponding masks, and the discriminator then has to predict whether each token is from the original sequence or not. After pretraining, the discriminator is used for fine-tuning. Clark et al. (2020b) argued that RTD back-propagates loss from all tokens of a sequence, in contrast to 15% tokens of the MLM objective, giving the model more signals to learn from. Evidently, ELECTRA achieves comparable downstream performance to RoBERTa or XLNet with only a quarter of their training time. This computational efficiency motivated us to use ELECTRA for our implementation of BanglaBERT.",
|
| 325 |
+
"bbox": [
|
| 326 |
+
507,
|
| 327 |
+
382,
|
| 328 |
+
884,
|
| 329 |
+
736
|
| 330 |
+
],
|
| 331 |
+
"page_idx": 1
|
| 332 |
+
},
|
| 333 |
+
{
|
| 334 |
+
"type": "text",
|
| 335 |
+
"text": "2.4 Model Architecture & Hyperparameters",
|
| 336 |
+
"text_level": 1,
|
| 337 |
+
"bbox": [
|
| 338 |
+
507,
|
| 339 |
+
760,
|
| 340 |
+
873,
|
| 341 |
+
778
|
| 342 |
+
],
|
| 343 |
+
"page_idx": 1
|
| 344 |
+
},
|
| 345 |
+
{
|
| 346 |
+
"type": "text",
|
| 347 |
+
"text": "We pretrained the base ELECTRA model (a 12-layer Transformer encoder with 768 embedding size, 768 hidden size, 12 attention heads, 3072 feed-forward size, generator-to-discriminator ratio $\\frac{1}{3}$ , 110M parameters) with 256 batch size for 2.5M steps on a v3-8 TPU instance on GCP. We used the Adam (Kingma and Ba, 2015) optimizer with a 2e-4 learning rate and linear warmup of 10k steps.",
|
| 348 |
+
"bbox": [
|
| 349 |
+
507,
|
| 350 |
+
790,
|
| 351 |
+
882,
|
| 352 |
+
919
|
| 353 |
+
],
|
| 354 |
+
"page_idx": 1
|
| 355 |
+
},
|
| 356 |
+
{
|
| 357 |
+
"type": "page_footnote",
|
| 358 |
+
    "text": "$^{2}$ www.alexa.com/topsites/countries/BD",
|
| 359 |
+
"bbox": [
|
| 360 |
+
134,
|
| 361 |
+
890,
|
| 362 |
+
462,
|
| 363 |
+
904
|
| 364 |
+
],
|
| 365 |
+
"page_idx": 1
|
| 366 |
+
},
|
| 367 |
+
{
|
| 368 |
+
"type": "page_footnote",
|
| 369 |
+
"text": "3The complete list can be found in the Appendix.",
|
| 370 |
+
"bbox": [
|
| 371 |
+
136,
|
| 372 |
+
904,
|
| 373 |
+
436,
|
| 374 |
+
917
|
| 375 |
+
],
|
| 376 |
+
"page_idx": 1
|
| 377 |
+
},
|
| 378 |
+
{
"type": "table",
"img_path": "images/01794eb8bcc5c0532d9fe42b823dedf709df312c8432ca7966e6d905a03170e5.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Task</td><td>Corpus</td><td>|Train|</td><td>|Dev|</td><td>|Test|</td><td>Metric</td><td>Domain</td></tr><tr><td>Sentiment Classification</td><td>SentNoB</td><td>12,575</td><td>1,567</td><td>1,567</td><td>Macro-F1</td><td>Social Media</td></tr><tr><td>Natural Language Inference</td><td>BNLI</td><td>381,449</td><td>2,419</td><td>4,895</td><td>Accuracy</td><td>Miscellaneous</td></tr><tr><td>Named Entity Recognition</td><td>MultiCoNER</td><td>14,500</td><td>800</td><td>800</td><td>Micro-F1</td><td>Miscellaneous</td></tr><tr><td>Question Answering</td><td>BQA, TyDiQA</td><td>127,771</td><td>2,502</td><td>2,504</td><td>EM/F1</td><td>Wikipedia</td></tr></table>",
"bbox": [136, 80, 860, 167],
"page_idx": 2
},
|
| 392 |
+
{
"type": "text",
"text": "Table 1: Statistics of the Bangla Language Understanding Benchmark (BLUB).",
"bbox": [191, 175, 801, 191],
"page_idx": 2
},
|
| 403 |
+
{
"type": "text",
"text": "2.5 BanglishBERT",
"text_level": 1,
"bbox": [112, 216, 278, 231],
"page_idx": 2
},
|
| 415 |
+
{
"type": "text",
"text": "Labeled data for a task may often be unavailable in a low-resource language yet abundant in high-resource languages like English. In such scenarios, zero-shot cross-lingual transfer (Artetxe and Schwenk, 2019) provides an effective way to still train a multilingual model on that task using the high-resource languages and transfer it to low-resource ones. To this end, we pretrained a bilingual model, named BanglishBERT, on Bangla and English together using the same set of hyperparameters mentioned earlier. We used the BERT pretraining corpus as the English data and trained a joint bilingual vocabulary (each language having $\\sim 16\\mathrm{k}$ tokens). We upsampled the Bangla data during training to equalize the participation of both languages.",
"bbox": [112, 237, 489, 495],
"page_idx": 2
},
|
| 426 |
+
{
"type": "text",
"text": "3 The Bangla Language Understanding Benchmark (BLUB)",
"text_level": 1,
"bbox": [112, 506, 473, 539],
"page_idx": 2
},
|
| 438 |
+
{
"type": "text",
"text": "Many works have studied different Bangla NLU tasks in isolation, e.g., sentiment classification (Das and Bandyopadhyay, 2010; Sharfuddin et al., 2018; Tripto and Ali, 2018), semantic textual similarity (Shajalal and Aono, 2018), parts-of-speech (PoS) tagging (Alam et al., 2016), and named entity recognition (NER) (Ashrafi et al., 2020). However, Bangla NLU has not yet had a comprehensive, unified study. Motivated by the surge of NLU research brought about by benchmarks in other languages, e.g., English (Wang et al., 2018), French (Le et al., 2020), Korean (Park et al., 2021), we establish the first-ever Bangla Language Understanding Benchmark (BLUB). NLU generally comprises three types of tasks: text classification, sequence labeling, and text span prediction. Text classification tasks can further be sub-divided into single-sequence and sequence-pair classification. Therefore, we consider a total of four tasks for BLUB. For each task type, we carefully select one downstream task dataset. We emphasize the quality and open availability of the datasets while making the selection. We briefly mention them below.",
"bbox": [112, 549, 489, 917],
"page_idx": 2
},
|
| 449 |
+
{
"type": "text",
"text": "1. Single-Sequence Classification Sentiment classification is perhaps the most-studied Bangla NLU task, with some of the earlier works dating back over a decade (Das and Bandyopadhyay, 2010). Hence, we chose this as the single-sequence classification task. However, most Bangla sentiment classification datasets are not publicly available. We could only find two public datasets: BYSA (Tripto and Ali, 2018) and SentNoB (Islam et al., 2021). We found BYSA to have many duplications. Even worse, many duplicates had different labels. SentNoB had better quality and covered a broader set of domains, making the classification task more challenging. Hence, we opted to use the latter.",
"bbox": [507, 216, 885, 441],
"page_idx": 2
},
|
| 460 |
+
{
"type": "text",
"text": "2. Sequence-pair Classification In contrast to single-sequence classification, there has been a dearth of sequence-pair classification works in Bangla. We found work on semantic textual similarity (Shajalal and Aono, 2018), but the dataset is not publicly available. As such, we curated a new Bangla Natural Language Inference (BNLI) dataset for sequence-pair classification. We chose NLI as the representative task due to its fundamental importance in NLU. Given two sentences, a premise and a hypothesis as input, a model is tasked to predict whether the hypothesis is an entailment of, a contradiction of, or neutral to the premise. We used the same curation procedure as the XNLI (Conneau et al., 2018) dataset: we translated the MultiNLI (Williams et al., 2018) training data using the English to Bangla translation model by Hasan et al. (2020) and had the evaluation sets translated by expert human translators. Since automatic translation can introduce errors, we used the Language-Agnostic BERT Sentence Embeddings (LaBSE) (Feng et al., 2020) of the translations and original sentences to compute their similarity and discarded all sentences below a similarity threshold of 0.70. Moreover, to ensure good-quality human translation, we used similar quality assurance strategies as Guzmán et al. (2019).",
"bbox": [507, 450, 884, 885],
"page_idx": 2
},
|
| 471 |
+
{
"type": "page_footnote",
"text": "$^{4}$More details are presented in the ethical considerations section.",
"bbox": [507, 892, 882, 917],
"page_idx": 2
},
|
| 482 |
+
{
"type": "table",
"img_path": "images/67b9500b5179f7983c1d95f0caae3f4590025ffbdb786c94ae78c6559e9c9f20.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Models</td><td>|Params.|</td><td>SC</td><td>NLI</td><td>NER</td><td>QA</td><td>BLUB Score</td></tr><tr><td colspan=\"7\">Zero-shot cross-lingual transfer</td></tr><tr><td>mBERT</td><td>180M</td><td>27.05</td><td>62.22</td><td>39.27</td><td>59.01/64.18</td><td>50.35</td></tr><tr><td>XLM-R (base)</td><td>270M</td><td>42.03</td><td>72.18</td><td>45.37</td><td>55.03/61.83</td><td>55.29</td></tr><tr><td>XLM-R (large)</td><td>550M</td><td>49.49</td><td>78.13</td><td>56.48</td><td>71.13/77.70</td><td>66.59</td></tr><tr><td>BanglishBERT</td><td>110M</td><td>48.39</td><td>75.26</td><td>55.56</td><td>72.87/78.63</td><td>66.14</td></tr><tr><td colspan=\"7\">Supervised fine-tuning</td></tr><tr><td>mBERT</td><td>180M</td><td>67.59</td><td>75.13</td><td>68.97</td><td>67.12/72.64</td><td>70.29</td></tr><tr><td>XLM-R (base)</td><td>270M</td><td>69.54</td><td>78.46</td><td>73.32</td><td>68.09/74.27</td><td>72.82</td></tr><tr><td>XLM-R (large)</td><td>550M</td><td>70.97</td><td>82.40</td><td>78.39</td><td>73.15/79.06</td><td>76.79</td></tr><tr><td>IndicBERT</td><td>18M</td><td>68.41</td><td>77.11</td><td>54.13</td><td>50.84/57.47</td><td>61.59</td></tr><tr><td>sahajBERT</td><td>18M</td><td>71.12</td><td>76.92</td><td>70.94</td><td>65.48/70.69</td><td>71.03</td></tr><tr><td>BanglishBERT</td><td>110M</td><td>70.61</td><td>80.95</td><td>76.28</td><td>72.43/78.40</td><td>75.73</td></tr><tr><td>BanglaBERT</td><td>110M</td><td>72.89</td><td>82.80</td><td>77.78</td><td>72.63/79.34</td><td>77.09</td></tr></table>",
"bbox": [117, 80, 878, 311],
"page_idx": 3
},
|
| 496 |
+
{
"type": "text",
"text": "Table 2: Performance comparison of pretrained models on different downstream tasks. Scores in bold have a statistically significant $(p < 0.05)$ difference from the others under bootstrap sampling (Koehn, 2004).",
"bbox": [115, 321, 880, 350],
"page_idx": 3
},
|
| 507 |
+
{
"type": "text",
"text": "3. Sequence Labeling In this task, all words of a text sequence have to be classified. Named Entity Recognition (NER) and Parts-of-Speech (PoS) tagging are two of the most prominent sequence labeling tasks. We chose the Bangla portion of the SemEval 2022 MultiCoNER (Malmasi et al., 2022) dataset for BLUB.",
"bbox": [115, 375, 485, 486],
"page_idx": 3
},
|
| 518 |
+
{
"type": "text",
"text": "4. Span Prediction Extractive question answering is a standard choice for text span prediction. Similar to BNLI, we machine-translated the SQuAD 2.0 (Rajpurkar et al., 2018) dataset and used it as the training set (BQA). For validation and test, we used the Bangla portion of the TyDiQA$^5$ (Clark et al., 2020a) dataset. We posed the task analogously to SQuAD 2.0: presented with a text passage and a question, a model has to predict whether or not the question is answerable. If answerable, the model has to find the minimal text span that answers the question.",
"bbox": [115, 495, 485, 687],
"page_idx": 3
},
|
| 529 |
+
{
"type": "text",
"text": "We present detailed statistics of the BLUB benchmark in Table 1.",
"bbox": [115, 689, 484, 719],
"page_idx": 3
},
{
"type": "text",
"text": "4 Experiments & Results",
"text_level": 1,
"bbox": [115, 732, 346, 747],
"page_idx": 3
},
|
| 552 |
+
{
"type": "text",
"text": "Setup We fine-tuned BanglaBERT and BanglishBERT on the four downstream tasks and compared them with several multilingual models: mBERT (Devlin et al., 2019), XLM-R base and large (Conneau et al., 2020), and IndicBERT (Kakwani et al., 2020), a multilingual model for Indian languages; and sahajBERT (Diskin et al., 2021), an ALBERT-based (Lan et al., 2020) PLM for Bangla. All pre",
"bbox": [115, 757, 485, 884],
"page_idx": 3
},
|
| 563 |
+
{
"type": "text",
"text": "trained models were fine-tuned for 3-20 epochs with batch size 32, and the learning rate was tuned from $\\{2\\mathrm{e} - 5,3\\mathrm{e} - 5,4\\mathrm{e} - 5,5\\mathrm{e} - 5\\}$ . The final models were selected based on the validation performances after each epoch. We performed fine-tuning with three random seeds and reported their average scores in Table 2. We reported the average performance of all tasks as the BLUB score.",
"bbox": [512, 375, 880, 501],
"page_idx": 3
},
|
| 574 |
+
{
"type": "text",
"text": "Zero-shot Transfer We show the zero-shot cross-lingual transfer results of the multilingual models fine-tuned on the English counterpart of each dataset (SentNoB has no English equivalent; hence we used the Stanford Sentiment Treebank (Socher et al., 2013) for the sentiment classification task) in Table 2. In the zero-shot transfer setting, BanglishBERT achieves strong cross-lingual performance over similar-sized models and falls marginally short of XLM-R (large). This is an expected outcome since cross-lingual effectiveness depends explicitly on model size (K et al., 2020).",
"bbox": [512, 518, 880, 709],
"page_idx": 3
},
|
| 585 |
+
{
"type": "text",
"text": "Supervised Fine-tuning In the supervised fine-tuning setup, BanglaBERT outperformed multilingual models and monolingual sahajBERT on all the tasks, achieving a BLUB score of 77.09, even coming head-to-head with XLM-R (large). On the other hand, BanglishBERT marginally lags behind BanglaBERT and XLM-R (large). BanglaBERT is not only superior in performance but also substantially compute- and memory-efficient. For instance, it may seem that sahajBERT is more efficient than BanglaBERT due to its smaller size, but it takes 2-3.5x the time and 2.4-3.33x the memory of BanglaBERT",
"bbox": [512, 726, 880, 917],
"page_idx": 3
},
|
| 596 |
+
{
"type": "page_footnote",
"text": "$^{5}$We removed the Yes/No questions from TyDiQA and subsampled the unanswerable questions to have an equal proportion.",
"bbox": [115, 892, 485, 917],
"page_idx": 3
},
|
| 607 |
+
{
"type": "text",
"text": "to fine-tune.$^{6}$",
"bbox": [114, 83, 220, 97],
"page_idx": 4
},
|
| 618 |
+
{
"type": "image",
"img_path": "images/4bde5894401536f1ca7593e449000a0f99e3c3a5344efbd2cda36c7465d2bc81.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [117, 115, 475, 282],
"page_idx": 4
},
{
"type": "image",
"img_path": "images/56c52033755c51a5d90049b7716e50b40cbda94c5d6e4179b7b8d2acc36bb730.jpg",
"image_caption": ["Figure 1: Sample-efficiency tests with SC and NLI."],
"image_footnote": [],
"bbox": [117, 291, 475, 460],
"page_idx": 4
},
|
| 646 |
+
{
"type": "text",
"text": "Sample efficiency It is often challenging to annotate training samples in real-world scenarios, especially for low-resource languages like Bangla. So, in addition to compute- and memory-efficiency, sample-efficiency (Howard and Ruder, 2018) is another necessity for PLMs. To assess the sample efficiency of BanglaBERT, we limit the number of training samples and see how it fares against other models. We compare it with XLM-R (large) and plot their performances on the SC and NLI tasks<sup>7</sup> for different sample sizes in Figure 1.",
"bbox": [112, 511, 489, 687],
"page_idx": 4
},
|
| 657 |
+
{
"type": "text",
"text": "Results show that when fewer samples $(\\leq 1k)$ are available, BanglaBERT performs substantially better (2-9% on SC and 6-10% on NLI with $p < 0.05$ ) than XLM-R (large), making it more practically applicable for resource-scarce downstream tasks.",
"bbox": [112, 688, 487, 783],
"page_idx": 4
},
|
| 668 |
+
{
"type": "text",
"text": "5 Conclusion & Future Works",
"text_level": 1,
"bbox": [112, 796, 393, 810],
"page_idx": 4
},
{
"type": "text",
"text": "Creating language-specific models is often infeasible for low-resource languages lacking ample data. Hence, researchers are compelled to use multilingual models for languages that do not have",
"bbox": [112, 820, 489, 885],
"page_idx": 4
},
|
| 691 |
+
{
"type": "text",
"text": "strong pretrained models. To this end, we introduced BanglaBERT and BanglishBERT, two NLU models in Bangla, a widely spoken yet low-resource language. We presented new downstream datasets on NLI and QA, and established the BLUB benchmark, setting new state-of-the-art results with BanglaBERT. In the future, we will include other Bangla NLU benchmarks (e.g., dependency parsing (de Marneffe et al., 2021)) in BLUB and investigate the benefits of initializing Bangla NLG models from BanglaBERT.",
"bbox": [507, 84, 884, 261],
"page_idx": 4
},
|
| 702 |
+
{
"type": "text",
"text": "Acknowledgements",
"text_level": 1,
"bbox": [509, 273, 680, 288],
"page_idx": 4
},
{
"type": "text",
"text": "We would like to thank the Research and Innovation Centre for Science and Engineering (RISE), BUET, for funding the project, and Intelligent Machines Limited and Google TensorFlow Research Cloud (TRC) Program for providing cloud support.",
"bbox": [507, 298, 882, 379],
"page_idx": 4
},
{
"type": "text",
"text": "Ethical Considerations",
"text_level": 1,
"bbox": [507, 390, 712, 405],
"page_idx": 4
},
|
| 737 |
+
{
"type": "text",
"text": "Dataset and Model Release The Copyright Act, 2000$^{8}$ of Bangladesh allows reproduction and public release of copyrighted materials for non-commercial research purposes. As a transformative research work, we will release BanglaBERT under a non-commercial license. Furthermore, we will release only the pretraining data for which we know the distribution will not cause any copyright infringement. The downstream task datasets can all be made publicly available under a similar non-commercial license.",
"bbox": [505, 414, 882, 592],
"page_idx": 4
},
|
| 748 |
+
{
"type": "text",
"text": "Quality Control in Human Translation Translations were done by expert translators who provide translation services for renowned Bangla newspapers. Each translated sentence was further assessed for quality by another expert. If found to be of low quality, it was translated again by the original translator; if it was still of low quality, the sample was discarded altogether. Fewer than 100 samples were discarded in this process. Translators were paid as per standard rates in local currencies.",
"bbox": [507, 601, 882, 763],
"page_idx": 4
},
|
| 759 |
+
{
"type": "text",
"text": "Text Content We tried to minimize offensive text in the pretraining data by explicitly crawling the sites where such content would be nominal. However, we cannot guarantee that there is absolutely no objectionable content present and therefore recommend using the model carefully, especially for text generation purposes.",
"bbox": [507, 771, 884, 883],
"page_idx": 4
},
|
| 770 |
+
{
"type": "page_footnote",
"text": "<sup>6</sup>We present a detailed comparison in the Appendix.",
"bbox": [134, 891, 453, 904],
"page_idx": 4
},
|
| 781 |
+
{
"type": "page_footnote",
"text": "<sup>7</sup>Results for the other tasks can be found in the Appendix.",
"bbox": [136, 904, 485, 917],
"page_idx": 4
},
|
| 792 |
+
{
"type": "page_footnote",
"text": "<sup>8</sup>http://bdlaws.minlaw.gov.bd/act-details-846.html",
"bbox": [507, 891, 794, 917],
"page_idx": 4
},
|
| 803 |
+
{
"type": "text",
"text": "References",
"text_level": 1,
"bbox": [115, 83, 213, 98],
"page_idx": 5
},
|
| 815 |
+
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Firoj Alam, Shammur Absar Chowdhury, and Sheak Rashed Haider Noori. 2016. Bidirectional LSTMs - CRFs networks for bangla POS tagging. In 2016 19th International Conference on Computer and Information Technology (ICCIT), pages 377-382. IEEE.",
"Wissam Antoun, Fady Baly, and Hazem Hajj. 2020. AraBERT: Transformer-based model for Arabic language understanding. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection, pages 9-15, Marseille, France. European Language Resources Association.",
"Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics, 7:597-610.",
"I. Ashrafi, M. Mohammad, A. S. Mauree, G. M. A. Nijhum, R. Karim, N. Mohammed, and S. Momen. 2020. Banner: A cost-sensitive contextualized model for bangla named entity recognition. IEEE Access, 8:58206-58226.",
"José Cañete, Gabriel Chaperón, Rodrigo Fuentes, and Jorge Pérez. 2020. Spanish pre-trained BERT model and evaluation data. In Proceedings of the Practical ML for Developing Countries Workshop at ICLR 2020, PML4DC.",
"Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020a. TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages. Transactions of the Association for Computational Linguistics, 8:454-470.",
"Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020b. ELECTRA: pretraining text encoders as discriminators rather than generators. In 8th International Conference on Learning Representations, ICLR 2020, April, 2020, Online.",
"Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.",
"Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining. In Advances in Neural Information Processing Systems, volume 32, pages 7059-7069. Curran Associates, Inc."
],
"bbox": [115, 108, 489, 917],
"page_idx": 5
},
|
| 837 |
+
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Alexis Conneau, Rudy Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475-2485, Brussels, Belgium. Association for Computational Linguistics.",
"Amitava Das and Sivaji Bandyopadhyay. 2010. Phrase-level polarity identification for bangla. Int. J. Comput. Linguist. Appl. (IJCLA), 1(1-2):169-182.",
"Avishek Das, Omar Sharif, Mohammed Moshiul Hoque, and Iqbal H. Sarker. 2021. Emotion classification in a resource constrained language using transformer-based approach. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 150-158, Online. Association for Computational Linguistics.",
"Marie-Catherine de Marneffe, Christopher D. Manning, Joakim Nivre, and Daniel Zeman. 2021. Universal Dependencies. Computational Linguistics, 47(2):255-308.",
"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"Michael Diskin, Alexey Bukhtiyarov, Max Ryabinin, Lucile Saulnier, Quentin Lhoest, Anton Sinitsin, Dmitry Popov, Dmitry Pyrkin, Maxim Kashirin, Alexander Borzunov, Albert Villanova del Moral, Denis Mazur, Ilia Kobelev, Yacine Jernite, Thomas Wolf, and Gennady Pekhimenko. 2021. Distributed deep learning in open collaborations. arXiv:2106.10207.",
"Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2020. Language-agnostic bert sentence embedding. arXiv:2007.01852.",
"Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. 2019. The FLORES evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6098-6111, Hong Kong, China. Association for Computational Linguistics.",
"Tahmid Hasan, Abhik Bhattacharjee, Kazi Samin, Masum Hasan, Madhusudan Basak, M. Sohel Rahman,"
],
"bbox": [510, 85, 884, 917],
"page_idx": 5
},
|
| 859 |
+
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"and Rifat Shahriyar. 2020. Not low-resource anymore: Aligner ensembling, batch filtering, and new datasets for Bengali-English machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2612-2623, Online. Association for Computational Linguistics.",
"Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328-339, Melbourne, Australia. Association for Computational Linguistics.",
"Khondoker Ittehadul Islam, Sudipta Kar, Md Saiful Islam, and Mohammad Ruhul Amin. 2021. SentNoB: A dataset for analysing sentiment on noisy Bangla texts. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3265-3271, Punta Cana, Dominican Republic. Association for Computational Linguistics.",
"Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020a. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77.",
"Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020b. The state and fate of linguistic diversity and inclusion in the NLP world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282-6293, Online. Association for Computational Linguistics.",
"Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 427-431, Valencia, Spain. Association for Computational Linguistics.",
"Karthikeyan K, Zihan Wang, Stephen Mayhew, and Dan Roth. 2020. Cross-lingual ability of multilingual bert: An empirical study. In 8th International Conference on Learning Representations, ICLR 2020, April, 2020, Online.",
"Divyanshu Kakwani, Anoop Kunchukuttan, Satish Golla, Gokul N.C., Avik Bhattacharyya, Mitesh M. Khapra, and Pratyush Kumar. 2020. IndicNLPSuite: Monolingual corpora, evaluation benchmarks and pre-trained multilingual language models for Indian languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4948-4961, Online. Association for Computational Linguistics.",
"Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015."
],
"bbox": [115, 85, 489, 917],
"page_idx": 6
},
|
| 881 |
+
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388-395, Barcelona, Spain. Association for Computational Linguistics.",
"Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations. In 8th International Conference on Learning Representations, ICLR 2020, April, 2020, Online.",
"Hang Le, Loic Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoit Crabbé, Laurent Besacier, and Didier Schwab. 2020. Flaubert: Unsupervised language model pre-training for french. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 2479-2490, Marseille, France. European Language Resources Association.",
"Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv:1907.11692.",
"Alexandra Luccioni and Joseph Viviano. 2021. What's in the box? an analysis of undesirable content in the Common Crawl corpus. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 182-189, Online. Association for Computational Linguistics.",
"Shervin Malmasi, Anjie Fang, Besnik Fetahu, Sudipta Kar, and Oleg Rokhlenko. 2022. MultiCoNER: a Large-scale Multilingual dataset for Complex Named Entity Recognition.",
"Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric de la Clergerie, Djamé Seddah, and Benoît Sagot. 2020. CamemBERT: a tasty French language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7203-7219, Online. Association for Computational Linguistics.",
"Sungjoon Park, Jihyung Moon, Sungdong Kim, Won Ik Cho, Ji Yoon Han, Jangwon Park, Chisung Song, Junseong Kim, Youngsook Song, Taehwan Oh, Joohong Lee, Juhyun Oh, Sungwon Lyu, Younghoon Jeong, Inkwon Lee, Sangwoo Seo, Dongjun Lee, Hyunwoo Kim, Myeonghwa Lee, Seongbo Jang, Seungwon Do, Sunkyoung Kim, Kyungtae Lim, Jongwon Lee, Kyumin Park, Jamin Shin, Seonghyun Kim, Lucy Park, Alice Oh, JungWoo Ha, and Kyunghyun Cho. 2021. KLUE: Korean language understanding evaluation. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)."
],
"bbox": [510, 85, 884, 917],
"page_idx": 6
},
|
| 902 |
+
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Marco Polignano, Pierpaolo Basile, Marco de Gemmis, Giovanni Semeraro, and Valerio Basile. 2019. AlBERTo: Italian BERT language understanding model for NLP challenging tasks based on tweets. In Proceedings of the Sixth Italian Conference on Computational Linguistics, Bari, Italy, November 13-15, 2019, CEUR Workshop Proceedings.",
"Shana Poplack. 1980. Sometimes I'll start a sentence in Spanish Y TERMINO EN ESPAÑOL: toward a typology of code-switching. Linguistics, 18(7-8):581-618.",
"Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8).",
"Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784-789, Melbourne, Australia. Association for Computational Linguistics.",
"Md Shajalal and Masaki Aono. 2018. Semantic textual similarity in Bengali text. In 2018 International Conference on Bangla Speech and Language Processing (ICBSLP), pages 1-5. IEEE.",
"Abdullah Aziz Sharfuddin, Md Nafis Tihami, and Md Saiful Islam. 2018. A deep recurrent neural network with BiLSTM model for sentiment classification. In 2018 International Conference on Bangla Speech and Language Processing (ICBSLP), pages 1-4. IEEE.",
"Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics.",
"Pedro Javier Ortiz Suárez, Benoît Sagot, and Laurent Romary. 2019. Asynchronous pipeline for processing huge corpora on medium to low resource infrastructures. In 7th Workshop on the Challenges in the Management of Large Corpora (CMLC-7), Cardiff, United Kingdom. Leibniz-Institut für Deutsche Sprache.",
"Nafis Irtiza Tripto and Mohammed Eunus Ali. 2018. Detecting multilabel sentiment and emotions from Bangla YouTube comments. In 2018 International Conference on Bangla Speech and Language Processing (ICBSLP), pages 1-6. IEEE.",
"Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018."
],
"bbox": [115, 85, 489, 917],
"page_idx": 7
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.",
"Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. 2020. CCNet: Extracting high quality monolingual datasets from web crawl data. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4003-4012, Marseille, France. European Language Resources Association.",
"Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics.",
"Shijie Wu and Mark Dredze. 2020. Are all languages created equal in multilingual BERT? In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 120-130, Online. Association for Computational Linguistics.",
"Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv:1609.08144.",
"Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, volume 32, pages 5753-5763. Curran Associates, Inc.",
"Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), ICCV '15, pages 19-27, USA. IEEE Computer Society."
],
"bbox": [510, 85, 882, 775],
"page_idx": 7
},
{
"type": "text",
"text": "Appendix",
"text_level": 1,
"bbox": [115, 84, 203, 99],
"page_idx": 8
},
{
"type": "text",
"text": "Pretraining Data Sources",
"text_level": 1,
"bbox": [115, 109, 317, 124],
"page_idx": 8
},
{
"type": "text",
"text": "We used the following sites for data collection. We categorize the sites into six types:",
"bbox": [112, 130, 485, 162],
"page_idx": 8
},
{
"type": "text",
"text": "Encyclopedia:",
"text_level": 1,
"bbox": [115, 178, 228, 193],
"page_idx": 8
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- bn.banglapedia.org",
"- bn.wikipedia.org",
"- songgramernotepad.com"
],
"bbox": [137, 206, 337, 250],
"page_idx": 8
},
{
"type": "text",
"text": "News:",
"text_level": 1,
"bbox": [115, 263, 166, 275],
"page_idx": 8
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- anandabazar.com",
"- arthoniteerkagoj.com",
"- bangla.24livenewpaper.com",
"- bangla.bdnews24.com",
"- bangla.dhakatribune.com",
"- bangla.hindustantimes.com",
"- bangladeshershkela.com",
"- banglanews24.com",
"- banglatribune.com",
"- bbc.com",
"- bd-journal.com",
"- bd-pratidin.com",
"- bd24live.com",
"- bengali.indianexpress.com",
"- bigganprojukti.com",
"- bonikbarta.net",
"- chakarianews.com",
"- channelonline.com",
"- ctgtimes.com",
"- ctn24.com",
"- daily-bangladesh.com",
"- dailyagnishikha.com",
"- dainikazadi.net",
"- dainikdinkal.net",
"- dailyfulki.com",
"- dailyinqilab.com",
"- dailynayadiganta.com",
"- dailysangram.com",
"- dailysylhet.com",
"- dainikamadershomoy.com",
"- dainikshiksha.com",
"- dhakardak-bd.com",
"- dmpnews.org",
"- dw.com",
"- eisamay.indiatimes.com",
"- ittefaq.com.bd",
"- jagonews24.com",
"- jugantor.com",
"- kalerkantho.com",
"- manobkantha.com.bd",
"- mzamin.com",
"- ntvbd.com",
"- onnodristy.com"
],
"bbox": [137, 291, 366, 917],
"page_idx": 8
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- pavilion.com.bd",
"- prothomalo.com",
"- protidinersangbad.com",
"- risingbd.com",
"- rtvonline.com",
"- samakal.com",
"- sangbadpratidin.in",
"- somoyerkonthosor.com",
"- somoynews.tv",
"- tbsnews.net",
"- teknfnews.com",
"- thedailystar.net",
"- voabangla.com",
"- zeenews.india.com",
"- zoombangla.com"
],
"bbox": [532, 84, 722, 306],
"page_idx": 8
},
{
"type": "text",
"text": "Blogs:",
"text_level": 1,
"bbox": [510, 319, 563, 335],
"page_idx": 8
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- amrabondhu.com",
"- banglablog.in",
"- bigganblog.org",
"- biggani.org",
"- bigyan.org.in",
"- bishorgo.com",
"- cadetcollegeblog.com",
"- choturmatrik.com",
"- horoppa.wordpress.com",
"- muktangon.blogspot.com",
"- roar.media/bangla",
"- sachalayatan.com",
"- shodalap.org",
"- shopnobaz.net",
"- somewhereinblog.net",
"- subeen.com",
"- tunerpage.com",
"- tutobd.com"
],
"bbox": [532, 348, 726, 612],
"page_idx": 8
},
{
"type": "text",
"text": "E-books/Stories:",
"text_level": 1,
"bbox": [510, 627, 643, 640],
"page_idx": 8
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- banglaepub.github.io",
"- bengali.pratilipi.com",
"- bn.wikisource.org",
"- ebangalibrary.com",
"- eboipotro.github.io",
"- golpokobita.com",
"- kaliokalam.com",
"- shirisherdalpala.net",
"- tagoreweb.in"
],
"bbox": [532, 655, 704, 788],
"page_idx": 8
},
{
"type": "text",
"text": "Social Media/Forums:",
"text_level": 1,
"bbox": [510, 801, 685, 816],
"page_idx": 8
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- banglacricket.com",
"- bn.globalvoices.org",
"- helpfulhub.com",
"- nirbik.com",
"- pchelplinebd.com",
"- techtunes.io"
],
"bbox": [532, 829, 694, 917],
"page_idx": 8
},
{
"type": "text",
"text": "Miscellaneous:",
"text_level": 1,
"bbox": [114, 84, 235, 98],
"page_idx": 9
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- banglasonglyric.com",
"- bdlaws.minlaw.gov.bd",
"- bdup24.com",
"- bengalisongslyrics.com",
"- dakghar.org",
"- gdn8.com",
"- gunijan.org.bd",
"- hwr.org",
"- jakir.me",
"- jhankarmahbub.com",
"- JW.org",
"- lyricsbangla.com",
"- neonaloy.com",
"- porjotonlipi.com",
"- sasthabangla.com",
"- tanzil.net"
],
"bbox": [137, 108, 327, 325],
"page_idx": 9
},
{
"type": "text",
"text": "We wrote custom crawlers for each site above (except the Wikipedia dumps).",
"bbox": [112, 334, 485, 366],
"page_idx": 9
},
{
"type": "text",
"text": "Additional Sample Efficiency Tests",
"text_level": 1,
"bbox": [114, 375, 391, 391],
"page_idx": 9
},
{
"type": "text",
"text": "We plot the sample efficiency results of the NER and QA tasks in Figure 2.",
"bbox": [112, 395, 485, 428],
"page_idx": 9
},
{
"type": "image",
"img_path": "images/d8d83f68b7919cddb400e3d3977ea61eb24124d0972420eb239aafffeb81cc35.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [114, 445, 478, 618],
"page_idx": 9
},
{
"type": "image",
"img_path": "images/12709fdf4e271e916ce9554b2f39fd3a54fccbbaa24db743ed04560a8e061009.jpg",
"image_caption": [
"Figure 2: Sample-efficiency tests with NER and QA."
],
"image_footnote": [],
"bbox": [114, 627, 478, 797],
"page_idx": 9
},
{
"type": "text",
"text": "Similar results are also observed here for the NER task, where BanglaBERT is more sample-efficient when we have $\\leq 1k$ training samples. In the QA task, however, both models have identical performance for all sample counts.",
"bbox": [112, 839, 487, 917],
"page_idx": 9
},
{
"type": "text",
"text": "Compute and Memory Efficiency Tests",
"text_level": 1,
"bbox": [509, 84, 818, 99],
"page_idx": 9
},
{
"type": "text",
"text": "To validate that BanglaBERT is more efficient in terms of memory and compute, we measured each model's training time and memory usage during the fine-tuning of each task. All tests were done on a desktop machine with an 8-core Intel Core-i7 11700k CPU and an NVIDIA RTX 3090 GPU. We used the same batch size, gradient accumulation steps, and sequence length for all models and tasks for a fair comparison. We report relative time and memory (GPU VRAM) usage, taking those of BanglaBERT as units. The results are shown in Table 3 (we report the lower and upper values across the different tasks for each model).",
"bbox": [507, 105, 882, 313],
"page_idx": 9
},
{
"type": "table",
"img_path": "images/0164411f9ba6f2783d560307fd516eb8278d70f10a7e6daf7474396e9d4b44e8.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Model</td><td>Time</td><td>Memory Usage</td></tr><tr><td>mBERT</td><td>1.14x-1.92x</td><td>1.12x-2.04x</td></tr><tr><td>XLM-R (base)</td><td>1.29x-1.81x</td><td>1.04x-1.63x</td></tr><tr><td>XLM-R (large)</td><td>3.81x-4.49x</td><td>4.44x-5.55x</td></tr><tr><td>SahajBERT</td><td>2.40x-3.33x</td><td>2.07x-3.54x</td></tr><tr><td>BanglaBERT</td><td>1.00x</td><td>1.00x</td></tr></table>",
"bbox": [510, 324, 877, 420],
"page_idx": 9
},
{
"type": "text",
"text": "Table 3: Compute and memory efficiency tests",
"bbox": [536, 431, 853, 445],
"page_idx": 9
}
]
data/2021/2101_00xxx/2101.00204/2a8c286c-88d1-4769-975d-642826d5ce5d_model.json
CHANGED
The diff for this file is too large to render.
See raw diff
data/2021/2101_00xxx/2101.00204/full.md
CHANGED
# BanglaBERT: Language Model Pretraining and Benchmarks for Low-Resource Language Understanding Evaluation in Bangla

Abhik Bhattacharjee$^{1*}$, Tahmid Hasan$^{1*}$, Wasi Uddin Ahmad$^{2†}$, Kazi Samin$^{1}$, Md Saiful Islam$^{3}$, Anindya Iqbal$^{1}$, M. Sohel Rahman$^{1}$, Rifat Shahriyar$^{1}$

Bangladesh University of Engineering and Technology (BUET)$^{1}$, AWS AI Labs$^{2}$, University of Rochester$^{3}$

abhik@ra.cse.buet.ac.bd, {tahmidhasan, rifat}@cse.buet.ac.bd

# Abstract

In this work, we introduce BanglaBERT, a BERT-based Natural Language Understanding (NLU) model pretrained in Bangla, a widely spoken yet low-resource language in the NLP literature. To pretrain BanglaBERT, we collect 27.5 GB of Bangla pretraining data (dubbed 'Bangla2B+') by crawling 110 popular Bangla sites. We introduce two downstream task datasets on natural language inference and question answering and benchmark on four diverse NLU tasks covering text classification, sequence labeling, and span prediction. In the process, we bring them under the first-ever Bangla Language Understanding Benchmark (BLUB). BanglaBERT achieves state-of-the-art results, outperforming multilingual and monolingual models. We are making the models, datasets, and a leaderboard publicly available at https://github.com/csebuetnlp/banglabert to advance Bangla NLP.

# 1 Introduction

Despite being the sixth most spoken language in the world, with over 300 million native speakers constituting $4\%$ of the world's total population,$^{1}$ Bangla is considered a resource-scarce language. Joshi et al. (2020b) categorized Bangla in the language group that lacks efforts in labeled data collection and relies on self-supervised pretraining (Devlin et al., 2019; Radford et al., 2019; Liu et al., 2019) to boost natural language understanding (NLU) task performance. To date, the Bangla language has continued to rely on fine-tuning multilingual pretrained language models (PLMs) (Ashrafi et al., 2020; Das et al., 2021; Islam et al., 2021). However, since multilingual PLMs cover a wide range of languages (Conneau and Lample, 2019; Conneau et al., 2020), they are large (have hundreds of millions of parameters) and require substantial computational resources for fine-tuning. They also tend to show degraded performance for low-resource languages (Wu and Dredze, 2020) on downstream NLU tasks. Motivated by the triumph of language-specific models (Martin et al. (2020); Polignano et al. (2019); Canete et al. (2020); Antoun et al. (2020), inter alia) over multilingual models in many other languages, in this work we present BanglaBERT, a BERT-based (Devlin et al., 2019) Bangla NLU model pretrained on 27.5 GB of data (which we name 'Bangla2B+') meticulously crawled from 110 popular Bangla websites, to facilitate NLU applications in Bangla. Since most downstream task datasets for NLP applications are in English, we additionally pretrain a model on both languages to facilitate zero-shot transfer learning between English and Bangla; we name this model BanglishBERT.

We also introduce two datasets on Bangla Natural Language Inference (NLI) and Question Answering (QA), tasks previously unexplored in Bangla, and evaluate both pretrained models on four diverse downstream tasks on sentiment classification, NLI, named entity recognition, and QA. We bring these tasks together to establish the first-ever Bangla Language Understanding Benchmark (BLUB). We compare widely used multilingual models to BanglaBERT using BLUB and find that both of our models excel on all the tasks.

We summarize our contributions as follows:

1. We present two new pretrained models: BanglaBERT and BanglishBERT, and introduce new Bangla NLI and QA datasets.
2. We introduce the Bangla Language Understanding Benchmark (BLUB) and show that, in the supervised setting, BanglaBERT outperforms mBERT and XLM-R (base) by 6.8 and 4.3 BLUB scores, while in zero-shot cross-lingual transfer, BanglishBERT outperforms them by 15.8 and 10.8, respectively.
3. We provide the code, models, and a leaderboard to spur future research on Bangla NLU.

# 2 BanglaBERT

# 2.1 Pretraining Data

A high volume of good-quality text data is a prerequisite for pretraining large language models. For instance, BERT (Devlin et al., 2019) is pretrained on the English Wikipedia and the Books corpus (Zhu et al., 2015), containing 3.3 billion tokens. Subsequent works like RoBERTa (Liu et al., 2019) and XLNet (Yang et al., 2019) used more extensive web-crawled data with heavy filtering and cleaning.

Bangla is a rather resource-constrained language in the web domain; for example, the Bangla Wikipedia dump from July 2021 is only $650\mathrm{MB}$, two orders of magnitude smaller than the English Wikipedia. As a result, we had to crawl the web extensively to collect our pretraining data. We selected 110 Bangla websites by their Amazon Alexa rankings$^{2}$ and by the volume and quality of extractable text, inspecting each website. The contents included encyclopedias, news, blogs, e-books, stories, social media/forums, etc.$^{3}$ The amount of data totaled around 35 GB.

There are noisy sources of Bangla data dumps, a couple of prominent ones being OSCAR (Suárez et al., 2019) and CCNet (Wenzek et al., 2020). They contained many offensive texts that we found infeasible to clean thoroughly. Fearing their potentially harmful impacts (Luccioni and Viviano, 2021), we opted not to use them. We further discuss ethical considerations at the end of the paper.

# 2.2 Pre-processing

We performed thorough deduplication on the pretraining data, removed non-textual content (e.g., HTML/JavaScript tags), and filtered out non-Bangla pages using a language classifier (Joulin et al., 2017). After this processing, the dataset was reduced to $27.5\mathrm{GB}$ in size, containing $5.25\mathrm{M}$ documents with 306.66 words on average.
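
The cleaning step can be sketched as follows. This is a minimal stand-in, not the authors' pipeline: it performs exact hash-based deduplication and substitutes a Unicode-block heuristic for the fastText language classifier of Joulin et al. (2017); `clean_corpus` and `bangla_ratio` are hypothetical names.

```python
import hashlib

def bangla_ratio(text: str) -> float:
    """Fraction of alphabetic characters in the Unicode Bengali block (U+0980-U+09FF)."""
    letters = [ch for ch in text if ch.isalpha()]
    if not letters:
        return 0.0
    return sum('\u0980' <= ch <= '\u09ff' for ch in letters) / len(letters)

def clean_corpus(documents, min_ratio=0.5):
    """Drop exact duplicates, then drop pages that are mostly non-Bangla."""
    seen, kept = set(), []
    for doc in documents:
        digest = hashlib.sha1(doc.strip().encode("utf-8")).hexdigest()
        if digest in seen:
            continue  # exact duplicate of an earlier page
        seen.add(digest)
        if bangla_ratio(doc) >= min_ratio:
            kept.append(doc)
    return kept
```

A real pipeline would also do near-duplicate detection (e.g., MinHash) and use a trained classifier, which handles romanized and code-switched text far better than a script-ratio heuristic.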

We trained a WordPiece (Wu et al., 2016) vocabulary of $32k$ subword tokens on the resulting corpus with a 400-character alphabet, kept larger than the native Bangla alphabet to capture code-switching (Poplack, 1980) and to allow romanized Bangla content for better generalization. We limited the length of a training sample to 512 tokens and did not cross document boundaries (Liu et al., 2019) while creating a data point. After tokenization, we had 7.18M samples with an average length of 304.14 tokens, containing 2.18B tokens in total; hence we named the dataset 'Bangla2B+'.
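
The sample-creation rule (a 512-token cap without crossing document boundaries) amounts to chunking each tokenized document independently; a minimal sketch with a hypothetical function name:

```python
def make_samples(tokenized_docs, max_len=512):
    """Split each tokenized document into training samples of at most
    max_len tokens, never packing tokens from two documents together."""
    samples = []
    for tokens in tokenized_docs:
        for start in range(0, len(tokens), max_len):
            samples.append(tokens[start:start + max_len])
    return samples
```

For example, a 1000-token document and a 300-token document yield samples of 512, 488, and 300 tokens; the tail of one document is never concatenated with the head of the next.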

# 2.3 Pretraining Objective

Self-supervised pretraining objectives leverage unlabeled data. For example, BERT (Devlin et al., 2019) was pretrained with masked language modeling (MLM) and next sentence prediction (NSP). Several works built on top of this: RoBERTa (Liu et al., 2019) removed NSP and pretrained with longer sequences, SpanBERT (Joshi et al., 2020a) masked contiguous spans of tokens, while works like XLNet (Yang et al., 2019) introduced objectives like factorized language modeling.

We pretrained BanglaBERT using ELECTRA (Clark et al., 2020b) with the Replaced Token Detection (RTD) objective, where a generator and a discriminator model are trained jointly. The generator is fed as input a sequence with a portion of the tokens masked (15% in our case) and is asked to predict them using the rest of the input (i.e., standard MLM). The masked tokens are then replaced by tokens sampled from the generator's output distribution for the corresponding masks, and the discriminator has to predict whether each token is from the original sequence or not. After pretraining, the discriminator is used for fine-tuning. Clark et al. (2020b) argued that RTD back-propagates loss from all tokens of a sequence, in contrast to the 15% of tokens covered by the MLM objective, giving the model more signals to learn from. Evidently, ELECTRA achieves downstream performance comparable to RoBERTa or XLNet with only a quarter of their training time. This computational efficiency motivated us to use ELECTRA for our implementation of BanglaBERT.
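
The RTD data construction can be illustrated with a toy sketch. A uniform random sampler stands in for the trained generator here, so this shows only the input/label format, not the actual joint training; all names are hypothetical:

```python
import random

def make_rtd_example(tokens, vocab, mask_frac=0.15, seed=0):
    """Build a toy Replaced Token Detection example: choose 15% of the
    positions, fill them with sampled tokens (from a trained generator in
    ELECTRA; random here), and label a position 1 only if the sampled
    token actually differs from the original."""
    rng = random.Random(seed)
    n_mask = max(1, int(len(tokens) * mask_frac))
    masked_pos = set(rng.sample(range(len(tokens)), n_mask))
    corrupted, labels = [], []
    for i, tok in enumerate(tokens):
        if i in masked_pos:
            new_tok = rng.choice(vocab)
            corrupted.append(new_tok)
            labels.append(int(new_tok != tok))  # sampled == original counts as "original"
        else:
            corrupted.append(tok)
            labels.append(0)
    return corrupted, labels
```

The discriminator is then trained with a per-position binary loss over `labels`, which is why every token of the sequence, not just the masked 15%, contributes a learning signal.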

# 2.4 Model Architecture & Hyperparameters

We pretrained the base ELECTRA model (a 12-layer Transformer encoder with 768 embedding size, 768 hidden size, 12 attention heads, 3072 feed-forward size, generator-to-discriminator ratio $\frac{1}{3}$, 110M parameters) with a batch size of 256 for 2.5M steps on a v3-8 TPU instance on GCP. We used the Adam (Kingma and Ba, 2015) optimizer with a 2e-4 learning rate and a linear warmup of 10k steps.
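
As a rough sanity check (not the authors' computation), the stated dimensions are consistent with the ~110M parameter figure. The estimate below counts only the main weight matrices of a BERT/ELECTRA-style encoder, ignoring biases and LayerNorm; the function name is hypothetical:

```python
def encoder_params(vocab=32_000, hidden=768, layers=12, ffn=3072, max_pos=512):
    """Approximate weight count of a BERT/ELECTRA-base-style encoder."""
    embeddings = (vocab + max_pos + 2) * hidden  # token + position + 2 segment embeddings
    attention = 4 * hidden * hidden              # Q, K, V, and output projections
    feed_forward = 2 * hidden * ffn              # up- and down-projections
    return embeddings + layers * (attention + feed_forward)
```

With the 32k vocabulary this comes to roughly 110M, matching the reported size.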

<table><tr><td>Task</td><td>Corpus</td><td>|Train|</td><td>|Dev|</td><td>|Test|</td><td>Metric</td><td>Domain</td></tr><tr><td>Sentiment Classification</td><td>SentNoB</td><td>12,575</td><td>1,567</td><td>1,567</td><td>Macro-F1</td><td>Social Media</td></tr><tr><td>Natural Language Inference</td><td>BNLI</td><td>381,449</td><td>2,419</td><td>4,895</td><td>Accuracy</td><td>Miscellaneous</td></tr><tr><td>Named Entity Recognition</td><td>MultiCoNER</td><td>14,500</td><td>800</td><td>800</td><td>Micro-F1</td><td>Miscellaneous</td></tr><tr><td>Question Answering</td><td>BQA, TyDiQA</td><td>127,771</td><td>2,502</td><td>2,504</td><td>EM/F1</td><td>Wikipedia</td></tr></table>

Table 1: Statistics of the Bangla Language Understanding Evaluation (BLUB) benchmark.

# 2.5 BanglishBERT

Often, labeled data for a task may be unavailable in a low-resource language yet abundant in high-resource languages like English. In these scenarios, zero-shot cross-lingual transfer (Artetxe and Schwenk, 2019) provides an effective way to still train a multilingual model on that task using the high-resource languages and transfer to low-resource ones. To this end, we pretrained a bilingual model, named BanglishBERT, on Bangla and English together using the same set of hyperparameters mentioned earlier. We used the BERT pretraining corpus as the English data and trained a joint bilingual vocabulary (each language having $\sim 16\mathrm{k}$ tokens). We upsampled the Bangla data during training to equalize the participation of both languages.
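
The upsampling step reduces to choosing a duplication factor for the smaller corpus so that both languages contribute roughly equally per epoch; a trivial sketch (hypothetical helper, token counts are placeholders):

```python
def upsampling_factor(small_corpus_tokens: float, large_corpus_tokens: float) -> int:
    """How many times to repeat the smaller corpus so that, per epoch,
    both corpora contribute roughly the same number of tokens."""
    return max(1, round(large_corpus_tokens / small_corpus_tokens))
```

For instance, a 1.1B-token corpus paired with a 3.3B-token corpus would be repeated three times.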
|
| 63 |
+
|
| 64 |
+
# 3 The Bangla Language Understanding Benchmark (BLUB)
|
| 65 |
+
|
| 66 |
+
Many works have studied different Bangla NLU tasks in isolation, e.g., sentiment classification (Das and Bandyopadhyay, 2010; Sharfuddin et al., 2018; Tripto and Ali, 2018), semantic textual similarity (Shajalal and Aono, 2018), parts-of-speech (PoS) tagging (Alam et al., 2016), named entity recognition (NER) (Ashrafi et al., 2020). However, Bangla NLU has not yet had a comprehensive, unified study. Motivated by the surge of NLU research brought about by benchmarks in other languages, e.g., English (Wang et al., 2018), French (Le et al., 2020), Korean (Park et al., 2021), we establish the first-ever Bangla Language Understanding Benchmark (BLUB). NLU generally comprises three types of tasks: text classification, sequence labeling, and text span prediction. Text classification tasks can further be sub-divided into single-sequence and sequence-pair classification. Therefore, we consider a total of four tasks for BLUB. For each task type, we carefully select one downstream task dataset. We emphasize the quality and open availability of the datasets while making the selection. We briefly mention them below.
|
| 67 |
+
|
| 68 |
+
1. Single-Sequence Classification Sentiment classification is perhaps the most-studied Bangla NLU task, with some of the earlier works dating back over a decade (Das and Bandyopadhyay, 2010). Hence, we chose it as the single-sequence classification task. However, most Bangla sentiment classification datasets are not publicly available. We could find only two public datasets: BYSA (Tripto and Ali, 2018) and SentNoB (Islam et al., 2021). We found BYSA to contain many duplicates; even worse, many of the duplicates had conflicting labels. SentNoB had better quality and covered a broader set of domains, making the classification task more challenging. Hence, we opted to use the latter.
|
| 69 |
+
|
| 70 |
+
2. Sequence-pair Classification In contrast to single-sequence classification, there has been a dearth of sequence-pair classification work in Bangla. We found work on semantic textual similarity (Shajalal and Aono, 2018), but the dataset is not publicly available. As such, we curated a new Bangla Natural Language Inference (BNLI) dataset for sequence-pair classification. We chose NLI as the representative task due to its fundamental importance in NLU. Given two sentences, a premise and a hypothesis, as input, a model is tasked to predict whether the hypothesis is entailed by, contradicts, or is neutral to the premise. We used the same curation procedure as the XNLI (Conneau et al., 2018) dataset: we translated the MultiNLI (Williams et al., 2018) training data using the English-to-Bangla translation model of Hasan et al. (2020) and had the evaluation sets translated by expert human translators. Since automatic translation can introduce errors, we used the Language-Agnostic BERT Sentence Embeddings (LaBSE) (Feng et al., 2020) of the translations and the original sentences to compute their similarity and discarded all sentences below a similarity threshold of 0.70. Moreover, to ensure good-quality human translation, we used similar quality assurance strategies as Guzmán et al. (2019).
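The embedding-based filtering step can be sketched as below. The `embed` argument and the toy vectors are stand-ins for illustration; the paper uses LaBSE embeddings of the source and translated sentences with a 0.70 threshold.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def filter_translations(pairs, embed, threshold=0.70):
    # Keep only (source, translation) pairs whose sentence embeddings
    # are similar enough. `embed` maps a sentence to a vector; the
    # paper uses LaBSE, but any sentence encoder fits this sketch.
    return [(src, tgt) for src, tgt in pairs
            if cosine(embed(src), embed(tgt)) >= threshold]

# Toy stand-in for a sentence encoder (illustration only).
toy_vecs = {
    "a": np.array([1.0, 0.0]),   # source sentence
    "b": np.array([1.0, 0.1]),   # faithful translation: high similarity
    "c": np.array([0.0, 1.0]),   # bad translation: near-zero similarity
}
kept = filter_translations([("a", "b"), ("a", "c")], toy_vecs.get)
print(kept)  # → [('a', 'b')]
```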
|
| 71 |
+
|
| 72 |
+
<table><tr><td>Models</td><td>|Params.|</td><td>SC</td><td>NLI</td><td>NER</td><td>QA</td><td>BLUB Score</td></tr><tr><td colspan="7">Zero-shot cross-lingual transfer</td></tr><tr><td>mBERT</td><td>180M</td><td>27.05</td><td>62.22</td><td>39.27</td><td>59.01/64.18</td><td>50.35</td></tr><tr><td>XLM-R (base)</td><td>270M</td><td>42.03</td><td>72.18</td><td>45.37</td><td>55.03/61.83</td><td>55.29</td></tr><tr><td>XLM-R (large)</td><td>550M</td><td>49.49</td><td>78.13</td><td>56.48</td><td>71.13/77.70</td><td>66.59</td></tr><tr><td>BanglishBERT</td><td>110M</td><td>48.39</td><td>75.26</td><td>55.56</td><td>72.87/78.63</td><td>66.14</td></tr><tr><td colspan="7">Supervised fine-tuning</td></tr><tr><td>mBERT</td><td>180M</td><td>67.59</td><td>75.13</td><td>68.97</td><td>67.12/72.64</td><td>70.29</td></tr><tr><td>XLM-R (base)</td><td>270M</td><td>69.54</td><td>78.46</td><td>73.32</td><td>68.09/74.27</td><td>72.82</td></tr><tr><td>XLM-R (large)</td><td>550M</td><td>70.97</td><td>82.40</td><td>78.39</td><td>73.15/79.06</td><td>76.79</td></tr><tr><td>IndicBERT</td><td>18M</td><td>68.41</td><td>77.11</td><td>54.13</td><td>50.84/57.47</td><td>61.59</td></tr><tr><td>sahajBERT</td><td>18M</td><td>71.12</td><td>76.92</td><td>70.94</td><td>65.48/70.69</td><td>71.03</td></tr><tr><td>BanglishBERT</td><td>110M</td><td>70.61</td><td>80.95</td><td>76.28</td><td>72.43/78.40</td><td>75.73</td></tr><tr><td>BanglaBERT</td><td>110M</td><td>72.89</td><td>82.80</td><td>77.78</td><td>72.63/79.34</td><td>77.09</td></tr></table>
|
| 73 |
+
|
| 74 |
+
Table 2: Performance comparison of pretrained models on different downstream tasks. Scores in bold have a statistically significant $(p < 0.05)$ difference from the others under bootstrap sampling (Koehn, 2004).
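The bootstrap significance test referenced in the caption (Koehn, 2004) can be sketched as below; the resample count, toy per-example scores, and tie handling are illustrative choices, not details from the paper.

```python
import random

def paired_bootstrap(scores_a, scores_b, n_resamples=1000, seed=0):
    # Paired bootstrap resampling (Koehn, 2004): resample evaluation
    # examples with replacement and count how often system A's total
    # score beats system B's on the resampled set.
    rng = random.Random(seed)
    n = len(scores_a)
    wins = 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]
        if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
            wins += 1
    return wins / n_resamples

a = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]  # per-example correctness, system A
b = [1, 0, 0, 0, 1, 0, 1, 0, 0, 1]  # per-example correctness, system B
p_better = paired_bootstrap(a, b)
# A is significantly better at p < 0.05 when p_better >= 0.95
```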
|
| 75 |
+
|
| 76 |
+
3. Sequence Labeling In this task, every token of a text sequence has to be classified. Named Entity Recognition (NER) and Parts-of-Speech (PoS) tagging are two of the most prominent sequence labeling tasks. We chose the Bangla portion of the SemEval 2022 MultiCoNER (Malmasi et al., 2022) dataset for BLUB.
|
| 77 |
+
|
| 78 |
+
4. Span Prediction Extractive question answering is a standard choice for text span prediction. Similar to BNLI, we machine-translated the SQuAD 2.0 (Rajpurkar et al., 2018) dataset and used it as the training set (BQA). For validation and test, we used the Bangla portion of the TyDiQA $^5$ (Clark et al., 2020a) dataset. We posed the task analogously to SQuAD 2.0: presented with a text passage and a question, a model has to predict whether or not the question is answerable. If it is, the model has to find the minimal text span that answers the question.
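The SQuAD 2.0-style decoding described above can be sketched as follows. Treating position 0 as the no-answer slot and thresholding the null score are standard SQuAD 2.0 conventions assumed here, not details given in the text; the logits are toy values.

```python
import numpy as np

def best_span(start_logits, end_logits, max_len=30, null_threshold=0.0):
    # SQuAD 2.0-style decoding: position 0 acts as the "no answer" slot
    # (a standard convention, assumed here). Return None when the null
    # score beats the best span score by more than the threshold.
    start, end = np.asarray(start_logits), np.asarray(end_logits)
    null_score = start[0] + end[0]
    best, best_score = None, -np.inf
    for i in range(1, len(start)):
        for j in range(i, min(i + max_len, len(end))):
            if start[i] + end[j] > best_score:
                best, best_score = (i, j), start[i] + end[j]
    return None if null_score - best_score > null_threshold else best

print(best_span([0, 5, 0, 0], [0, 0, 6, 0]))  # → (1, 2)
print(best_span([10, 0, 0], [10, 0, 0]))      # → None
```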
|
| 79 |
+
|
| 80 |
+
We present detailed statistics of the BLUB benchmark in Table 1.
|
| 81 |
+
|
| 82 |
+
# 4 Experiments & Results
|
| 83 |
+
|
| 84 |
+
Setup We fine-tuned BanglaBERT and BanglishBERT on the four downstream tasks and compared them with several multilingual models: mBERT (Devlin et al., 2019), XLM-R base and large (Conneau et al., 2020), and IndicBERT (Kakwani et al., 2020), a multilingual model for Indian languages; and sahajBERT (Diskin et al., 2021), an ALBERT-based (Lan et al., 2020) PLM for Bangla.
|
| 85 |
+
|
| 86 |
+
All pretrained models were fine-tuned for 3-20 epochs with a batch size of 32, and the learning rate was tuned over $\{2\mathrm{e} - 5,3\mathrm{e} - 5,4\mathrm{e} - 5,5\mathrm{e} - 5\}$ . The final models were selected based on the validation performance after each epoch. We performed fine-tuning with three random seeds and reported the average scores in Table 2. We reported the average performance across all tasks as the BLUB score.
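The tuning and reporting protocol can be sketched as below; the `results` structure (best validation score per learning rate and seed) and its values are hypothetical.

```python
import statistics

# Hypothetical results[lr][seed] -> best validation score across epochs.
results = {
    2e-5: {0: 70.1, 1: 70.5, 2: 69.9},
    3e-5: {0: 71.0, 1: 71.4, 2: 70.8},
}

def select_and_report(results):
    # Pick the learning rate with the best seed-averaged validation
    # score and report that seed-averaged score.
    means = {lr: statistics.mean(scores.values())
             for lr, scores in results.items()}
    best_lr = max(means, key=means.get)
    return best_lr, means[best_lr]

best_lr, avg_score = select_and_report(results)
print(best_lr)  # → 3e-05
```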
|
| 87 |
+
|
| 88 |
+
Zero-shot Transfer We show the zero-shot cross-lingual transfer results of the multilingual models fine-tuned on the English counterpart of each dataset (SentNoB has no English equivalent; hence we used the Stanford Sentiment Treebank (Socher et al., 2013) for the sentiment classification task) in Table 2. In the zero-shot transfer setting, BanglishBERT achieves strong cross-lingual performance over similar-sized models and falls only marginally short of XLM-R (large). This is an expected outcome since cross-lingual effectiveness depends strongly on model size (K et al., 2020).
|
| 89 |
+
|
| 90 |
+
Supervised Fine-tuning In the supervised fine-tuning setup, BanglaBERT outperformed the multilingual models and the monolingual sahajBERT on all tasks, achieving a BLUB score of 77.09, even coming head-to-head with XLM-R (large). On the other hand, BanglishBERT marginally lags behind BanglaBERT and XLM-R (large). BanglaBERT is not only superior in performance but also substantially more compute- and memory-efficient. For instance, it may seem that sahajBERT is more efficient than BanglaBERT due to its smaller size, but it takes 2.40-3.33x the time and 2.07-3.54x the memory of BanglaBERT
|
| 91 |
+
|
| 92 |
+
to fine-tune.<sup>6</sup>
|
| 93 |
+
|
| 94 |
+

|
| 95 |
+
|
| 96 |
+

|
| 97 |
+
Figure 1: Sample-efficiency tests with SC and NLI.
|
| 98 |
+
|
| 99 |
+
Sample Efficiency It is often challenging to annotate training samples in real-world scenarios, especially for low-resource languages like Bangla. So, in addition to compute- and memory-efficiency, sample-efficiency (Howard and Ruder, 2018) is another desirable property of PLMs. To assess the sample efficiency of BanglaBERT, we limit the number of training samples and observe how it fares against other models. We compare it with XLM-R (large) and plot their performance on the SC and NLI tasks<sup>7</sup> for different sample sizes in Figure 1.
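The sample-limiting protocol can be sketched as below; nesting the subsets (each larger training set containing the smaller ones) is an assumption for illustration, as the paper does not specify how the subsets were drawn.

```python
import random

def subsample(dataset, sizes=(500, 1000, 2000), seed=0):
    # Build nested training subsets of increasing size from one shuffle,
    # so each larger subset contains the smaller ones (an assumption;
    # the paper does not specify nesting).
    rng = random.Random(seed)
    shuffled = dataset[:]
    rng.shuffle(shuffled)
    return {k: shuffled[:k] for k in sizes if k <= len(shuffled)}

subsets = subsample(list(range(3000)))
print(sorted(subsets))  # → [500, 1000, 2000]
```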
|
| 100 |
+
|
| 101 |
+
Results show that with fewer training samples $(\leq 1k)$ , BanglaBERT performs substantially better (by 2-9% on SC and 6-10% on NLI, $p < 0.05$ ) than XLM-R (large), making it more practically applicable for resource-scarce downstream tasks.
|
| 102 |
+
|
| 103 |
+
# 5 Conclusion & Future Works
|
| 104 |
+
|
| 105 |
+
Creating language-specific models is often infeasible for low-resource languages lacking ample data. Hence, researchers are compelled to use multilingual models for languages that do not have
|
| 106 |
+
|
| 107 |
+
strong pretrained models. To this end, we introduced BanglaBERT and BanglishBERT, two NLU models for Bangla, a widely spoken yet low-resource language. We presented new downstream datasets for NLI and QA, and established the BLUB benchmark, setting new state-of-the-art results with BanglaBERT. In the future, we will include other Bangla NLU benchmarks (e.g., dependency parsing (de Marneffe et al., 2021)) in BLUB and investigate the benefits of initializing Bangla NLG models from BanglaBERT.
|
| 108 |
+
|
| 109 |
+
# Acknowledgements
|
| 110 |
+
|
| 111 |
+
We would like to thank the Research and Innovation Centre for Science and Engineering (RISE), BUET, for funding the project, and Intelligent Machines Limited and Google TensorFlow Research Cloud (TRC) Program for providing cloud support.
|
| 112 |
+
|
| 113 |
+
# Ethical Considerations
|
| 114 |
+
|
| 115 |
+
Dataset and Model Release The Copyright Act, 2000<sup>8</sup> of Bangladesh allows the reproduction and public release of copyrighted materials for non-commercial research purposes. As a transformative research work, we will release BanglaBERT under a non-commercial license. Furthermore, we will release only the portion of the pretraining data whose distribution we know will not cause any copyright infringement. The downstream task datasets can all be made publicly available under a similar non-commercial license.
|
| 116 |
+
|
| 117 |
+
Quality Control in Human Translation Translations were done by expert translators who provide translation services for renowned Bangla newspapers. Each translated sentence was further assessed for quality by another expert. If found to be of low quality, it was again translated by the original translator. The sample was then discarded altogether if found to be of low quality again. Fewer than 100 samples were discarded in this process. Translators were paid as per standard rates in local currencies.
|
| 118 |
+
|
| 119 |
+
Text Content We tried to minimize offensive text in the pretraining data by explicitly crawling sites where such content would be minimal. However, we cannot guarantee that there is absolutely no objectionable content present and therefore recommend using the model carefully, especially for text generation purposes.
|
| 120 |
+
|
| 121 |
+
# References
|
| 122 |
+
|
| 123 |
+
Firoj Alam, Shammur Absar Chowdhury, and Sheak Rashed Haider Noori. 2016. Bidirectional LSTMs - CRFs networks for Bangla POS tagging. In 2016 19th International Conference on Computer and Information Technology (ICCIT), pages 377-382. IEEE.
|
| 124 |
+
Wissam Antoun, Fady Baly, and Hazem Hajj. 2020. AraBERT: Transformer-based model for Arabic language understanding. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection, pages 9-15, Marseille, France. European Language Resource Association.
|
| 125 |
+
Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics, 7:597-610.
|
| 126 |
+
I. Ashrafi, M. Mohammad, A. S. Mauree, G. M. A. Nijhum, R. Karim, N. Mohammed, and S. Momen. 2020. Banner: A cost-sensitive contextualized model for Bangla named entity recognition. IEEE Access, 8:58206-58226.
|
| 127 |
+
Jose Canete, Gabriel Chaperon, Rodrigo Fuentes, and Jorge Pérez. 2020. Spanish pre-trained BERT model and evaluation data. In Proceedings of the Practical ML for Developing Countries Workshop at ICLR 2020, PML4DC.
|
| 128 |
+
Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020a. TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages. Transactions of the Association for Computational Linguistics, 8:454-470.
|
| 129 |
+
Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020b. ELECTRA: pretraining text encoders as discriminators rather than generators. In 8th International Conference on Learning Representations, ICLR 2020, April, 2020, Online.
|
| 130 |
+
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.
|
| 131 |
+
Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining. In Advances in Neural Information Processing Systems, volume 32, pages 7059–7069. Curran Associates, Inc.
|
| 132 |
+
|
| 133 |
+
Alexis Conneau, Rudy Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475-2485, Brussels, Belgium. Association for Computational Linguistics.
|
| 134 |
+
Amitava Das and Sivaji Bandyopadhyay. 2010. Phrase-level polarity identification for bangla. Int. J. Comput. Linguist. Appl.(IJCLA), 1(1-2):169-182.
|
| 135 |
+
Avishek Das, Omar Sharif, Mohammed Moshiul Hoque, and Iqbal H. Sarker. 2021. Emotion classification in a resource constrained language using transformer-based approach. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 150-158, Online. Association for Computational Linguistics.
|
| 136 |
+
Marie-Catherine de Marneffe, Christopher D. Manning, Joakim Nivre, and Daniel Zeman. 2021. Universal Dependencies. Computational Linguistics, 47(2):255-308.
|
| 137 |
+
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
|
| 138 |
+
Michael Diskin, Alexey Bukhtiyarov, Max Ryabinin, Lucile Saulnier, Quentin Lhoest, Anton Sinitsin, Dmitry Popov, Dmitry Pyrkin, Maxim Kashirin, Alexander Borzunov, Albert Villanova del Moral, Denis Mazur, Ilia Kobelev, Yacine Jernite, Thomas Wolf, and Gennady Pekhimenko. 2021. Distributed deep learning in open collaborations. arXiv:2106.10207.
|
| 139 |
+
Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2020. Language-agnostic BERT sentence embedding. arXiv:2007.01852.
|
| 140 |
+
Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. 2019. The FLORES evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6098-6111, Hong Kong, China. Association for Computational Linguistics.
|
| 141 |
+
Tahmid Hasan, Abhik Bhattacharjee, Kazi Samin, Masum Hasan, Madhusudan Basak, M. Sohel Rahman,
|
| 142 |
+
|
| 143 |
+
and Rifat Shahriyar. 2020. Not low-resource anymore: Aligner ensembling, batch filtering, and new datasets for Bengali-English machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2612–2623, Online. Association for Computational Linguistics.
|
| 144 |
+
Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328-339, Melbourne, Australia. Association for Computational Linguistics.
|
| 145 |
+
Khondoker Ittehadul Islam, Sudipta Kar, Md Saiful Islam, and Mohammad Ruhul Amin. 2021. SentNoB: A dataset for analysing sentiment on noisy Bangla texts. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3265-3271, Punta Cana, Dominican Republic. Association for Computational Linguistics.
|
| 146 |
+
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020a. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77.
|
| 147 |
+
Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020b. The state and fate of linguistic diversity and inclusion in the NLP world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282-6293, Online. Association for Computational Linguistics.
|
| 148 |
+
Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 427-431, Valencia, Spain. Association for Computational Linguistics.
|
| 149 |
+
Karthikeyan K, Zihan Wang, Stephen Mayhew, and Dan Roth. 2020. Cross-lingual ability of multilingual BERT: An empirical study. In 8th International Conference on Learning Representations, ICLR 2020, April, 2020, Online.
|
| 150 |
+
Divyanshu Kakwani, Anoop Kunchukuttan, Satish Golla, Gokul N.C., Avik Bhattacharyya, Mitesh M. Khapra, and Pratyush Kumar. 2020. IndicNLPSuite: Monolingual corpora, evaluation benchmarks and pre-trained multilingual language models for Indian languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4948-4961, Online. Association for Computational Linguistics.
|
| 151 |
+
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015.
|
| 152 |
+
|
| 153 |
+
Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388-395, Barcelona, Spain. Association for Computational Linguistics.
|
| 154 |
+
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In 8th International Conference on Learning Representations, ICLR 2020, April, 2020, Online.
|
| 155 |
+
Hang Le, Loic Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoit Crabbé, Laurent Besacier, and Didier Schwab. 2020. FlauBERT: Unsupervised language model pre-training for French. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 2479-2490, Marseille, France. European Language Resources Association.
|
| 156 |
+
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv:1907.11692.
|
| 157 |
+
Alexandra Luccioni and Joseph Viviano. 2021. What's in the box? an analysis of undesirable content in the Common Crawl corpus. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 182-189, Online. Association for Computational Linguistics.
|
| 158 |
+
Shervin Malmasi, Anjie Fang, Besnik Fetahu, Sudipta Kar, and Oleg Rokhlenko. 2022. MultiCoNER: a Large-scale Multilingual dataset for Complex Named Entity Recognition.
|
| 159 |
+
Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric de la Clergerie, Djamé Seddah, and Benoit Sagot. 2020. CamemBERT: a tasty French language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7203-7219, Online. Association for Computational Linguistics.
|
| 160 |
+
Sungjoon Park, Jihyung Moon, Sungdong Kim, Won Ik Cho, Ji Yoon Han, Jangwon Park, Chisung Song, Junseong Kim, Youngsook Song, Taehwan Oh, Joohong Lee, Juhyun Oh, Sungwon Lyu, Younghoon Jeong, Inkwon Lee, Sangwoo Seo, Dongjun Lee, Hyunwoo Kim, Myeonghwa Lee, Seongbo Jang, Seungwon Do, Sunkyoung Kim, Kyungtae Lim, Jongwon Lee, Kyumin Park, Jamin Shin, Seonghyun Kim, Lucy Park, Alice Oh, JungWoo Ha, and Kyunghyun Cho. 2021. KLUE: Korean language understanding evaluation. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).
|
| 161 |
+
|
| 162 |
+
Marco Polignano, Pierpaolo Basile, Marco de Gemmis, Giovanni Semeraro, and Valerio Basile. 2019. AlBERTo: Italian BERT language understanding model for NLP challenging tasks based on tweets. In Proceedings of the Sixth Italian Conference on Computational Linguistics, Bari, Italy, November 13-15, 2019, CEUR Workshop Proceedings.
|
| 163 |
+
Shana Poplack. 1980. Sometimes I'll start a sentence in Spanish Y TERMINO EN ESPANOL: toward a typology of code-switching. Linguistics, 18(7-8):581-618.
|
| 164 |
+
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8).
|
| 165 |
+
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784-789, Melbourne, Australia. Association for Computational Linguistics.
|
| 166 |
+
Md Shajalal and Masaki Aono. 2018. Semantic textual similarity in Bengali text. In 2018 International Conference on Bangla Speech and Language Processing (ICBSLP), pages 1-5. IEEE.
|
| 167 |
+
Abdullah Aziz Sharfuddin, Md Nafis Tihami, and Md Saiful Islam. 2018. A deep recurrent neural network with bilstm model for sentiment classification. In 2018 International Conference on Bangla Speech and Language Processing (ICBSLP), pages 1-4. IEEE.
|
| 168 |
+
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics.
|
| 169 |
+
Pedro Javier Ortiz Suárez, Benoit Sagot, and Laurent Romary. 2019. Asynchronous Pipeline for Processing Huge Corpora on Medium to Low Resource Infrastructures. In 7th Workshop on the Challenges in the Management of Large Corpora (CMLC-7), Cardiff, United Kingdom. Leibniz-Institut für Deutsche Sprache.
|
| 170 |
+
Nafis Irtiza Tripto and Mohammed Eunus Ali. 2018. Detecting multilabel sentiment and emotions from bangla youtube comments. In 2018 International Conference on Bangla Speech and Language Processing (ICBSLP), pages 1-6. IEEE.
|
| 171 |
+
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018.
|
| 172 |
+
|
| 173 |
+
GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.
|
| 174 |
+
Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. 2020. CCNet: Extracting high quality monolingual datasets from web crawl data. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4003-4012, Marseille, France. European Language Resources Association.
|
| 175 |
+
Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguistics.
|
| 176 |
+
Shijie Wu and Mark Dredze. 2020. Are all languages created equal in multilingual BERT? In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 120-130, Online. Association for Computational Linguistics.
|
| 177 |
+
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv:1609.08144.
|
| 178 |
+
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, volume 32, pages 5753-5763. Curran Associates, Inc.
|
| 179 |
+
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), ICCV '15, page 19-27, USA. IEEE Computer Society.
|
| 180 |
+
|
| 181 |
+
# Appendix
|
| 182 |
+
|
| 183 |
+
# Pretraining Data Sources
|
| 184 |
+
|
| 185 |
+
We used the following sites for data collection. We categorize the sites into six types:
|
| 186 |
+
|
| 187 |
+
# Encyclopedia:
|
| 188 |
+
|
| 189 |
+
- bn.banglapedia.org
|
| 190 |
+
- bn.wikipedia.org
|
| 191 |
+
- songgramernotepad.com
|
| 192 |
+
|
| 193 |
+
# News:
|
| 194 |
+
|
| 195 |
+
- anandabazar.com
|
| 196 |
+
- arthoniteerkagoj.com
|
| 197 |
+
- bangla.24livenewpaper.com
|
| 198 |
+
- bangla.bdnews24.com
|
| 199 |
+
- bangla.dhakatribune.com
|
| 200 |
+
- bangla.hindustantimes.com
|
| 201 |
+
- bangladeshershkela.com
|
| 202 |
+
- banglanews24.com
|
| 203 |
+
- banglatribune.com
|
| 204 |
+
- bbc.com
|
| 205 |
+
- bd-journal.com
|
| 206 |
+
- bd-pratidin.com
|
| 207 |
+
- bd24live.com
|
| 208 |
+
- bengali.indianexpress.com
|
| 209 |
+
- bigganprojukti.com
|
| 210 |
+
- bonikbarta.net
|
| 211 |
+
- chakarianews.com
|
| 212 |
+
- channelonline.com
|
| 213 |
+
- ctgtimes.com
|
| 214 |
+
- ctn24.com
|
| 215 |
+
- daily-bangladesh.com
|
| 216 |
+
- dailyagnishikha.com
|
| 217 |
+
- dainikazadi.net
|
| 218 |
+
- dainikdinkal.net
|
| 219 |
+
- dailyfulki.com
|
| 220 |
+
- dailyinqilab.com
|
| 221 |
+
- dailynayadiganta.com
|
| 222 |
+
- dailysangram.com
|
| 223 |
+
- dailysylhet.com
|
| 224 |
+
- dainikamadershomoy.com
|
| 225 |
+
- dainikshiksha.com
|
| 226 |
+
- dhakardak-bd.com
|
| 227 |
+
- dmpnews.org
|
| 228 |
+
- dw.com
|
| 229 |
+
- eisamay.indiatimes.com
|
| 230 |
+
- ittefaq.com.bd
|
| 231 |
+
- jagonews24.com
|
| 232 |
+
- jugantor.com
|
| 233 |
+
- kalerkantho.com
|
| 234 |
+
- manobkantha.com.bd
|
| 235 |
+
- mzamin.com
|
| 236 |
+
- ntvbd.com
|
| 237 |
+
- onnodristy.com
|
| 238 |
+
|
| 239 |
+
- pavilion.com.bd
|
| 240 |
+
- prothomalo.com
|
| 241 |
+
- protidinersangbad.com
|
| 242 |
+
- risingbd.com
|
| 243 |
+
- rtvonline.com
|
| 244 |
+
- samakal.com
|
| 245 |
+
- sangbadpratidin.in
|
| 246 |
+
- somoyerkonthosor.com
|
| 247 |
+
- somoynews.tv
|
| 248 |
+
- tbsnews.net
|
| 249 |
+
- teknfnews.com
|
| 250 |
+
- thedailystar.net
|
| 251 |
+
- voabangla.com
|
| 252 |
+
- zeenews.india.com
|
| 253 |
+
- zoombangla.com
|
| 254 |
+
|
| 255 |
+
# Blogs:
|
| 256 |
+
|
| 257 |
+
- amrabondhu.com
|
| 258 |
+
- banglablog.in
|
| 259 |
+
- bigganblog.org
|
| 260 |
+
- biggani.org
|
| 261 |
+
- bigyan.org.in
|
| 262 |
+
- bishorgo.com
|
| 263 |
+
- cadetcollegeblog.com
|
| 264 |
+
- choturmatrik.com
|
| 265 |
+
- horoppa.wordpress.com
|
| 266 |
+
- muktangon.blogspot.com
|
| 267 |
+
- roar.media/bangla
|
| 268 |
+
- sachalayatan.com
|
| 269 |
+
- shodalap.org
|
| 270 |
+
- shopnobaz.net
|
| 271 |
+
- somewhereinblog.net
|
| 272 |
+
- subeen.com
|
| 273 |
+
- tunerpage.com
|
| 274 |
+
- tutobd.com
|
| 275 |
+
|
| 276 |
+
# E-books/Stories:
|
| 277 |
+
|
| 278 |
+
- banglaepub.github.io
|
| 279 |
+
- bengali.pratilipi.com
|
| 280 |
+
- bn.wikisource.org
|
| 281 |
+
- ebangalibrary.com
|
| 282 |
+
- eboipotro.github.io
|
| 283 |
+
- golpokobita.com
|
| 284 |
+
- kaliokalam.com
|
| 285 |
+
- shirisherdalpala.net
|
| 286 |
+
- tagoreweb.in
|
| 287 |
+
|
| 288 |
+
# Social Media/Forums:
|
| 289 |
+
|
| 290 |
+
- banglacricket.com
|
| 291 |
+
- bn.globalvoices.org
|
| 292 |
+
- helpfulhub.com
|
| 293 |
+
- nirbik.com
|
| 294 |
+
- pchelplinebd.com
|
| 295 |
+
- techtunes.io
|
| 296 |
+
|
| 297 |
+
# Miscellaneous:
|
| 298 |
+
|
| 299 |
+
- banglasonglyric.com
|
| 300 |
+
- bdlaws.minlaw.gov.bd
|
| 301 |
+
- bdup24.com
|
| 302 |
+
- bengalisongslyrics.com
|
| 303 |
+
- dakghar.org
|
| 304 |
+
- gdn8.com
|
| 305 |
+
- gunijan.org.bd
|
| 306 |
+
- hwr.org
|
| 307 |
+
- jakir.me
|
| 308 |
+
- jhankarmahbub.com
|
| 309 |
+
- JW.org
|
| 310 |
+
- lyricsbangla.com
|
| 311 |
+
- neonaloy.com
|
| 312 |
+
- porjotonlipi.com
|
| 313 |
+
- sasthabangla.com
|
| 314 |
+
- tanzil.net
|
| 315 |
+
|
| 316 |
+
We wrote custom crawlers for each of the sites above (except Wikipedia, for which we used the official dumps).
|
| 317 |
+
|
| 318 |
+
# Additional Sample Efficiency Tests
|
| 319 |
+
|
| 320 |
+
We plot the sample efficiency results of the NER and QA tasks in Figure 2.
|
| 321 |
+
|
| 322 |
+

|
| 323 |
+
|
| 324 |
+

|
| 325 |
+
Figure 2: Sample-efficiency tests with NER and QA.
|
| 326 |
+
|
| 327 |
+
Similar results are observed for the NER task, where BanglaBERT is more sample-efficient with $\leq 1k$ training samples. In the QA task, however, both models perform identically across all sample counts.
|
| 328 |
+
|
| 329 |
+
# Compute and Memory Efficiency Tests
|
| 330 |
+
|
| 331 |
+
To validate that BanglaBERT is more efficient in terms of memory and compute, we measured each model's training time and memory usage while fine-tuning on each task. All tests were done on a desktop machine with an 8-core Intel Core i7-11700K CPU and an NVIDIA RTX 3090 GPU. We used the same batch size, gradient accumulation steps, and sequence length for all models and tasks for a fair comparison. We report relative time and memory (GPU VRAM) usage, taking BanglaBERT's as the unit. The results are shown in Table 3. (For each model, we report the minimum and maximum values across the different tasks.)
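The relative-time measurement can be sketched as below; `fine_tune_fns` is a hypothetical mapping from model names to fine-tuning routines, and the sleep calls stand in for real workloads. Peak GPU VRAM would be tracked analogously via the training framework.

```python
import time

def relative_cost(fine_tune_fns, baseline="BanglaBERT"):
    # Time each model's fine-tuning routine with a monotonic clock and
    # express the result relative to the baseline model's time.
    times = {}
    for name, fn in fine_tune_fns.items():
        start = time.perf_counter()
        fn()
        times[name] = time.perf_counter() - start
    unit = times[baseline]
    return {name: t / unit for name, t in times.items()}

ratios = relative_cost({
    "BanglaBERT": lambda: time.sleep(0.01),      # stand-in workloads
    "XLM-R (large)": lambda: time.sleep(0.04),
})
```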
|
| 332 |
+
|
| 333 |
+
<table><tr><td>Model</td><td>Time</td><td>Memory Usage</td></tr><tr><td>mBERT</td><td>1.14x-1.92x</td><td>1.12x-2.04x</td></tr><tr><td>XLM-R (base)</td><td>1.29x-1.81x</td><td>1.04x-1.63x</td></tr><tr><td>XLM-R (large)</td><td>3.81x-4.49x</td><td>4.44x-5.55x</td></tr><tr><td>sahajBERT</td><td>2.40x-3.33x</td><td>2.07x-3.54x</td></tr><tr><td>BanglaBERT</td><td>1.00x</td><td>1.00x</td></tr></table>
|
| 334 |
+
|
| 335 |
+
Table 3: Compute and memory efficiency tests.
|
data/2021/2101_00xxx/2101.00204/layout.json
CHANGED
|
The diff for this file is too large to render.
See raw diff
|
|
|
data/2021/2101_00xxx/2101.00216/1322f2fe-d1ab-4e6d-b8cd-999150e9e3a0_content_list.json
CHANGED
|
@@ -1,3 +1,1993 @@
|
|
| 1 |
-
|
| 2 |
-
|
| 3 |
-
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "Brain Tumor Detection and Classification based on Hybrid Ensemble Classifier",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
168,
|
| 8 |
+
89,
|
| 9 |
+
831,
|
| 10 |
+
138
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Ginni Garg<sup>1</sup>, Ritu Garg",
|
| 17 |
+
"bbox": [
|
| 18 |
+
401,
|
| 19 |
+
156,
|
| 20 |
+
594,
|
| 21 |
+
172
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "Department of Computer Engineering",
|
| 28 |
+
"bbox": [
|
| 29 |
+
354,
|
| 30 |
+
175,
|
| 31 |
+
642,
|
| 32 |
+
189
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "National Institute of Technology, Kurukshetra, 136119",
|
| 39 |
+
"bbox": [
|
| 40 |
+
294,
|
| 41 |
+
191,
|
| 42 |
+
709,
|
| 43 |
+
205
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "gargginni01@gmail.com, ritu.59@gmail.com",
|
| 50 |
+
"bbox": [
|
| 51 |
+
326,
|
| 52 |
+
215,
|
| 53 |
+
671,
|
| 54 |
+
231
|
| 55 |
+
],
|
| 56 |
+
"page_idx": 0
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"text": "Abstract: To improve patient survival and treatment outcomes, early diagnosis of brain tumors is an essential task. It is a difficult task to evaluate the magnetic resonance imaging (MRI) images manually. Thus, there is a need for digital methods for tumor diagnosis with better accuracy. However, it is still a very challenging task in assessing their shape, volume, boundaries, tumor detection, size, segmentation, and classification. In this proposed work, we propose a hybrid ensemble method using Random Forest (RF), K-Nearest Neighbour, and Decision Tree (DT) (KNN-RF-DT) based on Majority Voting Method. It aims to calculate the area of the tumor region and classify brain tumors as benign and malignant. In the beginning, segmentation is done by using Otsu's Threshold method. Feature Extraction is done by using Stationary Wavelet Transform (SWT), Principle Component Analysis (PCA), and Gray Level Co-occurrence Matrix (GLCM), which gives thirteen features for classification. The classification is done by hybrid ensemble classifier (KNN-RF-DT) based on the Majority Voting method. Overall it aimed at improving the performance by traditional classifiers instead of going to deep learning. Traditional classifiers have an advantage over deep learning algorithms because they require small datasets for training and have low computational time complexity, low cost to the users, and can be easily adopted by less skilled people. Overall, our proposed method is tested upon dataset of 2556 images, which are used in 85:15 for training and testing respectively and gives good accuracy of $97.305\\%$ .",
|
| 61 |
+
"bbox": [
|
| 62 |
+
114,
|
| 63 |
+
237,
|
| 64 |
+
883,
|
| 65 |
+
523
|
| 66 |
+
],
|
| 67 |
+
"page_idx": 0
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "text",
|
| 71 |
+
"text": "Keyword: Otsu's Threshold; SWT; PCA; GLCM; hybrid ensemble Classifier (KNN-RF-DT) based on Majority Voting method.",
|
| 72 |
+
"bbox": [
|
| 73 |
+
116,
|
| 74 |
+
529,
|
| 75 |
+
883,
|
| 76 |
+
564
|
| 77 |
+
],
|
| 78 |
+
"page_idx": 0
|
| 79 |
+
},
|
| 80 |
+
{
|
| 81 |
+
"type": "text",
|
| 82 |
+
"text": "1. Introduction",
|
| 83 |
+
"text_level": 1,
|
| 84 |
+
"bbox": [
|
| 85 |
+
116,
|
| 86 |
+
570,
|
| 87 |
+
256,
|
| 88 |
+
588
|
| 89 |
+
],
|
| 90 |
+
"page_idx": 0
|
| 91 |
+
},
|
| 92 |
+
{
|
| 93 |
+
"type": "text",
|
| 94 |
+
"text": "A brain tumor is a cancerous or non-cancerous growth of abnormal cells in the brain, which leads to benign or malignant brain tumors. Most of the researchers are engaging in the primary type of tumor such as Gliomas. We have some ways to treat gliomas such as chemotherapy, radiotherapy, and surgery. Automation by computer-aided devices can be used to obtain the necessary clinical data such as tumor presence, location, and type. However, it is still a very challenging task in assessing their shape, volume, boundaries, tumor detection, size, segmentation, and classification. Also, brain tumor intensity varies from individual to individual. Magnetic Resonance Imaging (MRI) is preferred over other treatment and diagnosis methods because it gives superior image contrast in soft tissues and has non-invasive property. On applying different pulse sequences, we obtain a different type of MRI scans, such as (1) T1 weighted scans that distinguish between tumor and healthy tissues. (2) T2 weighted scans cause delineation of the edema region, and ultimately we get a bright image region. (3) T4-Gd scans which gives bright signal at tumor border by using a contrast agent. (4) FLAIR scans differentiate between cerebrospinal fluid (CSF) and edema region by using a signal of water molecule suppression. It is a difficult task to do annotation of brain tumors from MRI scans manually. Hence, there is a strong need for automation of brain tumor segmentation and classification with the help of computer vision and machine learning algorithms. Today,",
|
| 95 |
+
"bbox": [
|
| 96 |
+
114,
|
| 97 |
+
595,
|
| 98 |
+
883,
|
| 99 |
+
893
|
| 100 |
+
],
|
| 101 |
+
"page_idx": 0
|
| 102 |
+
},
|
| 103 |
+
{
|
| 104 |
+
"type": "page_number",
|
| 105 |
+
"text": "1",
|
| 106 |
+
"bbox": [
|
| 107 |
+
872,
|
| 108 |
+
940,
|
| 109 |
+
880,
|
| 110 |
+
950
|
| 111 |
+
],
|
| 112 |
+
"page_idx": 0
|
| 113 |
+
},
|
| 114 |
+
{
|
| 115 |
+
"type": "text",
|
| 116 |
+
"text": "researchers are working on computer vision and machine learning algorithms for brain tumor segmentation and classification. Clinician's plans are highly expensive because they depend on various imaging techniques such as PET, MRI, and CT. The clinical methods provide extract pertinent information and comprehensive analysis from images. Computational techniques help to investigate the details present in medical images. Imaging methods can be used to find the position of brain tumors. MRI provides more meaningful information in contrast to other imaging modalities like CT.",
|
| 117 |
+
"bbox": [
|
| 118 |
+
109,
|
| 119 |
+
90,
|
| 120 |
+
883,
|
| 121 |
+
212
|
| 122 |
+
],
|
| 123 |
+
"page_idx": 1
|
| 124 |
+
},
|
| 125 |
+
{
|
| 126 |
+
"type": "text",
|
| 127 |
+
"text": "The challenging task in Brain Tumor is due to high variability and inherent MRI data characteristics, e.g., variability in tumor sizes or shapes, tumor detection, area calculation, segmentation, classification, and finding uncertainty in segmented region. The most significant task in image understanding is image segmentation because it helps in feature extraction, area calculation, and significance in many real-life applications. It can be used, for example, estimation of tumor volume, tissue classification, blood cell delineation, and localization of tumors, matching of an atlas, surgical planning, and image registration. For monitoring oncologic therapy, the accurate and morphology quantification of tumors is a critical task. However, extensive scale work has been performed in this field; but still; clinicians depend on manual determination of tumor, due to lack of link between researchers and clinicians.",
|
| 128 |
+
"bbox": [
|
| 129 |
+
109,
|
| 130 |
+
215,
|
| 131 |
+
883,
|
| 132 |
+
388
|
| 133 |
+
],
|
| 134 |
+
"page_idx": 1
|
| 135 |
+
},
|
| 136 |
+
{
|
| 137 |
+
"type": "text",
|
| 138 |
+
"text": "Recently, many techniques have been proposed for automatic brain tumor classification that can be categorized into machine learning (ML) and deep learning (DL) techniques based on the feature selection and learning mechanism. In ML approaches, feature selection and extraction is essential for classification. However, DL approaches extract and learn the features from the image directly. Recent DL approaches, particularly CNN provides good accuracy and is widely used in medical image analysis. Moreover, they have disadvantage over traditional methods (ML) as they need large dataset for training, have high time complexity, less accurate for applications where we have availability of small dataset and require expensive GPUs which ultimately increases cost to the users. Additionally, selecting the right deep learning tools is also a challenging task as it needs knowledge regarding various parameters, training method, and topology. On the other hand, machine-learning approaches have played key role in the area of medical imaging. Several learning based classifiers have already been used for classification and detection of brain tumors, which includes - support vector machine (SVM), artificial neural network (ANN), sequential minimal optimization (SMO), fuzzy C mean (FCM), Naive Bayes (NB), Random Forest (RF), Decision Tree (DT) and K-Nearest Neighbor (KNN). KNN implementation is very simple and takes less computation and space complexity. It requires very less parameters to tune. The biggest advantage of DT is that it goes through all the outcomes of decision and finally traces each path to reach the conclusion. It is versatile; no complex mathematics involved, which makes it easy to understand. Further, Random Forest is itself ensemble classifiers of DT. It runs effectively on large dataset, which provides good parameter values for accuracies, precision and other evaluation metrics. 
Overall, these classifiers have received considerable research attention, as they require small dataset for training, low computational time complexity, low cost to the users, and can be easily adopted by less skilled people. Thus, in the present study, we work on hybrid ensemble classifiers in order to improve the accuracy of results obtained. Further, comparative study of various classifiers such as SVM, KNN, DT, RF, NB, ANN and proposed hybrid ensemble classifier is done.",
|
| 139 |
+
"bbox": [
|
| 140 |
+
109,
|
| 141 |
+
391,
|
| 142 |
+
883,
|
| 143 |
+
843
|
| 144 |
+
],
|
| 145 |
+
"page_idx": 1
|
| 146 |
+
},
|
| 147 |
+
{
|
| 148 |
+
"type": "text",
|
| 149 |
+
"text": "The outline of the paper is as follows: Section 2 describes the related work; Section 3 describes the proposed method for area calculation and brain tumor classification. Section 4",
|
| 150 |
+
"bbox": [
|
| 151 |
+
109,
|
| 152 |
+
844,
|
| 153 |
+
883,
|
| 154 |
+
878
|
| 155 |
+
],
|
| 156 |
+
"page_idx": 1
|
| 157 |
+
},
|
| 158 |
+
{
|
| 159 |
+
"type": "page_number",
|
| 160 |
+
"text": "2",
|
| 161 |
+
"bbox": [
|
| 162 |
+
870,
|
| 163 |
+
939,
|
| 164 |
+
882,
|
| 165 |
+
950
|
| 166 |
+
],
|
| 167 |
+
"page_idx": 1
|
| 168 |
+
},
|
| 169 |
+
{
|
| 170 |
+
"type": "text",
|
| 171 |
+
"text": "gives information about experimental implementation and results, and Finally, Section 5 represents conclusion.",
|
| 172 |
+
"bbox": [
|
| 173 |
+
116,
|
| 174 |
+
90,
|
| 175 |
+
883,
|
| 176 |
+
125
|
| 177 |
+
],
|
| 178 |
+
"page_idx": 2
|
| 179 |
+
},
|
| 180 |
+
{
|
| 181 |
+
"type": "text",
|
| 182 |
+
"text": "2. Related Work",
|
| 183 |
+
"text_level": 1,
|
| 184 |
+
"bbox": [
|
| 185 |
+
116,
|
| 186 |
+
132,
|
| 187 |
+
267,
|
| 188 |
+
150
|
| 189 |
+
],
|
| 190 |
+
"page_idx": 2
|
| 191 |
+
},
|
| 192 |
+
{
|
| 193 |
+
"type": "text",
|
| 194 |
+
"text": "Brain Tumor research has been conducted in various private multinational companies like, Siemens, Becton Dickinson, Medtronic, Accenture, GE Medical Systems, Atlantic Biomedical P. Ltd, and others. Both theoretical and experimental works of International arena are reported in the literature. Some of work done by good researchers is described below:",
|
| 195 |
+
"bbox": [
|
| 196 |
+
116,
|
| 197 |
+
157,
|
| 198 |
+
883,
|
| 199 |
+
227
|
| 200 |
+
],
|
| 201 |
+
"page_idx": 2
|
| 202 |
+
},
|
| 203 |
+
{
|
| 204 |
+
"type": "text",
|
| 205 |
+
"text": "Othman [4] et al. proposed a method in which feature extraction is done by using Daubechies wavelets with DWT from MRI images. Each image consists of 17,689 feature vectors. Finally, classification is done using RBF a SVM kernel function. Sindhumol et al. [4] presents a spectral clustering (SC) technique for classification of brain tumor. Images (MRI) are divided into different clusters using the spectral distance. Feature reduction is made by using ICA and classification by using SVM. The training and testing data consists of 40 normal and 20 abnormal MRI images. Abd-Ellah [6] et al. proposed method consists of preprocessing of MRI images is done with the help of Median filters. DW does feature extraction, and PCA is used for feature reduction. Finally, classification is done by the SVM classifier using the RBF kernel function. The database consists of 80 images. SVM is trained using 43 abnormal and 5 normal images, and testing is done by using 27 abnormal images and 5 normal images. H. Kalbkhani [7] et al., proposed the subband of the detail coefficients and 2D DWT using (GARCH), Feature reduction is made from 61440 to 24 features. Feature extraction is done by linear discriminate analysis (LDA), which is further reduced using PCA. Finally, detection is done by using SVM and KNN identifier. The training and testing data consists of MRI images, normal and abnormal in ratio 10 to 70. The testing set consists of 7 normal and 49 abnormal images, while the training set contains 3 normal and 21 abnormal images. Saritha et al. [8] proposed classification technique for normal or abnormal brain tumor images considering 23 images for testing and 50 for training. Deep and Devi [9] et al., proposed a system in which the statistical method is used for texture feature extraction, neural network, and BPNN is used in segmentation and uncovering stages. The database consists of 42 images, which are further divided into training and testing as 30 and 12, respectively. 
Chandra [10] et al. proposed a new clustering algorithm based on PSO optimization with the help of MRI images. The clusters and corresponding centroids are being found out by algorithm, among them global best is considered. The dataset consists of 62 normal MRI images and 110 abnormal ones. Xuan and Liao [11], proposed a tumor detection method considering features of 3-types texture-based, intensity-based and symmetry-based. Then, total 40 features consisting of 13 intensity-based, 26 texture-based and 1 symmetry-based features are selected. Feature extraction is done from different images with 12 features from T2 images, 9 from T1 images and 19 from FLAIR images. The dataset contains 10 patients with 3 volumes each with 24 slices of MRI images. They divided the dataset equally into testing and training sets. Dhanalakshmi [12] et al., proposed work consist of k-means clustering for segmentation and then area is calculated using formula sqrt(P).*264, where P is the no. of pixel with value 1. The proposed algorithm shows the reproducibility and good performance. Kaushik [13] et al., proposed method consists of segmentation using genetic algorithm. The corners of the brain tumor region are also extracted based on proposed algorithm. Rani [16] et al., proposed a method for MRI brain tumor image classification using SVM and segmentation using otsu's thresholding method. This paper compared its proposed work with KIFCM, K-means an Fuzzy c-means but their accuracy and executive time was more effective than all remaining existing methods.",
|
| 206 |
+
"bbox": [
|
| 207 |
+
114,
|
| 208 |
+
229,
|
| 209 |
+
883,
|
| 210 |
+
893
|
| 211 |
+
],
|
| 212 |
+
"page_idx": 2
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"type": "page_number",
|
| 216 |
+
"text": "3",
|
| 217 |
+
"bbox": [
|
| 218 |
+
872,
|
| 219 |
+
940,
|
| 220 |
+
882,
|
| 221 |
+
950
|
| 222 |
+
],
|
| 223 |
+
"page_idx": 2
|
| 224 |
+
},
|
| 225 |
+
{
|
| 226 |
+
"type": "text",
|
| 227 |
+
"text": "Additionally, many deep learning models have been investigated recently for brain tumor detection and classification and achieved the competitive results. Chaudhary [15] et al., proposed a method based on deep learning for the segmentation of MRI brain tumor images. In which pre processing of MRI images was done using intensity normalization. Kamboj [17] et al., reviewed deep learning methods, which has advantages over traditional methods. Their focus is on design of architecture as compare to segmentation and feature extraction. Deep learning methods provide good accuracy but they required more computation time, space and dataset as compare to the traditional classifiers. However, traditional machine learning methods are easy to understand, interpret and required less space, dataset and computational cost in terms of hardware.",
|
| 228 |
+
"bbox": [
|
| 229 |
+
114,
|
| 230 |
+
89,
|
| 231 |
+
883,
|
| 232 |
+
263
|
| 233 |
+
],
|
| 234 |
+
"page_idx": 3
|
| 235 |
+
},
|
| 236 |
+
{
|
| 237 |
+
"type": "text",
|
| 238 |
+
"text": "Moreover, none of the above mentioned machine learning approaches work on feature extraction using 3-fold techniques such as $\\mathrm{SWT + PCA + GLCM}$ , which significantly increases the robustness of extracted features as SwT helped in capturing the abrupt changes of images. PCA reduced the dimensionality of input images from SwT, which reduced space and time complexity up to some extent. Finally, GLCM extracted various useful features from dimensionally reduced images of PCA. In addition, none of the above methods worked on hybrid ensemble classifiers, which helped in achieving good evaluation metrics using traditional classifiers, as best properties of each classifier add up to gave excellent results. Therefore, overall 3-fold robust feature extraction and hybrid ensemble classification is the main focus of interest in this present study, which helped in improving the various evaluation metrics and reduced space and time complexity using traditional classifiers.",
|
| 239 |
+
"bbox": [
|
| 240 |
+
114,
|
| 241 |
+
265,
|
| 242 |
+
883,
|
| 243 |
+
455
|
| 244 |
+
],
|
| 245 |
+
"page_idx": 3
|
| 246 |
+
},
|
| 247 |
+
{
|
| 248 |
+
"type": "text",
|
| 249 |
+
"text": "3. Proposed Method",
|
| 250 |
+
"text_level": 1,
|
| 251 |
+
"bbox": [
|
| 252 |
+
116,
|
| 253 |
+
464,
|
| 254 |
+
294,
|
| 255 |
+
481
|
| 256 |
+
],
|
| 257 |
+
"page_idx": 3
|
| 258 |
+
},
|
| 259 |
+
{
|
| 260 |
+
"type": "text",
|
| 261 |
+
"text": "The proposed work aims at improving the performance of traditional classifiers. These classifiers require small datasets for training and have low computational time complexity thus appropriate for computer assisted brain tumor diagnosis and classification. We propose a hybrid ensemble method using KNN, Random Forest (RF) and Decision Tree (DT) (KNN-RF-DT) based on Majority Voting Method. It aims to calculate the area of the tumor region and classify brain tumors as benign and malignant. In the beginning, MRI images segments using the Otsu's Threshold method. Feature Extraction is done by Stationary Wavelet Transform (SWT), Principle Component Analysis (PCA) and Gray Level Co-occurrence Matrix (GLCM), which gives thirteen features for classification. The classification is done by hybrid ensemble classifiers (KNN-RF-DT) based on the Majority Voting method.",
|
| 262 |
+
"bbox": [
|
| 263 |
+
114,
|
| 264 |
+
489,
|
| 265 |
+
883,
|
| 266 |
+
662
|
| 267 |
+
],
|
| 268 |
+
"page_idx": 3
|
| 269 |
+
},
|
| 270 |
+
{
|
| 271 |
+
"type": "text",
|
| 272 |
+
"text": "In the current research work, we have done comparative studies of various classifiers such as SVM, KNN, DT, RF, NB, ANN and proposed hybrid ensemble Classifier. Overall, it aimed at improving the performance by using traditional classifiers. Working of proposed implementation is shown in Fig.1.",
|
| 273 |
+
"bbox": [
|
| 274 |
+
114,
|
| 275 |
+
664,
|
| 276 |
+
883,
|
| 277 |
+
732
|
| 278 |
+
],
|
| 279 |
+
"page_idx": 3
|
| 280 |
+
},
|
| 281 |
+
{
|
| 282 |
+
"type": "page_number",
|
| 283 |
+
"text": "4",
|
| 284 |
+
"bbox": [
|
| 285 |
+
870,
|
| 286 |
+
940,
|
| 287 |
+
880,
|
| 288 |
+
950
|
| 289 |
+
],
|
| 290 |
+
"page_idx": 3
|
| 291 |
+
},
|
| 292 |
+
{
|
| 293 |
+
"type": "image",
|
| 294 |
+
"img_path": "images/7c3a1a1c26d9b73a8edce55c3f657b56cd438fe31e1b242251d01b76d39268ef.jpg",
|
| 295 |
+
"image_caption": [
|
| 296 |
+
"Fig. 1 Flow diagram of proposed work"
|
| 297 |
+
],
|
| 298 |
+
"image_footnote": [],
|
| 299 |
+
"bbox": [
|
| 300 |
+
282,
|
| 301 |
+
99,
|
| 302 |
+
738,
|
| 303 |
+
330
|
| 304 |
+
],
|
| 305 |
+
"page_idx": 4
|
| 306 |
+
},
|
| 307 |
+
{
|
| 308 |
+
"type": "text",
|
| 309 |
+
"text": "a. Otsu's Method",
|
| 310 |
+
"text_level": 1,
|
| 311 |
+
"bbox": [
|
| 312 |
+
112,
|
| 313 |
+
367,
|
| 314 |
+
272,
|
| 315 |
+
383
|
| 316 |
+
],
|
| 317 |
+
"page_idx": 4
|
| 318 |
+
},
|
| 319 |
+
{
|
| 320 |
+
"type": "text",
|
| 321 |
+
"text": "Otsu method [19] is used for the automatic image threshold into two classed, foreground, and background based on a single threshold value. This threshold determines by maximizing inter-class variance and minimizing intra-class intensity variance. Threshold that minimizes the intra-class variance that describes by corresponding sum of variance of two classes:",
|
| 322 |
+
"bbox": [
|
| 323 |
+
109,
|
| 324 |
+
392,
|
| 325 |
+
883,
|
| 326 |
+
462
|
| 327 |
+
],
|
| 328 |
+
"page_idx": 4
|
| 329 |
+
},
|
| 330 |
+
{
|
| 331 |
+
"type": "equation",
|
| 332 |
+
"text": "\n$$\n\\alpha_ {\\mathrm {w}} ^ {2} (\\mathbf {n}) = \\mathrm {A} _ {0} (\\mathbf {n}) ^ {*} \\alpha_ {0} ^ {2} (\\mathbf {n}) + \\mathrm {A} _ {1} (\\mathbf {n}) ^ {*} \\alpha_ {1} ^ {2} (\\mathbf {n}), \\tag {1}\n$$\n",
|
| 333 |
+
"text_format": "latex",
|
| 334 |
+
"bbox": [
|
| 335 |
+
112,
|
| 336 |
+
468,
|
| 337 |
+
846,
|
| 338 |
+
487
|
| 339 |
+
],
|
| 340 |
+
"page_idx": 4
|
| 341 |
+
},
|
| 342 |
+
{
|
| 343 |
+
"type": "text",
|
| 344 |
+
"text": "where $A_0, A_1 -$ Probability of two classes, $n = 154$ - Threshold, $\\alpha_0^2, \\alpha_1^2 -$ Variance of two classes. The Class Probability can be computed from the number of bins (L=256) as shown below:",
|
| 345 |
+
"bbox": [
|
| 346 |
+
111,
|
| 347 |
+
494,
|
| 348 |
+
859,
|
| 349 |
+
529
|
| 350 |
+
],
|
| 351 |
+
"page_idx": 4
|
| 352 |
+
},
|
| 353 |
+
{
|
| 354 |
+
"type": "equation",
|
| 355 |
+
"text": "\n$$\n\\mathrm {A} _ {0} (\\mathbf {n}) = \\sum_ {k = 0} ^ {n - 1} \\mathrm {p} (\\mathbf {k}) \\tag {2}\n$$\n",
|
| 356 |
+
"text_format": "latex",
|
| 357 |
+
"bbox": [
|
| 358 |
+
112,
|
| 359 |
+
537,
|
| 360 |
+
846,
|
| 361 |
+
564
|
| 362 |
+
],
|
| 363 |
+
"page_idx": 4
|
| 364 |
+
},
|
| 365 |
+
{
|
| 366 |
+
"type": "equation",
|
| 367 |
+
"text": "\n$$\n\\mathrm {A} _ {1} (\\mathbf {n}) = \\sum_ {k = n} ^ {L - 1} \\mathrm {p} (\\mathbf {k}), \\tag {3}\n$$\n",
|
| 368 |
+
"text_format": "latex",
|
| 369 |
+
"bbox": [
|
| 370 |
+
112,
|
| 371 |
+
569,
|
| 372 |
+
846,
|
| 373 |
+
595
|
| 374 |
+
],
|
| 375 |
+
"page_idx": 4
|
| 376 |
+
},
|
| 377 |
+
{
|
| 378 |
+
"type": "text",
|
| 379 |
+
"text": "where we know that minimum intra-class variance is equivalent to the maximum inter-class variance.",
|
| 380 |
+
"bbox": [
|
| 381 |
+
109,
|
| 382 |
+
603,
|
| 383 |
+
880,
|
| 384 |
+
635
|
| 385 |
+
],
|
| 386 |
+
"page_idx": 4
|
| 387 |
+
},
|
| 388 |
+
{
|
| 389 |
+
"type": "equation",
|
| 390 |
+
"text": "\n$$\n\\alpha_ {\\mathrm {b}} ^ {2} (\\mathbf {n}) = \\alpha^ {2} - \\alpha_ {\\mathrm {w}} ^ {2} (\\mathbf {n}) = \\mathrm {A} _ {0} ^ {*} \\left(\\beta_ {0} - \\beta_ {\\mathrm {T}}\\right) ^ {2} + \\mathrm {A} _ {1} ^ {*} \\left(\\beta_ {1} - \\beta_ {\\mathrm {T}}\\right) ^ {2} = \\mathrm {A} _ {0} (\\mathbf {n}) ^ {*} \\mathrm {A} _ {1} (\\mathbf {n}) ^ {*} \\left(\\beta_ {0} - \\beta_ {1}\\right) ^ {2}, \\tag {4}\n$$\n",
|
| 391 |
+
"text_format": "latex",
|
| 392 |
+
"bbox": [
|
| 393 |
+
112,
|
| 394 |
+
643,
|
| 395 |
+
846,
|
| 396 |
+
664
|
| 397 |
+
],
|
| 398 |
+
"page_idx": 4
|
| 399 |
+
},
|
| 400 |
+
{
|
| 401 |
+
"type": "text",
|
| 402 |
+
"text": "where $\\beta_{1}(\\mathbf{n}),\\beta_{0}(\\mathbf{n})$ and $\\beta_{\\mathrm{T}}$ are class means",
|
| 403 |
+
"bbox": [
|
| 404 |
+
111,
|
| 405 |
+
670,
|
| 406 |
+
444,
|
| 407 |
+
686
|
| 408 |
+
],
|
| 409 |
+
"page_idx": 4
|
| 410 |
+
},
|
| 411 |
+
{
|
| 412 |
+
"type": "equation",
|
| 413 |
+
"text": "\n$$\n\\beta_ {0} (\\mathbf {n}) = = \\sum_ {\\mathbf {k} = 0} ^ {n - 1} \\mathbf {k} * \\mathrm {p} (\\mathbf {k}) / \\mathrm {A} _ {0} (\\mathbf {n}) \\tag {5}\n$$\n",
|
| 414 |
+
"text_format": "latex",
|
| 415 |
+
"bbox": [
|
| 416 |
+
112,
|
| 417 |
+
695,
|
| 418 |
+
846,
|
| 419 |
+
720
|
| 420 |
+
],
|
| 421 |
+
"page_idx": 4
|
| 422 |
+
},
|
| 423 |
+
{
|
| 424 |
+
"type": "equation",
|
| 425 |
+
"text": "\n$$\n\\beta_ {1} (\\mathbf {n}) = = \\sum_ {\\mathbf {k} = n} ^ {L - 1} \\mathbf {k} * \\mathrm {p} (\\mathbf {k}) / \\mathrm {A} _ {1} (\\mathbf {n}) \\tag {6}\n$$\n",
|
| 426 |
+
"text_format": "latex",
|
| 427 |
+
"bbox": [
|
| 428 |
+
112,
|
| 429 |
+
726,
|
| 430 |
+
846,
|
| 431 |
+
753
|
| 432 |
+
],
|
| 433 |
+
"page_idx": 4
|
| 434 |
+
},
|
| 435 |
+
{
|
| 436 |
+
"type": "equation",
|
| 437 |
+
"text": "\n$$\n\\beta_ {\\mathrm {T}} = = \\sum_ {\\mathbf {k} = 0} ^ {L - 1} \\mathbf {k} * \\mathrm {p} (\\mathbf {k}) \\tag {7}\n$$\n",
|
| 438 |
+
"text_format": "latex",
|
| 439 |
+
"bbox": [
|
| 440 |
+
112,
|
| 441 |
+
760,
|
| 442 |
+
846,
|
| 443 |
+
787
|
| 444 |
+
],
|
| 445 |
+
"page_idx": 4
|
| 446 |
+
},
|
| 447 |
+
{
|
| 448 |
+
"type": "equation",
|
| 449 |
+
"text": "\n$$\n\\mathrm {A} _ {0} * \\beta_ {0} + \\mathrm {A} _ {1} * \\beta_ {1} = \\beta_ {\\mathrm {T}} \\tag {8}\n$$\n",
|
| 450 |
+
"text_format": "latex",
|
| 451 |
+
"bbox": [
|
| 452 |
+
112,
|
| 453 |
+
794,
|
| 454 |
+
846,
|
| 455 |
+
811
|
| 456 |
+
],
|
| 457 |
+
"page_idx": 4
|
| 458 |
+
},
|
| 459 |
+
{
|
| 460 |
+
"type": "equation",
|
| 461 |
+
"text": "\n$$\n\\mathrm {A} _ {0} + \\mathrm {A} _ {1} = 1 \\tag {9}\n$$\n",
|
| 462 |
+
"text_format": "latex",
|
| 463 |
+
"bbox": [
|
| 464 |
+
112,
|
| 465 |
+
820,
|
| 466 |
+
846,
|
| 467 |
+
837
|
| 468 |
+
],
|
| 469 |
+
"page_idx": 4
|
| 470 |
+
},
|
| 471 |
+
{
|
| 472 |
+
"type": "text",
|
| 473 |
+
"text": "Above computations, give us effective algorithm as class probability and class means computed iteratively. Histogram of brain tumor image with bins 256 and threshold 154 is shown below in Fig.2.",
|
| 474 |
+
"bbox": [
|
| 475 |
+
109,
|
| 476 |
+
844,
|
| 477 |
+
883,
|
| 478 |
+
897
|
| 479 |
+
],
|
| 480 |
+
"page_idx": 4
|
| 481 |
+
},
|
| 482 |
+
{
|
| 483 |
+
"type": "page_number",
|
| 484 |
+
"text": "5",
|
| 485 |
+
"bbox": [
|
| 486 |
+
870,
|
| 487 |
+
939,
|
| 488 |
+
882,
|
| 489 |
+
950
|
| 490 |
+
],
|
| 491 |
+
"page_idx": 4
|
| 492 |
+
},
|
| 493 |
+
{
|
| 494 |
+
"type": "image",
|
| 495 |
+
"img_path": "images/752848fb12b89cb6fa4b9176b464b1f286d5e94d98c8ffaaf23d0a3ba3a372e3.jpg",
|
| 496 |
+
"image_caption": [
|
| 497 |
+
"Fig. 2 Histogram of brain tumor image with bins $\\mathrm{L} = 256$ and threshold $\\mathrm{n} = 154$"
|
| 498 |
+
],
|
| 499 |
+
"image_footnote": [],
|
| 500 |
+
"bbox": [
|
| 501 |
+
277,
|
| 502 |
+
106,
|
| 503 |
+
781,
|
| 504 |
+
330
|
| 505 |
+
],
|
| 506 |
+
"page_idx": 5
|
| 507 |
+
},
|
| 508 |
+
{
|
| 509 |
+
"type": "text",
|
| 510 |
+
"text": "b. Stationary Wavelet Transform",
|
| 511 |
+
"text_level": 1,
|
| 512 |
+
"bbox": [
|
| 513 |
+
111,
|
| 514 |
+
363,
|
| 515 |
+
408,
|
| 516 |
+
381
|
| 517 |
+
],
|
| 518 |
+
"page_idx": 5
|
| 519 |
+
},
|
| 520 |
+
{
|
| 521 |
+
"type": "text",
|
| 522 |
+
"text": "As far as signal is concerned, slow changes can be captured with the help of Fourier Transform, whereas when the images undergo abrupt changes that can be captured with the concept of wavelets. Wavelet is a small oscillation whose frequency inversely varies with scaling. To capture abrupt changes, we need high frequency and small scaling so we need the concept of wavelet. Stationary Wavelet Transform (SWT) [20] algorithm is designed to overcome the problem of translation-invariance of the discrete wavelet transform (DWT). Translation-invariance can be achieved by removing the down-samples and up-samples in the DWT and up sampling the filter coefficients by a factor of $2^{(\\mathrm{j - 1})}$ in the $\\mathrm{j^th}$ level of algorithm. SWT has various applications such as - Signal de-noising, Pattern Recognition, Brain image classification, Pathological brain detection. In our proposed work, we work on 1-D SWT, where $\\mathrm{j} = 1$ .",
|
| 523 |
+
"bbox": [
|
| 524 |
+
109,
|
| 525 |
+
387,
|
| 526 |
+
888,
|
| 527 |
+
561
|
| 528 |
+
],
|
| 529 |
+
"page_idx": 5
|
| 530 |
+
},
|
| 531 |
+
{
|
| 532 |
+
"type": "text",
|
| 533 |
+
"text": "The origin of the wavelets and its types from Fourier transform is shown in Fig.3.",
|
| 534 |
+
"bbox": [
|
| 535 |
+
169,
|
| 536 |
+
563,
|
| 537 |
+
818,
|
| 538 |
+
580
|
| 539 |
+
],
|
| 540 |
+
"page_idx": 5
|
| 541 |
+
},
|
| 542 |
+
{
|
| 543 |
+
"type": "image",
|
| 544 |
+
"img_path": "images/4a8c46b51b221dd70bff31d2c60ee055fe9f4cf87dbf8533c7a5fe73005dc845.jpg",
|
| 545 |
+
"image_caption": [
|
| 546 |
+
"Fig. 3 Origin of SWT"
|
| 547 |
+
],
|
| 548 |
+
"image_footnote": [],
|
| 549 |
+
"bbox": [
|
| 550 |
+
267,
|
| 551 |
+
587,
|
| 552 |
+
730,
|
| 553 |
+
676
|
| 554 |
+
],
|
| 555 |
+
"page_idx": 5
|
| 556 |
+
},
|
| 557 |
+
{
|
| 558 |
+
"type": "text",
|
| 559 |
+
"text": "The digital implementation of SwT is shown below in Fig. 4 in which each level is an upsampled version of the previous level.",
|
| 560 |
+
"bbox": [
|
| 561 |
+
111,
|
| 562 |
+
699,
|
| 563 |
+
883,
|
| 564 |
+
734
|
| 565 |
+
],
|
| 566 |
+
"page_idx": 5
|
| 567 |
+
},
|
| 568 |
+
{
|
| 569 |
+
"type": "text",
|
| 570 |
+
"text": "Decomposition Step",
|
| 571 |
+
"bbox": [
|
| 572 |
+
120,
|
| 573 |
+
787,
|
| 574 |
+
261,
|
| 575 |
+
806
|
| 576 |
+
],
|
| 577 |
+
"page_idx": 5
|
| 578 |
+
},
|
| 579 |
+
{
|
| 580 |
+
"type": "image",
|
| 581 |
+
"img_path": "images/1abe9229b8882387176b5ef953f8c7876bfc283a70f7780f8677418b8875454a.jpg",
|
| 582 |
+
"image_caption": [],
|
| 583 |
+
"image_footnote": [],
|
| 584 |
+
"bbox": [
|
| 585 |
+
287,
|
| 586 |
+
746,
|
| 587 |
+
684,
|
| 588 |
+
859
|
| 589 |
+
],
|
| 590 |
+
"page_idx": 5
|
| 591 |
+
},
|
| 592 |
+
{
|
| 593 |
+
"type": "page_number",
|
| 594 |
+
"text": "6",
|
| 595 |
+
"bbox": [
|
| 596 |
+
870,
|
| 597 |
+
939,
|
| 598 |
+
883,
|
| 599 |
+
952
|
| 600 |
+
],
|
| 601 |
+
"page_idx": 5
|
| 602 |
+
},
|
| 603 |
+
{
|
| 604 |
+
"type": "text",
|
| 605 |
+
"text": "Filter computation",
|
| 606 |
+
"bbox": [
|
| 607 |
+
147,
|
| 608 |
+
127,
|
| 609 |
+
287,
|
| 610 |
+
143
|
| 611 |
+
],
|
| 612 |
+
"page_idx": 6
|
| 613 |
+
},
|
| 614 |
+
{
|
| 615 |
+
"type": "image",
|
| 616 |
+
"img_path": "images/f7cb507d399f1ae6c63010e03e7f39d054aa753e6203c5a0e478886f51c35252.jpg",
|
| 617 |
+
"image_caption": [],
|
| 618 |
+
"image_footnote": [],
|
| 619 |
+
"bbox": [
|
| 620 |
+
321,
|
| 621 |
+
88,
|
| 622 |
+
578,
|
| 623 |
+
125
|
| 624 |
+
],
|
| 625 |
+
"page_idx": 6
|
| 626 |
+
},
|
| 627 |
+
{
|
| 628 |
+
"type": "image",
|
| 629 |
+
"img_path": "images/2ff41a776eb6f756747069fc71a0769f538f0ba2828ff84e56aae2a2122bd813.jpg",
|
| 630 |
+
"image_caption": [],
|
| 631 |
+
"image_footnote": [],
|
| 632 |
+
"bbox": [
|
| 633 |
+
318,
|
| 634 |
+
152,
|
| 635 |
+
581,
|
| 636 |
+
189
|
| 637 |
+
],
|
| 638 |
+
"page_idx": 6
|
| 639 |
+
},
|
| 640 |
+
{
|
| 641 |
+
"type": "image",
|
| 642 |
+
"img_path": "images/45ee83afb8556db9572282017eda4e4ccc1b5b32aee1faae355d11fe5b79d72e.jpg",
|
| 643 |
+
"image_caption": [
|
| 644 |
+
"Fig. 4 Implementation of SWT in which each level is an up-sampled version of the previous level"
|
| 645 |
+
],
|
| 646 |
+
"image_footnote": [
|
| 647 |
+
"Initialization: $\\mathrm{cA}_0 = \\mathrm{s}$ and $\\mathrm{F}_0 = \\mathrm{Lo\\_D}$ and $\\mathrm{G}_0 = \\mathrm{Hi\\_D}$"
|
| 648 |
+
],
|
| 649 |
+
"bbox": [
|
| 650 |
+
212,
|
| 651 |
+
196,
|
| 652 |
+
375,
|
| 653 |
+
231
|
| 654 |
+
],
|
| 655 |
+
"page_idx": 6
|
| 656 |
+
},
|
| 657 |
+
{
|
| 658 |
+
"type": "text",
|
| 659 |
+
"text": "c. Principal Component Analysis",
|
| 660 |
+
"text_level": 1,
|
| 661 |
+
"bbox": [
|
| 662 |
+
111,
|
| 663 |
+
305,
|
| 664 |
+
405,
|
| 665 |
+
324
|
| 666 |
+
],
|
| 667 |
+
"page_idx": 6
|
| 668 |
+
},
|
| 669 |
+
{
|
| 670 |
+
"type": "text",
|
| 671 |
+
"text": "PCA [21] is an orthogonal transformation of correlated variables to linearly uncorrelated variables known as principal components. The first principal component has the highest variances; successive components contain the most significant possible variance such that it remains orthogonal to previous components. It is used to reduce the dimensions or features with which we have to train our classifier, which ultimately helps in reducing the time and space complexity for given data computation.",
|
| 672 |
+
"bbox": [
|
| 673 |
+
109,
|
| 674 |
+
330,
|
| 675 |
+
885,
|
| 676 |
+
434
|
| 677 |
+
],
|
| 678 |
+
"page_idx": 6
|
| 679 |
+
},
|
| 680 |
+
{
|
| 681 |
+
"type": "text",
|
| 682 |
+
"text": "First Component",
|
| 683 |
+
"text_level": 1,
|
| 684 |
+
"bbox": [
|
| 685 |
+
112,
|
| 686 |
+
441,
|
| 687 |
+
263,
|
| 688 |
+
459
|
| 689 |
+
],
|
| 690 |
+
"page_idx": 6
|
| 691 |
+
},
|
| 692 |
+
{
|
| 693 |
+
"type": "equation",
|
| 694 |
+
"text": "\n$$\n\\mathrm {Y} _ {(1)} = \\arg \\max _ {| \\mathrm {Y} | = 1} \\left\\{\\Sigma_ {\\mathrm {k}} \\left(\\mathrm {t} _ {1}\\right) _ {(\\mathrm {k})} ^ {2} \\right\\} = \\arg \\max _ {| \\mathrm {Y} | = 1} \\left\\{\\Sigma_ {\\mathrm {k}} \\left(\\mathbf {X} _ {(\\mathrm {k})}. \\mathrm {Y}\\right) ^ {2} \\right\\}, \\tag {10}\n$$\n",
|
| 695 |
+
"text_format": "latex",
|
| 696 |
+
"bbox": [
|
| 697 |
+
112,
|
| 698 |
+
465,
|
| 699 |
+
857,
|
| 700 |
+
487
|
| 701 |
+
],
|
| 702 |
+
"page_idx": 6
|
| 703 |
+
},
|
| 704 |
+
{
|
| 705 |
+
"type": "text",
|
| 706 |
+
"text": "Where $\\mathbf{Y}_{(1)}$ denotes the unit vector and $\\mathbf{X}$ is Image matrix.",
|
| 707 |
+
"bbox": [
|
| 708 |
+
111,
|
| 709 |
+
492,
|
| 710 |
+
571,
|
| 711 |
+
508
|
| 712 |
+
],
|
| 713 |
+
"page_idx": 6
|
| 714 |
+
},
|
| 715 |
+
{
|
| 716 |
+
"type": "text",
|
| 717 |
+
"text": "Equation in matrix form:",
|
| 718 |
+
"bbox": [
|
| 719 |
+
112,
|
| 720 |
+
510,
|
| 721 |
+
315,
|
| 722 |
+
527
|
| 723 |
+
],
|
| 724 |
+
"page_idx": 6
|
| 725 |
+
},
|
| 726 |
+
{
|
| 727 |
+
"type": "equation",
|
| 728 |
+
"text": "\n$$\n\\mathbf {Y} _ {(1)} = \\arg \\max _ {| \\mathrm {Y} | = 1} \\left\\{\\| \\mathbf {X Y} \\| ^ {2} \\right\\} = \\arg \\max _ {| \\mathrm {Y} | = 1} \\left\\{\\mathbf {Y} ^ {\\mathrm {T}} \\mathbf {X} ^ {\\mathrm {T}} \\mathbf {X Y} \\right\\} \\tag {11}\n$$\n",
|
| 729 |
+
"text_format": "latex",
|
| 730 |
+
"bbox": [
|
| 731 |
+
112,
|
| 732 |
+
534,
|
| 733 |
+
857,
|
| 734 |
+
554
|
| 735 |
+
],
|
| 736 |
+
"page_idx": 6
|
| 737 |
+
},
|
| 738 |
+
{
|
| 739 |
+
"type": "equation",
|
| 740 |
+
"text": "\n$$\n\\mathbf {Y} _ {(1)} = \\arg \\max \\left\\{\\mathbf {Y} ^ {\\mathrm {T}} \\mathbf {X} ^ {\\mathrm {T}} \\mathbf {X Y} / \\mathbf {Y} ^ {\\mathrm {T}} \\mathbf {Y} \\right\\} \\tag {12}\n$$\n",
|
| 741 |
+
"text_format": "latex",
|
| 742 |
+
"bbox": [
|
| 743 |
+
112,
|
| 744 |
+
558,
|
| 745 |
+
857,
|
| 746 |
+
579
|
| 747 |
+
],
|
| 748 |
+
"page_idx": 6
|
| 749 |
+
},
|
| 750 |
+
{
|
| 751 |
+
"type": "text",
|
| 752 |
+
"text": "Largest Eigen value of the matrix is Rayleigh Quotient, where $\\mathbf{X}^{\\mathrm{T}}\\mathbf{X}$ is positive semi-definite matrix and $\\mathbf{Y} =$ corresponding Eigen vector",
|
| 753 |
+
"bbox": [
|
| 754 |
+
109,
|
| 755 |
+
584,
|
| 756 |
+
883,
|
| 757 |
+
619
|
| 758 |
+
],
|
| 759 |
+
"page_idx": 6
|
| 760 |
+
},
|
| 761 |
+
{
|
| 762 |
+
"type": "text",
|
| 763 |
+
"text": "Further Components",
|
| 764 |
+
"text_level": 1,
|
| 765 |
+
"bbox": [
|
| 766 |
+
112,
|
| 767 |
+
627,
|
| 768 |
+
295,
|
| 769 |
+
643
|
| 770 |
+
],
|
| 771 |
+
"page_idx": 6
|
| 772 |
+
},
|
| 773 |
+
{
|
| 774 |
+
"type": "text",
|
| 775 |
+
"text": "The Nth component can be found by subtracting the first N-1 principle component from $\\mathbf{X}$ , where $\\mathbf{N} = 13$ for proposed method.",
|
| 776 |
+
"bbox": [
|
| 777 |
+
109,
|
| 778 |
+
645,
|
| 779 |
+
883,
|
| 780 |
+
680
|
| 781 |
+
],
|
| 782 |
+
"page_idx": 6
|
| 783 |
+
},
|
| 784 |
+
{
|
| 785 |
+
"type": "equation",
|
| 786 |
+
"text": "\n$$\n\\mathbf {X} _ {\\mathbf {N}} = \\mathrm {X} - \\sum_ {n = 1} ^ {N - 1} \\mathbf {X Y} (\\mathrm {n}) \\mathrm {Y} (\\mathrm {n}) ^ {\\wedge} \\mathrm {T} \\tag {13}\n$$\n",
|
| 787 |
+
"text_format": "latex",
|
| 788 |
+
"bbox": [
|
| 789 |
+
111,
|
| 790 |
+
686,
|
| 791 |
+
857,
|
| 792 |
+
714
|
| 793 |
+
],
|
| 794 |
+
"page_idx": 6
|
| 795 |
+
},
|
| 796 |
+
{
|
| 797 |
+
"type": "text",
|
| 798 |
+
"text": "Further, weight vector can be found as describe below:",
|
| 799 |
+
"bbox": [
|
| 800 |
+
111,
|
| 801 |
+
720,
|
| 802 |
+
549,
|
| 803 |
+
738
|
| 804 |
+
],
|
| 805 |
+
"page_idx": 6
|
| 806 |
+
},
|
| 807 |
+
{
|
| 808 |
+
"type": "equation",
|
| 809 |
+
"text": "\n$$\n\\mathbf {Y} _ {(K)} = \\arg \\max _ {\\| \\mathbf {Y} \\| = 1} \\left\\{\\| \\mathbf {X} _ {K} ^ {*} \\mathbf {Y} \\| ^ {2} \\right\\} = \\arg \\max \\left\\{\\mathbf {Y} ^ {\\mathrm {T}} \\mathbf {X} _ {K} ^ {\\mathrm {T}} \\mathbf {X} _ {K} \\mathbf {Y} / \\mathbf {Y} ^ {\\mathrm {T}} \\mathbf {Y} \\right\\} \\tag {14}\n$$\n",
|
| 810 |
+
"text_format": "latex",
|
| 811 |
+
"bbox": [
|
| 812 |
+
111,
|
| 813 |
+
744,
|
| 814 |
+
857,
|
| 815 |
+
763
|
| 816 |
+
],
|
| 817 |
+
"page_idx": 6
|
| 818 |
+
},
|
| 819 |
+
{
|
| 820 |
+
"type": "text",
|
| 821 |
+
"text": "d. Gray-Level Co-occurrence Matrix",
|
| 822 |
+
"text_level": 1,
|
| 823 |
+
"bbox": [
|
| 824 |
+
111,
|
| 825 |
+
770,
|
| 826 |
+
436,
|
| 827 |
+
787
|
| 828 |
+
],
|
| 829 |
+
"page_idx": 6
|
| 830 |
+
},
|
| 831 |
+
{
|
| 832 |
+
"type": "text",
|
| 833 |
+
"text": "It is a statistical method that describes the spatial relationship of pixels based on the spatial grey-level dependence matrix. GLCM [22] calculates the texture of the image by calculating the frequency of corresponding pixels and their spatial relationships. All the thirteen features which are used in proposed work as follow contrast, correlation, energy, homogeneity, mean, standard deviation, kurtosis, skewness, variance, smoothness, IDM, RMS, entropy. There detailed information is given below:",
|
| 834 |
+
"bbox": [
|
| 835 |
+
109,
|
| 836 |
+
795,
|
| 837 |
+
885,
|
| 838 |
+
902
|
| 839 |
+
],
|
| 840 |
+
"page_idx": 6
|
| 841 |
+
},
|
| 842 |
+
{
|
| 843 |
+
"type": "page_number",
|
| 844 |
+
"text": "7",
|
| 845 |
+
"bbox": [
|
| 846 |
+
870,
|
| 847 |
+
939,
|
| 848 |
+
883,
|
| 849 |
+
952
|
| 850 |
+
],
|
| 851 |
+
"page_idx": 6
|
| 852 |
+
},
|
| 853 |
+
{
|
| 854 |
+
"type": "equation",
|
| 855 |
+
"text": "\n$$\n\\text {C o n t r a s t} (\\mathbf {C}) = \\sum_ {\\mathrm {t}, \\mathrm {r} = 1} ^ {\\mathrm {T}, \\mathrm {R}} | \\mathrm {t} - \\mathrm {r} | ^ {2} \\mathbf {q} (\\mathrm {t}, \\mathrm {r}), \\tag {15}\n$$\n",
|
| 856 |
+
"text_format": "latex",
|
| 857 |
+
"bbox": [
|
| 858 |
+
112,
|
| 859 |
+
89,
|
| 860 |
+
856,
|
| 861 |
+
112
|
| 862 |
+
],
|
| 863 |
+
"page_idx": 7
|
| 864 |
+
},
|
| 865 |
+
{
|
| 866 |
+
"type": "text",
|
| 867 |
+
"text": "where, $\\mathbf{q}(\\mathrm{t},\\mathrm{r})$ is GLCM, t & r are row & column,T is total rows and R is total columns.",
|
| 868 |
+
"bbox": [
|
| 869 |
+
111,
|
| 870 |
+
119,
|
| 871 |
+
792,
|
| 872 |
+
137
|
| 873 |
+
],
|
| 874 |
+
"page_idx": 7
|
| 875 |
+
},
|
| 876 |
+
{
|
| 877 |
+
"type": "equation",
|
| 878 |
+
"text": "\n$$\n\\text {C o r r e l a t i o n} (\\operatorname {C o r r}) = \\sum_ {\\mathrm {t}, \\mathrm {r} = 1} ^ {\\mathrm {T}, \\mathrm {R}} \\left((\\mathrm {t} - \\mu) (\\mathrm {r} - \\mu) \\mathbf {q} (\\mathrm {t}, \\mathrm {r})\\right) / (\\sigma (\\mathrm {t}) * \\sigma (\\mathrm {r})), \\tag {16}\n$$\n",
|
| 879 |
+
"text_format": "latex",
|
| 880 |
+
"bbox": [
|
| 881 |
+
112,
|
| 882 |
+
143,
|
| 883 |
+
856,
|
| 884 |
+
167
|
| 885 |
+
],
|
| 886 |
+
"page_idx": 7
|
| 887 |
+
},
|
| 888 |
+
{
|
| 889 |
+
"type": "text",
|
| 890 |
+
"text": "where, mean is $\\mu$ and standard deviation is $\\sigma$",
|
| 891 |
+
"bbox": [
|
| 892 |
+
112,
|
| 893 |
+
175,
|
| 894 |
+
468,
|
| 895 |
+
191
|
| 896 |
+
],
|
| 897 |
+
"page_idx": 7
|
| 898 |
+
},
|
| 899 |
+
{
|
| 900 |
+
"type": "equation",
|
| 901 |
+
"text": "\n$$\n\\operatorname {E n e r g y} (\\mathbf {E}) = \\sum_ {\\mathrm {t}, \\mathrm {r} = 1} ^ {\\mathrm {T}, \\mathrm {R}} \\mathbf {q} (\\mathrm {t}, \\mathrm {r}) ^ {2} \\tag {17}\n$$\n",
|
| 902 |
+
"text_format": "latex",
|
| 903 |
+
"bbox": [
|
| 904 |
+
112,
|
| 905 |
+
199,
|
| 906 |
+
856,
|
| 907 |
+
220
|
| 908 |
+
],
|
| 909 |
+
"page_idx": 7
|
| 910 |
+
},
|
| 911 |
+
{
|
| 912 |
+
"type": "equation",
|
| 913 |
+
"text": "\n$$\n\\text {H o m o g e n e i t y} (\\mathbf {H}) = \\sum_ {\\mathrm {t}, \\mathrm {r} = 1} ^ {\\mathrm {T}, \\mathrm {R}} \\mathbf {q} (\\mathrm {t}, \\mathrm {r}) / (1 + | \\mathrm {t} - \\mathrm {r} |) \\tag {18}\n$$\n",
|
| 914 |
+
"text_format": "latex",
|
| 915 |
+
"bbox": [
|
| 916 |
+
112,
|
| 917 |
+
228,
|
| 918 |
+
856,
|
| 919 |
+
250
|
| 920 |
+
],
|
| 921 |
+
"page_idx": 7
|
| 922 |
+
},
|
| 923 |
+
{
|
| 924 |
+
"type": "equation",
|
| 925 |
+
"text": "\n$$\n\\text {M e a n} (\\mu) = \\frac {1}{T * R} * \\sum_ {t = 1} ^ {T} \\sum_ {r = 1} ^ {R} q (t, r) \\tag {19}\n$$\n",
|
| 926 |
+
"text_format": "latex",
|
| 927 |
+
"bbox": [
|
| 928 |
+
112,
|
| 929 |
+
258,
|
| 930 |
+
856,
|
| 931 |
+
285
|
| 932 |
+
],
|
| 933 |
+
"page_idx": 7
|
| 934 |
+
},
|
| 935 |
+
{
|
| 936 |
+
"type": "equation",
|
| 937 |
+
"text": "\n$$\n\\text {S t a n d a r d D e v i a t i o n} (\\boldsymbol {\\sigma}) = \\sqrt [ 2 ]{\\frac {1}{T * R} * \\sum_ {t = 1} ^ {T} \\sum_ {r = 1} ^ {R} \\left(\\mathbf {q} (t , r) - \\mu\\right)} \\tag {20}\n$$\n",
|
| 938 |
+
"text_format": "latex",
|
| 939 |
+
"bbox": [
|
| 940 |
+
112,
|
| 941 |
+
292,
|
| 942 |
+
856,
|
| 943 |
+
329
|
| 944 |
+
],
|
| 945 |
+
"page_idx": 7
|
| 946 |
+
},
|
| 947 |
+
{
|
| 948 |
+
"type": "equation",
|
| 949 |
+
"text": "\n$$\n\\text {K u r t o s i s} (\\mathbf {K}) = \\left\\{\\frac {1}{T * R} * \\sum_ {t = 1} ^ {T} \\sum_ {r = 1} ^ {R} \\left(\\left(\\mathbf {q} (t, r) - \\mu\\right) / \\sigma\\right) ^ {\\wedge} 4 \\right\\} - 3 \\tag {21}\n$$\n",
|
| 950 |
+
"text_format": "latex",
|
| 951 |
+
"bbox": [
|
| 952 |
+
112,
|
| 953 |
+
335,
|
| 954 |
+
856,
|
| 955 |
+
363
|
| 956 |
+
],
|
| 957 |
+
"page_idx": 7
|
| 958 |
+
},
|
| 959 |
+
{
|
| 960 |
+
"type": "equation",
|
| 961 |
+
"text": "\n$$\n\\operatorname {S k e w n e s s} (\\boldsymbol {\\sigma}) = \\sqrt [ 2 ]{\\frac {1}{T * R} * \\sum_ {t = 1} ^ {T} \\sum_ {r = 1} ^ {R} \\left(\\mathbf {q} (t , r) - \\mu\\right)} \\tag {22}\n$$\n",
|
| 962 |
+
"text_format": "latex",
|
| 963 |
+
"bbox": [
|
| 964 |
+
112,
|
| 965 |
+
369,
|
| 966 |
+
856,
|
| 967 |
+
406
|
| 968 |
+
],
|
| 969 |
+
"page_idx": 7
|
| 970 |
+
},
|
| 971 |
+
{
|
| 972 |
+
"type": "equation",
|
| 973 |
+
"text": "\n$$\n\\text {V a r i a n c e} (\\mathbf {V a r}) = \\frac {1}{T * R} * \\sum_ {t = 1} ^ {T} \\sum_ {r = 1} ^ {R} (\\mathbf {q} (t, r) - \\mu) \\tag {23}\n$$\n",
|
| 974 |
+
"text_format": "latex",
|
| 975 |
+
"bbox": [
|
| 976 |
+
112,
|
| 977 |
+
414,
|
| 978 |
+
856,
|
| 979 |
+
440
|
| 980 |
+
],
|
| 981 |
+
"page_idx": 7
|
| 982 |
+
},
|
| 983 |
+
{
|
| 984 |
+
"type": "equation",
|
| 985 |
+
"text": "\n$$\n\\text {S m o o t h n e s s} (\\mathbf {R}) = 1 - 1 / \\left(1 + \\sigma^ {2}\\right) \\tag {24}\n$$\n",
|
| 986 |
+
"text_format": "latex",
|
| 987 |
+
"bbox": [
|
| 988 |
+
112,
|
| 989 |
+
446,
|
| 990 |
+
856,
|
| 991 |
+
464
|
| 992 |
+
],
|
| 993 |
+
"page_idx": 7
|
| 994 |
+
},
|
| 995 |
+
{
|
| 996 |
+
"type": "equation",
|
| 997 |
+
"text": "\n$$\n\\operatorname {I D M} (\\mathbf {H H}) = \\sum_ {\\mathrm {t}, \\mathrm {r} = 1} ^ {\\mathrm {T}, \\mathrm {R}} \\frac {\\mathbf {q} (\\mathrm {t} , \\mathrm {r})}{1 + | \\mathrm {t} - \\mathrm {r} |} \\tag {25}\n$$\n",
|
| 998 |
+
"text_format": "latex",
|
| 999 |
+
"bbox": [
|
| 1000 |
+
112,
|
| 1001 |
+
472,
|
| 1002 |
+
856,
|
| 1003 |
+
501
|
| 1004 |
+
],
|
| 1005 |
+
"page_idx": 7
|
| 1006 |
+
},
|
| 1007 |
+
{
|
| 1008 |
+
"type": "equation",
|
| 1009 |
+
"text": "\n$$\n\\operatorname {R M S} (\\mathbf {y}) = \\sqrt [ 2 ]{\\sum_ {\\mathrm {t} , \\mathrm {r} = 1} ^ {\\mathrm {T} , \\mathrm {R}} \\left(\\left| \\mathbf {q} (\\mathrm {t} , \\mathrm {r}) \\right|\\right) ^ {2} / \\mathrm {T}} \\tag {26}\n$$\n",
|
| 1010 |
+
"text_format": "latex",
|
| 1011 |
+
"bbox": [
|
| 1012 |
+
112,
|
| 1013 |
+
508,
|
| 1014 |
+
856,
|
| 1015 |
+
545
|
| 1016 |
+
],
|
| 1017 |
+
"page_idx": 7
|
| 1018 |
+
},
|
| 1019 |
+
{
|
| 1020 |
+
"type": "equation",
|
| 1021 |
+
"text": "\n$$\n\\operatorname {E n t r o p y} (\\mathbf {h}) = - \\sum_ {\\mathrm {t}, \\mathrm {r} = 1} ^ {\\mathrm {T}, \\mathrm {R}} \\mathbf {q} (\\mathrm {t}, \\mathrm {r}) \\left(\\log \\mathbf {q} (\\mathrm {t}, \\mathrm {r})\\right) \\tag {27}\n$$\n",
|
| 1022 |
+
"text_format": "latex",
|
| 1023 |
+
"bbox": [
|
| 1024 |
+
112,
|
| 1025 |
+
551,
|
| 1026 |
+
856,
|
| 1027 |
+
574
|
| 1028 |
+
],
|
| 1029 |
+
"page_idx": 7
|
| 1030 |
+
},
|
| 1031 |
+
{
|
| 1032 |
+
"type": "text",
|
| 1033 |
+
"text": "e. Random Forest",
|
| 1034 |
+
"text_level": 1,
|
| 1035 |
+
"bbox": [
|
| 1036 |
+
112,
|
| 1037 |
+
584,
|
| 1038 |
+
277,
|
| 1039 |
+
598
|
| 1040 |
+
],
|
| 1041 |
+
"page_idx": 7
|
| 1042 |
+
},
|
| 1043 |
+
{
|
| 1044 |
+
"type": "text",
|
| 1045 |
+
"text": "Random Forest [23] is an ensemble classifier formed by the fusion of many decision trees. It calculates the result on the basis of the majority voting method. Random forest is more superior then the decision tree as it overcomes the problem of over-fitting. As a tree grows deep, they start to over-fit, i.e., they have low bias and high variance. Random forest uses the different parts of the same training dataset on different trees and helps them averaging multiple decision trees and avoid over-fitting, which increases bias and reduces variance, which boosts performance. Internal working of random forest is shown in Fig.5. We are using 100 decision trees, which are trained using training data consisting of 2172 images by concept of bagging.",
|
| 1046 |
+
"bbox": [
|
| 1047 |
+
109,
|
| 1048 |
+
607,
|
| 1049 |
+
883,
|
| 1050 |
+
748
|
| 1051 |
+
],
|
| 1052 |
+
"page_idx": 7
|
| 1053 |
+
},
|
| 1054 |
+
{
|
| 1055 |
+
"type": "page_number",
|
| 1056 |
+
"text": "8",
|
| 1057 |
+
"bbox": [
|
| 1058 |
+
870,
|
| 1059 |
+
939,
|
| 1060 |
+
882,
|
| 1061 |
+
950
|
| 1062 |
+
],
|
| 1063 |
+
"page_idx": 7
|
| 1064 |
+
},
|
| 1065 |
+
{
|
| 1066 |
+
"type": "image",
|
| 1067 |
+
"img_path": "images/6778354d145f8663203c52eee5d103d27fd99143dbb6f3265e344a9f41c0d0dc.jpg",
|
| 1068 |
+
"image_caption": [
|
| 1069 |
+
"Fig. 5 Internal working of Random Forest"
|
| 1070 |
+
],
|
| 1071 |
+
"image_footnote": [],
|
| 1072 |
+
"bbox": [
|
| 1073 |
+
184,
|
| 1074 |
+
103,
|
| 1075 |
+
883,
|
| 1076 |
+
340
|
| 1077 |
+
],
|
| 1078 |
+
"page_idx": 8
|
| 1079 |
+
},
|
| 1080 |
+
{
|
| 1081 |
+
"type": "text",
|
| 1082 |
+
"text": "f. Decision Tree",
|
| 1083 |
+
"text_level": 1,
|
| 1084 |
+
"bbox": [
|
| 1085 |
+
111,
|
| 1086 |
+
380,
|
| 1087 |
+
264,
|
| 1088 |
+
396
|
| 1089 |
+
],
|
| 1090 |
+
"page_idx": 8
|
| 1091 |
+
},
|
| 1092 |
+
{
|
| 1093 |
+
"type": "text",
|
| 1094 |
+
"text": "Decision Tree [24] is like a conditional control statements, which performs the research operations such as decision analysis. There occurs the problem of over-fitting when trees become deep enough. It is like a tree structure, where each node represents attribute or feature on bases of which one can get the outcome. Each leaf node holds the information related to the class label. Working of decision tree is shown in Fig.6. Features are used as internal nodes of the tree and class are leaf nodes.",
|
| 1095 |
+
"bbox": [
|
| 1096 |
+
109,
|
| 1097 |
+
404,
|
| 1098 |
+
883,
|
| 1099 |
+
508
|
| 1100 |
+
],
|
| 1101 |
+
"page_idx": 8
|
| 1102 |
+
},
|
| 1103 |
+
{
|
| 1104 |
+
"type": "image",
|
| 1105 |
+
"img_path": "images/4637b2787f95e0c253ec04f2933a82ce16c5d7da20d005238569d093bc421b20.jpg",
|
| 1106 |
+
"image_caption": [
|
| 1107 |
+
"Fig. 6 Working of Decision Tree"
|
| 1108 |
+
],
|
| 1109 |
+
"image_footnote": [],
|
| 1110 |
+
"bbox": [
|
| 1111 |
+
388,
|
| 1112 |
+
518,
|
| 1113 |
+
700,
|
| 1114 |
+
588
|
| 1115 |
+
],
|
| 1116 |
+
"page_idx": 8
|
| 1117 |
+
},
|
| 1118 |
+
{
|
| 1119 |
+
"type": "text",
|
| 1120 |
+
"text": "g. K-Nearest Neighbor",
|
| 1121 |
+
"text_level": 1,
|
| 1122 |
+
"bbox": [
|
| 1123 |
+
111,
|
| 1124 |
+
628,
|
| 1125 |
+
318,
|
| 1126 |
+
647
|
| 1127 |
+
],
|
| 1128 |
+
"page_idx": 8
|
| 1129 |
+
},
|
| 1130 |
+
{
|
| 1131 |
+
"type": "text",
|
| 1132 |
+
"text": "KNN [25] is a lazy learning technique in which functions are locally approximation. It can be used for both classification and regression. In which weights assign to neighbors based on distance, i.e., if the distance is $d$ , then weights assign as $1 / d$ such that nearest neighbors contribute more than distant. It has little time complexity because training consists of just calculating the Euclidean distance. Euclidean Distance Measurement is shown in Fig.7. We have two classes, one represented by using square and other by triangle. Now, prediction for testing data circle is done based on minimum Euclidean distance measurement.",
|
| 1133 |
+
"bbox": [
|
| 1134 |
+
109,
|
| 1135 |
+
654,
|
| 1136 |
+
883,
|
| 1137 |
+
776
|
| 1138 |
+
],
|
| 1139 |
+
"page_idx": 8
|
| 1140 |
+
},
|
| 1141 |
+
{
|
| 1142 |
+
"type": "page_number",
|
| 1143 |
+
"text": "9",
|
| 1144 |
+
"bbox": [
|
| 1145 |
+
870,
|
| 1146 |
+
939,
|
| 1147 |
+
883,
|
| 1148 |
+
952
|
| 1149 |
+
],
|
| 1150 |
+
"page_idx": 8
|
| 1151 |
+
},
|
| 1152 |
+
{
|
| 1153 |
+
"type": "image",
|
| 1154 |
+
"img_path": "images/d319e6c9af7c2dde65b83a58c987999cbbaa3387ee4a096a2d025d1913dd70c0.jpg",
|
| 1155 |
+
"image_caption": [
|
| 1156 |
+
"Fig. 7 Euclidean Distance Measurement"
|
| 1157 |
+
],
|
| 1158 |
+
"image_footnote": [],
|
| 1159 |
+
"bbox": [
|
| 1160 |
+
429,
|
| 1161 |
+
88,
|
| 1162 |
+
656,
|
| 1163 |
+
227
|
| 1164 |
+
],
|
| 1165 |
+
"page_idx": 9
|
| 1166 |
+
},
|
| 1167 |
+
{
|
| 1168 |
+
"type": "text",
|
| 1169 |
+
"text": "h. Hybrid Ensemble Classifier",
|
| 1170 |
+
"text_level": 1,
|
| 1171 |
+
"bbox": [
|
| 1172 |
+
111,
|
| 1173 |
+
261,
|
| 1174 |
+
388,
|
| 1175 |
+
279
|
| 1176 |
+
],
|
| 1177 |
+
"page_idx": 9
|
| 1178 |
+
},
|
| 1179 |
+
{
|
| 1180 |
+
"type": "text",
|
| 1181 |
+
"text": "Proposed hybrid ensemble classifier KNN-RF-DT shown in Fig. 8, where prediction is considered for at least two-ratio-one voting of classifiers to a specific class such as benign or malignant.",
|
| 1182 |
+
"bbox": [
|
| 1183 |
+
109,
|
| 1184 |
+
285,
|
| 1185 |
+
885,
|
| 1186 |
+
338
|
| 1187 |
+
],
|
| 1188 |
+
"page_idx": 9
|
| 1189 |
+
},
|
| 1190 |
+
{
|
| 1191 |
+
"type": "image",
|
| 1192 |
+
"img_path": "images/c805b2466a6aa75176f56062cebf2d2ead23ea41e8f7b81dd9f01e836a694715.jpg",
|
| 1193 |
+
"image_caption": [
|
| 1194 |
+
"Fig. 8 Proposed Hybrid Ensemble Classifier"
|
| 1195 |
+
],
|
| 1196 |
+
"image_footnote": [],
|
| 1197 |
+
"bbox": [
|
| 1198 |
+
320,
|
| 1199 |
+
347,
|
| 1200 |
+
740,
|
| 1201 |
+
498
|
| 1202 |
+
],
|
| 1203 |
+
"page_idx": 9
|
| 1204 |
+
},
|
| 1205 |
+
{
|
| 1206 |
+
"type": "text",
|
| 1207 |
+
"text": "Algorithm-1 Classification process with Hybrid Ensemble Classifier",
|
| 1208 |
+
"text_level": 1,
|
| 1209 |
+
"bbox": [
|
| 1210 |
+
112,
|
| 1211 |
+
574,
|
| 1212 |
+
661,
|
| 1213 |
+
592
|
| 1214 |
+
],
|
| 1215 |
+
"page_idx": 9
|
| 1216 |
+
},
|
| 1217 |
+
{
|
| 1218 |
+
"type": "list",
|
| 1219 |
+
"sub_type": "text",
|
| 1220 |
+
"list_items": [
|
| 1221 |
+
"1. Mdl1 $\\leftarrow$ Model of KNN with $K = 1$",
|
| 1222 |
+
"2. Mdl2 - Model of Random Forest with 100 Trees",
|
| 1223 |
+
"3. Mdl3 $\\longleftarrow$ Model of Decision Tree",
|
| 1224 |
+
"4. p1 $\\longleftarrow$ predict from Md11",
|
| 1225 |
+
"5. p2 $\\longleftarrow$ predict from Mdl2",
|
| 1226 |
+
"6. p3 predict from Mdl3",
|
| 1227 |
+
"7. var right $\\text{一}$ 0 and var left 0",
|
| 1228 |
+
"8. if p1 is \"Malignant\" then",
|
| 1229 |
+
"9. right $\\longleftarrow$ right+1",
|
| 1230 |
+
"10. else",
|
| 1231 |
+
"11. left left+1",
|
| 1232 |
+
"12. end"
|
| 1233 |
+
],
|
| 1234 |
+
"bbox": [
|
| 1235 |
+
142,
|
| 1236 |
+
599,
|
| 1237 |
+
612,
|
| 1238 |
+
891
|
| 1239 |
+
],
|
| 1240 |
+
"page_idx": 9
|
| 1241 |
+
},
|
| 1242 |
+
{
|
| 1243 |
+
"type": "page_number",
|
| 1244 |
+
"text": "10",
|
| 1245 |
+
"bbox": [
|
| 1246 |
+
862,
|
| 1247 |
+
939,
|
| 1248 |
+
883,
|
| 1249 |
+
952
|
| 1250 |
+
],
|
| 1251 |
+
"page_idx": 9
|
| 1252 |
+
},
|
| 1253 |
+
{
|
| 1254 |
+
"type": "list",
|
| 1255 |
+
"sub_type": "text",
|
| 1256 |
+
"list_items": [
|
| 1257 |
+
"13. if p2 is \"Malignant\" then",
|
| 1258 |
+
"14. right $\\longleftarrow$ right+1",
|
| 1259 |
+
"15. else",
|
| 1260 |
+
"16. left left+1",
|
| 1261 |
+
"17. end",
|
| 1262 |
+
"18. if p3 is \"Malignant\" then",
|
| 1263 |
+
"19. right $\\longleftarrow$ right+1",
|
| 1264 |
+
"20. else",
|
| 1265 |
+
"21. left left+1",
|
| 1266 |
+
"22. end",
|
| 1267 |
+
"23. if right is greater then left then",
|
| 1268 |
+
"24. species $\\leftarrow$ \"Malignant\"",
|
| 1269 |
+
"25. else",
|
| 1270 |
+
"26. species $\\longleftarrow$ \"Benign\"",
|
| 1271 |
+
"27. end"
|
| 1272 |
+
],
|
| 1273 |
+
"bbox": [
|
| 1274 |
+
143,
|
| 1275 |
+
95,
|
| 1276 |
+
437,
|
| 1277 |
+
464
|
| 1278 |
+
],
|
| 1279 |
+
"page_idx": 10
|
| 1280 |
+
},
|
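Algorithm-1 above amounts to a 2-of-3 majority vote over the three base classifiers. A minimal Python sketch of just the voting step (the `ensemble_vote` function and its string labels are illustrative, not the paper's MATLAB implementation):

```python
def ensemble_vote(p1, p2, p3):
    """Majority (at least 2-of-3) vote over the KNN, RF and DT predictions."""
    right = sum(1 for p in (p1, p2, p3) if p == "Malignant")  # votes for malignant
    left = 3 - right                                          # votes for benign
    return "Malignant" if right > left else "Benign"

print(ensemble_vote("Malignant", "Benign", "Malignant"))  # Malignant
```

Because there are exactly three voters and two classes, a tie is impossible, so the `right > left` test alone decides the class.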
| 1281 |
+
{
|
| 1282 |
+
"type": "text",
|
| 1283 |
+
"text": "4. Implementation and Results",
|
| 1284 |
+
"text_level": 1,
|
| 1285 |
+
"bbox": [
|
| 1286 |
+
111,
|
| 1287 |
+
515,
|
| 1288 |
+
383,
|
| 1289 |
+
532
|
| 1290 |
+
],
|
| 1291 |
+
"page_idx": 10
|
| 1292 |
+
},
|
| 1293 |
+
{
|
| 1294 |
+
"type": "text",
|
| 1295 |
+
"text": "a. Dataset used and various SWT filter's Matrix Representations",
|
| 1296 |
+
"text_level": 1,
|
| 1297 |
+
"bbox": [
|
| 1298 |
+
111,
|
| 1299 |
+
540,
|
| 1300 |
+
660,
|
| 1301 |
+
559
|
| 1302 |
+
],
|
| 1303 |
+
"page_idx": 10
|
| 1304 |
+
},
|
| 1305 |
+
{
|
| 1306 |
+
"type": "text",
|
| 1307 |
+
"text": "In the proposed work, Cancer Genome Atlas Glioblastoma Multi-forme (TCGA-GBM) [14] data collection is used to conduct the experimental computation of the proposed approach. This is an open and standard Glioblastoma Multi-forme dataset, which is main type of brain tumor. It is available freely for research work and highly accurate dataset. Hence, no decision from any committee is required on this dataset. The augmentation process is also used to increase dataset such that we have an average of 2556 samples of T1-weighted images used to test the proposed approach, based on testing and training images in ratio 85:15. A distribution of dataset between testing and training is shown in Table.1 and image segmentation using Otsu's method and 1-D SWT filters is shown in Fig. 9.",
|
| 1308 |
+
"bbox": [
|
| 1309 |
+
109,
|
| 1310 |
+
565,
|
| 1311 |
+
883,
|
| 1312 |
+
722
|
| 1313 |
+
],
|
| 1314 |
+
"page_idx": 10
|
| 1315 |
+
},
|
| 1316 |
+
{
|
| 1317 |
+
"type": "table",
|
| 1318 |
+
"img_path": "images/8cad42ea2f64441d7adca8ce8ed8300b201849b7ab3985d9c1ac03d83a855e43.jpg",
|
| 1319 |
+
"table_caption": [
|
| 1320 |
+
"Table 1. Database for Benign and Malignant classification"
|
| 1321 |
+
],
|
| 1322 |
+
"table_footnote": [],
|
| 1323 |
+
"table_body": "<table><tr><td>Database</td><td>Training Dataset</td><td>Testing Dataset</td></tr><tr><td>Benign</td><td>1086</td><td>192</td></tr><tr><td>Malignant</td><td>1086</td><td>192</td></tr></table>",
|
| 1324 |
+
"bbox": [
|
| 1325 |
+
120,
|
| 1326 |
+
753,
|
| 1327 |
+
869,
|
| 1328 |
+
834
|
| 1329 |
+
],
|
| 1330 |
+
"page_idx": 10
|
| 1331 |
+
},
|
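The counts in Table 1 can be checked against the stated 85:15 split with simple arithmetic (a quick sanity check, not part of the paper):

```python
train = 1086 + 1086  # benign + malignant training samples (Table 1)
test = 192 + 192     # benign + malignant testing samples (Table 1)
total = train + test
print(total, round(train / total, 2))  # 2556 0.85
```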
| 1332 |
+
{
|
| 1333 |
+
"type": "page_number",
|
| 1334 |
+
"text": "11",
|
| 1335 |
+
"bbox": [
|
| 1336 |
+
862,
|
| 1337 |
+
939,
|
| 1338 |
+
882,
|
| 1339 |
+
952
|
| 1340 |
+
],
|
| 1341 |
+
"page_idx": 10
|
| 1342 |
+
},
|
| 1343 |
+
{
|
| 1344 |
+
"type": "image",
|
| 1345 |
+
"img_path": "images/b5f323188982875239d3bbf46db743fe8cf3281f32c65030d014beb966a08ae3.jpg",
|
| 1346 |
+
"image_caption": [
|
| 1347 |
+
"Fig. 9 Results of Otsu's, 1-D SWT filters with Approximation, Horizontal, Vertical and Diagonal matrix."
|
| 1348 |
+
],
|
| 1349 |
+
"image_footnote": [],
|
| 1350 |
+
"bbox": [
|
| 1351 |
+
125,
|
| 1352 |
+
104,
|
| 1353 |
+
877,
|
| 1354 |
+
645
|
| 1355 |
+
],
|
| 1356 |
+
"page_idx": 11
|
| 1357 |
+
},
|
| 1358 |
+
{
|
| 1359 |
+
"type": "text",
|
| 1360 |
+
"text": "b. Evaluation Metrics and its Graphical Representation",
|
| 1361 |
+
"text_level": 1,
|
| 1362 |
+
"bbox": [
|
| 1363 |
+
111,
|
| 1364 |
+
696,
|
| 1365 |
+
550,
|
| 1366 |
+
713
|
| 1367 |
+
],
|
| 1368 |
+
"page_idx": 11
|
| 1369 |
+
},
|
| 1370 |
+
{
|
| 1371 |
+
"type": "text",
|
| 1372 |
+
"text": "The following parameters such as Accuracy, Precision, Sensitivity, specificity, F1-score, and Youden index are calculated based on proposed methodology with the help of False negative (FN), True Negative (TN), True Positive (TP), False Positive (FP). The equations are given below:",
|
| 1373 |
+
"bbox": [
|
| 1374 |
+
109,
|
| 1375 |
+
720,
|
| 1376 |
+
883,
|
| 1377 |
+
787
|
| 1378 |
+
],
|
| 1379 |
+
"page_idx": 11
|
| 1380 |
+
},
|
| 1381 |
+
{
|
| 1382 |
+
"type": "equation",
|
| 1383 |
+
"text": "\n$$\n\\text {A c c u r a c y} = \\frac {(T P + T N)}{(T P + T N + F P + F N)} \\tag {28}\n$$\n",
|
| 1384 |
+
"text_format": "latex",
|
| 1385 |
+
"bbox": [
|
| 1386 |
+
155,
|
| 1387 |
+
790,
|
| 1388 |
+
841,
|
| 1389 |
+
828
|
| 1390 |
+
],
|
| 1391 |
+
"page_idx": 11
|
| 1392 |
+
},
|
| 1393 |
+
{
|
| 1394 |
+
"type": "equation",
|
| 1395 |
+
"text": "\n$$\n\\text {S e n s i t i v i t y} = \\frac {T P}{(\\mathrm {T P} + \\mathrm {F N})} \\tag {29}\n$$\n",
|
| 1396 |
+
"text_format": "latex",
|
| 1397 |
+
"bbox": [
|
| 1398 |
+
155,
|
| 1399 |
+
829,
|
| 1400 |
+
841,
|
| 1401 |
+
867
|
| 1402 |
+
],
|
| 1403 |
+
"page_idx": 11
|
| 1404 |
+
},
|
| 1405 |
+
{
|
| 1406 |
+
"type": "equation",
|
| 1407 |
+
"text": "\n$$\n\\text {S p e c i f i c i t y} = \\frac {T N}{(\\mathrm {T N} + \\mathrm {F P})} \\tag {30}\n$$\n",
|
| 1408 |
+
"text_format": "latex",
|
| 1409 |
+
"bbox": [
|
| 1410 |
+
155,
|
| 1411 |
+
867,
|
| 1412 |
+
839,
|
| 1413 |
+
904
|
| 1414 |
+
],
|
| 1415 |
+
"page_idx": 11
|
| 1416 |
+
},
|
| 1417 |
+
{
|
| 1418 |
+
"type": "page_number",
|
| 1419 |
+
"text": "12",
|
| 1420 |
+
"bbox": [
|
| 1421 |
+
862,
|
| 1422 |
+
939,
|
| 1423 |
+
882,
|
| 1424 |
+
952
|
| 1425 |
+
],
|
| 1426 |
+
"page_idx": 11
|
| 1427 |
+
},
|
| 1428 |
+
{
|
| 1429 |
+
"type": "equation",
|
| 1430 |
+
"text": "\n$$\n\\text {Y o u d e n I n d e x} = \\text {S e n s i t i v i t y} + \\text {S p e c i f i c i t y} - 1 \\tag {31}\n$$\n",
|
| 1431 |
+
"text_format": "latex",
|
| 1432 |
+
"bbox": [
|
| 1433 |
+
155,
|
| 1434 |
+
89,
|
| 1435 |
+
841,
|
| 1436 |
+
108
|
| 1437 |
+
],
|
| 1438 |
+
"page_idx": 12
|
| 1439 |
+
},
|
| 1440 |
+
{
|
| 1441 |
+
"type": "equation",
|
| 1442 |
+
"text": "\n$$\n\\text {P r e c i s i o n} = \\frac {T P}{(\\mathrm {T P} + \\mathrm {F P})} \\tag {32}\n$$\n",
|
| 1443 |
+
"text_format": "latex",
|
| 1444 |
+
"bbox": [
|
| 1445 |
+
155,
|
| 1446 |
+
108,
|
| 1447 |
+
839,
|
| 1448 |
+
145
|
| 1449 |
+
],
|
| 1450 |
+
"page_idx": 12
|
| 1451 |
+
},
|
| 1452 |
+
{
|
| 1453 |
+
"type": "equation",
|
| 1454 |
+
"text": "\n$$\nF 1 - S c o r e = \\frac {2 * P r e c i s i o n * S e n s i t i v i t y}{P r e c i s i o n + S e n s i t i v i t y} \\tag {33}\n$$\n",
|
| 1455 |
+
"text_format": "latex",
|
| 1456 |
+
"bbox": [
|
| 1457 |
+
155,
|
| 1458 |
+
145,
|
| 1459 |
+
839,
|
| 1460 |
+
184
|
| 1461 |
+
],
|
| 1462 |
+
"page_idx": 12
|
| 1463 |
+
},
|
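Eqs. (28)-(33) translate directly into code; a small sketch computing all six metrics from the four counts (the example counts below are placeholders, not the paper's results):

```python
def metrics(tp, tn, fp, fn):
    """Evaluation metrics of Eqs. (28)-(33) from the confusion counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)                    # Eq. (28)
    sensitivity = tp / (tp + fn)                                  # Eq. (29)
    specificity = tn / (tn + fp)                                  # Eq. (30)
    youden = sensitivity + specificity - 1                        # Eq. (31)
    precision = tp / (tp + fp)                                    # Eq. (32)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)  # Eq. (33)
    return accuracy, sensitivity, specificity, youden, precision, f1

acc, sens, spec, yi, prec, f1 = metrics(tp=90, tn=85, fp=15, fn=10)
print(round(acc, 3))  # 0.875
```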
| 1464 |
+
{
|
| 1465 |
+
"type": "text",
|
| 1466 |
+
"text": "The results obtained in terms of considered evaluation metrics considered are shown in Table 2.",
|
| 1467 |
+
"bbox": [
|
| 1468 |
+
111,
|
| 1469 |
+
191,
|
| 1470 |
+
872,
|
| 1471 |
+
209
|
| 1472 |
+
],
|
| 1473 |
+
"page_idx": 12
|
| 1474 |
+
},
|
| 1475 |
+
{
|
| 1476 |
+
"type": "table",
|
| 1477 |
+
"img_path": "images/439d9ddd96a1dee4a18f874b0b772dde39cd8928814fcb0ebaca3cbfe3ea21eb.jpg",
|
| 1478 |
+
"table_caption": [
|
| 1479 |
+
"Table 2. Classification results of various classifiers and proposed scheme"
|
| 1480 |
+
],
|
| 1481 |
+
"table_footnote": [],
|
| 1482 |
+
"table_body": "<table><tr><td colspan=\"2\">Classifier</td><td>Accuracy %</td><td>Precision %</td><td>Sensitivity %</td><td>F1-score%</td><td>Youden Index%</td><td>Specifi city%</td></tr><tr><td colspan=\"2\">Proposed Method (KNN-RF-DT)</td><td>97.305</td><td>97.73</td><td>97.04</td><td>97.41</td><td>94.71</td><td>97.60</td></tr><tr><td>SVM</td><td>RBF</td><td>93.038</td><td>92.38</td><td>93.82</td><td>94.79</td><td>89.50</td><td>92.26</td></tr><tr><td>Kern</td><td>Linear</td><td>85.56</td><td>85.20</td><td>86.07</td><td>85.41</td><td>70.72</td><td>85.05</td></tr><tr><td>el</td><td>Polynomial</td><td>89.39</td><td>88.79</td><td>90.22</td><td>90.25</td><td>80.39</td><td>88.58</td></tr><tr><td colspan=\"2\">Naïve Bayes</td><td>81.33</td><td>81.68</td><td>80.83</td><td>81.62</td><td>63.54</td><td>81.85</td></tr><tr><td colspan=\"2\">Decision Tree</td><td>93.157</td><td>93.58</td><td>92.80</td><td>95.45</td><td>90.98</td><td>93.51</td></tr><tr><td colspan=\"2\">Neural Network</td><td>93</td><td>92.57</td><td>93.27</td><td>95.30</td><td>90.61</td><td>92.76</td></tr><tr><td colspan=\"2\">KNN</td><td>94.765</td><td>94.92</td><td>94.30</td><td>94.60</td><td>89.53</td><td>95.23</td></tr></table>",
|
| 1483 |
+
"bbox": [
|
| 1484 |
+
112,
|
| 1485 |
+
258,
|
| 1486 |
+
883,
|
| 1487 |
+
433
|
| 1488 |
+
],
|
| 1489 |
+
"page_idx": 12
|
| 1490 |
+
},
|
| 1491 |
+
{
|
| 1492 |
+
"type": "text",
|
| 1493 |
+
"text": "From the results shown in Table 2, we can conclude that the proposed method with accuracy $97.305\\%$ outperforms the compared classifiers. We used KNN, RF and DT in hybrid ensemble classifier because they give best performance in terms of various evaluation parameters when compared to other classifiers but it increases time complexity due to computation by three individual classifiers. Accuracy of Random Forest is always greater than the Decision tree due to which RF is included in hybrid ensemble classifier. Sensitivity and Specificity parameters are quite comparable percentage-wise. Sensitivity results describe benign tumors, which are good determinative, as it is merely a total positive divide by the total actual benign tumor. Specificity results describe malignant tumors, which are good determinative, as it is merely a total negative divide by total actual malignant tumor. Youden-index describes the maximum difference between true-positive, and false-positive, high Youden-index means true-positive results are quite high as compared to false positive. F1-score describes the balance between precision and Sensitivity, which is vital as there occur uneven class distribution. Precision describes the percentage of actual positive (TP) among the entire predicted positive $(\\mathrm{TP} + \\mathrm{FP})$ , which is quite high for the proposed method. Overall, the comparison between the proposed and already existing classification methods, which are implemented in above table is shown below in Fig. 10.",
|
| 1494 |
+
"bbox": [
|
| 1495 |
+
109,
|
| 1496 |
+
440,
|
| 1497 |
+
883,
|
| 1498 |
+
720
|
| 1499 |
+
],
|
| 1500 |
+
"page_idx": 12
|
| 1501 |
+
},
|
| 1502 |
+
{
|
| 1503 |
+
"type": "page_number",
|
| 1504 |
+
"text": "13",
|
| 1505 |
+
"bbox": [
|
| 1506 |
+
862,
|
| 1507 |
+
939,
|
| 1508 |
+
882,
|
| 1509 |
+
952
|
| 1510 |
+
],
|
| 1511 |
+
"page_idx": 12
|
| 1512 |
+
},
|
| 1513 |
+
{
|
| 1514 |
+
"type": "image",
|
| 1515 |
+
"img_path": "images/19e8a688d7b13bb2dcbd464b6ca02354bca58c588f6cd7d0b2a6b37631d452e6.jpg",
|
| 1516 |
+
"image_caption": [
|
| 1517 |
+
"Fig. 10 Comparison of proposed and existing classification methods based on performance metrics."
|
| 1518 |
+
],
|
| 1519 |
+
"image_footnote": [],
|
| 1520 |
+
"bbox": [
|
| 1521 |
+
122,
|
| 1522 |
+
88,
|
| 1523 |
+
883,
|
| 1524 |
+
354
|
| 1525 |
+
],
|
| 1526 |
+
"page_idx": 13
|
| 1527 |
+
},
|
| 1528 |
+
{
|
| 1529 |
+
"type": "text",
|
| 1530 |
+
"text": "c. Confusion Matrix and GUI",
|
| 1531 |
+
"text_level": 1,
|
| 1532 |
+
"bbox": [
|
| 1533 |
+
112,
|
| 1534 |
+
412,
|
| 1535 |
+
352,
|
| 1536 |
+
429
|
| 1537 |
+
],
|
| 1538 |
+
"page_idx": 13
|
| 1539 |
+
},
|
| 1540 |
+
{
|
| 1541 |
+
"type": "image",
|
| 1542 |
+
"img_path": "images/6b76ae5869e4e4335b51e5c12e6c9092b35d64c5c8925b83a875183aeccd9da1.jpg",
|
| 1543 |
+
"image_caption": [
|
| 1544 |
+
"Fig. 11 Confusion matrix of the proposed method."
|
| 1545 |
+
],
|
| 1546 |
+
"image_footnote": [],
|
| 1547 |
+
"bbox": [
|
| 1548 |
+
196,
|
| 1549 |
+
440,
|
| 1550 |
+
588,
|
| 1551 |
+
575
|
| 1552 |
+
],
|
| 1553 |
+
"page_idx": 13
|
| 1554 |
+
},
|
| 1555 |
+
{
|
| 1556 |
+
"type": "text",
|
| 1557 |
+
"text": "Confusion matrix for proposed method KNN-RF-DT based on majority voting is shown in Fig.11. 1249 of 1278 benign tumors are classified as benign, while 29 tumors as malignant. Besides, 1239 of 1278 malignant tumors are classified as malignant, while 39 tumors as benign. Overall, a good accuracy of 97.305 is obtained using the proposed method.",
|
| 1558 |
+
"bbox": [
|
| 1559 |
+
109,
|
| 1560 |
+
616,
|
| 1561 |
+
883,
|
| 1562 |
+
686
|
| 1563 |
+
],
|
| 1564 |
+
"page_idx": 13
|
| 1565 |
+
},
|
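As a quick arithmetic check, the accuracy implied by the Fig. 11 counts can be recomputed (benign is treated as the positive class, as in the sensitivity discussion above):

```python
tp, fn = 1249, 29  # benign classified as benign / as malignant (Fig. 11)
tn, fp = 1239, 39  # malignant classified as malignant / as benign (Fig. 11)
accuracy = (tp + tn) / (tp + fn + tn + fp)
print(round(100 * accuracy, 2))  # 97.34, close to the reported 97.305%
```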
| 1566 |
+
{
|
| 1567 |
+
"type": "text",
|
| 1568 |
+
"text": "Overall, the proposed method gives excellent performance on various evaluation parameters as described above in comparison to existing methods. Thus, the proposed method is novel and effective to work for the classification of benign and malignant brain tumors.",
|
| 1569 |
+
"bbox": [
|
| 1570 |
+
109,
|
| 1571 |
+
686,
|
| 1572 |
+
883,
|
| 1573 |
+
738
|
| 1574 |
+
],
|
| 1575 |
+
"page_idx": 13
|
| 1576 |
+
},
|
| 1577 |
+
{
|
| 1578 |
+
"type": "text",
|
| 1579 |
+
"text": "GUI for this proposed method makes it more user-friendly, which is implemented in MATLAB 2017a itself shown below in Fig.12",
|
| 1580 |
+
"bbox": [
|
| 1581 |
+
109,
|
| 1582 |
+
738,
|
| 1583 |
+
880,
|
| 1584 |
+
773
|
| 1585 |
+
],
|
| 1586 |
+
"page_idx": 13
|
| 1587 |
+
},
|
| 1588 |
+
{
|
| 1589 |
+
"type": "page_number",
|
| 1590 |
+
"text": "14",
|
| 1591 |
+
"bbox": [
|
| 1592 |
+
862,
|
| 1593 |
+
939,
|
| 1594 |
+
883,
|
| 1595 |
+
952
|
| 1596 |
+
],
|
| 1597 |
+
"page_idx": 13
|
| 1598 |
+
},
|
| 1599 |
+
{
|
| 1600 |
+
"type": "image",
|
| 1601 |
+
"img_path": "images/6563c1d800b09800696dfd240cce8bed5593b1e5a6514a8106fb9b6563b33b8c.jpg",
|
| 1602 |
+
"image_caption": [
|
| 1603 |
+
"Fig. 12 GUI for the above-proposed method in Matlab 2017a."
|
| 1604 |
+
],
|
| 1605 |
+
"image_footnote": [],
|
| 1606 |
+
"bbox": [
|
| 1607 |
+
117,
|
| 1608 |
+
88,
|
| 1609 |
+
883,
|
| 1610 |
+
375
|
| 1611 |
+
],
|
| 1612 |
+
"page_idx": 14
|
| 1613 |
+
},
|
| 1614 |
+
{
|
| 1615 |
+
"type": "text",
|
| 1616 |
+
"text": "d. Area Calculation for segmented Region",
|
| 1617 |
+
"bbox": [
|
| 1618 |
+
111,
|
| 1619 |
+
406,
|
| 1620 |
+
450,
|
| 1621 |
+
422
|
| 1622 |
+
],
|
| 1623 |
+
"page_idx": 14
|
| 1624 |
+
},
|
| 1625 |
+
{
|
| 1626 |
+
"type": "text",
|
| 1627 |
+
"text": "Area calculation for segmented images is shown below in Table 3 using formula:",
|
| 1628 |
+
"bbox": [
|
| 1629 |
+
111,
|
| 1630 |
+
430,
|
| 1631 |
+
754,
|
| 1632 |
+
448
|
| 1633 |
+
],
|
| 1634 |
+
"page_idx": 14
|
| 1635 |
+
},
|
| 1636 |
+
{
|
| 1637 |
+
"type": "equation",
|
| 1638 |
+
"text": "\n$$\n\\operatorname {I m a g e}, \\mathrm {I} = \\sum_ {w = 0} ^ {2 0 0} \\sum_ {h = 0} ^ {2 0 0} [ \\mathrm {g} (0) + \\mathrm {g} (1) ] \\tag {34}\n$$\n",
|
| 1639 |
+
"text_format": "latex",
|
| 1640 |
+
"bbox": [
|
| 1641 |
+
112,
|
| 1642 |
+
449,
|
| 1643 |
+
856,
|
| 1644 |
+
484
|
| 1645 |
+
],
|
| 1646 |
+
"page_idx": 14
|
| 1647 |
+
},
|
| 1648 |
+
{
|
| 1649 |
+
"type": "equation",
|
| 1650 |
+
"text": "\n$$\n\\text {w h e r e P i x e l s} = \\text {w i d t h (w) * h e i g h t (h)} = 2 0 0 * 2 0 0\n$$\n",
|
| 1651 |
+
"text_format": "latex",
|
| 1652 |
+
"bbox": [
|
| 1653 |
+
173,
|
| 1654 |
+
484,
|
| 1655 |
+
560,
|
| 1656 |
+
501
|
| 1657 |
+
],
|
| 1658 |
+
"page_idx": 14
|
| 1659 |
+
},
|
| 1660 |
+
{
|
| 1661 |
+
"type": "equation",
|
| 1662 |
+
"text": "\n$$\n\\mathrm {g} (0) = \\text {b l a c k p i x e l (d i g i t 0)}\n$$\n",
|
| 1663 |
+
"text_format": "latex",
|
| 1664 |
+
"bbox": [
|
| 1665 |
+
173,
|
| 1666 |
+
503,
|
| 1667 |
+
385,
|
| 1668 |
+
520
|
| 1669 |
+
],
|
| 1670 |
+
"page_idx": 14
|
| 1671 |
+
},
|
| 1672 |
+
{
|
| 1673 |
+
"type": "equation",
|
| 1674 |
+
"text": "\n$$\n\\mathrm {g} (1) = \\text {w h i t e p i x e l (d i g i t 1)}\n$$\n",
|
| 1675 |
+
"text_format": "latex",
|
| 1676 |
+
"bbox": [
|
| 1677 |
+
173,
|
| 1678 |
+
521,
|
| 1679 |
+
380,
|
| 1680 |
+
537
|
| 1681 |
+
],
|
| 1682 |
+
"page_idx": 14
|
| 1683 |
+
},
|
| 1684 |
+
{
|
| 1685 |
+
"type": "equation",
|
| 1686 |
+
"text": "\n$$\n\\text {N o . o f w h i t e p i x e l s ,} \\mathrm {P} = \\sum_ {w = 0} ^ {2 0 0} \\sum_ {h = 0} ^ {2 0 0} [ \\mathrm {g} (1) ], \\tag {35}\n$$\n",
|
| 1687 |
+
"text_format": "latex",
|
| 1688 |
+
"bbox": [
|
| 1689 |
+
112,
|
| 1690 |
+
537,
|
| 1691 |
+
856,
|
| 1692 |
+
575
|
| 1693 |
+
],
|
| 1694 |
+
"page_idx": 14
|
| 1695 |
+
},
|
| 1696 |
+
{
|
| 1697 |
+
"type": "text",
|
| 1698 |
+
"text": "where $\\mathbf{P}$ is number of white pixels and 1Pixel = .264 mm",
|
| 1699 |
+
"bbox": [
|
| 1700 |
+
173,
|
| 1701 |
+
575,
|
| 1702 |
+
622,
|
| 1703 |
+
590
|
| 1704 |
+
],
|
| 1705 |
+
"page_idx": 14
|
| 1706 |
+
},
|
| 1707 |
+
{
|
| 1708 |
+
"type": "text",
|
| 1709 |
+
"text": "The formula for area calculation as follow:",
|
| 1710 |
+
"bbox": [
|
| 1711 |
+
112,
|
| 1712 |
+
592,
|
| 1713 |
+
454,
|
| 1714 |
+
607
|
| 1715 |
+
],
|
| 1716 |
+
"page_idx": 14
|
| 1717 |
+
},
|
| 1718 |
+
{
|
| 1719 |
+
"type": "equation",
|
| 1720 |
+
"text": "\n$$\n\\text {A r e a} = [ \\operatorname {s q r t} (\\mathrm {P}) ^ {*} \\cdot 2 6 4 ] \\text {i n m m} ^ {2} \\tag {36}\n$$\n",
|
| 1721 |
+
"text_format": "latex",
|
| 1722 |
+
"bbox": [
|
| 1723 |
+
112,
|
| 1724 |
+
609,
|
| 1725 |
+
856,
|
| 1726 |
+
627
|
| 1727 |
+
],
|
| 1728 |
+
"page_idx": 14
|
| 1729 |
+
},
|
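Eqs. (34)-(36) reduce to counting white pixels in the 200*200 binary mask and scaling by 0.264 mm per pixel; a minimal sketch, kept exactly as Eq. (36) states it (the toy mask below is illustrative, not one of the paper's segmented images):

```python
def tumor_area(mask):
    """Area per Eqs. (35)-(36): sqrt of the white-pixel count times 0.264 mm."""
    p = sum(row.count(1) for row in mask)  # Eq. (35): number of white pixels
    return (p ** 0.5) * 0.264              # Eq. (36)

# A 200*200 mask with a 10*10 white square: P = 100, so area = 10 * 0.264 mm
mask = [[1] * 10 + [0] * 190 for _ in range(10)] + [[0] * 200 for _ in range(190)]
print(tumor_area(mask))
```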
| 1730 |
+
{
|
| 1731 |
+
"type": "table",
|
| 1732 |
+
"img_path": "images/c2cb3fcfd6cfe44ab682f39c61184c9fda24fb0dc4c27af9d729062635f7c752.jpg",
|
| 1733 |
+
"table_caption": [
|
| 1734 |
+
"Table 3. Describes the area calculation of segmented images"
|
| 1735 |
+
],
|
| 1736 |
+
"table_footnote": [],
|
| 1737 |
+
"table_body": "<table><tr><td>Segmented Images</td><td></td><td></td><td></td><td></td></tr><tr><td>Area (mm2)</td><td>28.6656</td><td>38.5765</td><td>12.3235</td><td>15.2869</td></tr></table>",
|
| 1738 |
+
"bbox": [
|
| 1739 |
+
120,
|
| 1740 |
+
664,
|
| 1741 |
+
841,
|
| 1742 |
+
806
|
| 1743 |
+
],
|
| 1744 |
+
"page_idx": 14
|
| 1745 |
+
},
|
| 1746 |
+
{
|
| 1747 |
+
"type": "text",
|
| 1748 |
+
"text": "e. Analysis of Time Complexity",
|
| 1749 |
+
"bbox": [
|
| 1750 |
+
111,
|
| 1751 |
+
814,
|
| 1752 |
+
370,
|
| 1753 |
+
832
|
| 1754 |
+
],
|
| 1755 |
+
"page_idx": 14
|
| 1756 |
+
},
|
| 1757 |
+
{
|
| 1758 |
+
"type": "text",
|
| 1759 |
+
"text": "This section presents the time complexity of traditional classifiers [18] and the proposed hybrid ensemble classifier and are shown in Table 4. Further, we are comparing the time complexity of proposed method to that of modern deep learning approaches like convolutional neural network (CNN).",
|
| 1760 |
+
"bbox": [
|
| 1761 |
+
109,
|
| 1762 |
+
839,
|
| 1763 |
+
883,
|
| 1764 |
+
909
|
| 1765 |
+
],
|
| 1766 |
+
"page_idx": 14
|
| 1767 |
+
},
|
| 1768 |
+
{
|
| 1769 |
+
"type": "page_number",
|
| 1770 |
+
"text": "15",
|
| 1771 |
+
"bbox": [
|
| 1772 |
+
862,
|
| 1773 |
+
939,
|
| 1774 |
+
882,
|
| 1775 |
+
952
|
| 1776 |
+
],
|
| 1777 |
+
"page_idx": 14
|
| 1778 |
+
},
|
| 1779 |
+
{
|
| 1780 |
+
"type": "text",
|
| 1781 |
+
"text": "For proposed hybrid ensemble classifier, training time complexity is $O(1 + n^2 p n_{trees} + n^2 p)$ where $n_{trees}$ represents number of trees of random forest, $n$ : no. of training samples, $p$ : no. of features used. Here we have considered $n_{trees} = 100$ ; $n = 2172$ , which is 85% of total dataset (2556); $p = 13$ . Thus, $O(1 + (2172)^2 * 13 * (100 + 1))$ is equivalent to $O(6.1942e9)$ .",
|
| 1782 |
+
"bbox": [
|
| 1783 |
+
109,
|
| 1784 |
+
90,
|
| 1785 |
+
883,
|
| 1786 |
+
160
|
| 1787 |
+
],
|
| 1788 |
+
"page_idx": 15
|
| 1789 |
+
},
|
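The figure O(6.1942e9) quoted above follows from plugging the stated values into $1 + n^2 p n_{trees} + n^2 p$; a quick arithmetic check:

```python
n, p, n_trees = 2172, 13, 100            # training samples, features, RF trees
ops = 1 + n**2 * p * n_trees + n**2 * p  # equals 1 + n^2 * p * (n_trees + 1)
print(f"{ops:.4e}")  # 6.1942e+09
```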
| 1790 |
+
{
|
| 1791 |
+
"type": "text",
|
| 1792 |
+
"text": "Next, we compute training time complexity of deep learning classifiers for instance CNN. CNN consists of an input layer, several convolutional layers, pooling layers, fully connected layers and an output layer. Let us suppose, following represents the architecture of CNN.",
|
| 1793 |
+
"bbox": [
|
| 1794 |
+
109,
|
| 1795 |
+
160,
|
| 1796 |
+
883,
|
| 1797 |
+
212
|
| 1798 |
+
],
|
| 1799 |
+
"page_idx": 15
|
| 1800 |
+
},
|
| 1801 |
+
{
|
| 1802 |
+
"type": "text",
|
| 1803 |
+
"text": "Size (pixels *pixels) of the input image being used ( $I^{*}I = 200^{*}200$ ); Size of kernel ( $S2^{*}S2 = 7^{*}7$ ); No. of kernels ( $N1 = 20$ ) in first Convolutional Layer; Pooling Size $= 2^{*}2$ Pixels with stride $= 2$ Pixels; Size of kernel ( $S3^{*}S3 = 4^{*}4$ ), No. of kernels ( $N2 = 10$ ) in second Convolutional Layer; Pooling Size $= 2^{*}2$ Pixels with stride $= 2$ Pixels.",
|
| 1804 |
+
"bbox": [
|
| 1805 |
+
109,
|
| 1806 |
+
212,
|
| 1807 |
+
883,
|
| 1808 |
+
281
|
| 1809 |
+
],
|
| 1810 |
+
"page_idx": 15
|
| 1811 |
+
},
|
| 1812 |
+
{
|
| 1813 |
+
"type": "text",
|
| 1814 |
+
"text": "Here we process input image step by step to extract the number of features that become input to Fully connected layers of CNN. When first convolutional layer kernels are applied we get $200 - 7 + 1$ by $200 - 7 + 1$ size with $\\mathrm{N}1 = 20$ matrix i.e. $194^{*}194$ size 20 matrices. When we apply pool layer we get $194 / 2$ by $194 / 2$ size with $\\mathrm{N}1 = 20$ matrices i.e. $97^{*}97$ size 20 matrices. In second convolution layer, kernels are applied to the output of first convolutional layers and we get $97 - 4 + 1$ by $97 - 4 + 1$ size $\\mathrm{N}1^{*}\\mathrm{N}2$ matrices i.e. $94^{*}94$ size 200 matrices. When we apply pool layer we get $94 / 2$ by $94 / 2$ size 200 matrices i.e. $47^{*}47$ size 200 matrices. Now this $47^{*}47^{*}200$ becomes the input to the fully connected layer, which is nothing but the simple Feed-forward backpropagation neural network. So, here training time complexity of the model (Fully connected layers) is $O(nt^{*}(pj +jk))$ , where $n = 2172$ : no. of training set, $t = 1000$ : no. of epochs and $p = 47^{*}47^{*}200$ (input layer), $j = 20$ (hidden layer), $k = 2$ (output layer) i.e. O(2172*1000(47*47*200*20+20*2)) = O(1.9192e13).",
|
| 1815 |
+
"bbox": [
|
| 1816 |
+
109,
|
| 1817 |
+
282,
|
| 1818 |
+
883,
|
| 1819 |
+
491
|
| 1820 |
+
],
|
| 1821 |
+
"page_idx": 15
|
| 1822 |
+
},
|
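The layer-size bookkeeping above can be reproduced mechanically (valid convolutions and non-overlapping 2*2 pooling; a sketch of the arithmetic, not an actual CNN):

```python
def conv(size, kernel):    # output width of a 'valid' convolution
    return size - kernel + 1

def pool(size, stride=2):  # output width of 2x2 pooling with stride 2
    return size // stride

s = pool(conv(200, 7))     # 194 -> 97, with N1 = 20 feature maps
s = pool(conv(s, 4))       # 94 -> 47, with N1*N2 = 200 feature maps
features = s * s * 200     # inputs to the fully connected layers
ops = 2172 * 1000 * (features * 20 + 20 * 2)  # O(nt*(pj + jk))
print(features, f"{ops:.4e}")  # 441800 1.9192e+13
```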
| 1823 |
+
{
|
| 1824 |
+
"type": "text",
|
| 1825 |
+
"text": "Training time complexity of proposed hybrid model is O(6.1942e9), which is quite less as compared to the training time complexity of deep learning classifiers like CNN O(1.9192e13) for same set of parameters. In reality, training time complexity of CNN will be much more than what we have calculated here. Because in present calculation, we have taken same number of training dataset for proposed and CNN but in reality number of dataset images would be much higher than 2172 for CNN method and so is the number of neurons required in hidden layer as well.",
|
| 1826 |
+
"bbox": [
|
| 1827 |
+
109,
|
| 1828 |
+
491,
|
| 1829 |
+
883,
|
| 1830 |
+
611
|
| 1831 |
+
],
|
| 1832 |
+
"page_idx": 15
|
| 1833 |
+
},
|
| 1834 |
+
{
|
| 1835 |
+
"type": "text",
|
| 1836 |
+
"text": "Overall, we conclude that the proposed hybrid ensemble classifier provides good accuracy at the expense of increased the training time complexity in comparison to traditional classifiers like DT, SVM, and KNN etc. However, it has significantly less training time complexity as compared to modern deep learning methods like CNN and provides the comparative accuracy.",
|
| 1837 |
+
"bbox": [
|
| 1838 |
+
109,
|
| 1839 |
+
613,
|
| 1840 |
+
883,
|
| 1841 |
+
702
|
| 1842 |
+
],
|
| 1843 |
+
"page_idx": 15
|
| 1844 |
+
},
|
| 1845 |
+
{
|
| 1846 |
+
"type": "table",
|
| 1847 |
+
"img_path": "images/2465998f1b00c87b10c08f7de1cff102df531cf05d09f866450deb129ba60456.jpg",
|
| 1848 |
+
"table_caption": [
|
| 1849 |
+
"Table.4. Describes the time complexity of traditional classifiers"
|
| 1850 |
+
],
|
| 1851 |
+
"table_footnote": [],
|
| 1852 |
+
"table_body": "<table><tr><td>Complexity</td><td>Training</td><td>Prediction</td></tr><tr><td rowspan=\"2\">KNN-RF-DT</td><td>O(1+n2pntrees+n2p)</td><td rowspan=\"2\">O(np+ptrees+p)</td></tr><tr><td>where ntrees = 100 i.e. number of trees of random forest, n = 2172: no. of training samples for model, which is 85% of total dataset (2556), p = 13: no. of features used.</td></tr><tr><td>Decision Tree</td><td>O(n2p)</td><td>O(p)</td></tr><tr><td>SVM (rbf)</td><td>O(n2p+n3)</td><td>O(nsvp), where nsv = 1</td></tr><tr><td>KNN</td><td>O(1)</td><td>O(np)</td></tr><tr><td>Neural Network</td><td>O(nt*(pj +jk)) \nwhere t = 1000: no. of epochs and p=13(input \nlayer), j=20(hidden layer), k=2 (output layer)</td><td>O(pj+jk) \nwhere p = 13 (input \nlayer), j = 20 (hidden \nlayer), k = 2 (output \nlayer)</td></tr><tr><td>Naïve Bayes</td><td>O(np)</td><td>O(p)</td></tr><tr><td>Random Forest</td><td>O(n2pntrees)</td><td>O(pntrees)</td></tr></table>",
|
| 1853 |
+
"bbox": [
|
| 1854 |
+
104,
|
| 1855 |
+
729,
|
| 1856 |
+
885,
|
| 1857 |
+
897
|
| 1858 |
+
],
|
| 1859 |
+
"page_idx": 15
|
| 1860 |
+
},
|
| 1861 |
+
{
|
| 1862 |
+
"type": "page_number",
|
| 1863 |
+
"text": "16",
|
| 1864 |
+
"bbox": [
|
| 1865 |
+
862,
|
| 1866 |
+
939,
|
| 1867 |
+
883,
|
| 1868 |
+
952
|
| 1869 |
+
],
|
| 1870 |
+
"page_idx": 15
|
| 1871 |
+
},
|
| 1872 |
+
{
|
| 1873 |
+
"type": "table",
|
| 1874 |
+
"img_path": "",
|
| 1875 |
+
"table_caption": [],
|
| 1876 |
+
"table_footnote": [],
|
| 1877 |
+
"bbox": [
|
| 1878 |
+
106,
|
| 1879 |
+
88,
|
| 1880 |
+
885,
|
| 1881 |
+
218
|
| 1882 |
+
],
|
| 1883 |
+
"page_idx": 16
|
| 1884 |
+
},
|
| 1885 |
+
{
|
| 1886 |
+
"type": "text",
|
| 1887 |
+
"text": "5. Conclusion",
|
| 1888 |
+
"text_level": 1,
|
| 1889 |
+
"bbox": [
|
| 1890 |
+
111,
|
| 1891 |
+
250,
|
| 1892 |
+
230,
|
| 1893 |
+
265
|
| 1894 |
+
],
|
| 1895 |
+
"page_idx": 16
|
| 1896 |
+
},
|
| 1897 |
+
{
|
| 1898 |
+
"type": "text",
|
| 1899 |
+
"text": "The proposed work aims at improving the performance of traditional classifiers. As traditional classifiers have an advantage over deep learning algorithms because they require small datasets for training and have low computational time complexity. Image is segmented using otsu's method, features are extracted by using SWT+PCA+GLCM, and finally, classification is done based on hybrid ensemble classifier KNN-RF-DT. The proposed method is novel and useful as it outperforms the already existing methods based on machine learning. Experiments are conducted with software MATLAB 2017a with a personal computer of 4 GB memory, Windows 10 64-bit operating system, and Intel (R) Core (TM) i3-6006U CPU @ 2.00 GHz. Overall, proposed method achieved accuracy of $97.305\\%$ , precision $97.73\\%$ , specificity $97.60\\%$ , Sensitivity $97.04\\%$ , Youden-index $94.71\\%$ , and F1-score $97.41\\%$ which indicates its authenticity over medical images. In future, other hybridization ideas will be investigated like Neural Network-SVM, Neural Network-KNN, Neural Network-RF, Neural Network-DT and Neural Network - Naïve Bayes to further improve the accuracy.",
|
| 1900 |
+
"bbox": [
|
| 1901 |
+
109,
|
| 1902 |
+
273,
|
| 1903 |
+
883,
|
| 1904 |
+
501
|
| 1905 |
+
],
|
| 1906 |
+
"page_idx": 16
|
| 1907 |
+
},
|
| 1908 |
+
{
|
| 1909 |
+
"type": "text",
|
| 1910 |
+
"text": "References",
|
| 1911 |
+
"text_level": 1,
|
| 1912 |
+
"bbox": [
|
| 1913 |
+
112,
|
| 1914 |
+
508,
|
| 1915 |
+
209,
|
| 1916 |
+
523
|
| 1917 |
+
],
|
| 1918 |
+
"page_idx": 16
|
| 1919 |
+
},
|
| 1920 |
+
{
|
| 1921 |
+
"type": "list",
|
| 1922 |
+
"sub_type": "ref_text",
|
| 1923 |
+
"list_items": [
|
| 1924 |
+
"[1]. Özyurt, F., Sert, E., Avci, E., & Dogantekin, E. (2019). Brain tumor detection based on Convolutional Neural Network with neutrosophic expert maximum fuzzy sure entropy. Measurement, 147, 106830.",
|
| 1925 |
+
"[2]. Abd-Ellah, M. K., Awad, A. I., Khalaf, A. A., & Hamed, H. F. (2019). A review on brain tumor diagnosis from MRI images: Practical implications, key achievements, and lessons learned. Magnetic resonance imaging.",
|
| 1926 |
+
"[3]. Wadhwa, A., Bhardwaj, A., & Verma, V. S. (2019). A review on brain tumor segmentation of MRI images. Magnetic resonance imaging.",
|
| 1927 |
+
"[4]. Othman, M. F. B., Abdullah, N. B., & Kamal, N. F. B. (2011, April). MRI brain classification using support vector machine. In 2011 Fourth International Conference on Modeling, Simulation and Applied Optimization (pp. 1-4). IEEE.",
|
| 1928 |
+
"[5]. Sindhumol, S., Kumar, A., & Balakrishnan, K. (2013). Spectral clustering independent component analysis for tissue classification from brain MRI. Biomedical Signal Processing and Control, 8(6), 667-674.",
|
| 1929 |
+
"[6]. Abd-Ellah, M. K., Awad, A. I., Khalaf, A. A., & Hamed, H. F. (2016, September). Classification of brain tumor MRIs using a kernel support vector machine. In International Conference on Well-Being in the Information Society (pp. 151-160). Springer, Cham.",
|
| 1930 |
+
"[7]. Kalbkhani, H., Shayesteh, M. G., & Zali-Vargahan, B. (2013). Robust algorithm for brain magnetic resonance image (MRI) classification based on GARCH variances series. Biomedical Signal Processing and Control, 8(6), 909-919."
|
| 1931 |
+
],
|
| 1932 |
+
"bbox": [
|
| 1933 |
+
112,
|
| 1934 |
+
532,
|
| 1935 |
+
883,
|
| 1936 |
+
882
|
| 1937 |
+
],
|
| 1938 |
+
"page_idx": 16
|
| 1939 |
+
},
|
| 1940 |
+
{
|
| 1941 |
+
"type": "page_number",
|
| 1942 |
+
"text": "17",
|
| 1943 |
+
"bbox": [
|
| 1944 |
+
862,
|
| 1945 |
+
939,
|
| 1946 |
+
882,
|
| 1947 |
+
952
|
| 1948 |
+
],
|
| 1949 |
+
"page_idx": 16
|
| 1950 |
+
},
|
| 1951 |
+
{
|
| 1952 |
+
"type": "list",
|
| 1953 |
+
"sub_type": "ref_text",
|
| 1954 |
+
"list_items": [
|
| 1955 |
+
"[8]. Saritha, M., Joseph, K. P., & Mathew, A. T. (2013). Classification of MRI brain images using combined wavelet entropy based spider web plots and probabilistic neural network. Pattern Recognition Letters, 34(16), 2151-2156.",
|
| 1956 |
+
"[9]. Deepa, S. N., & Devi, B. A. (2012, January). Artificial neural networks design for classification of brain tumour. In 2012 International Conference on Computer Communication and Informatics (pp. 1-6). IEEE.",
|
| 1957 |
+
"[10]. Chandra, S., Bhat, R., & Singh, H. (2009, December). A PSO based method for detection of brain tumors from MRI. In 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC) (pp. 666-671). IEEE.",
|
| 1958 |
+
"[11]. Xuan, X., & Liao, Q. (2007, August). Statistical structure analysis in MRI brain tumor segmentation. In Fourth International Conference on Image and Graphics (ICIG 2007) (pp. 421-426). IEEE.",
|
| 1959 |
+
"[12]. Dhanalakshmi, P., & Kanimozhi, T. (2013). Automatic segmentation of brain tumor using K-Means clustering and its area calculation. International Journal of advanced electrical and Electronics Engineering, 2(2), 130-134.",
|
| 1960 |
+
"[13]. Kaushik, D., Singh, U., Singhal, P., & Singh, V. (2014). Brain tumor segmentation using genetic algorithm. In International Journal of Computer Applications®(IJCA)(0975-8887) International Conference on Advances in Computer Engineering & Applications (ICACEA-2014) at IMSEC, GZB.",
|
| 1961 |
+
"[14]. Brain Tumor dataset available online: https://wiki.cancerimagingarchive.net/display/Public/TCGA-GBM",
|
| 1962 |
+
"[15]. Chaudhary, J., Rani, R., Kamboj, A., “Deep learning-based approach for segmentation of glioma sub-regions in MRI”. International Journal of Intelligent Computing and Cybernetics (2020).",
|
| 1963 |
+
"[16]. Rani, R., Kamboj, A., \"Brain Tumor Classification for MR Imaging Using Support Vector Machine\". In Progress in Advanced Computing and Intelligent Engineering (pp. 165-176). Springer, Singapore (2019).",
|
| 1964 |
+
"[17]. Kamboj, A., Rani, R., Chaudhary, J., “Deep learning approaches for brain tumor segmentation: A review”. In 2018 First International Conference on Secure Cyber Computing and Communication (ICSCCC) (pp. 599-603). IEEE.",
|
| 1965 |
+
"[18]. Time complexity analysis of various Classifiers: https://www.thekerneltrip.com/machine/learning/computational-complexity-learning-algorithms/",
|
| 1966 |
+
"[19]. Sezgin, M., & Sankur, B., “Survey over image thresholding techniques and quantitative performance evaluation”. Journal of Electronic imaging, 13(1), 146-166, 2014.",
|
| 1967 |
+
"[20]. Shensa, M, J., “The discrete wavelet transform: wedding the a trous and Mallat algorithms”. IEEE Transactions on signal processing, 40(10), 2464-2482, 1992.",
|
| 1968 |
+
"[21]. Jolliffe, I. T., \"Principal component analysis\". Technometrics, 45(3), 276, 2003.",
|
| 1969 |
+
"[22]. Haralick, R. M., Shanmugam, K., Dinstein, I. H., \"Textural features for image classification\". IEEE Transactions on systems, man, and cybernetics, (6), 610-621, 1973.",
|
| 1970 |
+
"[23]. Prinzie, A., Van den Poel, D., “Random multiclass classification: Generalizing random forests to random mnl and random nb”. In International Conference on Database and Expert Systems Applications (pp. 349-358). Springer, Berlin, Heidelberg, 2007.",
|
| 1971 |
+
"[24]. Karimi, K., Hamilton, H. J. \"Generation and interpretation of temporal decision rules\". arXiv preprint arXiv:1004.3334, 2010.",
|
| 1972 |
+
"[25]. Altman, N. S. “An introduction to kernel and nearest-neighbor nonparametric regression”. The American Statistician, 46(3), 175-185, 1992."
|
| 1973 |
+
],
|
| 1974 |
+
"bbox": [
|
| 1975 |
+
112,
|
| 1976 |
+
89,
|
| 1977 |
+
883,
|
| 1978 |
+
892
|
| 1979 |
+
],
|
| 1980 |
+
"page_idx": 17
|
| 1981 |
+
},
|
| 1982 |
+
{
|
| 1983 |
+
"type": "page_number",
|
| 1984 |
+
"text": "18",
|
| 1985 |
+
"bbox": [
|
| 1986 |
+
862,
|
| 1987 |
+
939,
|
| 1988 |
+
883,
|
| 1989 |
+
952
|
| 1990 |
+
],
|
| 1991 |
+
"page_idx": 17
|
| 1992 |
+
}
|
| 1993 |
+
]
|
data/2021/2101_00xxx/2101.00216/1322f2fe-d1ab-4e6d-b8cd-999150e9e3a0_model.json
CHANGED
|
The diff for this file is too large to render.
See raw diff
|
|
|
data/2021/2101_00xxx/2101.00216/full.md
CHANGED
|
@@ -1,3 +1,440 @@
|
|
| 1 |
-
|
| 2 |
-
|
| 3 |
-
|
| 1 |
+
# Brain Tumor Detection and Classification based on Hybrid Ensemble Classifier
|
| 2 |
+
|
| 3 |
+
Ginni Garg<sup>1</sup>, Ritu Garg
|
| 4 |
+
|
| 5 |
+
Department of Computer Engineering
|
| 6 |
+
|
| 7 |
+
National Institute of Technology, Kurukshetra, 136119
|
| 8 |
+
|
| 9 |
+
gargginni01@gmail.com, ritu.59@gmail.com
|
| 10 |
+
|
| 11 |
+
Abstract: Early diagnosis of brain tumors is essential for improving patient survival and treatment outcomes. Evaluating magnetic resonance imaging (MRI) images manually is difficult, so digital methods with better accuracy are needed for tumor diagnosis. However, assessing tumor shape, volume, boundaries, size, detection, segmentation, and classification remains very challenging. In this work, we propose a hybrid ensemble method using K-Nearest Neighbour (KNN), Random Forest (RF), and Decision Tree (DT) (KNN-RF-DT) based on the Majority Voting method. It aims to calculate the area of the tumor region and to classify brain tumors as benign or malignant. First, segmentation is done using Otsu's threshold method. Feature extraction is done using Stationary Wavelet Transform (SWT), Principal Component Analysis (PCA), and the Gray Level Co-occurrence Matrix (GLCM), which together give thirteen features for classification. Classification is then performed by the hybrid ensemble classifier (KNN-RF-DT) based on the Majority Voting method. Overall, the work aims at improving performance with traditional classifiers instead of resorting to deep learning. Traditional classifiers have an advantage over deep learning algorithms because they require small datasets for training, have low computational time complexity and low cost to the users, and can be easily adopted by less skilled people. Our proposed method is tested on a dataset of 2556 images, split 85:15 for training and testing respectively, and gives a good accuracy of $97.305\%$ .
|
| 12 |
+
|
| 13 |
+
Keywords: Otsu's threshold; SWT; PCA; GLCM; hybrid ensemble classifier (KNN-RF-DT) based on the Majority Voting method.
|
| 14 |
+
|
| 15 |
+
# 1. Introduction
|
| 16 |
+
|
| 17 |
+
A brain tumor is a cancerous or non-cancerous growth of abnormal cells in the brain, leading to benign or malignant tumors. Most researchers focus on primary tumor types such as gliomas, which can be treated with chemotherapy, radiotherapy, and surgery. Automation by computer-aided devices can be used to obtain the necessary clinical data such as tumor presence, location, and type. However, assessing tumor shape, volume, boundaries, size, detection, segmentation, and classification remains very challenging, and brain tumor intensity varies from individual to individual. Magnetic Resonance Imaging (MRI) is preferred over other diagnosis methods because it gives superior image contrast in soft tissues and is non-invasive. Applying different pulse sequences yields different types of MRI scans: (1) T1-weighted scans, which distinguish between tumor and healthy tissues; (2) T2-weighted scans, which delineate the edema region, producing a bright image region; (3) T1-Gd scans, which give a bright signal at the tumor border by using a contrast agent; and (4) FLAIR scans, which differentiate between cerebrospinal fluid (CSF) and the edema region by suppressing the water-molecule signal. Annotating brain tumors from MRI scans manually is difficult; hence, there is a strong need for automating brain tumor segmentation and classification with computer vision and machine learning algorithms. Today,
|
| 18 |
+
|
| 19 |
+
researchers are working on computer vision and machine learning algorithms for brain tumor segmentation and classification. Clinicians' plans are highly expensive because they depend on various imaging techniques such as PET, MRI, and CT. Clinical methods extract pertinent information and provide comprehensive analysis from images, while computational techniques help to investigate the details present in medical images. Imaging methods can be used to find the position of brain tumors, and MRI provides more meaningful information than other imaging modalities such as CT.
|
| 20 |
+
|
| 21 |
+
The challenge in brain tumor analysis is due to the high variability and inherent characteristics of MRI data, e.g., variability in tumor sizes or shapes, tumor detection, area calculation, segmentation, classification, and quantifying uncertainty in the segmented region. The most significant task in image understanding is image segmentation, because it helps in feature extraction, area calculation, and many real-life applications: for example, estimation of tumor volume, tissue classification, blood cell delineation, localization of tumors, atlas matching, surgical planning, and image registration. For monitoring oncologic therapy, accurate quantification of tumor morphology is a critical task. Although extensive work has been performed in this field, clinicians still depend on manual determination of tumors, due to the lack of a link between researchers and clinicians.
|
| 22 |
+
|
| 23 |
+
Recently, many techniques have been proposed for automatic brain tumor classification. Based on the feature selection and learning mechanism, they can be categorized into machine learning (ML) and deep learning (DL) techniques. In ML approaches, feature selection and extraction are essential for classification, whereas DL approaches extract and learn features from the image directly. Recent DL approaches, particularly CNNs, provide good accuracy and are widely used in medical image analysis. However, they have disadvantages compared with traditional ML methods: they need large datasets for training, have high time complexity, are less accurate for applications where only a small dataset is available, and require expensive GPUs, which ultimately increases the cost to the users. Additionally, selecting the right deep learning tools is also challenging, as it needs knowledge of various parameters, training methods, and topologies. On the other hand, machine learning approaches have played a key role in the area of medical imaging. Several learning-based classifiers have already been used for classification and detection of brain tumors, including support vector machine (SVM), artificial neural network (ANN), sequential minimal optimization (SMO), fuzzy C-means (FCM), Naive Bayes (NB), Random Forest (RF), Decision Tree (DT), and K-Nearest Neighbor (KNN). KNN is very simple to implement, has low computation and space complexity, and requires very few parameters to tune. The biggest advantage of DT is that it goes through all the outcomes of a decision and traces each path to a conclusion; it is versatile and involves no complex mathematics, which makes it easy to understand. Further, Random Forest is itself an ensemble of decision trees; it runs effectively on large datasets and provides good accuracy, precision, and other evaluation metrics.
Overall, these classifiers have received considerable research attention, as they require small datasets for training, have low computational time complexity and low cost to the users, and can be easily adopted by less skilled people. Thus, in the present study, we work on hybrid ensemble classifiers in order to improve the accuracy of the results obtained. Further, a comparative study of various classifiers such as SVM, KNN, DT, RF, NB, ANN, and the proposed hybrid ensemble classifier is done.
|
| 24 |
+
|
| 25 |
+
The outline of the paper is as follows: Section 2 describes the related work; Section 3 describes the proposed method for area calculation and brain tumor classification. Section 4
|
| 26 |
+
|
| 27 |
+
gives information about the experimental implementation and results, and finally, Section 5 presents the conclusion.
|
| 28 |
+
|
| 29 |
+
# 2. Related Work
|
| 30 |
+
|
| 31 |
+
Brain tumor research has been conducted in various private multinational companies such as Siemens, Becton Dickinson, Medtronic, Accenture, GE Medical Systems, Atlantic Biomedical P. Ltd, and others. Both theoretical and experimental works from the international arena are reported in the literature. Some representative work is described below:
|
| 32 |
+
|
| 33 |
+
Othman et al. [4] proposed a method in which feature extraction is done using Daubechies wavelets with the DWT on MRI images; each image yields 17,689 feature vectors, and classification is done using an SVM with an RBF kernel function. Sindhumol et al. [4] present a spectral clustering (SC) technique for brain tumor classification: MRI images are divided into different clusters using the spectral distance, feature reduction is done with ICA, and classification with SVM; the training and testing data consist of 40 normal and 20 abnormal MRI images. In the method proposed by Abd-Ellah et al. [6], MRI images are preprocessed with median filters, the DWT performs feature extraction, and PCA is used for feature reduction; classification is done by an SVM classifier with the RBF kernel function. Their database consists of 80 images: the SVM is trained using 43 abnormal and 5 normal images, and tested using 27 abnormal and 5 normal images. Kalbkhani et al. [7] modeled the detail-coefficient sub-bands of the 2D DWT using GARCH variance series, reducing the features from 61,440 to 24; feature extraction is done by linear discriminant analysis (LDA), further reduced using PCA, and detection is finally done using SVM and KNN classifiers. Their data consist of 10 normal and 70 abnormal MRI images: the testing set contains 7 normal and 49 abnormal images, while the training set contains 3 normal and 21 abnormal images. Saritha et al. [8] proposed a classification technique for normal and abnormal brain images, using 23 images for testing and 50 for training. Deepa and Devi [9] proposed a system in which a statistical method is used for texture feature extraction, and a neural network (BPNN) is used in the segmentation and detection stages; the database consists of 42 images, divided into 30 training and 12 testing images.
Chandra et al. [10] proposed a new clustering algorithm based on PSO optimization for MRI images: the algorithm finds clusters and corresponding centroids, among which the global best is considered. The dataset consists of 62 normal and 110 abnormal MRI images. Xuan and Liao [11] proposed a tumor detection method considering three types of features: texture-based, intensity-based, and symmetry-based. In total 40 features are selected, consisting of 13 intensity-based, 26 texture-based, and 1 symmetry-based feature; features are extracted from different images, with 12 from T2 images, 9 from T1 images, and 19 from FLAIR images. The dataset contains 10 patients with 3 volumes each, each with 24 MRI slices, divided equally into testing and training sets. Dhanalakshmi et al. [12] proposed k-means clustering for segmentation, after which the area is calculated using the formula sqrt(P).*264, where P is the number of pixels with value 1; the proposed algorithm shows reproducibility and good performance. Kaushik et al. [13] proposed segmentation using a genetic algorithm; the corners of the brain tumor region are also extracted by the proposed algorithm. Rani et al. [16] proposed a method for MRI brain tumor image classification using SVM, with segmentation by Otsu's thresholding method; they compared their work with KIFCM, K-means, and Fuzzy c-means, and their accuracy and execution time were better than those of all the remaining existing methods.
|
| 34 |
+
|
| 35 |
+
Additionally, many deep learning models have been investigated recently for brain tumor detection and classification and have achieved competitive results. Chaudhary et al. [15] proposed a deep-learning-based method for the segmentation of MRI brain tumor images, in which preprocessing of the MRI images was done using intensity normalization. Kamboj et al. [17] reviewed deep learning methods, which have advantages over traditional methods; their focus is on architecture design rather than segmentation and feature extraction. Deep learning methods provide good accuracy, but they require more computation time, space, and data compared with traditional classifiers. Traditional machine learning methods, in contrast, are easy to understand and interpret, and require less space, data, and computational cost in terms of hardware.
|
| 36 |
+
|
| 37 |
+
Moreover, none of the above-mentioned machine learning approaches performs feature extraction using a 3-fold technique such as $\mathrm{SWT + PCA + GLCM}$ , which significantly increases the robustness of the extracted features: SWT helps in capturing the abrupt changes in images; PCA reduces the dimensionality of the SWT output, which reduces space and time complexity to some extent; and GLCM extracts various useful features from the dimensionally reduced PCA output. In addition, none of the above methods uses hybrid ensemble classifiers, which help in achieving good evaluation metrics with traditional classifiers, as the best properties of each classifier add up to give excellent results. Therefore, 3-fold robust feature extraction and hybrid ensemble classification are the main focus of the present study, which improves the various evaluation metrics and reduces space and time complexity using traditional classifiers.
|
| 38 |
+
|
| 39 |
+
# 3. Proposed Method
|
| 40 |
+
|
| 41 |
+
The proposed work aims at improving the performance of traditional classifiers. These classifiers require small datasets for training and have low computational time complexity, and are thus appropriate for computer-assisted brain tumor diagnosis and classification. We propose a hybrid ensemble method using KNN, Random Forest (RF), and Decision Tree (DT) (KNN-RF-DT) based on the Majority Voting method. It aims to calculate the area of the tumor region and to classify brain tumors as benign or malignant. First, the MRI images are segmented using Otsu's threshold method. Feature extraction is done by Stationary Wavelet Transform (SWT), Principal Component Analysis (PCA), and the Gray Level Co-occurrence Matrix (GLCM), which gives thirteen features for classification. The classification is done by the hybrid ensemble classifier (KNN-RF-DT) based on the Majority Voting method.
|
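The majority-voting step can be sketched as follows. This is a minimal numpy illustration; the function name `majority_vote` and the binary 0/1 label encoding (0 = benign, 1 = malignant) are assumptions for the sketch, not code from the paper:

```python
import numpy as np

def majority_vote(pred_knn, pred_rf, pred_dt):
    """Combine three classifiers' label predictions by majority vote.

    Each argument is a 1-D array of predicted labels (0 = benign,
    1 = malignant). With three voters a tie is impossible.
    """
    votes = np.stack([pred_knn, pred_rf, pred_dt])   # shape (3, n_samples)
    # A sample is labelled malignant when at least 2 of the 3 voters agree.
    return (votes.sum(axis=0) >= 2).astype(int)

# Example: the ensemble overrides each single classifier's mistakes.
knn = np.array([1, 0, 1, 0])
rf  = np.array([1, 1, 0, 0])
dt  = np.array([0, 1, 1, 0])
print(majority_vote(knn, rf, dt))  # [1 1 1 0]
```

With more than two classes, the same idea would use a per-class vote count instead of the threshold-at-two shortcut.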
| 42 |
+
|
| 43 |
+
In the current research work, we have done a comparative study of various classifiers such as SVM, KNN, DT, RF, NB, ANN, and the proposed hybrid ensemble classifier. Overall, the aim is to improve performance using traditional classifiers. The working of the proposed implementation is shown in Fig.1.
|
| 44 |
+
|
| 45 |
+

|
| 46 |
+
Fig. 1 Flow diagram of proposed work
|
| 47 |
+
|
| 48 |
+
# a. Otsu's Method
|
| 49 |
+
|
| 50 |
+
Otsu's method [19] is used to threshold an image automatically into two classes, foreground and background, based on a single threshold value. The threshold is determined by maximizing the inter-class variance, or equivalently minimizing the intra-class intensity variance. The threshold minimizes the intra-class variance, defined as the weighted sum of the variances of the two classes:
|
| 51 |
+
|
| 52 |
+
$$
|
| 53 |
+
\alpha_ {\mathrm {w}} ^ {2} (\mathbf {n}) = \mathrm {A} _ {0} (\mathbf {n}) ^ {*} \alpha_ {0} ^ {2} (\mathbf {n}) + \mathrm {A} _ {1} (\mathbf {n}) ^ {*} \alpha_ {1} ^ {2} (\mathbf {n}), \tag {1}
|
| 54 |
+
$$
|
| 55 |
+
|
| 56 |
+
where $A_0, A_1$ are the probabilities of the two classes, $n = 154$ is the threshold, and $\alpha_0^2, \alpha_1^2$ are the variances of the two classes. The class probabilities can be computed from the histogram with $L = 256$ bins as shown below:
|
| 57 |
+
|
| 58 |
+
$$
|
| 59 |
+
\mathrm {A} _ {0} (\mathbf {n}) = \sum_ {k = 0} ^ {n - 1} \mathrm {p} (\mathbf {k}) \tag {2}
|
| 60 |
+
$$
|
| 61 |
+
|
| 62 |
+
$$
|
| 63 |
+
\mathrm {A} _ {1} (\mathbf {n}) = \sum_ {k = n} ^ {L - 1} \mathrm {p} (\mathbf {k}), \tag {3}
|
| 64 |
+
$$
|
| 65 |
+
|
| 66 |
+
Minimizing the intra-class variance is equivalent to maximizing the inter-class variance:
|
| 67 |
+
|
| 68 |
+
$$
|
| 69 |
+
\alpha_ {\mathrm {b}} ^ {2} (\mathbf {n}) = \alpha^ {2} - \alpha_ {\mathrm {w}} ^ {2} (\mathbf {n}) = \mathrm {A} _ {0} ^ {*} \left(\beta_ {0} - \beta_ {\mathrm {T}}\right) ^ {2} + \mathrm {A} _ {1} ^ {*} \left(\beta_ {1} - \beta_ {\mathrm {T}}\right) ^ {2} = \mathrm {A} _ {0} (\mathbf {n}) ^ {*} \mathrm {A} _ {1} (\mathbf {n}) ^ {*} \left(\beta_ {0} - \beta_ {1}\right) ^ {2}, \tag {4}
|
| 70 |
+
$$
|
| 71 |
+
|
| 72 |
+
where $\beta_{1}(\mathbf{n}),\beta_{0}(\mathbf{n})$ and $\beta_{\mathrm{T}}$ are class means
|
| 73 |
+
|
| 74 |
+
$$
|
| 75 |
+
\beta_ {0} (\mathbf {n}) = \sum_ {\mathbf {k} = 0} ^ {n - 1} \mathbf {k} * \mathrm {p} (\mathbf {k}) / \mathrm {A} _ {0} (\mathbf {n}) \tag {5}
|
| 76 |
+
$$
|
| 77 |
+
|
| 78 |
+
$$
|
| 79 |
+
\beta_ {1} (\mathbf {n}) = \sum_ {\mathbf {k} = n} ^ {L - 1} \mathbf {k} * \mathrm {p} (\mathbf {k}) / \mathrm {A} _ {1} (\mathbf {n}) \tag {6}
|
| 80 |
+
$$
|
| 81 |
+
|
| 82 |
+
$$
|
| 83 |
+
\beta_ {\mathrm {T}} = \sum_ {\mathbf {k} = 0} ^ {L - 1} \mathbf {k} * \mathrm {p} (\mathbf {k}) \tag {7}
|
| 84 |
+
$$
|
| 85 |
+
|
| 86 |
+
$$
|
| 87 |
+
\mathrm {A} _ {0} * \beta_ {0} + \mathrm {A} _ {1} * \beta_ {1} = \beta_ {\mathrm {T}} \tag {8}
|
| 88 |
+
$$
|
| 89 |
+
|
| 90 |
+
$$
|
| 91 |
+
\mathrm {A} _ {0} + \mathrm {A} _ {1} = 1 \tag {9}
|
| 92 |
+
$$
|
| 93 |
+
|
| 94 |
+
The above computations yield an efficient algorithm, since the class probabilities and class means can be computed iteratively. The histogram of a brain tumor image with 256 bins and threshold 154 is shown below in Fig.2.
|
| 95 |
+
|
| 96 |
+

|
| 97 |
+
Fig. 2 Histogram of brain tumor image with bins $\mathrm{L} = 256$ and threshold $\mathrm{n} = 154$
|
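Eqs. (1)-(9) can be evaluated efficiently from the image histogram with cumulative sums. The following is a minimal numpy sketch; the function name `otsu_threshold` and the synthetic bimodal "image" are illustrative assumptions:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Return the threshold n maximizing the inter-class variance (Eq. 4)."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()                  # p(k), used in Eqs. 2-3
    k = np.arange(bins)
    A0 = np.cumsum(p)                      # class-0 probability up to n
    beta = np.cumsum(k * p)                # running numerator of class mean
    beta_T = beta[-1]                      # global mean, Eq. 7
    A1 = 1.0 - A0                          # Eq. 9
    # Inter-class variance A0*A1*(beta0 - beta1)^2, guarding empty classes.
    with np.errstate(divide="ignore", invalid="ignore"):
        beta0 = beta / A0
        beta1 = (beta_T - beta) / A1
        sigma_b = A0 * A1 * (beta0 - beta1) ** 2
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

# Synthetic bimodal "image": dark background, bright tumor region.
img = np.concatenate([np.full(900, 40), np.full(100, 200)])
n = otsu_threshold(img)
mask = img > n   # foreground / tumor-candidate pixels
```

The cumulative sums realise the iterative update of class probabilities and means mentioned above, so the whole search over thresholds is linear in the number of bins.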
| 98 |
+
|
| 99 |
+
# b. Stationary Wavelet Transform
|
| 100 |
+
|
| 101 |
+
As far as a signal is concerned, slow changes can be captured with the Fourier transform, whereas abrupt changes in images are captured with wavelets. A wavelet is a small oscillation whose frequency varies inversely with scaling; to capture abrupt changes we need high frequency and small scaling, hence the wavelet concept. The Stationary Wavelet Transform (SWT) [20] algorithm is designed to overcome the lack of translation-invariance of the discrete wavelet transform (DWT). Translation-invariance is achieved by removing the down-sampling and up-sampling in the DWT and instead up-sampling the filter coefficients by a factor of $2^{(\mathrm{j - 1})}$ at the $\mathrm{j}$ -th level of the algorithm. SWT has various applications, such as signal de-noising, pattern recognition, brain image classification, and pathological brain detection. In our proposed work, we use the 1-D SWT with $\mathrm{j} = 1$ .
|
| 102 |
+
|
| 103 |
+
The origin of the wavelets and its types from Fourier transform is shown in Fig.3.
|
| 104 |
+
|
| 105 |
+

|
| 106 |
+
Fig. 3 Origin of SWT
|
| 107 |
+
|
| 108 |
+
The digital implementation of SWT is shown below in Fig. 4, in which each level is an up-sampled version of the previous level.
|
| 109 |
+
|
| 110 |
+
Decomposition Step
|
| 111 |
+
|
| 112 |
+

|
| 113 |
+
|
| 114 |
+
Filter computation
|
| 115 |
+
|
| 116 |
+

|
| 117 |
+
|
| 118 |
+

|
| 119 |
+
|
| 120 |
+
Fig. 4 Implementation of SWT in which each level is an up-sampled version of the previous level
|
| 121 |
+

|
| 122 |
+
Initialization: $\mathrm{cA}_0 = \mathrm{s}$ and $\mathrm{F}_0 = \mathrm{Lo\_D}$ and $\mathrm{G}_0 = \mathrm{Hi\_D}$
|
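A minimal level-1 SWT can be sketched with the Haar filters and circular extension; unlike the DWT, no down-sampling is applied, so the coefficient arrays keep the input length. This numpy sketch is an illustrative assumption (in practice a library routine such as `pywt.swt` would typically be used):

```python
import numpy as np

def swt_level1_haar(x):
    """One level of the undecimated (stationary) wavelet transform.

    Uses the Haar filters Lo_D = [1/sqrt(2), 1/sqrt(2)] and
    Hi_D = [1/sqrt(2), -1/sqrt(2)] with circular extension; since no
    down-sampling is applied, cA and cD have the same length as x.
    """
    s = 1.0 / np.sqrt(2.0)
    x_next = np.roll(x, -1)       # circularly shifted signal
    cA = s * (x + x_next)         # approximation coefficients
    cD = s * (x - x_next)         # detail coefficients
    return cA, cD

x = np.array([4.0, 4.0, 4.0, 9.0, 9.0, 9.0, 4.0, 4.0])
cA, cD = swt_level1_haar(x)
# Abrupt changes in x show up as large detail coefficients in cD.
```

The large entries of `cD` sit exactly at the jumps in `x`, which is the "capturing abrupt changes" property the feature-extraction stage relies on.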
| 123 |
+
|
| 124 |
+
# c. Principal Component Analysis
|
| 125 |
+
|
| 126 |
+
PCA [21] is an orthogonal transformation of correlated variables into linearly uncorrelated variables known as principal components. The first principal component has the highest variance; each successive component has the largest possible variance while remaining orthogonal to the previous components. PCA is used to reduce the dimensions or features with which we train our classifier, which ultimately helps in reducing the time and space complexity of the computation.
|
| 127 |
+
|
| 128 |
+
# First Component
|
| 129 |
+
|
| 130 |
+
$$
|
| 131 |
+
\mathrm {Y} _ {(1)} = \arg \max _ {| \mathrm {Y} | = 1} \left\{\Sigma_ {\mathrm {k}} \left(\mathrm {t} _ {1}\right) _ {(\mathrm {k})} ^ {2} \right\} = \arg \max _ {| \mathrm {Y} | = 1} \left\{\Sigma_ {\mathrm {k}} \left(\mathbf {X} _ {(\mathrm {k})}. \mathrm {Y}\right) ^ {2} \right\}, \tag {10}
|
| 132 |
+
$$
|
| 133 |
+
|
| 134 |
+
where $\mathbf{Y}_{(1)}$ denotes the unit vector and $\mathbf{X}$ is the image matrix.
|
| 135 |
+
|
| 136 |
+
Equation in matrix form:
|
| 137 |
+
|
| 138 |
+
$$
|
| 139 |
+
\mathbf {Y} _ {(1)} = \arg \max _ {| \mathrm {Y} | = 1} \left\{\| \mathbf {X Y} \| ^ {2} \right\} = \arg \max _ {| \mathrm {Y} | = 1} \left\{\mathbf {Y} ^ {\mathrm {T}} \mathbf {X} ^ {\mathrm {T}} \mathbf {X Y} \right\} \tag {11}
|
| 140 |
+
$$
|
| 141 |
+
|
| 142 |
+
$$
|
| 143 |
+
\mathbf {Y} _ {(1)} = \arg \max \left\{\mathbf {Y} ^ {\mathrm {T}} \mathbf {X} ^ {\mathrm {T}} \mathbf {X Y} / \mathbf {Y} ^ {\mathrm {T}} \mathbf {Y} \right\} \tag {12}
|
| 144 |
+
$$
|
| 145 |
+
|
| 146 |
+
The Rayleigh quotient is maximized by the largest eigenvalue of the matrix $\mathbf{X}^{\mathrm{T}}\mathbf{X}$ , which is positive semi-definite, with $\mathbf{Y}$ the corresponding eigenvector.
|
| 147 |
+
|
| 148 |
+
# Further Components
|
| 149 |
+
|
| 150 |
+
The Nth component can be found by subtracting the first N-1 principal components from $\mathbf{X}$ , where $\mathbf{N} = 13$ for the proposed method.
|
| 151 |
+
|
| 152 |
+
$$
|
| 153 |
+
\mathbf {X} _ {\mathbf {N}} = \mathrm {X} - \sum_ {n = 1} ^ {N - 1} \mathbf {X Y} (\mathrm {n}) \mathrm {Y} (\mathrm {n}) ^ {\mathrm {T}} \tag {13}
|
| 154 |
+
$$
|
| 155 |
+
|
| 156 |
+
Further, the weight vector can be found as described below:
|
| 157 |
+
|
| 158 |
+
$$
|
| 159 |
+
\mathbf {Y} _ {(K)} = \arg \max _ {\| \mathbf {Y} \| = 1} \left\{\| \mathbf {X} _ {K} ^ {*} \mathbf {Y} \| ^ {2} \right\} = \arg \max \left\{\mathbf {Y} ^ {\mathrm {T}} \mathbf {X} _ {K} ^ {\mathrm {T}} \mathbf {X} _ {K} \mathbf {Y} / \mathbf {Y} ^ {\mathrm {T}} \mathbf {Y} \right\} \tag {14}
|
| 160 |
+
$$
|
| 161 |
+
|
| 162 |
+
# d. Gray-Level Co-occurrence Matrix
|
| 163 |
+
|
| 164 |
+
GLCM [22] is a statistical method that describes the spatial relationship of pixels based on the spatial grey-level dependence matrix. It characterizes the texture of the image by calculating how often pairs of pixel values occur in a given spatial relationship. The thirteen features used in the proposed work are: contrast, correlation, energy, homogeneity, mean, standard deviation, kurtosis, skewness, variance, smoothness, IDM, RMS, and entropy. Their definitions are given below:
|
| 165 |
+
|
| 166 |
+
$$
|
| 167 |
+
\text {C o n t r a s t} (\mathbf {C}) = \sum_ {\mathrm {t}, \mathrm {r} = 1} ^ {\mathrm {T}, \mathrm {R}} | \mathrm {t} - \mathrm {r} | ^ {2} \mathbf {q} (\mathrm {t}, \mathrm {r}), \tag {15}
|
| 168 |
+
$$
|
| 169 |
+
|
| 170 |
+
where $\mathbf{q}(\mathrm{t},\mathrm{r})$ is the GLCM, t and r are the row and column indices, T is the total number of rows, and R the total number of columns.
|
| 171 |
+
|
| 172 |
+
$$
|
| 173 |
+
\text {C o r r e l a t i o n} (\operatorname {C o r r}) = \sum_ {\mathrm {t}, \mathrm {r} = 1} ^ {\mathrm {T}, \mathrm {R}} \left((\mathrm {t} - \mu) (\mathrm {r} - \mu) \mathbf {q} (\mathrm {t}, \mathrm {r})\right) / (\sigma (\mathrm {t}) * \sigma (\mathrm {r})), \tag {16}
|
| 174 |
+
$$
|
| 175 |
+
|
| 176 |
+
where $\mu$ is the mean and $\sigma$ the standard deviation.
|
| 177 |
+
|
| 178 |
+
$$
|
| 179 |
+
\operatorname {E n e r g y} (\mathbf {E}) = \sum_ {\mathrm {t}, \mathrm {r} = 1} ^ {\mathrm {T}, \mathrm {R}} \mathbf {q} (\mathrm {t}, \mathrm {r}) ^ {2} \tag {17}
|
| 180 |
+
$$
|
| 181 |
+
|
| 182 |
+
$$
|
| 183 |
+
\text {H o m o g e n e i t y} (\mathbf {H}) = \sum_ {\mathrm {t}, \mathrm {r} = 1} ^ {\mathrm {T}, \mathrm {R}} \mathbf {q} (\mathrm {t}, \mathrm {r}) / (1 + | \mathrm {t} - \mathrm {r} |) \tag {18}
|
| 184 |
+
$$
|
| 185 |
+
|
| 186 |
+
$$
|
| 187 |
+
\text {M e a n} (\mu) = \frac {1}{T * R} * \sum_ {t = 1} ^ {T} \sum_ {r = 1} ^ {R} q (t, r) \tag {19}
|
| 188 |
+
$$
|
| 189 |
+
|
| 190 |
+
$$
|
| 191 |
+
\text {S t a n d a r d D e v i a t i o n} (\boldsymbol {\sigma}) = \sqrt [ 2 ]{\frac {1}{T * R} * \sum_ {t = 1} ^ {T} \sum_ {r = 1} ^ {R} \left(\mathbf {q} (t , r) - \mu\right) ^ {2}} \tag {20}
|
| 192 |
+
$$
|
| 193 |
+
|
| 194 |
+
$$
|
| 195 |
+
\text {K u r t o s i s} (\mathbf {K}) = \left\{\frac {1}{T * R} * \sum_ {t = 1} ^ {T} \sum_ {r = 1} ^ {R} \left(\left(\mathbf {q} (t, r) - \mu\right) / \sigma\right) ^ {4} \right\} - 3 \tag {21}
|
| 196 |
+
$$
|
| 197 |
+
|
| 198 |
+
$$
|
| 199 |
+
\operatorname {S k e w n e s s} (\mathbf {S}) = \frac {1}{T * R} * \sum_ {t = 1} ^ {T} \sum_ {r = 1} ^ {R} \left(\left(\mathbf {q} (t , r) - \mu\right) / \sigma\right) ^ {3} \tag {22}
|
| 200 |
+
$$
|
| 201 |
+
|
| 202 |
+
$$
|
| 203 |
+
\text {V a r i a n c e} (\mathbf {V a r}) = \frac {1}{T * R} * \sum_ {t = 1} ^ {T} \sum_ {r = 1} ^ {R} \left(\mathbf {q} (t, r) - \mu\right) ^ {2} \tag {23}
|
| 204 |
+
$$
|
| 205 |
+
|
| 206 |
+
$$
|
| 207 |
+
\text {S m o o t h n e s s} (\mathbf {R}) = 1 - 1 / \left(1 + \sigma^ {2}\right) \tag {24}
|
| 208 |
+
$$
|
| 209 |
+
|
| 210 |
+
$$
|
| 211 |
+
\operatorname {I D M} (\mathbf {H H}) = \sum_ {\mathrm {t}, \mathrm {r} = 1} ^ {\mathrm {T}, \mathrm {R}} \frac {\mathbf {q} (\mathrm {t} , \mathrm {r})}{1 + | \mathrm {t} - \mathrm {r} |} \tag {25}
|
| 212 |
+
$$
|
| 213 |
+
|
| 214 |
+
$$
|
| 215 |
+
\operatorname {R M S} (\mathbf {y}) = \sqrt [ 2 ]{\sum_ {\mathrm {t} , \mathrm {r} = 1} ^ {\mathrm {T} , \mathrm {R}} \left(\left| \mathbf {q} (\mathrm {t} , \mathrm {r}) \right|\right) ^ {2} / \mathrm {T}} \tag {26}
|
| 216 |
+
$$
|
| 217 |
+
|
| 218 |
+
$$
|
| 219 |
+
\text{Entropy}(\mathbf{h}) = -\sum_{t,r=1}^{T,R} \mathbf{q}(t,r) \log \mathbf{q}(t,r) \tag{27}
|
| 220 |
+
$$
|
| 221 |
+
|
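For reference, the texture statistics above can be computed directly from a normalized co-occurrence matrix with NumPy. This is a hedged sketch rather than the paper's MATLAB code: the 4x4 matrix `q` is a made-up example, and Eqs. (20)-(23) are applied in their standard squared and cubed forms.

```python
import numpy as np

# Hypothetical normalized GLCM q(t, r); any non-negative matrix summing to 1 works.
q = np.array([[4., 2., 1., 0.],
              [2., 4., 2., 1.],
              [1., 2., 4., 2.],
              [0., 1., 2., 4.]])
q /= q.sum()

T, R = q.shape
t, r = np.meshgrid(np.arange(1, T + 1), np.arange(1, R + 1), indexing="ij")

homogeneity = np.sum(q / (1 + np.abs(t - r)))                # Eq. (18) / IDM, Eq. (25)
mean = np.sum(q) / (T * R)                                   # Eq. (19)
sigma = np.sqrt(np.sum((q - mean) ** 2) / (T * R))           # Eq. (20)
variance = np.sum((q - mean) ** 2) / (T * R)                 # Eq. (23)
kurtosis = np.sum(((q - mean) / sigma) ** 4) / (T * R) - 3   # Eq. (21)
skewness = np.sum(((q - mean) / sigma) ** 3) / (T * R)       # Eq. (22)
smoothness = 1 - 1 / (1 + sigma ** 2)                        # Eq. (24)
rms = np.sqrt(np.sum(np.abs(q) ** 2) / T)                    # Eq. (26)
entropy = -np.sum(q[q > 0] * np.log(q[q > 0]))               # Eq. (27)
```

Note that since $\mathbf{q}$ is normalized to sum to 1, the homogeneity/IDM value is bounded above by 1.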
| 222 |
+
# e. Random Forest
|
| 223 |
+
|
| 224 |
+
Random Forest [23] is an ensemble classifier formed by combining many decision trees; it computes its result by the majority voting method. Random Forest is superior to a single decision tree because it overcomes the problem of over-fitting: as a tree grows deep, it starts to over-fit, i.e., it has low bias and high variance. Random Forest trains each tree on a different bootstrap sample of the same training dataset and averages over the resulting decision trees, which avoids over-fitting by slightly increasing bias while reducing variance, and thereby boosts performance. The internal working of Random Forest is shown in Fig. 5. We use 100 decision trees, trained by bagging on training data consisting of 2172 images.
|
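As an illustration of the bagging-plus-majority-voting idea described above, the sketch below builds a toy "forest" of one-level threshold stumps (crude stand-ins for full decision trees) on bootstrap samples, using only NumPy. The data, the stump learner, and all names are invented for this example; it is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data with 13 features, mirroring the paper's feature count.
n, p = 200, 13
X = np.vstack([rng.normal(0.0, 1.0, (n // 2, p)),
               rng.normal(2.0, 1.0, (n // 2, p))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

def fit_stump(X, y, rng):
    """Fit a one-level threshold rule on a random feature."""
    j = int(rng.integers(X.shape[1]))
    thr = float(np.median(X[:, j]))
    pred = (X[:, j] > thr).astype(int)
    invert = (pred == y).mean() < 0.5      # flip the rule if worse than chance
    return j, thr, invert

def predict_stump(stump, X):
    j, thr, invert = stump
    pred = (X[:, j] > thr).astype(int)
    return 1 - pred if invert else pred

# Bagging: each stump sees a different bootstrap sample of the training set.
forest = []
for _ in range(100):
    idx = rng.integers(0, n, size=n)
    forest.append(fit_stump(X[idx], y[idx], rng))

# Majority vote over the 100 stumps.
votes = np.mean([predict_stump(s, X) for s in forest], axis=0)
y_hat = (votes > 0.5).astype(int)
accuracy = (y_hat == y).mean()
```

Individually weak voters trained on different resamples combine into a much stronger majority decision, which is the variance-reduction effect the paragraph describes.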
| 225 |
+
|
| 226 |
+

|
| 227 |
+
Fig. 5 Internal working of Random Forest
|
| 228 |
+
|
| 229 |
+
# f. Decision Tree
|
| 230 |
+
|
| 231 |
+
A Decision Tree [24] is a sequence of conditional control statements that performs operations such as decision analysis. The problem of over-fitting occurs when trees become deep enough. It has a tree structure in which each internal node represents an attribute or feature on the basis of which a decision is made, and each leaf node holds a class label. The working of a decision tree is shown in Fig. 6: features are used as the internal nodes of the tree and classes are the leaf nodes.
|
| 232 |
+
|
| 233 |
+

|
| 234 |
+
Fig. 6 Working of Decision Tree
|
| 235 |
+
|
| 236 |
+
# g. K-Nearest Neighbor
|
| 237 |
+
|
| 238 |
+
KNN [25] is a lazy learning technique in which the target function is approximated locally. It can be used for both classification and regression. Weights are assigned to neighbors based on distance, i.e., if the distance is $d$, the weight is $1/d$, so that nearest neighbors contribute more than distant ones. Its training time complexity is negligible, since training merely stores the samples; Euclidean distances are computed at prediction time. Euclidean distance measurement is shown in Fig. 7: there are two classes, one represented by squares and the other by triangles, and the prediction for the test point (circle) is made from the minimum Euclidean distance.
|
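A minimal NumPy sketch of the nearest-neighbor rule just described: with the paper's $K=1$, the prediction is simply the label of the closest training point by Euclidean distance. The function name and the tiny data are illustrative only.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=1):
    # Pairwise Euclidean distances: shape (n_test, n_train).
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    # Indices of the k nearest training points for each test point.
    nearest = np.argsort(d, axis=1)[:, :k]
    # Majority vote among the k neighbors (ties broken toward class 0).
    return (y_train[nearest].mean(axis=1) > 0.5).astype(int)

X_train = np.array([[0.0, 0.0], [1.0, 1.0]])
y_train = np.array([0, 1])
X_test = np.array([[0.1, 0.0], [0.9, 1.0]])
pred = knn_predict(X_train, y_train, X_test, k=1)   # → array([0, 1])
```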
| 239 |
+
|
| 240 |
+

|
| 241 |
+
Fig. 7 Euclidean Distance Measurement
|
| 242 |
+
|
| 243 |
+
# h. Hybrid Ensemble Classifier
|
| 244 |
+
|
| 245 |
+
The proposed hybrid ensemble classifier KNN-RF-DT is shown in Fig. 8; the final prediction requires at least a two-to-one vote of the classifiers for a specific class, benign or malignant.
|
| 246 |
+
|
| 247 |
+

|
| 248 |
+
Fig. 8 Proposed Hybrid Ensemble Classifier
|
| 249 |
+
|
| 250 |
+
# Algorithm-1 Classification process with Hybrid Ensemble Classifier
|
| 251 |
+
|
| 252 |
+
1. Mdl1 $\leftarrow$ Model of KNN with $K = 1$
|
| 253 |
+
2. Mdl2 $\leftarrow$ Model of Random Forest with 100 Trees
|
| 254 |
+
3. Mdl3 $\longleftarrow$ Model of Decision Tree
|
| 255 |
+
4. p1 $\leftarrow$ predict from Mdl1
|
| 256 |
+
5. p2 $\longleftarrow$ predict from Mdl2
|
| 257 |
+
6. p3 $\leftarrow$ predict from Mdl3
|
| 258 |
+
7. var right $\leftarrow$ 0 and var left $\leftarrow$ 0
|
| 259 |
+
8. if p1 is "Malignant" then
|
| 260 |
+
9. right $\longleftarrow$ right+1
|
| 261 |
+
10. else
|
| 262 |
+
11. left $\leftarrow$ left+1
|
| 263 |
+
12. end
|
| 264 |
+
|
| 265 |
+
13. if p2 is "Malignant" then
|
| 266 |
+
14. right $\longleftarrow$ right+1
|
| 267 |
+
15. else
|
| 268 |
+
16. left $\leftarrow$ left+1
|
| 269 |
+
17. end
|
| 270 |
+
18. if p3 is "Malignant" then
|
| 271 |
+
19. right $\longleftarrow$ right+1
|
| 272 |
+
20. else
|
| 273 |
+
21. left $\leftarrow$ left+1
|
| 274 |
+
22. end
|
| 275 |
+
23. if right is greater than left then
|
| 276 |
+
24. species $\leftarrow$ "Malignant"
|
| 277 |
+
25. else
|
| 278 |
+
26. species $\longleftarrow$ "Benign"
|
| 279 |
+
27. end
|
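Algorithm-1's counting logic reduces to a 2-of-3 majority vote: with three voters, `right > left` is equivalent to at least two "Malignant" predictions. A minimal sketch (the function name is ours):

```python
def hybrid_vote(p1, p2, p3):
    """Majority vote of the three base predictions, as in Algorithm-1."""
    right = sum(p == "Malignant" for p in (p1, p2, p3))  # lines 8-22
    return "Malignant" if right >= 2 else "Benign"       # lines 23-27

hybrid_vote("Malignant", "Benign", "Malignant")   # → "Malignant"
hybrid_vote("Benign", "Benign", "Malignant")      # → "Benign"
```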
| 280 |
+
|
| 281 |
+
# 4. Implementation and Results
|
| 282 |
+
|
| 283 |
+
# a. Dataset used and various SWT filter's Matrix Representations
|
| 284 |
+
|
| 285 |
+
In the proposed work, The Cancer Genome Atlas Glioblastoma Multiforme (TCGA-GBM) [14] data collection is used to conduct the experimental computation of the proposed approach. This is an open, standard, and highly accurate dataset of Glioblastoma Multiforme, the main type of brain tumor; it is freely available for research work, so no committee approval is required to use it. The augmentation process is used to enlarge the dataset, giving 2556 T1-weighted image samples for evaluating the proposed approach, split into training and testing images in the ratio 85:15. The distribution of the dataset between training and testing is shown in Table 1, and image segmentation using Otsu's method and 1-D SWT filters is shown in Fig. 9.
|
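For illustration, the Otsu thresholding step can be sketched in pure NumPy: the threshold is chosen to maximize the between-class variance of the grayscale histogram. This is a generic textbook implementation applied to a made-up bimodal image, not the paper's MATLAB code, and it omits the SWT filtering stage.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's threshold for an 8-bit image: maximize between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                  # class-0 (background) probability
    mu = np.cumsum(prob * np.arange(256))    # cumulative intensity mean
    mu_t = mu[-1]                            # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0     # undefined at the histogram ends
    return int(np.argmax(sigma_b))

# Hypothetical bimodal image: dark background (50), bright region (200).
img = np.full((10, 10), 50, dtype=np.uint8)
img[:, 5:] = 200
thr = otsu_threshold(img)
mask = img > thr          # binary segmentation mask
```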
| 286 |
+
|
| 287 |
+
Table 1. Database for Benign and Malignant classification
|
| 288 |
+
|
| 289 |
+
<table><tr><td>Database</td><td>Training Dataset</td><td>Testing Dataset</td></tr><tr><td>Benign</td><td>1086</td><td>192</td></tr><tr><td>Malignant</td><td>1086</td><td>192</td></tr></table>
|
| 290 |
+
|
| 291 |
+

|
| 292 |
+
Fig. 9 Results of Otsu's, 1-D SWT filters with Approximation, Horizontal, Vertical and Diagonal matrix.
|
| 293 |
+
|
| 294 |
+
# b. Evaluation Metrics and its Graphical Representation
|
| 295 |
+
|
| 296 |
+
The following parameters, namely Accuracy, Precision, Sensitivity, Specificity, F1-score, and Youden index, are calculated for the proposed methodology from the false negative (FN), true negative (TN), true positive (TP), and false positive (FP) counts. The equations are given below:
|
| 297 |
+
|
| 298 |
+
$$
|
| 299 |
+
\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \tag{28}
|
| 300 |
+
$$
|
| 301 |
+
|
| 302 |
+
$$
|
| 303 |
+
\text{Sensitivity} = \frac{TP}{TP + FN} \tag{29}
|
| 304 |
+
$$
|
| 305 |
+
|
| 306 |
+
$$
|
| 307 |
+
\text{Specificity} = \frac{TN}{TN + FP} \tag{30}
|
| 308 |
+
$$
|
| 309 |
+
|
| 310 |
+
$$
|
| 311 |
+
\text{Youden Index} = \text{Sensitivity} + \text{Specificity} - 1 \tag{31}
|
| 312 |
+
$$
|
| 313 |
+
|
| 314 |
+
$$
|
| 315 |
+
\text{Precision} = \frac{TP}{TP + FP} \tag{32}
|
| 316 |
+
$$
|
| 317 |
+
|
| 318 |
+
$$
|
| 319 |
+
\text{F1-Score} = \frac{2 \times \text{Precision} \times \text{Sensitivity}}{\text{Precision} + \text{Sensitivity}} \tag{33}
|
| 320 |
+
$$
|
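All six metrics follow mechanically from the four confusion-matrix counts. A sketch (the function name is ours), evaluated here on the counts read from Fig. 11 with "malignant" taken as the positive class; this yields roughly 97.3%, close to the reported 97.305%.

```python
def metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)                    # Eq. (28)
    sensitivity = tp / (tp + fn)                                  # Eq. (29)
    specificity = tn / (tn + fp)                                  # Eq. (30)
    youden = sensitivity + specificity - 1                        # Eq. (31)
    precision = tp / (tp + fp)                                    # Eq. (32)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)  # Eq. (33)
    return accuracy, sensitivity, specificity, youden, precision, f1

# Counts from the proposed method's confusion matrix (Fig. 11):
acc, sens, spec, yi, prec, f1 = metrics(tp=1239, tn=1249, fp=29, fn=39)
```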
| 321 |
+
|
| 322 |
+
The results obtained in terms of the considered evaluation metrics are shown in Table 2.
|
| 323 |
+
|
| 324 |
+
Table 2. Classification results of various classifiers and proposed scheme
|
| 325 |
+
|
| 326 |
+
<table><tr><td colspan="2">Classifier</td><td>Accuracy %</td><td>Precision %</td><td>Sensitivity %</td><td>F1-score %</td><td>Youden Index %</td><td>Specificity %</td></tr><tr><td colspan="2">Proposed Method (KNN-RF-DT)</td><td>97.305</td><td>97.73</td><td>97.04</td><td>97.41</td><td>94.71</td><td>97.60</td></tr><tr><td rowspan="3">SVM Kernel</td><td>RBF</td><td>93.038</td><td>92.38</td><td>93.82</td><td>94.79</td><td>89.50</td><td>92.26</td></tr><tr><td>Linear</td><td>85.56</td><td>85.20</td><td>86.07</td><td>85.41</td><td>70.72</td><td>85.05</td></tr><tr><td>Polynomial</td><td>89.39</td><td>88.79</td><td>90.22</td><td>90.25</td><td>80.39</td><td>88.58</td></tr><tr><td colspan="2">Naïve Bayes</td><td>81.33</td><td>81.68</td><td>80.83</td><td>81.62</td><td>63.54</td><td>81.85</td></tr><tr><td colspan="2">Decision Tree</td><td>93.157</td><td>93.58</td><td>92.80</td><td>95.45</td><td>90.98</td><td>93.51</td></tr><tr><td colspan="2">Neural Network</td><td>93</td><td>92.57</td><td>93.27</td><td>95.30</td><td>90.61</td><td>92.76</td></tr><tr><td colspan="2">KNN</td><td>94.765</td><td>94.92</td><td>94.30</td><td>94.60</td><td>89.53</td><td>95.23</td></tr></table>
|
| 327 |
+
|
| 328 |
+
From the results shown in Table 2, we conclude that the proposed method, with an accuracy of $97.305\%$, outperforms the compared classifiers. We use KNN, RF, and DT in the hybrid ensemble classifier because they give the best performance on the various evaluation parameters compared to the other classifiers, although the ensemble increases time complexity since three individual classifiers must be computed. The accuracy of Random Forest is consistently greater than that of the Decision Tree, which is why RF is included in the hybrid ensemble classifier. The Sensitivity and Specificity values are quite comparable percentage-wise. Sensitivity characterizes how well benign tumors are identified, as it is simply the true positives divided by the total number of actual benign tumors; Specificity characterizes how well malignant tumors are identified, as it is the true negatives divided by the total number of actual malignant tumors. The Youden index captures the margin between the true-positive and false-positive rates; a high Youden index means true positives far outnumber false positives. The F1-score balances Precision and Sensitivity, which is vital when the class distribution is uneven. Precision is the fraction of actual positives (TP) among all predicted positives $(\mathrm{TP} + \mathrm{FP})$, and it is quite high for the proposed method. Overall, the comparison between the proposed and existing classification methods implemented in the table above is shown in Fig. 10.
|
| 329 |
+
|
| 330 |
+

|
| 331 |
+
Fig. 10 Comparison of proposed and existing classification methods based on performance metrics.
|
| 332 |
+
|
| 333 |
+
# c. Confusion Matrix and GUI
|
| 334 |
+
|
| 335 |
+

|
| 336 |
+
Fig. 11 Confusion matrix of the proposed method.
|
| 337 |
+
|
| 338 |
+
The confusion matrix for the proposed method KNN-RF-DT based on majority voting is shown in Fig. 11. Of 1278 benign tumors, 1249 are classified as benign and 29 as malignant; likewise, of 1278 malignant tumors, 1239 are classified as malignant and 39 as benign. Overall, a good accuracy of $97.305\%$ is obtained using the proposed method.
|
| 339 |
+
|
| 340 |
+
Overall, the proposed method gives excellent performance on the various evaluation parameters described above in comparison to existing methods. Thus, the proposed method is a novel and effective approach for the classification of benign and malignant brain tumors.
|
| 341 |
+
|
| 342 |
+
A GUI, implemented in MATLAB 2017a, makes the proposed method more user-friendly; it is shown below in Fig. 12.
|
| 343 |
+
|
| 344 |
+

|
| 345 |
+
Fig. 12 GUI for the above-proposed method in Matlab 2017a.
|
| 346 |
+
|
| 347 |
+
# d. Area Calculation for Segmented Region
|
| 348 |
+
|
| 349 |
+
The area calculation for the segmented images is shown below in Table 3, using the following formulas:
|
| 350 |
+
|
| 351 |
+
$$
|
| 352 |
+
\text{Image, } \mathrm{I} = \sum_{w=0}^{200} \sum_{h=0}^{200} \left[\mathrm{g}(0) + \mathrm{g}(1)\right] \tag{34}
|
| 353 |
+
$$
|
| 354 |
+
|
| 355 |
+
$$
|
| 356 |
+
\text{where Pixels} = \text{width}(w) \times \text{height}(h) = 200 \times 200
|
| 357 |
+
$$
|
| 358 |
+
|
| 359 |
+
$$
|
| 360 |
+
\mathrm{g}(0) = \text{black pixel (digit 0)}
|
| 361 |
+
$$
|
| 362 |
+
|
| 363 |
+
$$
|
| 364 |
+
\mathrm{g}(1) = \text{white pixel (digit 1)}
|
| 365 |
+
$$
|
| 366 |
+
|
| 367 |
+
$$
|
| 368 |
+
\text{No. of white pixels, } \mathrm{P} = \sum_{w=0}^{200} \sum_{h=0}^{200} \left[\mathrm{g}(1)\right] \tag{35}
|
| 369 |
+
$$
|
| 370 |
+
|
| 371 |
+
where $\mathrm{P}$ is the number of white pixels and 1 pixel $= 0.264$ mm.
|
| 372 |
+
|
| 373 |
+
The formula for the area calculation is as follows:
|
| 374 |
+
|
| 375 |
+
$$
|
| 376 |
+
\text{Area} = \left[\sqrt{\mathrm{P}} \times 0.264\right] \text{ in mm}^{2} \tag{36}
|
| 377 |
+
$$
|
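A literal transcription of Eqs. (35)-(36): count the white pixels of the 200x200 binary mask and apply the stated pixel-to-millimetre factor. The function name and the example mask are ours, and we follow the paper's Eq. (36) exactly as written.

```python
import numpy as np

MM_PER_PIXEL = 0.264   # 1 pixel = 0.264 mm, as stated above

def segmented_area(mask, mm_per_pixel=MM_PER_PIXEL):
    """Area of a binary mask (1 = white/tumor, 0 = black) via Eq. (36)."""
    P = int(np.sum(mask == 1))            # Eq. (35): number of white pixels
    return np.sqrt(P) * mm_per_pixel      # Eq. (36), as written in the paper

mask = np.zeros((200, 200), dtype=int)
mask[50:150, 50:150] = 1                  # hypothetical 100x100 segmented region
area = segmented_area(mask)               # sqrt(10000) * 0.264 = 26.4
```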
| 378 |
+
|
| 379 |
+
Table 3. Area calculation of the segmented images
|
| 380 |
+
|
| 381 |
+
<table><tr><td>Segmented Images</td><td></td><td></td><td></td><td></td></tr><tr><td>Area (mm2)</td><td>28.6656</td><td>38.5765</td><td>12.3235</td><td>15.2869</td></tr></table>
|
| 382 |
+
|
| 383 |
+
# e. Analysis of Time Complexity
|
| 384 |
+
|
| 385 |
+
This section presents the time complexity of the traditional classifiers [18] and of the proposed hybrid ensemble classifier, as shown in Table 4. Further, we compare the time complexity of the proposed method with that of modern deep learning approaches such as the convolutional neural network (CNN).
|
| 386 |
+
|
| 387 |
+
For the proposed hybrid ensemble classifier, the training time complexity is $O(1 + n^2 p n_{trees} + n^2 p)$, where $n_{trees}$ is the number of trees in the random forest, $n$ the number of training samples, and $p$ the number of features used. Here we take $n_{trees} = 100$; $n = 2172$, which is 85% of the total dataset (2556); and $p = 13$. Thus, $O(1 + (2172)^2 \times 13 \times (100 + 1))$ is equivalent to $O(6.1942 \times 10^9)$.
|
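The arithmetic behind this estimate can be checked directly (plain Python; the variable names are ours):

```python
n, p, n_trees = 2172, 13, 100

# O(1 + n^2*p*n_trees + n^2*p) = O(1 + n^2 * p * (n_trees + 1))
cost = 1 + n ** 2 * p * (n_trees + 1)
print(cost)   # 6194187793, i.e. about 6.1942e9
```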
| 388 |
+
|
| 389 |
+
Next, we compute the training time complexity of a deep learning classifier, taking the CNN as an instance. A CNN consists of an input layer, several convolutional layers, pooling layers, fully connected layers, and an output layer. Suppose the CNN has the following architecture.
|
| 390 |
+
|
| 391 |
+
Size (pixels $\times$ pixels) of the input image ($I \times I = 200 \times 200$); kernel size ($S2 \times S2 = 7 \times 7$) and number of kernels ($N1 = 20$) in the first convolutional layer; pooling size $= 2 \times 2$ pixels with stride $= 2$ pixels; kernel size ($S3 \times S3 = 4 \times 4$) and number of kernels ($N2 = 10$) in the second convolutional layer; pooling size $= 2 \times 2$ pixels with stride $= 2$ pixels.
|
| 392 |
+
|
| 393 |
+
Here we process the input image step by step to obtain the number of features that become the input to the fully connected layers of the CNN. After applying the first convolutional layer's kernels, we get $\mathrm{N}1 = 20$ matrices of size $(200-7+1) \times (200-7+1)$, i.e., 20 matrices of size $194 \times 194$. After the pooling layer, we get 20 matrices of size $194/2 \times 194/2$, i.e., $97 \times 97$. In the second convolutional layer, kernels are applied to the output of the first convolutional layer, giving $\mathrm{N}1 \times \mathrm{N}2$ matrices of size $(97-4+1) \times (97-4+1)$, i.e., 200 matrices of size $94 \times 94$. After the second pooling layer, we get 200 matrices of size $94/2 \times 94/2$, i.e., $47 \times 47$. These $47 \times 47 \times 200$ values become the input to the fully connected layers, which form a simple feed-forward backpropagation neural network. The training time complexity of the fully connected layers is $O(nt(pj + jk))$, where $n = 2172$ is the number of training samples, $t = 1000$ the number of epochs, $p = 47 \times 47 \times 200$ the input layer size, $j = 20$ the hidden layer size, and $k = 2$ the output layer size, i.e., $O(2172 \times 1000 \times (47 \times 47 \times 200 \times 20 + 20 \times 2)) = O(1.9192 \times 10^{13})$.
|
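The shape walk-through above can be verified with a few lines (valid convolutions, 2x2 pooling with stride 2; the variable names are ours):

```python
def conv_out(size, kernel):          # 'valid' convolution output size
    return size - kernel + 1

def pool_out(size, stride=2):        # 2x2 pooling with stride 2
    return size // stride

s1 = conv_out(200, 7)                # first conv (20 kernels of 7x7)
s2 = pool_out(s1)
s3 = conv_out(s2, 4)                 # second conv (10 kernels of 4x4)
s4 = pool_out(s3)

features = s4 * s4 * 20 * 10         # inputs to the fully connected layers
n, t, j, k = 2172, 1000, 20, 2
cost = n * t * (features * j + j * k)  # O(nt(pj + jk)), about 1.9192e13
```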
| 394 |
+
|
| 395 |
+
The training time complexity of the proposed hybrid model is $O(6.1942 \times 10^9)$, which is far less than that of deep learning classifiers such as the CNN, $O(1.9192 \times 10^{13})$, for the same set of parameters. In reality, the training time complexity of a CNN would be even higher than calculated here: we assumed the same number of training images for both methods, but in practice a CNN would require far more than 2172 training images, and correspondingly more neurons in the hidden layer.
|
| 396 |
+
|
| 397 |
+
Overall, we conclude that the proposed hybrid ensemble classifier provides good accuracy at the expense of increased training time complexity in comparison to traditional classifiers such as DT, SVM, and KNN. However, it has significantly lower training time complexity than modern deep learning methods such as the CNN, while providing comparable accuracy.
|
| 398 |
+
|
| 399 |
+
Table 4. Time complexity of the traditional classifiers and the proposed classifier
|
| 400 |
+
|
| 401 |
+
<table><tr><td>Complexity</td><td>Training</td><td>Prediction</td></tr><tr><td rowspan="2">KNN-RF-DT</td><td>O(1+n<sup>2</sup>pn<sub>trees</sub>+n<sup>2</sup>p)</td><td rowspan="2">O(np+pn<sub>trees</sub>+p)</td></tr><tr><td>where n<sub>trees</sub> = 100 i.e. number of trees of the random forest, n = 2172: no. of training samples for the model, which is 85% of the total dataset (2556), p = 13: no. of features used.</td></tr><tr><td>Decision Tree</td><td>O(n<sup>2</sup>p)</td><td>O(p)</td></tr><tr><td>SVM (rbf)</td><td>O(n<sup>2</sup>p+n<sup>3</sup>)</td><td>O(n<sub>sv</sub>p), where n<sub>sv</sub> = 1</td></tr><tr><td>KNN</td><td>O(1)</td><td>O(np)</td></tr><tr><td>Neural Network</td><td>O(nt(pj+jk)), where t = 1000: no. of epochs and p = 13 (input layer), j = 20 (hidden layer), k = 2 (output layer)</td><td>O(pj+jk), where p = 13 (input layer), j = 20 (hidden layer), k = 2 (output layer)</td></tr><tr><td>Naïve Bayes</td><td>O(np)</td><td>O(p)</td></tr><tr><td>Random Forest</td><td>O(n<sup>2</sup>pn<sub>trees</sub>)</td><td>O(pn<sub>trees</sub>)</td></tr></table>
|
| 408 |
+
|
| 409 |
+
# 5. Conclusion
|
| 410 |
+
|
| 411 |
+
The proposed work aims at improving the performance of traditional classifiers, which have an advantage over deep learning algorithms in that they require small datasets for training and have low computational time complexity. The image is segmented using Otsu's method, features are extracted using SWT+PCA+GLCM, and finally, classification is performed with the hybrid ensemble classifier KNN-RF-DT. The proposed method is novel and useful, as it outperforms existing machine learning-based methods. Experiments are conducted in MATLAB 2017a on a personal computer with 4 GB memory, a Windows 10 64-bit operating system, and an Intel(R) Core(TM) i3-6006U CPU @ 2.00 GHz. Overall, the proposed method achieves an accuracy of $97.305\%$, precision of $97.73\%$, specificity of $97.60\%$, sensitivity of $97.04\%$, Youden index of $94.71\%$, and F1-score of $97.41\%$, which indicates its suitability for medical images. In future work, other hybridization ideas such as Neural Network-SVM, Neural Network-KNN, Neural Network-RF, Neural Network-DT, and Neural Network-Naïve Bayes will be investigated to further improve the accuracy.
|
| 412 |
+
|
| 413 |
+
# References
|
| 414 |
+
|
| 415 |
+
[1]. Özyurt, F., Sert, E., Avci, E., & Dogantekin, E. (2019). Brain tumor detection based on Convolutional Neural Network with neutrosophic expert maximum fuzzy sure entropy. Measurement, 147, 106830.
|
| 416 |
+
[2]. Abd-Ellah, M. K., Awad, A. I., Khalaf, A. A., & Hamed, H. F. (2019). A review on brain tumor diagnosis from MRI images: Practical implications, key achievements, and lessons learned. Magnetic resonance imaging.
|
| 417 |
+
[3]. Wadhwa, A., Bhardwaj, A., & Verma, V. S. (2019). A review on brain tumor segmentation of MRI images. Magnetic resonance imaging.
|
| 418 |
+
[4]. Othman, M. F. B., Abdullah, N. B., & Kamal, N. F. B. (2011, April). MRI brain classification using support vector machine. In 2011 Fourth International Conference on Modeling, Simulation and Applied Optimization (pp. 1-4). IEEE.
|
| 419 |
+
[5]. Sindhumol, S., Kumar, A., & Balakrishnan, K. (2013). Spectral clustering independent component analysis for tissue classification from brain MRI. Biomedical Signal Processing and Control, 8(6), 667-674.
|
| 420 |
+
[6]. Abd-Ellah, M. K., Awad, A. I., Khalaf, A. A., & Hamed, H. F. (2016, September). Classification of brain tumor MRIs using a kernel support vector machine. In International Conference on Well-Being in the Information Society (pp. 151-160). Springer, Cham.
|
| 421 |
+
[7]. Kalbkhani, H., Shayesteh, M. G., & Zali-Vargahan, B. (2013). Robust algorithm for brain magnetic resonance image (MRI) classification based on GARCH variances series. Biomedical Signal Processing and Control, 8(6), 909-919.
|
| 422 |
+
|
| 423 |
+
[8]. Saritha, M., Joseph, K. P., & Mathew, A. T. (2013). Classification of MRI brain images using combined wavelet entropy based spider web plots and probabilistic neural network. Pattern Recognition Letters, 34(16), 2151-2156.
|
| 424 |
+
[9]. Deepa, S. N., & Devi, B. A. (2012, January). Artificial neural networks design for classification of brain tumour. In 2012 International Conference on Computer Communication and Informatics (pp. 1-6). IEEE.
|
| 425 |
+
[10]. Chandra, S., Bhat, R., & Singh, H. (2009, December). A PSO based method for detection of brain tumors from MRI. In 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC) (pp. 666-671). IEEE.
|
| 426 |
+
[11]. Xuan, X., & Liao, Q. (2007, August). Statistical structure analysis in MRI brain tumor segmentation. In Fourth International Conference on Image and Graphics (ICIG 2007) (pp. 421-426). IEEE.
|
| 427 |
+
[12]. Dhanalakshmi, P., & Kanimozhi, T. (2013). Automatic segmentation of brain tumor using K-Means clustering and its area calculation. International Journal of advanced electrical and Electronics Engineering, 2(2), 130-134.
|
| 428 |
+
[13]. Kaushik, D., Singh, U., Singhal, P., & Singh, V. (2014). Brain tumor segmentation using genetic algorithm. In International Journal of Computer Applications®(IJCA)(0975-8887) International Conference on Advances in Computer Engineering & Applications (ICACEA-2014) at IMSEC, GZB.
|
| 429 |
+
[14]. Brain Tumor dataset available online: https://wiki.cancerimagingarchive.net/display/Public/TCGA-GBM
|
| 430 |
+
[15]. Chaudhary, J., Rani, R., Kamboj, A., “Deep learning-based approach for segmentation of glioma sub-regions in MRI”. International Journal of Intelligent Computing and Cybernetics (2020).
|
| 431 |
+
[16]. Rani, R., Kamboj, A., "Brain Tumor Classification for MR Imaging Using Support Vector Machine". In Progress in Advanced Computing and Intelligent Engineering (pp. 165-176). Springer, Singapore (2019).
|
| 432 |
+
[17]. Kamboj, A., Rani, R., Chaudhary, J., “Deep learning approaches for brain tumor segmentation: A review”. In 2018 First International Conference on Secure Cyber Computing and Communication (ICSCCC) (pp. 599-603). IEEE.
|
| 433 |
+
[18]. Time complexity analysis of various Classifiers: https://www.thekerneltrip.com/machine/learning/computational-complexity-learning-algorithms/
|
| 434 |
+
[19]. Sezgin, M., & Sankur, B., “Survey over image thresholding techniques and quantitative performance evaluation”. Journal of Electronic imaging, 13(1), 146-166, 2014.
|
| 435 |
+
[20]. Shensa, M. J., "The discrete wavelet transform: wedding the à trous and Mallat algorithms". IEEE Transactions on Signal Processing, 40(10), 2464-2482, 1992.
|
| 436 |
+
[21]. Jolliffe, I. T., "Principal component analysis". Technometrics, 45(3), 276, 2003.
|
| 437 |
+
[22]. Haralick, R. M., Shanmugam, K., Dinstein, I. H., "Textural features for image classification". IEEE Transactions on systems, man, and cybernetics, (6), 610-621, 1973.
|
| 438 |
+
[23]. Prinzie, A., Van den Poel, D., “Random multiclass classification: Generalizing random forests to random mnl and random nb”. In International Conference on Database and Expert Systems Applications (pp. 349-358). Springer, Berlin, Heidelberg, 2007.
|
| 439 |
+
[24]. Karimi, K., Hamilton, H. J. "Generation and interpretation of temporal decision rules". arXiv preprint arXiv:1004.3334, 2010.
|
| 440 |
+
[25]. Altman, N. S. “An introduction to kernel and nearest-neighbor nonparametric regression”. The American Statistician, 46(3), 175-185, 1992.
|
data/2021/2101_00xxx/2101.00216/layout.json
CHANGED
|
The diff for this file is too large to render.
See raw diff
|
|
|
data/2021/2101_00xxx/2101.00217/b21cf84d-19f0-4755-9648-42b72523bbcc_content_list.json
CHANGED
|
The diff for this file is too large to render.
See raw diff
|
|
|
data/2021/2101_00xxx/2101.00217/b21cf84d-19f0-4755-9648-42b72523bbcc_model.json
CHANGED
|
The diff for this file is too large to render.
See raw diff
|
|
|
data/2021/2101_00xxx/2101.00217/full.md
CHANGED
|
The diff for this file is too large to render.
See raw diff
|
|
|
data/2021/2101_00xxx/2101.00217/layout.json
CHANGED
|
The diff for this file is too large to render.
See raw diff
|
|
|
data/2021/2101_00xxx/2101.00240/a32da6b4-14ab-4952-b4f7-fdc1a1a5e4f1_content_list.json
CHANGED
|
The diff for this file is too large to render.
See raw diff
|
|
|
data/2021/2101_00xxx/2101.00240/a32da6b4-14ab-4952-b4f7-fdc1a1a5e4f1_model.json
CHANGED
|
The diff for this file is too large to render.
See raw diff
|
|
|
data/2021/2101_00xxx/2101.00240/full.md
CHANGED
|
The diff for this file is too large to render.
See raw diff
|
|
|
data/2021/2101_00xxx/2101.00240/layout.json
CHANGED
|
The diff for this file is too large to render.
See raw diff
|
|
|
data/2021/2101_00xxx/2101.00288/0dc1456f-b5e5-426e-a776-7cd26db1c614_content_list.json
CHANGED
|
The diff for this file is too large to render.
See raw diff
|
|
|
data/2021/2101_00xxx/2101.00288/0dc1456f-b5e5-426e-a776-7cd26db1c614_model.json
CHANGED
|
The diff for this file is too large to render.
See raw diff
|
|
|
data/2021/2101_00xxx/2101.00288/full.md
CHANGED
|
@@ -1,3 +1,433 @@
|
|
| 1 |
-
|
| 2 |
-
|
| 3 |
-
|
| 1 |
+
# POLYJUICE: Generating Counterfactuals for Explaining, Evaluating, and Improving Models
|
| 2 |
+
|
| 3 |
+
Tongshuang Wu<sup>1</sup>
|
| 4 |
+
|
| 5 |
+
<sup>1</sup>University of Washington wtshuang@cs.uw.edu
|
| 6 |
+
|
| 7 |
+
Marco Tulio Ribeiro
|
| 8 |
+
|
| 9 |
+
2Microsoft Research
|
| 10 |
+
|
| 11 |
+
marcotcr@microsoft.com
|
| 12 |
+
|
| 13 |
+
Jeffrey Heer
|
| 14 |
+
|
| 15 |
+
Daniel S. Weld<sup>1,3</sup>
|
| 16 |
+
|
| 17 |
+
3Allen Institute for Artificial Intelligence
|
| 18 |
+
|
| 19 |
+
{jheer,weld}@cs.uw.edu
|
| 20 |
+
|
| 21 |
+
# Abstract
|
| 22 |
+
|
| 23 |
+
While counterfactual examples are useful for analysis and training of NLP models, current generation methods either rely on manual labor to create very few counterfactuals, or only instantiate limited types of perturbations such as paraphrases or word substitutions. We present Polyjuice, a general-purpose counterfactual generator that allows for control over perturbation types and locations, trained by finetuning GPT-2 on multiple datasets of paired sentences. We show that Polyjuice produces diverse sets of realistic counterfactuals, which in turn are useful in various distinct applications: improving training and evaluation on three different tasks (with around $70\%$ less annotation effort than manual generation), augmenting state-of-the-art explanation techniques, and supporting systematic counterfactual error analysis by revealing behaviors easily missed by human experts.
|
| 24 |
+
|
| 25 |
+
# 1 Introduction
|
| 26 |
+
|
| 27 |
+
Counterfactual reasoning — mentally simulating what would have happened if conditions were different — is a common tool for making causality assessments (Kahneman and Tversky, 1981), which in turn are crucial for model evaluation, error analysis, and explanation (Miller, 2019). For example, in Figure 1, "It is great for kids" is perturbed into multiple variations, each providing unique insights by simulating what would have happened if the sentence was different.
|
| 28 |
+
|
| 29 |
+
Applications of counterfactual reasoning to NLP generally specify the relationship $x \rightarrow \hat{x}$ , and then create $\hat{x}$ according to the relationship. As a result, prior work has tailored counterfactual generators for different applications, only collecting subsets of $\hat{x}$ that are useful for the specific task. For example, to support model training and evaluation, human annotators create counterfactuals
|
| 30 |
+
|
| 31 |
+

|
| 32 |
+
Figure 1: Overview: (A) given a sentiment analysis instance $x$ , $\text{Polyjuice}^1$ generates (B) various counterfactuals $\hat{x}$ , which are then (C) selected for downstream use. e.g., in (D) we select counterfactual explanations that complement a black box explanation: though "great" and "kids" are deemed important, perturbing them may not affect the prediction $f(x) = f(\hat{x}) = \text{positive}$ , revealing model failures not covered by feature attributions.
|
| 33 |
+
|
| 34 |
+
that change the groundtruth labels by manually rewriting instances (Gardner et al., 2020; Qin et al., 2019) or defining perturbation functions (Ribeiro et al., 2020). Manual rewrites are costly (e.g., 4-5 minutes per counterfactual (Kaushik et al., 2020)) and susceptible to systematic omissions (e.g., human annotators may cover great $\twoheadrightarrow$ not great, but miss kids $\twoheadrightarrow$ no one in Figure 1B). Meanwhile, automated generators for model analysis and explanation usually focus on other relationships, e.g., generating $\hat{x}$ that have different model predictions than $x$ (Ross et al., 2020; Zhang et al., 2019a). As a result, they neglect prediction-preserving counterfactuals that are equally important for understanding or shaping model behaviors, like kids $\twoheadrightarrow$ no one and great $\twoheadrightarrow$ scary linked to Figure 1D.
|
| 35 |
+
|
| 36 |
+
However, counterfactual generation does not have to be task-specific. The same set of counterfactuals in Figure 1 can support a variety of applications.
|
| 37 |
+
|
| 38 |
+
Moreover, for cases like model explanation and analysis, a general-purpose pool of counterfactuals may be preferable, as the relationship of interest can be more exploratory and user-oriented (Wu et al., 2019). In this work, we formalize the task of counterfactual generation, disentangling generation from the application of counterfactuals. Given an input $x$ (Figure 1A), our generator produces a set of counterfactuals $\hat{\mathbf{X}} = \{\hat{x}_1, \hat{x}_2, \dots\}$ with application-agnostic relationships $x \rightarrow \hat{x}_i$ (Figure 1B). Afterwards, we use application-specific selection methods to find subsets of $\hat{x}$ that are most effective for a given use case (Figure 1C).
We frame the generation step as conditional text generation, and finetune GPT-2 (Radford et al., 2019) into a generator called Polyjuice using $(x,\hat{x})$ pairs. To allow for targeted counterfactuals, we also design control codes like negation or delete (Figure 1B), and adopt fill-in-the-blank structures (Donahue et al., 2020) to specify where the perturbation occurs and how. Intrinsic evaluation shows that Polyjuice generates $\hat{x}$ that are fluent, diverse, and close to $x$ , and that the control mechanisms retrieve perturbations that would likely not be sampled from off-the-shelf language models.
With simple selection heuristics, we show that a single Polyjuice model can significantly aid humans in diverse downstream applications. For counterfactual training and evaluation (§3), humans label Polyjuice counterfactuals rather than creating them from scratch. They produce training data that significantly improve model generalization, as well as contrast sets that help identify model vulnerabilities (Gardner et al., 2020), with around $70\%$ less annotation effort. In another application, Polyjuice produces counterfactual explanations (§4), providing significant insight on top of state-of-the-art explanation techniques. Finally, Polyjuice supports counterfactual error analysis (§5). It allows users to explore related counterfactuals (e.g., the model responds differently to different negation forms in Figure 1B), and to aggregate individual counterfactuals into patterns in order to gain systematic understanding of model behavior.
# 2 General-Purpose Counterfactuals
# 2.1 Definition and Desiderata
Given an instance $x$ , a generator $g$ produces a set of counterfactuals $\hat{\mathbf{X}} = \{\hat{x}_1, \hat{x}_2, \ldots\}$ with various relationships $x \rightarrow \hat{x}_i$ . For example, great $\rightarrow$ not great, kids $\rightarrow$ no one in Figure 1B are both instances of the negation relationship. Each $(x, \hat{x})$ pair shares multiple relationships — these two are also instances of the label flipping relationship if the task is sentiment analysis (but might not be for other tasks). As illustrated in §1, knowing which relationships apply aids selection for downstream applications.

Figure 2: (A) POLYJUICE prompt format, which concatenates the original $x$ , the control code, and the $\hat{x}$ ("It is not great for children" converted to an infilling structure). At generation time, POLYJUICE accepts prompts that just include $x$ (Line 1), or optionally with the code and the [BLANK]s (Lines 2-3), and fills in the blanks sequentially with spans separated by [ANSWER]s (Line 4). (B) POLYJUICE allows blanking at different granularities (even the entire sentence), such that Lines 3-4 in (A) can be replaced by Lines 6-7 or 8-9.
We expect $g$ to produce counterfactuals $\hat{x}$ that are (1) close to $x$ , preferably only involving the minimal changes necessary to establish a certain effect (Pearl, 2018), allowing users to make causality assessments. The generated $\hat{x}$ should also be (2) fluent, i.e., grammatically correct (Morris et al., 2020) and semantically meaningful (e.g., "Colorless green ideas sleep furiously" is not meaningful (Chomsky, 2002)). Fluency operationalizes "probable" counterfactuals in the context of NLP; as Kahneman and Tversky (1981) stated, humans strongly favor counterfactuals that are close to the original instance, but also prefer those that could have easily happened without assuming rare events or strange coincidences. Further, as a general-purpose generator, $g$ should produce counterfactuals with a measure of (3) control over relationships $x \rightarrow \hat{x}$ , such that the counterfactuals can vary with the object-of-attention in each application (the "focus rule" (Kahneman and Tversky, 1981)). Finally, we expect $g$ to output a (4) diverse set of $\hat{x}$ in terms of relationships, covering a large variety of "what-ifs" for different applications (Pearl, 2018).
<table><tr><td>Control code</td><td>Definitions and POLYJUICE-generated Examples</td><td>Training Datasets</td></tr><tr><td>negation</td><td>A dog is not embraced by the woman.</td><td>(Kaushik et al., 2020)</td></tr><tr><td>quantifier</td><td>A dog is → Three dogs are embraced by the woman.</td><td>(Gardner et al., 2020)</td></tr><tr><td>shuffle</td><td>To move (or swap) key phrases or entities around the sentence. A dog → woman is embraced by the woman → dog.</td><td>(Zhang et al., 2019b)</td></tr><tr><td>lexical</td><td>To change just one word or noun chunk without altering the POS tags. A dog is embraced → attacked by the woman.</td><td>(Sakaguchi et al., 2020)</td></tr><tr><td>resemantic</td><td>To replace short phrases without altering the remaining dependency tree. A dog is embraced by the woman → wrapped in a blanket.</td><td>(Wieting and Gimpel, 2018)</td></tr><tr><td>insert</td><td>To add short phrases without altering the remaining dependency tree. A dog is embraced by the little woman.</td><td>(McCoy et al., 2019)</td></tr><tr><td>delete</td><td>To remove short phrases without altering the remaining dependency tree. A dog is embraced by the woman.</td><td>(McCoy et al., 2019)</td></tr><tr><td>restructure</td><td>To alter the dependency tree structure, e.g., changing from passive to active. A dog is embraced by → hugging the woman.</td><td>(Wieting and Gimpel, 2018)</td></tr></table>
Table 1: We design a list of control codes to guide generation. We show POLYJUICE-generated counterfactual examples, and the representative training datasets for each corresponding pattern. Details are in Appendix A.
# 2.2 Conditional Counterfactual Generation
We frame counterfactual generation as a conditional text generation task using language models (LMs), and train Polyjuice by finetuning GPT-2 (Radford et al., 2019) using the following prompt design (alternative LMs could also have been used).
Prompt format design. To ensure that $\hat{x}$ is close to $x$ rather than arbitrary text, we condition the generation on $x$ , followed by a special token (Line 1 in Figure 2A). In Line 2, we have control codes (Keskar et al., 2019) such as negation. We design them to specify types of perturbation from among lexical, syntactic, or semantic aspects (see Table 1), inspired by prior work that categorizes manually created counterfactuals (Kaushik et al., 2020; Gardner et al., 2020). As an additional layer of control over $x \rightarrow \hat{x}$ , we allow users to specify where changes happen by having the LM infill [BLANK] tokens (Donahue et al., 2020), rather than generating arbitrary counterfactuals (Lines 3-4).
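As a concrete illustration, a prompt like the one in Figure 2A can be assembled from its parts. `[BLANK]` and `[ANSWER]` are the tokens named in the figure; the separator spellings `<|perturb|>` and `[SEP]` are illustrative assumptions, not necessarily the released model's exact vocabulary:

```python
def make_prompt(x, code=None, blanked=None, answers=None):
    """Assemble a Polyjuice-style prompt (cf. Figure 2A).
    The special-token spellings '<|perturb|>' and '[SEP]' are assumptions."""
    parts = [x, "<|perturb|>"]
    if code is not None:
        parts.append(f"[{code}]")       # control code, e.g. negation
    if blanked is not None:
        parts.append(blanked)           # x_hat with [BLANK] slots
    if answers:                         # training time: gold infill spans
        parts.append("[SEP]")
        for span in answers:
            parts.extend([span, "[ANSWER]"])
    return " ".join(parts)

# Generation-time prompt (Line 3 of Figure 2A); the model fills the blanks.
prompt = make_prompt("It is great for kids.", "negation",
                     "It is [BLANK] great for [BLANK].")
```

At training time the gold infills are appended after `[SEP]`, so the same format serves both finetuning and controlled generation.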
Finetuning GPT-2 — a causal LM for predicting next tokens — additionally allows us to exercise control at various levels of granularity. At generation time, if the user provides only the original example, POLYJUICE will generate the control code, the blank locations, and the infilling (Lines 2–4). Alternatively, the user can specify the control code, or the control code and the blanks, to exercise different degrees of control depending on the application. As later shown in §4 and §5, such control is important for different use cases.
Training data. To train a conditional model, we combine six existing sentence-pair datasets, each containing a subset of the desired phenomena in Table 1. Further, we find naturally occurring sentence pairs (filtered by edit distance to guarantee closeness) in non-paired datasets including CommonGen (Lin et al., 2020), Natural Questions (Kwiatkowski et al., 2019), and SQuAD (Rajpurkar et al., 2016), such that the resulting dataset contains diverse counterfactuals.
We translate these sentence pairs into the format given in Figure 2A. For each $(x, \hat{x})$ , we compute its primary control code using part-of-speech tags and dependency trees. For example, negation occurs when we observe changes to negation modifiers or specific words like "supposedly", and shuffle occurs when we have overlap between tokens deleted and added. When multiple changes occur, we label the pair with the control code which most significantly changes the semantics of the corresponding subphrase as computed by SBERT (Reimers and Gurevych, 2019). For example, in Figure 2A, negation (great $\rightarrow$ not great) is more significant than lexical (kids $\rightarrow$ children). To balance the distribution (Table 7 in Appendix A), for each dataset, we extract control codes from all the $(x, \hat{x})$ ,<sup>4</sup> and randomly sample up to 10,000 instances per code.
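A heavily simplified sketch of this labeling step, using only the raw token diff (the paper additionally consults POS tags, dependency trees, and SBERT similarity; the negator list here is illustrative):

```python
import difflib

def coarse_control_code(x_toks, xhat_toks):
    """Label an (x, x_hat) token pair with a coarse control code
    from its edit operations alone -- a sketch, not the paper's full rule."""
    sm = difflib.SequenceMatcher(a=x_toks, b=xhat_toks)
    deleted, added = [], []
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op in ("delete", "replace"):
            deleted += x_toks[i1:i2]
        if op in ("insert", "replace"):
            added += xhat_toks[j1:j2]
    if {"not", "n't", "no", "never"} & set(added + deleted):
        return "negation"
    if set(deleted) & set(added):   # same tokens reappear elsewhere
        return "shuffle"
    if not deleted:
        return "insert"
    if not added:
        return "delete"
    return "lexical" if len(deleted) == len(added) == 1 else "resemantic"

coarse_control_code("It is great for kids .".split(),
                    "It is not great for kids .".split())
# → "negation"
```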
In order to allow for flexible blanking at generation time, we generate multiple training prompts per pair, covering different dependency tree structures related to the perturbed spans (Figure 2B), including (1) just the changed tokens, (2) the associated parsing structures, (3) the merged changes, and (4) the entire sentence. We eventually obtain 657,144 prompts from 186,451 pairs.

<table><tr><td rowspan="2">Model</td><td>Diversity</td><td colspan="2">Closeness</td></tr><tr><td>Self-BLEU ↓</td><td>Levenshtein ↓</td><td>Syntactic ↓</td></tr><tr><td>POLYJUICE</td><td>0.34</td><td>0.25</td><td>2.13</td></tr><tr><td>GPT-2</td><td>0.18</td><td>0.70</td><td>6.35</td></tr><tr><td>T5</td><td>0.12</td><td>0.52</td><td>3.50</td></tr><tr><td>RoBERTa</td><td>0.47</td><td>0.14</td><td>1.32</td></tr></table>

Table 2: Intrinsic evaluations: POLYJUICE counterfactuals are closer to the original instance than non-finetuned GPT-2 and T5, and more diverse than RoBERTa. Computational details are in Appendix A.2.
Fluency filtering. While the original GPT-2 produces fluent text, some combinations of control codes and blanks cause Polyjuice to generate nonsensical results. Following Morris et al. (2020), we score both $x$ and $\hat{x}$ with GPT-2, and filter $\hat{x}$ when the log-probability (on the full sentence or the perturbed chunks) decreases by more than 10 points relative to $x$ . Fully automated uses of Polyjuice (e.g., adversarial attacks) may benefit from stricter constraints, at the cost of diversity (as surprising changes may be filtered even if they are fluent).
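The filtering rule can be sketched as follows, with `token_logprob` standing in for a real GPT-2 scorer (e.g., per-token log-probabilities read off a causal LM's logits); the interface is an assumption for illustration:

```python
def sentence_logprob(tokens, token_logprob):
    """Sum of per-token log-probabilities under a causal LM.
    `token_logprob(prefix, tok)` is a stand-in for a real GPT-2 scorer."""
    total, prefix = 0.0, []
    for tok in tokens:
        total += token_logprob(prefix, tok)
        prefix = prefix + [tok]
    return total

def keep_counterfactual(x_logp, xhat_logp, max_drop=10.0):
    """Paper's rule: discard x_hat if its log-probability drops
    more than 10 points relative to x."""
    return (x_logp - xhat_logp) <= max_drop
```

Tightening `max_drop` trades diversity for safety, matching the remark above about fully automated uses.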
# 2.3 Intrinsic Evaluation
We evaluate POLYJUICE on closeness and diversity by comparing its perturbations on 300 randomly selected sentences with baselines that use more or less context from $x$ : (1) non-finetuned GPT-2, (2) token-infilling RoBERTa (Liu et al., 2019) and (3) span-infilling T5 (Raffel et al., 2020).
As shown in Table 2, Polyjuice generates counterfactuals that are close to the original instance, measured by syntactic tree (Zhang and Shasha, 1989) and Levenshtein edit distance (Levenshtein, 1966). In contrast, non-finetuned GPT-2 generates arbitrary text instead of perturbations when given the starting tokens of a sentence, as it only leverages context in a single direction. As for infilling models, Polyjuice counterfactuals are more diverse (measured by self-BLEU (Zhu et al., 2018)) than RoBERTa ones, which are restricted to word substitution. Meanwhile, T5 displays higher diversity but less closeness, probably because it does not consider the original masked tokens when generating $\hat{x}$ . For example, in Figure 1 "It is great for kids," T5 replaces "for kids" with "idea" or "to meet you," whereas Polyjuice generates "for kids yet adults can enjoy" and "for any audience."
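For reference, the closeness numbers above rely on edit distance; a token-level Levenshtein distance can be computed as below (normalizing by the longer sequence length is our assumption here; the paper's exact computation is in Appendix A.2):

```python
def norm_levenshtein(a, b):
    """Token-level Levenshtein distance between sequences a and b,
    normalized by the longer sequence length (an illustrative choice)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,                     # deletion
                          d[i][j - 1] + 1,                     # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return d[m][n] / max(m, n, 1)
```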
We evaluate controllability by comparing Polyjuice with T5 as well as with GPT-2 finetuned on prompts without codes. We verify that the codes improve the success rate of generating counterfactuals with the desired perturbation types set out in Table 1 by as much as $42\%$ for perturbations such as negation and insert. For example, given "It is [BLANK] great for kids," baselines generate "also," "fun and," rather than "not" (negation).
We further verify the fluency of Polyjuice counterfactuals in three tasks/datasets: (1) Sentiment Analysis, SST-2 (Socher et al., 2013), (2) Natural Language Inference (NLI), SNLI (Bowman et al., 2015), and (3) Duplicate Question Detection (QQP) (Wang et al., 2019). We randomly select 100 sentences per dataset, generate three $\hat{x}$ per $x$ , and ask crowd workers to rate whether they are "likely written by native speakers." The workers rated most counterfactuals as fluent: $78\%$ in SST-2, $76\%$ in QQP, and $86\%$ in SNLI. In subsequent sections, we show these rates are suitable for applications where people "team up" with Polyjuice.
# 3 Counterfactual Evaluation & Training
We ask crowdworkers to label POLYJUICE-generated counterfactuals for Sentiment, NLI, and QQP, for the purposes of evaluation and training. In each labeling round, the worker is presented with an original $x$ and its label, and asked to annotate the groundtruth for three $\hat{x}$ , rejecting non-fluent ones (details and interface in Appendix B.1).
We use a simple heuristic to select which counterfactuals are presented for labeling, aimed at increasing diversity. Representing each $\hat{x}$ by its token changes, control code, and dependency tree structure, we greedily select the ones that are least similar to those already selected for labeling. This avoids redundancy in the labeling set, e.g., common perturbation patterns such as black $\rightarrow$ white.
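The greedy step can be sketched as below; representing each candidate as a set of hashable features (in the paper: token changes, control code, and dependency tree structure) is the only assumption:

```python
def select_diverse(candidates, featurize, k):
    """Greedily pick up to k candidates, each time choosing the one whose
    feature set overlaps least with the features already covered."""
    selected, covered, pool = [], set(), list(candidates)
    while pool and len(selected) < k:
        best = min(pool, key=lambda c: len(featurize(c) & covered))
        selected.append(best)
        covered |= featurize(best)
        pool.remove(best)
    return selected

# Toy featurization: the set of tokens in the edit description.
picked = select_diverse(
    ["black -> white", "black -> red", "great -> not great"],
    lambda c: set(c.split()), k=2)
# "black -> red" is skipped: it mostly repeats "black -> white".
```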
# 3.1 Evaluation with Contrast Sets
We verify whether POLYJUICE counterfactuals can be used to create contrast sets (Gardner et al., 2020), i.e., evaluation sets where each instance has a nearby counterfactual with a different groundtruth, to better evaluate model decision boundaries. We construct these sets by simply filtering out counterfactuals that are labeled the same as their original instances (40%–63% depending on the task).

<table><tr><td>Task</td><td>Dev.</td><td>Orig. set</td><td>Contrast set</td><td>Consistency</td></tr><tr><td>Sentiment</td><td>94.3</td><td>93.8</td><td>84.9 (-8.9)</td><td>76.1</td></tr><tr><td>NLI</td><td>86.5</td><td>91.6</td><td>72.3 (-19.3)</td><td>56.4</td></tr><tr><td>QQP</td><td>91.7</td><td>87.5</td><td>75.3 (-12.2)</td><td>61.1</td></tr></table>

Table 3: POLYJUICE $\hat{x}$ as contrast sets, with model accuracy on the development set, the original set of $x$ , the contrast sets, and consistency (cases where the model predicts both $x$ and $\hat{x}$ correctly). The performance drops are similar to those of expert-created sets (Gardner et al., 2020), on which the accuracy of all classification models decreases by 9.8 on average, with a consistency of $\approx 64.1$ . This indicates POLYJUICE can be used to create such sets without expert annotators and at lower cost.
For each task, we test multiple classifiers open-sourced by Huggingface (Wolf et al., 2020), and report the best performing model for each in Table 3 (results for other models are analogous). Polyjuice contrast sets display performance gaps consistent with those of Gardner et al. (2020), where the sets are constructed manually by NLP researchers, even though we use non-expert annotators who only label examples rather than creating them.
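The consistency metric reported in Table 3 reduces to a simple pairwise check; `pair_results` below is an assumed representation of the model's correctness on each $(x, \hat{x})$ pair:

```python
def consistency(pair_results):
    """Fraction of (x, x_hat) pairs on which the model predicts *both* the
    original and the counterfactual correctly (cf. Table 3).
    `pair_results` holds (correct_on_x, correct_on_xhat) booleans."""
    both = sum(1 for ok_x, ok_xhat in pair_results if ok_x and ok_xhat)
    return both / len(pair_results)
```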
# 3.2 Training with Counterfactuals
Following Kaushik et al. (2020), we augment training sets with counterfactual examples. In all experiments, we finetune roberta-base on datasets of $n$ original examples and $m$ counterfactuals, which are generated by Polyjuice (m-polyjuice) or crafted from scratch by humans (m-CAD from Kaushik et al. (2020), only available for NLI). To distinguish the benefit of counterfactuals from that of just adding more data, we further add a baseline that uses $n + m$ original examples (m-baseline). In addition to in-domain test set accuracy, we measure models' generalization on out-of-domain datasets, as well as contrast sets and challenge sets. We also evaluate model capabilities with CheckList (Ribeiro et al., 2020) for Sentiment and QQP. Reported model performances are averaged across multiple data samples and random seeds (Appendix B.2).
For Sentiment, we select random POLYJUICE counterfactuals regardless of their labels, as long as an original $x$ has at least one $\hat{x}$ that flips the label. For NLI and QQP, we observed in a pilot study that randomly chosen counterfactuals may not be more effective than the same amount of additional data. We suspect that Polyjuice lacks domain knowledge and context for identifying critical perturbations, and therefore brings benefits redundant with pretraining (Longpre et al., 2020). Thus, we use the slicing functions of Chen et al. (2019) to find patterns of interest (e.g., prepositions in NLI), and perturb those patterns by placing [BLANK]s on the matched spans. For example, "His surfboard is beneath him" becomes "His surfboard is [BLANK] him", and Polyjuice generates counterfactuals such as "His surfboard is beneath $\rightarrow$ next to him."
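This targeted blanking amounts to replacing a matched span with `[BLANK]`; in the sketch below a regex stands in for the slicing functions of Chen et al. (2019):

```python
import re

def blank_matched_span(sentence, pattern):
    """Replace the first match of a pattern of interest with [BLANK], so
    the generator is forced to perturb exactly that span. The regex is a
    stand-in for a real slicing function."""
    return re.sub(pattern, "[BLANK]", sentence, count=1)

blank_matched_span("His surfboard is beneath him", r"\bbeneath\b")
# → "His surfboard is [BLANK] him"
```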
Results. Tables 4-6 indicate that Polyjuice augmentation is effective in all tasks: m-polyjuice maintains in-domain accuracy while consistently improving or maintaining generalization accuracy in various out-of-domain and challenge sets. On NLI, Polyjuice counterfactuals are as effective or more effective than counterfactuals created from scratch (m-CAD). Notably, we obtain the largest gains on challenge and contrast sets (e.g., Break and DNC in Table 5) or when the out-of-domain dataset is sufficiently different from the training domain (e.g., Senti140 and SemEval in Table 4). Polyjuice also improves results on CheckList tests that previously had high error rates: it significantly lowers the error rates on 11 out of 27 QQP tests, making 2/27 tests worse. For Sentiment, it improves the model on 5 out of 15 tests, hurting 1. Here, we only report a low $m/n$ ratio (<10% for NLI and QQP) to show that a small amount of augmentation is already beneficial. The results are similar for other combinations we explored (see Appendix B.2), except when the ratio of counterfactual to original data was too high (e.g., $m = n$ may decrease vocabulary diversity or induce additional data bias, echoing Khashabi et al. (2020)).
# 3.3 Discussion
We show that POLYJUICE counterfactuals are useful for evaluation, and more effective than additional (non-counterfactual) data for training in a variety of tasks. In contrast to prior work where humans generate counterfactuals from scratch, we only ask them to label automatically generated ones, while still achieving similar or better results.
We believe our approach is more effective than manual creation (although both are beneficial): in terms of implementation effort, the process of just labeling counterfactuals is the same as labeling original examples, such that no additional annotator training or separate pipelines are required; in contrast, Kaushik et al. (2020) set up two separate crowdsourcing tasks for creating and labeling the counterfactuals. Further, annotator effort is much lower, as evaluating examples is easier than creating them — Kaushik et al. (2020) report an average of $\approx 2$ minutes per NLI counterfactual prior to quality validation, while our median time was 10 seconds per counterfactual. Even after our quality validation (removing noisy annotators, disregarding non-fluent counterfactuals), our rate for NLI is $\approx 36$ seconds per counterfactual (used in Table 5).

<table><tr><td>Model</td><td>SST-2</td><td>Senti140</td><td>SemEval</td><td>Amzbook</td><td>Yelp</td><td>IMDB</td><td>IMDB-Cont.</td><td>IMDB-CAD</td></tr><tr><td>m-baseline</td><td>92.9 ± 0.2</td><td>88.9 ± 0.3</td><td>84.8 ± 0.5</td><td>85.1 ± 0.4</td><td>90.0 ± 0.3</td><td>90.8 ± 0.5</td><td>92.2 ± 0.6</td><td>86.5 ± 0.2</td></tr><tr><td>m-polyjuice</td><td>92.7 ± 0.2</td><td>90.7 ± 0.4</td><td>86.4 ± 0.1</td><td>85.6 ± 0.8</td><td>90.1 ± 0.0</td><td>90.6 ± 0.3</td><td>94.0 ± 0.3</td><td>89.7 ± 0.5</td></tr></table>

Table 4: Sentiment model performance, with $n = 4,000$ and $m = 2,000$ . Bolded cells highlight significant improvements. m-polyjuice maintains in-domain and out-of-domain accuracy on reviews (SST-2, Amzbook, Yelp, IMDb Movie Review (Ni et al., 2019; Asghar, 2016; Maas et al., 2011)), and improves it on Twitter data (Senti140 and SemEval 2017 (Go et al., 2009; Nakov et al., 2013)) and contrast sets (Gardner et al., 2020; Kaushik et al., 2020), likely because their distributions are less similar to the original SST-2 training data.

<table><tr><td>Model</td><td>SNLI</td><td>MNLI-m</td><td>MNLI-mm</td><td>SNLI-CAD</td><td>break</td><td>DNC</td><td>stress</td><td>diagnostic</td></tr><tr><td>m-baseline</td><td>85.7 ± 0.4</td><td>86.1 ± 0.2</td><td>86.6 ± 0.2</td><td>72.8 ± 0.3</td><td>86.4 ± 1.5</td><td>54.5 ± 0.6</td><td>65.1 ± 0.6</td><td>56.0 ± 0.8</td></tr><tr><td>m-CAD</td><td>85.8 ± 0.6</td><td>86.6 ± 0.1</td><td>85.6 ± 0.3</td><td>73.8 ± 0.2</td><td>89.4 ± 2.9</td><td>55.8 ± 0.9</td><td>65.5 ± 0.5</td><td>56.4 ± 0.4</td></tr><tr><td>m-polyjuice</td><td>85.3 ± 0.3</td><td>86.0 ± 0.1</td><td>86.4 ± 0.0</td><td>73.6 ± 0.2</td><td>89.1 ± 1.2</td><td>57.7 ± 0.3</td><td>65.1 ± 0.2</td><td>57.5 ± 0.5</td></tr></table>

Table 5: NLI models, with $n = 20,000$ and $m = 1,574$ . m-polyjuice improves accuracy on contrast and challenge sets (Kim et al., 2019; Naik et al., 2018; Glockner et al., 2018; Wang et al., 2019); it exhibits comparable (or better) gains than m-CAD (manual counterfactuals) with less implementation and annotation effort.

<table><tr><td>Model</td><td>QQP</td><td>PAWS-QQP</td></tr><tr><td>m-baseline</td><td>84.5 ± 0.6</td><td>37.0 ± 0.5</td></tr><tr><td>m-polyjuice</td><td>84.7 ± 1.0</td><td>38.7 ± 0.4</td></tr></table>

Table 6: POLYJUICE with $n = 20,000$ and $m = 1,911$ improves accuracy on PAWS-QQP (Zhang et al., 2019b).
In terms of the utility per counterfactual, manual creation and Polyjuice may be complementary. Manual annotation may be unreliable or incomplete for certain forms of counterfactuals (Ribeiro et al., 2018), whereas Polyjuice can miss more complex or context-dependent changes, and could benefit from targeted perturbations that compensate for its lack of domain knowledge (targeted guidance is also helpful for human annotators (Huang et al., 2020)). Thus, it may be important to mix both approaches (Khashabi et al., 2020). Polyjuice's flexibility opens up possibilities for hybrids between human creation and human verification of targeted, machine-generated counterfactuals.
Figure 3: (A) An instance in QQP where the model prediction $f(x)$ is Duplicate (=) at $98.2\%$ confidence, with SHAP importance weights for tokens in Q2. Counterfactual explanations complement SHAP with concrete examples and surprising behaviors, e.g., (B) shows that friend → woman surprisingly flips the prediction to Non-Duplicate (≠), despite the low weight on “friend.”
(B) Q2: How do I help a •woman who is in depression?
(C) Q2: How do I help a friend who is •suicidal?
(D) Q2: How do I •find a friend who is in depression? (=)
# 4 Counterfactual Explanations
A popular way of explaining NLP models is to attribute importance weights to the input tokens, either using attention scores (Wiegreffe and Pinter, 2019) or by summarizing the model behavior on perturbed instances (e.g., LIME (Ribeiro et al., 2016) and SHAP (Lundberg and Lee, 2017)). Though ubiquitous, token scores may not always reflect their real importance (Pruthi et al., 2020). Popular packages like LIME or SHAP estimate scores by masking words, and therefore may not reflect model behavior on natural counterfactual cases. For example, the token "friend" in Figure 3A is not considered important even though a natural substitution in Figure 3B flips the prediction. The opposite happens to "in depression," where a significant change makes no difference to the model's prediction (Figure 3C). Even perfect importance scores may be too abstract for users to gain real understanding (Miller, 2019), e.g., users may not grasp the significance of a low importance score for the token "help" without concrete examples such as the one in Figure 3D.
Since presenting a large number of concrete counterfactuals would be overwhelming, we propose a hybrid approach, displaying feature attributions as a high-level summary, together with a judicious selection of Polyjuice counterfactuals that make behaviors concrete and highlight potential limitations. Following Miller (2019)'s observation that people look for explanations revealing unexpected behavior, we select surprising counterfactuals. That is, we estimate the expected change in prediction with feature attributions, and select counterfactuals that violate these expectations, i.e., examples where the real change in prediction is large even though importance scores are low (Figure 3B), and examples where the change is small but importance scores are high (Figure 3C). Of course, users can also view additional counterfactuals that perturb tokens of particular interest, a technique that we explore in the next section.
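One way to operationalize this "surprise" criterion (an assumption on our part; the paper states the criterion only informally): approximate the expected effect by the share of absolute attribution mass on the perturbed tokens, take the actual effect as the change in predicted probability, and rank counterfactuals by the mismatch between the two:

```python
def surprise_score(attrs, changed_idxs, p_x, p_xhat):
    """Rank counterfactuals by how much the actual prediction change
    disagrees with what the attributions suggest. `attrs` are per-token
    importance weights (e.g., from SHAP); `changed_idxs` indexes the
    perturbed tokens; p_x and p_xhat are predicted probabilities."""
    total = sum(abs(a) for a in attrs) or 1.0
    expected = sum(abs(attrs[i]) for i in changed_idxs) / total
    actual = abs(p_xhat - p_x)
    return abs(actual - expected)

# Low-weight token flips the prediction -> highly surprising (Figure 3B).
surprise_score([0.9, 0.05, 0.05], [1], 0.98, 0.02)
```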
User evaluation. We study the scenario where an expert has access to a model and local explanations, and evaluate the additional benefit of showing counterfactuals, i.e., whether they bring new insights. We evaluate three ways of generating counterfactuals: (1) POLYJUICE-random, a baseline where we show random POLYJUICE counterfactuals, (2) Expert-surprise, where two graduate students (non-participants) were given access to the model and instructed to create counterfactuals that are surprising given the associated SHAP scores, and (3) POLYJUICE-surprise, which uses the selection procedure described in the previous paragraph.
We recruited 13 participants (graduate students with experience in model explanation), and had them analyze the aforementioned QQP model. In each round, users were shown an example, the model prediction, and a SHAP explanation, as in Figure 3A. Users were instructed to create up to 10 counterfactuals in order to better understand model behavior around the example, for which model predictions were given (users created 6 on average). Finally, users simulated what the model would do on six counterfactuals (Hase and Bansal, 2020), two from each condition (in random order). Counterfactuals where users make mistakes are preferable, as displaying these would add information that users do not already have.

Figure 4: Simulation error rates per condition (higher is better). POLYJUICE-surprise has the highest error rate, indicating these counterfactuals would add the most information to users if displayed.
As shown in Figure 4, humans simulated model behavior on Polyjuice-surprise counterfactuals only slightly better than random guessing $(45\% \pm 6\%)$ , i.e., these examples display model behavior that is surprising to users even after seeing explanations and creating their own counterfactuals. Expert-surprise also had a high error rate, but at a much higher cost: generating these for just 20 original instances took 1.5-2 hours of expert labor.
While high error rates could be achieved with unrelated or nonsensical examples, all counterfactuals under evaluation were close to the original examples, when measured by syntactic tree edit $(\approx 1.0)$ or Levenshtein distance $(\approx 0.2)$ , Polyjuice-surprise being the closest on both. An independent rater labeled $95\%$ of Polyjuice-surprise counterfactuals as "likely written by a native speaker," in contrast to $85\%$ for Expert-surprise, indicating that experts sometimes resorted to ungrammatical or nonsensical sentences to find surprising behaviors.
Qualitatively, the study participants tended to create counterfactuals by perturbing the token with the highest weights (84% of their $\hat{x}$ perturbed tokens in the top 15% quantile of weights), not gaining a real understanding of how the other tokens impact predictions. Participants also made a significant number of mistakes even for tokens they had inspected, e.g., a participant perturbed the example in Figure 3A by replacing help $\rightarrow$ play with, yielding a Non-Duplicate model prediction. When faced with help $\rightarrow$ find in Figure 3D, they incorrectly assumed the behavior would be the same.
These results indicate that POLYJUICE counterfactuals complement feature attribution explanations by displaying information that users often miss, even after they have manually explored the model behavior beyond explanations. Moreover, POLYJUICE counterfactuals for this application were more surprising and fluent than Expert-surprise, despite being computed automatically.
Figure 6: Perturbing the subject of $x$ in Figure 5A through [BLANK], resulting in erroneous predictions for different quantifiers (all should be Neutral).
Figure 5: (A) An NLI case with a Neutral prediction (underlined $f(\hat{x})$ are correct). POLYJUICE generates counterfactual hypotheses conditioned on the negation control code. (B) Generalizing perturbations into patterns (Wu et al., 2020). The change DET → no flips 92.8% of predictions from Neutral → Contradiction.
# 5 Interactive Analysis
While our use of Polyjuice has so far relied on automatic selection of counterfactuals, we show in this section how an analyst can benefit from multiple counterfactuals per $x$ , make use of controlled generation for more advanced analysis, and extract general patterns from individual observations. Our use case is counterfactual error analysis (Wu et al., 2019) of RoBERTa finetuned on NLI (used in §3.1), although the techniques are generally applicable.
There is a known correlation between the label Contradiction and hypotheses with negation in NLI datasets (Gururangan et al., 2018), which may cause models to fail on non-contradiction negations. We explore this in Figure 5A by generating counterfactual hypotheses for a random Neutral instance, conditioning only on the original $x$ and the negation control code. While the first two counterfactuals display this failure mode, there is a surprising inconsistency in model behavior between "not" and "n't". We note that manual analysis may not explore these three negation forms, and thus not surface this puzzling behavior.
To verify whether the pattern is widespread, we generate counterfactuals with the negation control code for a random set of instances correctly predicted as Neutral $(n = 895)$ . To generalize individual changes into patterns, we extract frequent counterfactual templates with Tempura (Wu et al., 2020) (details in Appendix D.2), shown in Figure 5B. The top templates (in bold) show that the model flips its prediction from Neutral to Contradiction with roughly the same frequency $(\approx 43\%)$ whether the negation word is "not" or "n't", but flips much more frequently with a different negation pattern where a determiner is replaced with "no" $(92.8\%)$ . While these behaviors may be correct in some instances, they often are not (e.g., Figure 5A), and would thus warrant further exploration and potential mitigation strategies (e.g., counterfactual training, §3). Tangentially, the impact of DET $\rightarrow$ no might lead the analyst to explore the impact of perturbing the subject of hypotheses, which we do in Figure 6 by placing a [BLANK] on the subject rather than using a control code. This leads to the discovery of unstable and erroneous behaviors regarding quantifiers, which we analyze in more detail in Appendix D.1.
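The flip-rate computation behind such an analysis can be sketched as follows; the template strings and the `records` list are hypothetical stand-ins for Tempura's actual output:

```python
from collections import Counter

def flip_rates(records):
    """Compute, per perturbation template, how often the model's
    prediction on the counterfactual differs from its prediction
    on the original instance."""
    total, flipped = Counter(), Counter()
    for template, orig_label, new_label in records:
        total[template] += 1
        if new_label != orig_label:
            flipped[template] += 1
    return {t: flipped[t] / total[t] for t in total}

# Hypothetical records mimicking the Neutral -> Contradiction analysis.
records = [
    ("ADD not", "Neutral", "Contradiction"),
    ("ADD not", "Neutral", "Neutral"),
    ("DET -> no", "Neutral", "Contradiction"),
    ("DET -> no", "Neutral", "Contradiction"),
]
rates = flip_rates(records)
```

Sorting the resulting dictionary by rate surfaces the most label-flipping templates, as in Figure 5B.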
Discussion. POLYJUICE is a powerful tool for interactive analysis. Generating multiple counterfactuals per instance leads to insights that might be missed by manual analysis, and the steering provided by control codes and [BLANK]s allows for analyses that would be non-trivial to do manually (Wu et al., 2019) or with masked language models (e.g., Figure 5B places negations in various parts of sentences, and Figure 6 replaces spans with other spans of varying lengths). Besides error analysis, an analogous interactive use of POLYJUICE may be suitable for test creation (Ribeiro et al., 2020) and forms of data augmentation that are more controlled than what we presented in §3.
# 6 Related Work
Some prior work in training and evaluation relies on humans to generate counterfactuals from scratch (Gardner et al., 2020; Teney et al., 2020; Kaushik et al., 2020). Our experiments in §3 indicate that asking humans to label Polyjuice counterfactuals yields similar or better results at a lower cost, which motivates an exploration of a mixture of manual and semi-automated generation. Similarly, prior work on analysis relies on experts to create individual counterfactuals or perturbation functions (Wu et al., 2019; Ribeiro et al., 2020). In §5, we show that Polyjuice enhances current practice by generating multiple counterfactuals that might have been overlooked, and by providing abstractions that allow for new kinds of analyses.
Prior work on automatically generating counterfactuals typically has a narrower scope in terms of the relationships $x \rightarrow \hat{x}$ . For example, adversarial generators aim to maintain semantics while changing model predictions (Ribeiro et al., 2018; Iyyer et al., 2018; Li et al., 2021), whereas concurrent work to our own (Madaan et al., 2021; Ross et al., 2020) automatically generates $\hat{x}$ that change predictions for explanation or analysis, with no constraints on semantics. However, as shown in §3-§5, a mix of label-preserving and label-flipping counterfactuals generated by Polyjuice is quite useful for training, evaluation, explanation, and analysis. Further, general-purpose counterfactuals may lead to serendipitous discoveries (§5), especially as Polyjuice is not fine-tuned to the target domain (and thus less liable to merely replicate what is already there). Finally, by allowing control through control codes and [BLANK]s, Polyjuice supports human-generator collaboration, where a person specifies desired changes (e.g., perturb the sentence subject). Such collaboration is hard to imagine using automatic generators with no control, or with coarser control through predefined style attributes or labels (Madaan et al., 2020; Malmi et al., 2020). To our knowledge, prior work on controlled generation (Keskar et al., 2019; Dathathri et al., 2020) does not address counterfactual generation.
# 7 Conclusion and Future Work
We propose POLYJUICE, a general-purpose generator that produces fluent and diverse counterfactuals, allowing for control over the kinds and locations of perturbations. With simple, task-specific selection heuristics, POLYJUICE supports various downstream tasks on different domains, including counterfactual data augmentation, contrast set generation, counterfactual explanation, and error analysis.
While Polyjuice is broadly applicable, it is not bias-free: control codes are pre-defined and certainly not exhaustive, and the model is fine-tuned on a collection of paired datasets where certain perturbations are more or less likely (e.g., we observe that words with negative sentiment tend to be slightly more likely than positive ones in some contexts). Collecting naturally occurring counterfactuals is an important area of future research, as is the development of generators that allow for control even without a priori control codes.
Besides improving the generators, further work is needed to improve the value of counterfactuals. For example, while Polyjuice shows consistent gains across tasks in data augmentation, the improvements on some datasets are not as significant. This aligns with observations in prior work that even manual counterfactuals can be marginally beneficial (Kaushik et al., 2020; Huang et al., 2020), possibly because the original data is already diverse enough, or the perturbed signal in counterfactuals is too subtle to affect the model (e.g., when only a single word is changed in a long sentence). We hope to perform more thorough experiments on tuning the amount and the distribution of counterfactual augmentation, as well as other ways of incorporating counterfactuals, such as having explicit terms in the loss function for contrasting counterfactuals with original data (Teney et al., 2020), or other forms of contrastive learning.
Although our applications all involved people, the human-Polyjuice collaboration in labeling and explanations could benefit from richer interaction mechanisms. We believe Polyjuice motivates future research on more expressive forms of counterfactual training, where users generate counterfactuals together with Polyjuice, and label counterfactual patterns rather than individual instances. Similarly, interactive explanations and analysis are exciting directions, especially as we develop new ways of selecting, presenting, and aggregating counterfactuals for various analysis objectives. Having noted these opportunities, we believe Polyjuice is already a powerful tool for counterfactual reasoning, in particular for tasks where people are directly involved. Polyjuice is open-source, and available at https://github.com/tongshuangwu/polyjuice.
# Acknowledgements
The work was supported by ONR grant N00014-18-1-2193, NSF RAPID grant 2040196, NSF award IIS-1901386, the University of Washington WRF/Cable Professorship, and the Allen Institute for Artificial Intelligence (AI2). We thank Jim Chen, Dianqi Li, Scott Lundberg, Hao Peng, Sameer Singh, Jiao Sun, Victor Zhong, and Sitong Zhou for their helpful comments, as well as our user study participants for their valuable input.
# Ethical Considerations
Our work includes labeling counterfactuals on crowdsourcing platforms, as well as conducting user studies with graduate students. As detailed in Appendix B.1 and C.2, we compensated the MTurk workers \$2.5 for $\approx 15$ minutes of labeling, and the graduate students \$20 for the user study (one hour), above the U.S. federal minimum wage. The studies were conducted with IRB approval.
We only finetune GPT-2 rather than training it from scratch, so our compute costs are relatively low (around 8 hours for finetuning, Appendix A). All of our other experiments only involved finetuning RoBERTa on smaller datasets.
More critically, with most of our demonstrated applications using a human-generator hybrid mechanism, we stress that the interaction between the two deserves careful consideration. It has long been reported that algorithms interacting with humans can negatively impact the human.<sup>9</sup> In our case, the concern might be that users can develop an over-reliance on Polyjuice (Bansal et al., 2021) and hastily accept its generations. Not only can this decrease users' creativity (Green et al., 2014), but it may bias their analysis process: as discussed in §7, Polyjuice generation is not exhaustive, and may favor some perturbation patterns over others in unpredictable ways. In the short term, we plan to highlight these limitations as part of the model documentation, while future research should identify interaction mechanisms, so as to ensure that Polyjuice or other counterfactual generators support humans, rather than hindering their performance.
# References
Nabiha Asghar. 2016. Yelp dataset challenge: Review rating prediction. arXiv preprint arXiv:1605.05362.
Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, and Daniel Weld. 2021. Does the whole exceed its parts? The effect of AI explanations on complementary team performance. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, CHI '21, New York, NY, USA. Association for Computing Machinery.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Computational Linguistics.
Vincent S. Chen, Sen Wu, Alexander J. Ratner, Jen Weng, and Christopher Ré. 2019. Slice-based learning: A programming model for residual learning in critical data slices. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 9392-9402.
Noam Chomsky. 2002. Syntactic structures. Walter de Gruyter.
Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2020. Plug and play language models: A simple approach to controlled text generation. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Chris Donahue, Mina Lee, and Percy Liang. 2020. Enabling language models to fill in the blanks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2492-2501, Online. Association for Computational Linguistics.
Matt Gardner, Yoav Artzi, Victoria Basmov, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hannaneh Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, and Ben Zhou. 2020. Evaluating models' local decision boundaries via contrast sets. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1307-1323, Online. Association for Computational Linguistics.
Sahaj Garg, Vincent Perot, Nicole Limtiaco, Ankur Taly, Ed H Chi, and Alex Beutel. 2019. Counterfactual fairness in text classification through robustness. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pages 219-226.
Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI systems with sentences that require simple lexical inferences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 650-655, Melbourne, Australia. Association for Computational Linguistics.
Alec Go, Richa Bhayani, and Lei Huang. 2009. Twitter sentiment classification using distant supervision. CS224N project report, Stanford, 1(12):2009.
Spence Green, Sida I. Wang, Jason Chuang, Jeffrey Heer, Sebastian Schuster, and Christopher D. Manning. 2014. Human effort and machine learnability in computer aided translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1225-1236, Doha, Qatar. Association for Computational Linguistics.
Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107-112, New Orleans, Louisiana. Association for Computational Linguistics.
Peter Hase and Mohit Bansal. 2020. Evaluating explainable AI: Which algorithmic explanations help users predict model behavior? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5540-5552, Online. Association for Computational Linguistics.
William Huang, Haokun Liu, and Samuel R. Bowman. 2020. Counterfactually-augmented SNLI training data does not yield better generalization than unaugmented data. In Proceedings of the First Workshop on Insights from Negative Results in NLP, pages 82-87, Online. Association for Computational Linguistics.
Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1875-1885, New Orleans, Louisiana. Association for Computational Linguistics.
Daniel Kahneman and Amos Tversky. 1981. The simulation heuristic. Technical report, Stanford Univ CA Dept of Psychology.
Divyansh Kaushik, Eduard H. Hovy, and Zachary Chase Lipton. 2020. Learning the difference that makes A difference with counterfactually-augmented data. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Nitish Shirish Keskar, Bryan McCann, Lav Varshney, Caiming Xiong, and Richard Socher. 2019. CTRL - A Conditional Transformer Language Model for Controllable Generation. arXiv preprint arXiv:1909.05858.
Daniel Khashabi, Tushar Khot, and Ashish Sabharwal. 2020. More bang for your buck: Natural perturbation for robust question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 163-170, Online. Association for Computational Linguistics.
Najoung Kim, Roma Patel, Adam Poliak, Patrick Xia, Alex Wang, Tom McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bowman, and Ellie Pavlick. 2019. Probing what different NLP tasks teach machines about function word comprehension. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019), pages 235-249, Minneapolis, Minnesota. Association for Computational Linguistics.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452-466.
VI Levenshtein. 1966. Binary Codes Capable of Correcting Deletions, Insertions and Reversals. Soviet Physics Doklady, 10:707.
Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, and Bill Dolan. 2021. Contextualized perturbation for textual adversarial attack. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5053-5069, Online. Association for Computational Linguistics.
Bill Yuchen Lin, Wangchunshu Zhou, Ming Shen, Pei Zhou, Chandra Bhagavatula, Yejin Choi, and Xiang Ren. 2020. CommonGen: A constrained text generation challenge for generative commonsense reasoning. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1823-1840, Online. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Shayne Longpre, Yu Wang, and Chris DuBois. 2020. How effective is task-agnostic data augmentation for pretrained transformers? In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4401-4411, Online. Association for Computational Linguistics.
Scott M. Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 4765-4774.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA. Association for Computational Linguistics.
Aman Madaan, Amrith Setlur, Tanmay Parekh, Barnabas Poczos, Graham Neubig, Yiming Yang, Ruslan Salakhutdinov, Alan W Black, and Shrimai Prabhumoye. 2020. Politeness transfer: A tag and generate approach. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1869-1881, Online. Association for Computational Linguistics.
Nishtha Madaan, Inkit Padhi, Naveen Panwar, and Diptikalyan Saha. 2021. Generate your counterfactuals: Towards controlled counterfactual generation for text. Proceedings of the AAAI Conference on Artificial Intelligence.
Eric Malmi, Aliaksei Severyn, and Sascha Rothe. 2020. Unsupervised text style transfer with padded masked language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8671-8680, Online. Association for Computational Linguistics.
Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428-3448, Florence, Italy. Association for Computational Linguistics.
Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267:1-38.
John Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020. TextAttack: A framework for adversarial attacks, data augmentation, and adversarial training in NLP. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 119-126, Online. Association for Computational Linguistics.
Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2340-2353, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Preslav Nakov, Sara Rosenthal, Zornitsa Kozareva, Veselin Stoyanov, Alan Ritter, and Theresa Wilson. 2013. SemEval-2013 task 2: Sentiment analysis in Twitter. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 312-320, Atlanta, Georgia, USA. Association for Computational Linguistics.
Jianmo Ni, Jiacheng Li, and Julian McAuley. 2019. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 188-197, Hong Kong, China. Association for Computational Linguistics.
Judea Pearl. 2018. Causal and counterfactual inference. The Handbook of Rationality, pages 1-41.
Danish Pruthi, Mansi Gupta, Bhuwan Dhingra, Graham Neubig, and Zachary C. Lipton. 2020. Learning to deceive with attention-based explanations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4782-4793, Online. Association for Computational Linguistics.
Lianhui Qin, Antoine Bosselut, Ari Holtzman, Chandra Bhagavatula, Elizabeth Clark, and Yejin Choi. 2019. Counterfactual story reasoning and generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5043-5053, Hong Kong, China. Association for Computational Linguistics.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.
Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pages 1135-1144. ACM.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversarial rules for debugging NLP models. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 856-865, Melbourne, Australia. Association for Computational Linguistics.
Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Behavioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4902-4912, Online. Association for Computational Linguistics.
Alexis Ross, Ana Marasović, and Matthew E. Peters. 2020. Explaining NLP models via minimal contrastive editing (MiCE).
Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. WinoGrande: An adversarial Winograd schema challenge at scale. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8732-8740.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics.
Damien Teney, Ehsan Abbasnejad, and Anton van den Hengel. 2020. Learning what makes a difference from counterfactual examples and gradient supervision. In Computer Vision - ECCV 2020, pages 580-599, Cham. Springer International Publishing.
Vijay V Vazirani. 2013. Approximation algorithms. Springer Science & Business Media.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 11-20, Hong Kong, China. Association for Computational Linguistics.
John Wieting and Kevin Gimpel. 2018. ParaNMT-50M: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451-462, Melbourne, Australia. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, and Daniel Weld. 2019. Errudite: Scalable, reproducible, and testable error analysis. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 747-763, Florence, Italy. Association for Computational Linguistics.
Tongshuang Wu, Kanit Wongsuphasawat, Donghao Ren, Kayur Patel, and Chris DuBois. 2020. Tempura: Query analysis with structural templates. In CHI '20: CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, April 25-30, 2020, pages 1-12. ACM.
Huangzhao Zhang, Hao Zhou, Ning Miao, and Lei Li. 2019a. Generating fluent adversarial examples for natural languages. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5564-5569, Florence, Italy. Association for Computational Linguistics.
Kaizhong Zhang and Dennis Shasha. 1989. Simple fast algorithms for the editing distance between trees and related problems. SIAM journal on computing, 18(6):1245-1262.
Yuan Zhang, Jason Baldridge, and Luheng He. 2019b. PAWS: Paraphrase adversaries from word scrambling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1298-1308, Minneapolis, Minnesota. Association for Computational Linguistics.
Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A benchmarking platform for text generation models. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR 2018, Ann Arbor, MI, USA, July 08-12, 2018, pages 1097-1100. ACM.
<table><tr><td>Dataset</td><td>negation</td><td>quantifier</td><td>lexical</td><td>resemantic</td><td>insert</td><td>delete</td><td>restructure</td><td>shuffle</td><td>global</td></tr><tr><td>CAD</td><td>3,274</td><td>292</td><td>8,143</td><td>2,603</td><td>960</td><td>952</td><td>220</td><td>36</td><td>3,466</td></tr><tr><td>Contrast</td><td>336</td><td>436</td><td>1,607</td><td>1,291</td><td>589</td><td>586</td><td>275</td><td>149</td><td>877</td></tr><tr><td>HANS</td><td>50</td><td>0</td><td>0</td><td>0</td><td>3,926</td><td>3,926</td><td>494</td><td>1,602</td><td>2</td></tr><tr><td>ParaNMT</td><td>2,797</td><td>825</td><td>10,000</td><td>10,000</td><td>6,442</td><td>6,205</td><td>5,136</td><td>1,417</td><td>10,000</td></tr><tr><td>PAWS</td><td>81</td><td>1,815</td><td>10,000</td><td>10,000</td><td>3,630</td><td>3,403</td><td>4,551</td><td>10,000</td><td>10,000</td></tr><tr><td>WinoGrande</td><td>3,011</td><td>94</td><td>10,000</td><td>6,927</td><td>120</td><td>124</td><td>453</td><td>65</td><td>3,184</td></tr><tr><td>Crawled</td><td>0</td><td>0</td><td>5,000</td><td>0</td><td>5,000</td><td>5,000</td><td>0</td><td>108</td><td>5,000</td></tr><tr><td>Total</td><td>9,549</td><td>3,462</td><td>44,750</td><td>30,821</td><td>20,667</td><td>20,167</td><td>11,129</td><td>13,377</td><td>32,529</td></tr></table>
Table 7: The datasets used for finetuning POLYJUICE, and the control code distributions.
# A GPT-2 as Counterfactual Generator
# A.1 Training Data and Parameters
We combine several datasets to finetune POLYJUICE.
Contrast set. Authors of 10 existing NLP datasets each manually perturbed 100-1,000 instances to change the gold label, so as to inspect a model's local decision boundary (Gardner et al., 2020). The perturbation patterns vary based on the tasks and the annotators, allowing us to learn diverse strategies. To make sure we can use the contrast sets to evaluate the Sentiment model, we excluded the IMDb movie reviews from training.
Counterfactually-augmented data (CAD). Kaushik et al. (2020) crowdsourced counterfactuals for IMDb movie reviews (1.7k), which we split into paired sentences to match the text length of the other datasets. CAD's perturbation patterns also vary based on the task, but contribute especially to negation. As NLI is among our demonstrating applications, we did not use their 6.6k SNLI counterfactuals. $^{10}$
WinoGrande is a large-scale dataset of 44k instances for testing commonsense problems (Sakaguchi et al., 2020). It contains sentences that differ only by one trigger word (e.g., one noun), making it most suitable for learning lexical exchanges.
ParaNMT-50M contains 50 million English-English sentential paraphrase pairs, covering various domains and styles of text, as well as different sentence structures (Wieting and Gimpel, 2018).
PAWS (Zhang et al., 2019b) contains pairs with high text overlap, created through controlled word swapping, best demonstrating shuffle and restructure. We used its 49k Wikipedia portion.
HANS (McCoy et al., 2019), a challenge set for NLI, contains 10k pairs of premises and hypotheses created from 10 heavily fallible syntactic templates, and therefore compensates for rarer structural changes that may be missed by PAWS.
Crawled. We additionally crawl naturally occurring sentence pairs from non-paired datasets to boost some specific patterns and increase lexical diversity. These include (1) CommonGen (Lin et al., 2020), sentences with common sense concepts; (2) Natural Questions (Kwiatkowski et al., 2019), collections of queries issued to the Google search engine (which therefore involve various paraphrases of similar user intents); and (3) SQuAD (Rajpurkar et al., 2016), whose paragraphs involve Wikipedia knowledge. We estimate close pairs using edit distance, broadly accepting those with less than $60\%$ of the words edited. To exclude tricky cases (e.g., "how do I not be" can be incorrectly regarded as negation for "how do I recover it"), we only augment the patterns that can be identified most reliably: lexical, insert, delete, and shuffle.
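The filtering step can be sketched as follows; the word-level tokenization and the normalization by the longer sentence are assumptions for illustration:

```python
def word_edit_distance(a, b):
    """Word-level Levenshtein distance via dynamic programming."""
    wa, wb = a.split(), b.split()
    prev = list(range(len(wb) + 1))
    for i, x in enumerate(wa, 1):
        cur = [i]
        for j, y in enumerate(wb, 1):
            cur.append(min(prev[j] + 1,             # deletion
                           cur[j - 1] + 1,          # insertion
                           prev[j - 1] + (x != y))) # substitution
        prev = cur
    return prev[-1]

def is_close_pair(a, b, threshold=0.6):
    """Accept a crawled pair when the normalized edit distance
    falls below the 60% cutoff."""
    denom = max(len(a.split()), len(b.split()))
    return word_edit_distance(a, b) / denom < threshold
```

Pairs that pass this filter would then be assigned to one of the reliable control codes (lexical, insert, delete, shuffle) based on how the two sentences differ.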
To balance the distribution (Table 7), for each dataset we extract control codes from all the $(x,\hat{x})$ pairs and randomly sample up to 10,000 instances per code. Still, quantifier and negation have less training data than the other codes. Fortunately, these codes tend to be limited to more specific patterns ("more than", "not", "never") compared to "broad" codes like lexical, and thus even a small sample is enough to learn them. We finetuned an off-the-shelf GPT-2 model from Wolf et al. (2020) for 10 epochs with an initial learning rate of 5e-5, a batch size of 8, and a sequence length of 120 (though any LM could potentially be used). We selected the best epoch based on the evaluation loss on a holdout set of 5,000 instances. Training took around 8 hours on two Titan RTX GPUs.
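The balancing step above can be sketched as below; `extract_code` is a hypothetical stand-in for the paper's code-extraction heuristics.

```python
import random
from collections import defaultdict

def balance_by_code(pairs, extract_code, cap=10_000, seed=0):
    """Group (x, x_hat) pairs by control code and cap each group at `cap`."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for x, x_hat in pairs:
        groups[extract_code(x, x_hat)].append((x, x_hat))
    balanced = []
    for code, items in groups.items():
        # sample without replacement; small groups are kept whole
        balanced.extend(rng.sample(items, min(len(items), cap)))
    return balanced
```

Capping per code rather than globally keeps rare codes such as quantifier and negation from being drowned out by broad ones like lexical.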
# A.2 Intrinsic Evaluation Details
# A.2.1 Closeness and Diversity
Similar to Madaan et al. (2021), we compare the diversity and closeness of Polyjuice with alternative generators: RoBERTa and T5, representing masked language models that prioritize word and span substitution, and the original GPT-2, representing a standard generative model not conditioned on $x$. For a given $x$ and its counterfactuals $\hat{\mathbf{X}}$, we approximate diversity using self-BLEU (Zhu et al., 2018) within $\hat{\mathbf{X}}$. Meanwhile, closeness is the average distance between $x$ and every $\hat{x} \in \hat{\mathbf{X}}$, measured both with normalized word-level Levenshtein edit distance (Levenshtein, 1966; used in MiCE, Ross et al., 2020) and with syntactic tree edit distance (Zhang and Shasha, 1989; used in GYC, Madaan et al., 2021).
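A hedged sketch of the diversity metric: a simplified, unsmoothed self-BLEU (unigram and bigram precision only), where each $\hat{x}$ is scored against the rest of the set. The cited evaluation uses standard BLEU; this toy version only illustrates the idea that higher self-BLEU means a less diverse set.

```python
from collections import Counter

def ngram_precision(hyp, refs, n):
    """Clipped n-gram precision of a token list against reference token lists."""
    hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
    if not hyp_ngrams:
        return 0.0
    ref_ngrams = Counter()
    for r in refs:
        counts = Counter(tuple(r[i:i + n]) for i in range(len(r) - n + 1))
        for g, c in counts.items():
            ref_ngrams[g] = max(ref_ngrams[g], c)
    clipped = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
    return clipped / sum(hyp_ngrams.values())

def self_bleu(candidates):
    """Average geometric mean of 1/2-gram precision of each x_hat vs. the rest."""
    toks = [c.split() for c in candidates]
    scores = []
    for i, hyp in enumerate(toks):
        refs = toks[:i] + toks[i + 1:]
        p1 = ngram_precision(hyp, refs, 1)
        p2 = ngram_precision(hyp, refs, 2)
        scores.append((p1 * p2) ** 0.5 if p1 and p2 else 0.0)
    return sum(scores) / len(scores)
```

Two identical counterfactuals give a self-BLEU of 1.0 (no diversity); fully disjoint ones give 0.0.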
We run the three generators on 300 sentences in total. For GPT-2, we take the first two words of $x$ as the input context (prompt), limit the length of the generation to be similar to that of $x$, and collect 10 counterfactuals. For RoBERTa and T5, we perturb $x$ three times, each time randomly placing up to three [MASK] tokens and asking the generator to produce 5 counterfactuals through beam search, following Ribeiro et al. (2020). Polyjuice uses the same blank (mask) placement as RoBERTa and T5, but we additionally enumerate through all control codes. For each $x$, we randomly sample 5 counterfactuals per generator to form $\hat{\mathbf{X}}$.
As shown in Table 2, POLYJUICE achieves a balance between diversity and closeness. Ideally, we would also like to compare POLYJUICE with concurrent work (Madaan et al., 2021; Ross et al., 2020), but these are yet to be open-sourced and require extensive implementation or finetuning.
# A.2.2 Controllability
To evaluate controllability, we compare POLYJUICE with T5 and with GPT-2 finetuned on prompts without codes (called POLYJUICE-a), such that both baselines see sufficient context. For each control code, we compare the control success rates of POLYJUICE and POLYJUICE-a on 300 prompts. For each prompt, we generate counterfactuals through beam search (beam = 5) and recompute the codes of the top three generated $\hat{x}$. We deem the control successful if at least one of the three recomputed codes matches the desired control code (for POLYJUICE-a, we measure whether the code naturally occurs in the uncontrolled generation). The success rate increases by $26\% \pm 13\%$ across all control codes, ranging from quantifier (up $6\%$, from $50\%$ to $56\%$) to negation (up $42\%$, from $5\%$ to $47\%$). Non-finetuned T5 also achieves less control (its success rate is lower by $33\%$ on average).
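The success criterion can be written down directly; `generate_top3` and `recompute_code` are hypothetical stand-ins for the generator and the code-extraction step.

```python
def control_success(requested_code, recomputed_codes):
    """Success when at least one recomputed code matches the requested one."""
    return requested_code in recomputed_codes

def success_rate(prompts, requested_code, generate_top3, recompute_code):
    """Fraction of prompts whose top-3 generations hit the requested code."""
    hits = 0
    for prompt in prompts:
        codes = [recompute_code(x_hat)
                 for x_hat in generate_top3(prompt, requested_code)]
        hits += control_success(requested_code, codes)
    return hits / len(prompts)
```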
Common failure cases include: (1) the control code conflicts with the blanks, e.g., "a dog is embraced by a [BLANK]" cannot respond to negation; (2) $x$ does not have a corresponding pattern, e.g., shuffle is not applicable to "the movie is good."; and (3) certain salient patterns dominate the generation probability, e.g., the model tends to perturb the quantifier "two" in "two dogs are running," regardless of the code.

Figure 7: A sample labeling task: the crowdworkers annotate three counterfactuals based on their validity and class label, with respect to the original instance.
# B Additional Train & Eval Details, §3
# B.1 MTurk Labeling Details
Procedure. The study started with an introduction that explained the context and tasks. To familiarize crowdworkers with the task, we asked them to complete 1-2 training rounds and explained the expected labels. Each annotator then completed 22 rounds, labeling 3 counterfactuals of a single example in each round, as in Figure 7. The 22 rounds consisted of 20 actual labeling tasks and 2 extra "gold rounds" with known correct labels. The gold cases later served to filter out low-quality crowdworkers. The median annotation time was around 15 minutes, and participants received \$2.50.
Participants. We recruited participants from MTurk, limiting the pool to subjects from within the US with a prior task approval rating of at least $97\%$ and a minimum of 1,000 approved tasks.
Data quality. We applied two filtering strategies. (1) High-quality workers: we only kept data from participants whose median labeling time per round was more than 18 seconds and who correctly labeled at least 4 gold counterfactuals (out of 6), or who correctly labeled all gold ones. (2) Majority-vote labeling: we collected two annotations per counterfactual, and only kept those that at least one annotator deemed valid and for which both annotators agreed on a particular class label. One of the authors labeled a subset of 100 $\hat{x}$ on 100 $x$ in Sentiment, and reached high agreement with the majority-voted results ($\kappa = 0.77$; raw labeling agreement $88\%$).

Figure 8: The accuracy trend on two Sentiment datasets as the total training data size $(m + n)$ varies. One line shows augmentation with $m = 2k$ counterfactuals, and the other shows the corresponding m-baseline. Though the counterfactuals remain useful on datasets like SemEval across all $m + n$, it appears that too many counterfactuals may be harmful (Amzbook).
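The two filters can be sketched as boolean predicates; this is a reconstruction under the stated thresholds, not the authors' code.

```python
import statistics

def is_high_quality(round_times, gold_correct, gold_total=6):
    """Keep a worker who was slow enough and got >= 4 of 6 gold rounds,
    or who got every gold round right."""
    slow_enough = statistics.median(round_times) > 18  # seconds per round
    return (slow_enough and gold_correct >= 4) or gold_correct == gold_total

def majority_label(ann1, ann2):
    """Each annotation is (is_valid, class_label). Keep the counterfactual
    only if at least one annotator deemed it valid and both agree on the
    class; return the agreed label, or None when filtered out."""
    if (ann1[0] or ann2[0]) and ann1[1] == ann2[1]:
        return ann1[1]
    return None
```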
# B.2 Training Details & $m / n$ Ratios, for §3.2
For each $(m,n)$, we created three samples of training data, and each sample was further averaged over four random seeds. For each run, we heuristically picked initial learning rates of 1e-5, 2e-5, and 2e-5 for Sentiment, NLI, and QQP respectively, and trained for 20 epochs with a dropout rate of 0.1 and a batch size of 16. We selected the epoch with the highest accuracy on the corresponding validation set, which is 1/5 the size of the training set and has the same $m/n$ ratio of counterfactual to original examples.
We further explore the ratio of added counterfactuals. Take Sentiment as an example: while the counterfactuals remain effective on most datasets, they hurt model performance on Amzbook when they take up a large proportion of the training data (Figure 8; Yelp followed a similar but milder trend). We suspect that swapping out too much original data reduces data diversity, which in turn decreases model performance. Similarly, Huang et al. (2020) reported that augmenting $n = 1.7k$ NLI examples with $m = 6.6k$ counterfactuals did not improve model generalization accuracy.
# C Additional Explanation Details, §4
# C.1 Selection Methods
Because SHAP weights reflect the average effect of masking a token $t$ , we also focus on word features that are abnormal on average.
More concretely, we define the expected change-in-prediction for perturbing a token $t$ to be its SHAP importance, $\mathbf{H}[\mathrm{D}_{\mathrm{f}}(t,x)] = s(t)$. In Figure 3, $s(t = \text{depression}) = 0.276$. The actual prediction change $\mathrm{D_f}(t,x)$ is the weighted average of $|\mathrm{f_p}(x) - \mathrm{f_p}(\hat{x})|$ over all the $\hat{x}$ that affect $t$ (depression $\twoheadrightarrow$ trouble, depression $\twoheadrightarrow$ a mood), where $\mathrm{f_p}(x)$ is the prediction probability of $f$ on $x$. The weight reflects the number of words modified in $\hat{x}$: if $e(\hat{x})$ denotes the set of edited words, then $w(\hat{x}) = 1 / |e(\hat{x})|$. Intuitively, the more words changed in $\hat{x}$, the less impact each word has; in Figure 3D, we regard "depression" as responsible for half of the impact in depression $\twoheadrightarrow$ suicidal. We group the $\hat{x}$ based on their affected words, $G_{t} = \{\hat{x}\mid t\in e(\hat{x})\}$. $\mathrm{D_f}(t,x)$ then becomes:
$$
\mathrm{D}_{\mathrm{f}}(t, x) = \frac{1}{|G_{t}| + 1} \left( s(t) + \sum_{\hat{x} \in G_{t}} w(\hat{x}) \cdot |\mathrm{f}_{\mathrm{p}}(x) - \mathrm{f}_{\mathrm{p}}(\hat{x})| \right)
$$
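This weighted average is a one-liner in code. In the sketch below (hypothetical inputs), `s_t` is the SHAP weight of token $t$ and `perturbations` lists, for every $\hat{x} \in G_t$, the pair (prediction change $|\mathrm{f_p}(x) - \mathrm{f_p}(\hat{x})|$, number of edited words $|e(\hat{x})|$), so each $\hat{x}$ carries weight $1/|e(\hat{x})|$.

```python
def actual_change(s_t, perturbations):
    """D_f(t, x): weighted average prediction change over G_t,
    with the SHAP weight s(t) added as a smoothing term."""
    total = s_t
    for pred_change, n_edited in perturbations:
        total += pred_change / n_edited  # w(x_hat) = 1 / |e(x_hat)|
    return total / (len(perturbations) + 1)

def gap(s_t, perturbations):
    """Delta D_f(t, x): actual change minus the SHAP-expected change s(t)."""
    return actual_change(s_t, perturbations) - s_t
```

With the Figure 3 value $s(t)=0.276$ and two perturbations changing one and two words, the weighted average discounts the two-word edit by half.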
The additional SHAP weight $s(t)$ acts as a smoothing factor to penalize outliers. Then the gap between the expectation and reality is:
$$
\Delta \mathrm{D}_{\mathrm{f}}(t, x) = \mathrm{D}_{\mathrm{f}}(t, x) - \mathbf{H}[\mathrm{D}_{\mathrm{f}}(t, x)]
$$
We first find the abnormal tokens: (1) $t$ with a small SHAP weight, but whose perturbing $\hat{x}$ experience a large prediction change on average, $t_{L} = \arg \max_{t\in x}\Delta \mathrm{D}_{\mathrm{f}}(t,x)$; and (2) $t$ with a large SHAP weight, but whose perturbing $\hat{x}$ usually leave the prediction intact, $t_{U} = \arg \max_{t\in x} - \Delta \mathrm{D}_{\mathrm{f}}(t,x)$.
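Given $\Delta \mathrm{D}_{\mathrm{f}}(t,x)$ for every token, the two selections are plain argmax operations; a minimal sketch, assuming `delta` maps each token of $x$ to its gap:

```python
def abnormal_tokens(delta):
    """Return (t_L, t_U): the token SHAP most underestimates and the
    token it most overestimates, given delta[t] = Delta D_f(t, x)."""
    t_L = max(delta, key=lambda t: delta[t])   # large actual change, small SHAP
    t_U = max(delta, key=lambda t: -delta[t])  # large SHAP, intact prediction
    return t_L, t_U
```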
Then, we use the most extreme cases within the groups of $G_{t_L}$ and $G_{t_U}$ as the concrete counterfactual explanations, based on their prediction change $|\mathrm{f_p}(x) - \mathrm{f_p}(\hat{x})|$ , and the aggregated SHAP weights of all the changed tokens:
$$
\hat{x}_{L} = \underset{\hat{x} \in G_{t_{L}}}{\arg \max} \left( |\mathrm{f}_{\mathrm{p}}(x) - \mathrm{f}_{\mathrm{p}}(\hat{x})| - \sum_{u \in e(\hat{x})} s(u) \right)
$$
# C.2 User Study Details
Figure 9 shows the sample interface. Participants started by seeing only the reference example and the model query box on the left-hand side. When they chose to start the task, or after they had exhausted their ten query chances, the query box was disabled, the tasks on the right were displayed, and the participants completed them. We compensated participants \$20 for the one-hour study.
# D Additional Error Analysis Details, §5
# D.1 Additional Case Study: Quantifiers
As a follow-up to Figure 6, we slice the data to find entailment instances that have numbers in the hypothesis sentence, and perturb their quantifiers.

Figure 9: A sample explanation task for §4

Figure 10: The NLI model cannot perform the actual counting when the exact number is missing from $P$ .
The extracted templates show that the model does not perform actual counting. When changing one number to another (NUM $\rightarrow$ NUM), the model only flips the label in $64.7\%$ of cases, whereas we would expect all cases to behave like Figure 10A. An inspection of instances indicates that the model gets confused when the premise does not contain the same number explicitly. Indeed, when we filter for such instances (e.g., Figure 10B), the label flip rate of NUM $\rightarrow$ NUM drops to $30.2\%$.
Further, the model only reacts to some quantifier phrase modifiers: +at least ("at least two women are at a bar") always still results in an entailment prediction; +only and +exactly flip the predicted label to neutral $90\%$ of the time ("exactly two women are at a bar"); but the model only changes the prediction $52.6\%$ of the time when we add +more than ("more than two women are at a bar").
# D.2 Representative Perturbation Templates
Similar to Wu et al. (2020), the process of finding representative perturbation patterns takes two steps:
Extract template. For each $\hat{x}$, we compare it with its $x$ and translate the perturbed spans into templates using different combinations of texts, lemmas, and sparse and fine-grained part-of-speech tags. We optionally include surrounding contexts determined by the dependency tree structure (tokens that share the same parents as the perturbed span). For example, "is not reading" can result in templates $t$ as fine-grained as is reading $\rightarrow$ is not reading, or as sparse as +PART. Meanwhile, "are not playing" also translates to +PART or +not, but not to is reading $\rightarrow$ is not reading. As such, the $\hat{x}$ and the templates form a many-to-many relationship: each $\hat{x}$ generates multiple templates, and each template covers a different group of $\hat{x}$.
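A toy sketch of the multi-granularity templates, assuming each token of an edited span comes annotated as (text, POS), e.g., from a dependency parser such as spaCy. A real implementation would also emit lemma-level templates, fine-grained tags, and dependency context.

```python
def span_templates(before, after):
    """Templates, from fine to sparse, for an edit before -> after.
    Each side is a list of (text, pos) tokens; empty `before` = insertion."""
    def render(toks, field):
        return " ".join(t[field] for t in toks)

    if not before:  # pure insertion, e.g. +not / +PART
        return ["+" + render(after, 0), "+" + render(after, 1)]
    return [f"{render(before, 0)} -> {render(after, 0)}",   # text level
            f"{render(before, 1)} -> {render(after, 1)}"]   # POS level
```

So the edit "is reading" $\rightarrow$ "is not reading" yields both the fine-grained text template and the sparse `AUX VERB -> AUX PART VERB` template, giving each $\hat{x}$ several templates of different granularity.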
Select Representative Templates. To find representative changes, we prefer (1) templates that cover a large number of $\hat{x}$. Meanwhile, to avoid overfitting to one instance (e.g., extracting a template red $\rightarrow$ ADJ only because "red" is repeatedly perturbed in one $x$), we prefer (2) templates that perturb various unique $x$. We also prefer (3) finer-grained templates, to avoid being unnecessarily abstract (e.g., to avoid abstracting "not" when it is the only PART changed).
With these intuitions, we formulate template selection as a weighted set coverage problem. We see the union of counterfactuals for all $x$, $\hat{\mathbf{X}}$, as the entire set of elements. Each template $t \in T = \{t_1, \dots, t_m\}$ represents a subset of $\hat{\mathbf{X}}$ that covers $|t|$ counterfactuals. We define the weight as $w(t) = g(t) / |t|_x$, where $|t|_x$ counts the unique original $x$ covered by $t$, and $g(t)$ represents the sparsity of $t$ (heuristically decreasing from text to POS). This way, templates that are too abstract or too focused on a certain $x$ are penalized with a high weight. We use a classic greedy algorithm (Vazirani, 2013) to select a subset $T^* \subset T$ such that the aggregated coverage is maximized and the total weight is minimized.
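The classic greedy heuristic for weighted set cover can be sketched as follows: repeatedly pick the template with the lowest weight per newly covered counterfactual. Here `templates` maps a template name to a (weight, set of covered $\hat{x}$ ids) pair; the data layout is ours, not the paper's.

```python
def greedy_select(templates, universe):
    """Greedy weighted set cover: pick templates by lowest
    weight-per-newly-covered-element until the universe is covered."""
    covered, selected = set(), []
    while covered != universe:
        best, best_ratio = None, float("inf")
        for name, (weight, members) in templates.items():
            new = members - covered
            if not new:
                continue
            ratio = weight / len(new)
            if ratio < best_ratio:
                best, best_ratio = name, ratio
        if best is None:  # remaining elements are not coverable
            break
        selected.append(best)
        covered |= templates[best][1]
    return selected
```

For instance, a cheap template covering three $\hat{x}$ is chosen before a sparser one that adds little new coverage; this greedy rule attains the standard logarithmic approximation guarantee for set cover.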