Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    ArrowInvalid
Message:      JSON parse error: Missing a closing quotation mark in string. in row 16
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 145, in _generate_tables
                  dataset = json.load(f)
                File "/usr/local/lib/python3.9/json/__init__.py", line 293, in load
                  return loads(fp.read(),
                File "/usr/local/lib/python3.9/json/__init__.py", line 346, in loads
                  return _default_decoder.decode(s)
                File "/usr/local/lib/python3.9/json/decoder.py", line 340, in decode
                  raise JSONDecodeError("Extra data", s, end)
              json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 58520)
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1995, in _prepare_split_single
                  for _, table in generator:
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 148, in _generate_tables
                  raise e
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 122, in _generate_tables
                  pa_table = paj.read_json(
                File "pyarrow/_json.pyx", line 308, in pyarrow._json.read_json
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowInvalid: JSON parse error: Missing a closing quotation mark in string. in row 16
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1529, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1154, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset


Columns: text (string), meta (dict)
\section{Introduction}

Sediment transport in coastal applications is a two-phase fluid-solid flow, with sea water as the fluid and solids ranging from quartz sand to pebbles and stones of varying sizes. There are three modes of sediment transport: bed load, suspended load, and wash load transport. Bed load transport is characterized by motion of the sediment particles without detaching from the sediment bed for a significant amount of time, i.e. the sediment particles move by sliding, rolling, and saltating. A number of empirical models have been developed for bed load transport, for example Meyer-Peter and Mueller \cite{meyer_and_muller_1948}, Fernandez Luque and Van Beek \cite{luque_beek_1976}, Nielsen \cite{nielsen_1992}, and Ribberink \cite{ribberink_1998}. In suspended load transport, the sediment particles suspended in water are advected with the water flow. These particles, typically of fine silt and clay size, remain suspended in water by turbulent flows and require a significant amount of time to settle on the sediment bed. Sediment particles in the wash load are transported without deposition while remaining close to the water surface in near-permanent suspension. Due to the limited effect of the wash load on the sediment bed morphology, wash load transport is not considered in the presented work.

Hydrodynamic, sediment transport, and bed morphodynamic processes are closely interrelated: hydrodynamic parameters of a water flow affect sediment transport rates, these rates influence the bed morphology, which in turn affects the water flow and sediment transport. These hydro-sediment-morphodynamic processes, driven by astronomical tides, winds, and long-wave currents in coastal areas, attract a high degree of interest since morphological changes of a coastal area can negatively affect its infrastructure and environment. Elements of coastal infrastructure, such as bridges, piers, and levees, can become structurally compromised as a result of excessive erosion of the sediment bed due to scouring. Environmental concerns include shoreline and beach erosion that may damage natural habitats of endangered protected species, and the effect of sediment transport on contaminants, i.e. sediment deposits may serve as dangerous contaminant sinks or sources. It is thus evident that mathematical modeling of hydro-sediment-morphodynamic processes in coastal areas has clear engineering relevance. Deriving such models poses, however, a number of challenges since they have to couple non-linear hydrodynamic, sediment transport, and bed morphodynamic equations along with modeling their two-way interactions.

A number of hydro-sediment-morphodynamic models, ranging from one- to three-dimensional, have been developed for coastal applications over the last four decades. These models are discussed in detail in \cite{amoudry_2008} and \cite{amoudry_souza_2011}. A three-dimensional model has the capacity for a more accurate and detailed resolution of the process \cite{wu_etal_2000, fang_and_wang_2000, marsooli_and_wu_2015}; however, the amount of computational resources required to run any sizable simulation with such a model is prohibitively large. Therefore, application of three-dimensional models is typically limited to short-time simulations over small-size domains. As an alternative, a depth-averaged two- or, in some cases, one-dimensional model can be used to resolve hydro-sediment-morphodynamic processes in coastal areas.
One such model is formed by the shallow water hydro-sediment-morphodynamic (SHSM) equations, which are derived by integrating and averaging the three-dimensional mass and momentum conservation equations of motion (e.g. see Wu \cite{wu_2007}). In the SHSM equations, the nonlinear shallow water equations, which resolve water-sediment mixture hydrodynamics, are fully coupled with sediment transport and bed morphodynamic models (see Cao \emph{et al.} \cite{cao_etal_2017} for variations of the SHSM equations). Within the last decade, the SHSM equations have been successfully applied in studies of coastal hydro-sediment-morphodynamic processes (e.g. Xiao \emph{et al.}, 2010 \cite{xiao_etal_2010}, Zhu and Dodd, 2015 \cite{zhu_and_dodd_2015}, Kim, 2015 \cite{kim_2015}, Incelli \emph{et al.}, 2016 \cite{incelli_etal_2016}, Briganti \emph{et al.}, 2016 \cite{briganti_etal_2016}).

Numerical solution algorithms for the SHSM equations are typically developed with finite volume methods for applications with unstructured grids. Cao \emph{et al.} \cite{cao_etal_2004} use the total-variation-diminishing (TVD) weighted average flux (WAF) method in conjunction with the Harten-Lax-van Leer-contact (HLLC) approximate Riemann solver to develop their numerical solution algorithm for the SHSM equations. Examples of works that employ HLLC as an approximate Riemann solver for numerical flux definitions include \cite{zhao_etal_2016}, \cite{zhao_etal_2019}, and \cite{hu_etal_2019}. Algorithms based on upwinding numerical fluxes and Roe-averaged states are developed in \cite{li_and_duffy_2011} and \cite{benkhaldoun_etal_2013}. Liu \emph{et al.} \cite{liu_etal_2015_0}, \cite{liu_etal_2015_1}, \cite{liu_and_beljadid_2017} develop numerical methods for the SHSM equations that employ a central-upwind scheme along with the Lagrange theorem to approximate the upper and lower bounds of the local wave speeds. Xia \emph{et al.} \cite{xia_etal_2017} use an operator-splitting technique for the source term and the FORCE (first-order centered) approximate Riemann solver for a numerical treatment of the model. Discontinuous Galerkin discretizations of the SHSM equations are used less often, see, \emph{e.g.}, \cite{kesserwani_etal_2014} and \cite{clare_etal_2020}.

The nonlinear shallow water equations, which form the hydrodynamic part of the SHSM equations, have a number of advantages: a capacity to approximate water motion with sufficient accuracy in the shallow water flow regime, a plethora of developed numerical solution algorithms (e.g. Zhao \emph{et al.} \cite{zhao_etal_1994}, Anastasiou and Chan \cite{anastasiou_and_chan_1997}, Sleigh \emph{et al.} \cite{sleigh_etal_1998}, Aizinger and Dawson \cite{aizinger_and_dawson_2002}, Yoon and Kang \cite{yoon_and_kang_2004}, Kubatko \emph{et al.} \cite{kubatko_etal_2006_nswe}, Samii \emph{et al.} \cite{samii_etal_2019}), efficient parallelization strategies (e.g. hybrid MPI+OpenMP and HPX parallelization in Bremer \emph{et al.} \cite{bremer_etal_2019}), and their ability to approximate wave breaking effects in surf zones. However, this hydrodynamic model cannot capture wave dispersion effects; therefore, the SHSM equations are not applicable in areas where dispersion effects are prevalent. An alternative depth-averaged hydrodynamic model that can reproduce dispersion effects is formed by the Green-Naghdi equations developed in \cite{green_naghdi_1976}.
A number of numerical solution algorithms exist for the Green-Naghdi equations that use various discretization techniques, from finite difference to finite element methods, and a Strang operator splitting technique (e.g. see \cite{chazel_etal_2011, bonneton_etal_2011, panda_etal_2014, lannes_and_marche_2015, duran_marche_2015, duran_and_marche_2017, samii_and_dawson_2018, marche_2020}). The use of a Strang operator splitting in these algorithms provides the capacity to switch between the nonlinear shallow water equations and the Green-Naghdi equations whenever one of the hydrodynamic models is more accurate than the other \cite{duran_marche_2015}.

The purpose of the presented work is to introduce dispersive wave effects into the SHSM equations. This is achieved by considering the Green-Naghdi equations, which results in a dispersive wave hydro-sediment-morphodynamic model. Since the difference between the nonlinear shallow water equations and the Green-Naghdi equations is constituted by the dispersive term, defined through a differential operator that forms an elliptic system \cite{bonneton_etal_2011}, this new model is formed by incorporating the dispersive term into the SHSM equations. The resulting model has the potential to be used in the simulation of morphodynamic processes in areas where dispersive wave effects are prevalent. Numerical solution algorithms for this model are developed employing a Strang operator splitting technique and discontinuous Galerkin finite element methods. A significant portion of this work comprises the development of a massively parallel solver that uses the developed numerical solution algorithms. The solver extends a C++ software package developed by Bremer and Kazhyken\footnote{The software is under development as of the date of publication, and can be accessed at \url{www.github.com/UT-CHG/dgswemv2}. Should there be any questions, comments, or suggestions, please contact the developers through the repository issues page.}.

The rest of the paper is organized as follows. Section 2 presents the governing equations for the dispersive wave hydro-sediment-morphodynamic model. The developed numerical solution algorithms are introduced in Section 3. Section 4 presents a number of numerical tests, including one-dimensional and two-dimensional dam break simulations and solitary wave runs over an erodible sloping beach, that are used to perform verification and validation of the developed algorithms. Final conclusions are presented in Section 5.

\section{Governing equations}

A body of water can be represented by a domain $D_t \subset \mathbb R^{d+1}$, where $d$ is the horizontal spatial dimension that can take values 1 or 2, and $t$ represents the time variable. The domain $D_t$ is filled with a water-sediment mixture, modeled as an incompressible inviscid fluid, and bounded vertically by the bottom and top boundaries, $\Gamma_B$ and $\Gamma_T$, which the fluid particles cannot cross (cf. Fig.~\ref{Fig:Domain}). It is assumed that $\Gamma_B$ and $\Gamma_T$ can be represented as graphs that vary in time: $\Gamma_B$ due to sediment transport and bed morphodynamic processes, $\Gamma_T$ as the evolving free surface of the body of water.
The bathymetry, $b(X,t)$, and the free surface elevation, $\zeta(X,t)$, of the body of water are used in the parameterization of $\Gamma_B$ and $\Gamma_T$: \begin{linenomath} \begin{subequations} \begin{align} \Gamma_B &= \{(X,-H_0+b(X,t)):X\in \mathbb R^d\}, \\ \Gamma_T &= \{(X,\zeta(X,t)):X\in \mathbb R^d\}, \end{align} \end{subequations} \end{linenomath} and the domain $D_t$ is defined as the set of points $(X,z) \in \mathbb R^d \times \mathbb R$ where $-H_0+b(X,t) < z < \zeta(X,t)$.

\begin{figure} \center \includegraphics[width=3in]{domain.eps} \caption{A model representation of a body of water as a domain $D_t \subset \mathbb R^{d+1}$.} \label{Fig:Domain} \end{figure}

A depth-averaged model that can resolve water wave dynamics, and the subsequent sediment transport and bed evolution in the domain $D_t$, is the shallow water hydro-sediment-morphodynamic (SHSM) equations (e.g. see Cao \emph{et al.} \cite{cao_etal_2004}). The hydrodynamic part of the equations is represented by the nonlinear shallow water equations, which provide a sufficiently accurate approximation to the water wave dynamics whenever the shallowness parameter $\mu=H_0^2/L_0^2$, where $L_0$ is the characteristic length and $H_0$ is the reference depth, is much smaller than unity. The present work aims to develop a hydro-sediment-morphodynamic model that has the capacity to capture wave dispersion effects, which the nonlinear shallow water equations are unable to resolve. Therefore, the nonlinear shallow water equations in the SHSM model are replaced with a single-parameter variation of the Green-Naghdi equations, a depth-averaged hydrodynamic model that has the capacity to capture wave dispersion effects, introduced by Bonneton \emph{et al.} in \cite{bonneton_etal_2011}. This forms a set of equations defined over a horizontal domain $\Omega \subset \mathbb R^d$: \begin{linenomath} \begin{equation}\label{Eq:GNHSM} \partial_t \boldsymbol q + \nabla \cdot \boldsymbol F(\boldsymbol q) + \boldsymbol{D}(\boldsymbol q) = \boldsymbol S(\boldsymbol q), \end{equation} \end{linenomath} where the vector of unknowns $\boldsymbol q$ and the flux matrix $\boldsymbol F(\boldsymbol q)$ are \begin{linenomath} \begin{equation} \boldsymbol q = \begin{Bmatrix} h \\ h \mathbf u \\ hc \\ b \end{Bmatrix}, \quad \boldsymbol F(\boldsymbol q) = \begin{Bmatrix} h \mathbf u \\ h\mathbf u \otimes \mathbf u + \frac 1 2 g h^2 \mathbf I \\ h c \mathbf u \\ \mathbf q_b \end{Bmatrix}, \end{equation} \end{linenomath} the source term $\boldsymbol S(\boldsymbol q)$ is defined as \begin{linenomath} \begin{equation} \boldsymbol S(\boldsymbol q) = \begin{Bmatrix} \frac{E-D}{1-p} \\ -gh \nabla b - \frac{\rho_s-\rho_w}{2\rho}gh^2 \nabla c-\frac{(\rho_0-\rho)(E-D)}{\rho(1-p)} \mathbf u + \mathbf f \\ E-D \\ - \frac{E-D}{1-p} \end{Bmatrix}, \end{equation} \end{linenomath} $\mathbf u$ is the water velocity represented by a $d$-dimensional vector, and $h$ is the water depth represented by the mapping $h(X,t) = \zeta(X,t) + H_0 - b(X,t)$ and assumed to be bounded from below by a positive value.
Moreover, $c$ is the volume concentration of sediment in the water-sediment mixture, $E$ and $D$ are the sediment entrainment and deposition rates, respectively, $p$ is the bed porosity, $\rho_w$ and $\rho_s$ are the water and the sediment densities, $\rho$ and $\rho_0$ are the water-sediment mixture and saturated bed densities defined as $\rho=(1-c)\rho_w+c\rho_s$ and $\rho_0=(1-p)\rho_s+p\rho_w$, $\mathbf q_b$ is the bed load sediment flux, $\mathbf f$ comprises additional source terms for the momentum continuity equation (e.g. the Coriolis, bottom friction, and surface wind stress forces), $g$ is the acceleration due to gravity, and $\mathbf I \in \mathbb R^{d\times d}$ is the identity matrix.

Finally, the wave dispersion effects are introduced into the model through the dispersive term \begin{linenomath} \begin{equation} \boldsymbol D(\boldsymbol q) = \begin{Bmatrix} 0 \\ \mathbf w_1 - \alpha^{-1} g h \nabla \zeta \\ 0 \\ 0 \end{Bmatrix}, \end{equation} \end{linenomath} where $\mathbf w_1$ is defined through an elliptic system \begin{linenomath} \begin{equation}\label{Eq:w1} (\mathbf I + \alpha h \mathcal T h^{-1}) \mathbf w_1 = \alpha^{-1} g h \nabla \zeta + h \mathcal Q_1(\mathbf u), \end{equation} \end{linenomath} with operators $\mathcal T$ and $\mathcal Q_1$ defined as \begin{linenomath} \begin{subequations} \begin{align} \mathcal T(\mathbf w) =&\,\mathcal R_1(\nabla\cdot\mathbf w) + \mathcal R_2(\nabla b \cdot \mathbf w), \\ \mathcal Q_1(\mathbf w) =&-2\mathcal R_1\left(\partial_{x} \mathbf w \cdot \partial_{y} \mathbf w^\perp+(\nabla \cdot \mathbf w)^2\right)+\mathcal R_2\left(\mathbf w\cdot (\mathbf w \cdot \nabla)\nabla b\right), \end{align} \end{subequations} \end{linenomath} where operators $\mathcal R_1$ and $\mathcal R_2$ are \begin{linenomath} \begin{subequations} \begin{align} \mathcal R_1(w) &= -\frac 1 {3h} \nabla(h^3 w) - \frac {h}{2} w \nabla b,\\ \mathcal R_2(w) &= \frac 1 {2h} \nabla (h^2 w) + w \nabla b, \end{align} \end{subequations} \end{linenomath} and $\mathbf w^\perp = (-w_2, w_1)^\mathbf{T}$. The parameter $\alpha\in\mathbb R$ in the dispersive term is used to optimize the dispersive properties of the presented hydro-sediment-morphodynamic model. By adjusting $\alpha$, the difference between the phase and group velocities coming from the Stokes linear theory and the Green-Naghdi equations can be minimized. A common strategy aims at minimizing the averaged variation over some range of wave number values \cite{bonneton_etal_2011}.

In the presented model, $E$, $D$, and $\mathbf q_b$ are defined through empirical equations. The sediment entrainment rate $E$ may be defined as in \cite{li_and_duffy_2011}: \begin{linenomath} \begin{equation} E= \begin{cases} \phi (\theta-\theta_c)\lvert\mathbf{u}\rvert h&\text{if}\,\,\,\, \theta>\theta_c \\ 0&\text{if}\,\,\,\,\theta\leq\theta_c \end{cases}, \end{equation} \end{linenomath} where $\phi$ is a calibration parameter, $\theta_c$ is the critical Shields parameter, and $\theta$ is the Shields parameter given by $\theta=\lvert\boldsymbol{\tau}_b\rvert/\sqrt{sgd_{50}}$, where $\boldsymbol{\tau}_b$ is the bottom friction, $s=\rho_s/\rho_w-1$ is the submerged specific gravity, and $d_{50}$ is the mean sediment particle size.
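The entrainment closure is straightforward to evaluate pointwise. The following is a minimal Python sketch of the formula above; it is not taken from the authors' (C++) solver, the function and argument names are illustrative, and vectorized NumPy arrays are assumed.

\begin{verbatim}
import numpy as np

def entrainment_rate(u, h, tau_b, phi, theta_c, s, g, d50):
    """E = phi*(theta - theta_c)*|u|*h where theta > theta_c, else 0,
    with the Shields parameter theta = |tau_b| / sqrt(s*g*d50) as above."""
    theta = np.abs(tau_b) / np.sqrt(s * g * d50)
    speed = np.abs(u)  # 1D velocity; use the Euclidean norm of u in 2D
    return np.where(theta > theta_c, phi * (theta - theta_c) * speed * h, 0.0)
\end{verbatim}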
The sediment deposition rate $D$ can be estimated by an empirical model from \cite{cao_etal_2004}: \begin{linenomath} \begin{equation} D = \omega_o C_a(1-C_a)^2, \end{equation} \end{linenomath} where $\omega_o$ is the settling velocity of a sediment particle in still water, and $C_a = c\alpha_c$ is the near-bed sediment volume concentration with the coefficient $\alpha_c = \min(2, (1-p)/c)$. A number of empirical models for $\mathbf q_b$ have been proposed in the form (see \cite{diaz_etal_2008, cordier_etal_2011} and the references therein) \begin{linenomath} \begin{equation}\label{Eq:Qb} \mathbf q_b = A(h, \mathbf u)\mathbf u \lvert \mathbf u \rvert^{m-1}, \end{equation} \end{linenomath} where $1 \leq m \leq 3$ and $A(h, \mathbf u)$ is an empirical function; e.g. the Grass model takes $A$ as a constant calibrated for the application under investigation and sets $m=3$, cf. \cite{grass_1981}.
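These two closures can be sketched in the same way. The Python illustration below mirrors the deposition rate and the Grass bed load flux defined above; the guard against a vanishing concentration is our addition, all names are illustrative, and the default value of $A$ is a placeholder to be calibrated per application.

\begin{verbatim}
import numpy as np

def deposition_rate(c, omega_0, p):
    """D = omega_0 * C_a * (1 - C_a)^2, with near-bed concentration
    C_a = alpha_c * c and alpha_c = min(2, (1 - p)/c)."""
    alpha_c = np.minimum(2.0, (1.0 - p) / np.maximum(c, 1e-12))
    C_a = alpha_c * c
    return omega_0 * C_a * (1.0 - C_a) ** 2

def grass_bedload_flux(u, A=1e-3, m=3):
    """Grass model: q_b = A * u * |u|^(m-1), with A a calibrated constant."""
    speed = np.linalg.norm(u, axis=-1, keepdims=True)
    return A * u * speed ** (m - 1)
\end{verbatim}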
\section{Numerical methods}

Discontinuous Galerkin finite element methods are used to discretize the governing equations. This choice facilitates the use of unstructured meshes that are well suited for the irregular geometries of coastal areas. Thus, the problem domain $\Omega$ is partitioned into a finite element mesh $\mathcal T_h = \{K\}$ that provides an approximation to the domain: \begin{linenomath} \begin{equation} \Omega\approx\Omega_h=\bigcup_{K\in \mathcal{T}_h}K, \end{equation} \end{linenomath} where the subscript $h$ stands for the mesh parameter represented by the diameter of the smallest element in the mesh. The set of all mesh element faces, $\partial\mathcal T_h$, and the set of all edges of the mesh skeleton, $\mathcal{E}_h$, are defined as \begin{linenomath} \begin{subequations} \begin{align} \partial\mathcal T_h &= \lbrace\partial K : K\in \mathcal{T}_h\rbrace,\\ \mathcal{E}_h &= \lbrace e\in\bigcup_{K\in\mathcal{T}_h}\partial K\rbrace. \end{align} \end{subequations} \end{linenomath} Note that in $\mathcal E_h$ the common element faces appear only once, but in $\partial \mathcal T_h$ they are counted twice. To develop variational formulations of the governing equations, inner products are defined for finite dimensional vectors $\boldsymbol u$ and $\boldsymbol v$ through: \begin{linenomath} \begin{subequations} \begin{align} (\boldsymbol u,\boldsymbol v)_\Omega &= \int_\Omega \boldsymbol u \cdot \boldsymbol v \, \dd X, \\ \langle \boldsymbol u, \boldsymbol v \rangle_{\partial \Omega} &= \int_{\partial \Omega}\boldsymbol u \cdot \boldsymbol v\, \dd X, \end{align} \end{subequations} \end{linenomath} for $\Omega \subset \mathbb R^d$ and $\partial \Omega \subset \mathbb R^{d-1}$. An approximating space of trial and test functions is chosen as the set of square integrable functions over $\Omega_h$ such that their restriction to an element $K$ belongs to $\mathcal Q^p(K)$, a space of polynomials of degree at most $p \ge 0$ with support in $K$: \begin{linenomath} \begin{equation} \mathbf V_h^{p,m} \coloneqq \{\boldsymbol v \in (L^2(\Omega_h))^{m}: \boldsymbol v|_K \in (\mathcal Q^p(K))^{m} \quad \forall K \in \mathcal T_h \}, \end{equation} \end{linenomath} and, similarly, an approximation space over the mesh skeleton is chosen as \begin{linenomath} \begin{equation} \mathbf M_h^{p,m} \coloneqq \{\boldsymbol \mu \in (L^2(\mathcal E_h))^{m}: \boldsymbol \mu|_e \in (\mathcal Q^p(e))^{m} \quad \forall e \in \mathcal E_h\}. \end{equation} \end{linenomath}

A Strang operator splitting technique is used in the numerical solution of the hydro-sediment-morphodynamic model presented in Eq.(\ref{Eq:GNHSM}). To this end, the model is split into two separate parts: (1) the SHSM equations obtained by dropping the dispersive term of the equations, and (2) the dispersive correction part where the wave dispersion effects on flow velocities are introduced into the model through the dispersive term. If $\mathcal S_1$ is a numerical solution operator for the SHSM equations, i.e. $\mathcal S_1(\Delta t)$ propagates the numerical solution by a time step $\Delta t$, and, similarly, $\mathcal S_2$ is a numerical solution operator for the dispersive correction part, then the numerical solution operator for the full hydro-sediment-morphodynamic model in Eq.(\ref{Eq:GNHSM}) can be approximated with the Strang operator splitting technique \cite{strang_1968}: \begin{linenomath} \begin{equation} \mathcal S(\Delta t) = \mathcal S_1(\Delta t/2) \mathcal S_2(\Delta t) \mathcal S_1(\Delta t/2), \end{equation} \end{linenomath} where $\mathcal S$ is a second-order temporal discretization if both $\mathcal S_1$ and $\mathcal S_2$ use a second-order time discretization method.
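As an illustration, one split time step can be written in a few lines of Python; this is a schematic of the composition above rather than the authors' code, and the operators \texttt{S1} and \texttt{S2} are assumed to be provided by the SHSM and dispersive correction solvers, respectively.

\begin{verbatim}
def strang_step(q, dt, S1, S2):
    """One second-order Strang split step:
    S(dt) = S1(dt/2) o S2(dt) o S1(dt/2)."""
    q = S1(q, 0.5 * dt)  # half step of the SHSM part
    q = S2(q, dt)        # full step of the dispersive correction
    q = S1(q, 0.5 * dt)  # second half step of the SHSM part
    return q
\end{verbatim}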
A numerical solution operator $\mathcal S_1$ for the SHSM equations is developed using a discontinuous Galerkin finite element formulation where an approximate solution $\boldsymbol q_h \in \mathbf V_h^{p,d+3}$ is sought such that it satisfies the variational formulation \begin{linenomath} \begin{equation}\label{Eq:NSWEVarLoc} (\partial_t \boldsymbol{q}_h,\boldsymbol{v})_{\mathcal{T}_h}-(\boldsymbol{F}_h,\nabla \boldsymbol{v})_{\mathcal{T}_h}+\langle\boldsymbol{F}_h^*,\boldsymbol{v}\rangle_{\partial {\mathcal{T}_h}}-(\boldsymbol{S}_h,\boldsymbol{v})_{\mathcal{T}_h}=0\quad \forall \boldsymbol{v}\in\textbf{V}_h^{p,d+3}, \end{equation} \end{linenomath} where $\boldsymbol{F}_h=\boldsymbol F(\boldsymbol{q}_h)$ and $\boldsymbol{S}_h=\boldsymbol S(\boldsymbol{q}_h)$, ${\boldsymbol F_h^*}$ is a single-valued approximation to $\boldsymbol F_h \mathbf n$ over element faces, called the numerical flux, and $\mathbf n$ is the unit outward normal vector to the element face. To define the numerical flux, the bed update part of the SHSM equations is singled out for a separate treatment. The numerical flux for this formulation is then defined as \begin{linenomath} \begin{equation} \boldsymbol{F}_h^*=\begin{Bmatrix}\boldsymbol{G}_h^* \\ \mathbf{q}_b^*\end{Bmatrix}, \end{equation} \end{linenomath} where $\mathbf{q}_b^*$ is the numerical bed load flux, and $\boldsymbol{G}_h^*$ is the numerical flux for the remaining part of the system where the vector of unknowns $\boldsymbol r$ and the flux matrix $\boldsymbol G(\boldsymbol r)$ are \begin{linenomath} \begin{equation} \boldsymbol r = \begin{Bmatrix} h \\ h \mathbf u \\ hc \end{Bmatrix}, \quad \boldsymbol G(\boldsymbol r) = \begin{Bmatrix} h \mathbf u \\ h\mathbf u \otimes \mathbf u + \frac 1 2 g h^2 \mathbf I \\ h c \mathbf u \end{Bmatrix}. \end{equation} \end{linenomath}

Assuming that the sediment transport is always in the flow direction, the numerical flux $\mathbf q_b^*$ is defined as in \cite{mirabito_etal_2011}: \begin{linenomath} \begin{equation} \mathbf q_b^*= \begin{cases} \mathbf q_b^+&\text{if}\,\,\,\,\mathbf {\hat u} \cdot \mathbf n \geq 0 \\ \mathbf q_b^-&\text{if}\,\,\,\,\mathbf {\hat u} \cdot \mathbf n < 0 \end{cases}, \end{equation} \end{linenomath} where $\mathbf {\hat u}$ is the Roe-averaged velocity defined as \begin{linenomath} \begin{equation} \mathbf {\hat u} = \frac{\mathbf{u}^+\sqrt{h^+} + \mathbf{u}^-\sqrt{h^-}}{\sqrt{h^+}+\sqrt{h^-}}. \end{equation} \end{linenomath} Here and for the rest of this article, the superscript $+$ denotes a variable value at $\partial K$ when approaching from the interior of an element $K$, and $-$ when approaching from the exterior. An upwinding scheme is employed for the numerical bed load flux $\mathbf q_b^*$ since computing the eigenvalues of the normal Jacobian matrix for the flux matrix $\boldsymbol F(\boldsymbol{q})$ requires computationally intensive numerical approximation techniques and does not guarantee real values except in the case where the Grass model is used for $\mathbf q_b$ \cite{diaz_etal_2008, cordier_etal_2011}. Therefore, using numerical flux definitions that involve the eigenvalues of the normal Jacobian matrix for the full system may prove to be unfeasible. The normal Jacobian matrix $\boldsymbol A = \partial_{\boldsymbol r} (\boldsymbol G \mathbf n)$ of the remaining part of the system has four real eigenvalues: $\lambda_{1,2} = \mathbf{u}\cdot\mathbf{n}\pm\sqrt{gh}$, $\lambda_{3,4} = \mathbf{u}\cdot\mathbf{n}$. A Godunov-type Harten–Lax–van Leer scheme is used to define the numerical flux for the remaining system \cite{harten_etal_1983}: \begin{linenomath} \begin{equation} \boldsymbol{G}_h^* = \begin{cases} \boldsymbol{G}_h^+\mathbf{n}&\text{if}\,\,\,\,S^+>0\\ \boldsymbol{G}_h^{\text{HLL}}&\text{if}\,\,\,\,S^+\leq0\leq S^-\\ \boldsymbol{G}_h^-\mathbf{n}&\text{if}\,\,\,\,S^-<0 \end{cases}, \end{equation} \end{linenomath} where $\boldsymbol{G}_h=\boldsymbol G(\boldsymbol{r}_h)$, the characteristic speed bounds $S^+$ and $S^-$ are \begin{linenomath} \begin{subequations} \begin{align} S^+&=\min(\mathbf{u}^+\cdot\mathbf{n}-\sqrt{gh^+}, \mathbf{u}^-\cdot\mathbf{n}-\sqrt{gh^-}),\\ S^-&=\max(\mathbf{u}^+\cdot\mathbf{n}+\sqrt{gh^+}, \mathbf{u}^-\cdot\mathbf{n}+\sqrt{gh^-}), \end{align} \end{subequations} \end{linenomath} and the Harten–Lax–van Leer flux $\boldsymbol{G}_h^{\text{HLL}}$ is \begin{linenomath} \begin{equation} \boldsymbol{G}_h^{\text{HLL}}=\frac{1}{S^--S^+}((S^-\boldsymbol{G}_h^+-S^+\boldsymbol{G}_h^-)\mathbf{n}-S^+S^-(\boldsymbol{r}_h^+-\boldsymbol{r}_h^-)). \end{equation} \end{linenomath}
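For concreteness, the HLL flux above can be sketched as follows in Python; this is a schematic for a single face, not the authors' implementation, with \texttt{G\_plus} and \texttt{G\_minus} the interior and exterior flux matrices and \texttt{n} the unit outward normal.

\begin{verbatim}
import numpy as np

def hll_flux(G_plus, G_minus, r_plus, r_minus,
             u_plus, u_minus, h_plus, h_minus, n, g=9.81):
    """HLL numerical flux for the (h, h*u, h*c) subsystem."""
    un_p, un_m = np.dot(u_plus, n), np.dot(u_minus, n)
    S_p = min(un_p - np.sqrt(g * h_plus), un_m - np.sqrt(g * h_minus))
    S_m = max(un_p + np.sqrt(g * h_plus), un_m + np.sqrt(g * h_minus))
    if S_p > 0:
        return G_plus @ n    # all waves leave through the exterior side
    if S_m < 0:
        return G_minus @ n   # all waves enter from the exterior side
    return ((S_m * G_plus - S_p * G_minus) @ n
            - S_p * S_m * (r_plus - r_minus)) / (S_m - S_p)
\end{verbatim}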
A hybridized discontinuous Galerkin scheme may be used instead to define the numerical flux through $\widehat{\boldsymbol r}_h\in \mathbf M_h^{p,d+2}$, an approximation to ${\boldsymbol r}$ over the mesh skeleton called the numerical trace \cite{nguyen_peraire_2012}: \begin{linenomath} \begin{equation} \boldsymbol G_h^* = \widehat{{\boldsymbol G}}_h\mathbf n + \boldsymbol \tau ({\boldsymbol r}_h - \widehat{\boldsymbol r}_h), \end{equation} \end{linenomath} where $\widehat{\boldsymbol G}_h=\boldsymbol G(\widehat{\boldsymbol r}_h)$, and $\boldsymbol \tau =\lambda_{\max}(\widehat{\boldsymbol r}_h)$ is the stabilization parameter defined as the maximum eigenvalue of the normal Jacobian matrix $\boldsymbol A$: \begin{linenomath} \begin{equation} \lambda_{\max}(\boldsymbol r) = \lvert\mathbf{u}\cdot\mathbf{n}\rvert+\sqrt{gh}. \end{equation} \end{linenomath} The numerical trace $\widehat {\boldsymbol r}_h\in \mathbf M_h^{p,d+2}$ must be such that the numerical flux is conserved across all internal edges in the mesh skeleton, and boundary conditions are satisfied at all boundary edges through the boundary operator $\boldsymbol B_h$ defined according to an imposed boundary condition \cite{nguyen_peraire_2012}: \begin{linenomath} \begin{equation}\label{Eq:NSWEVarGlob} \langle \boldsymbol G^*_h , \boldsymbol \mu \rangle_{\partial \mathcal T_h \backslash \partial \Omega_h} + \langle \boldsymbol B_h , \boldsymbol \mu \rangle_{\partial \mathcal T_h \cap \partial \Omega_h}=0\,\,\,\forall \boldsymbol\mu\in \mathbf M_h^{p,d+2}. \end{equation} \end{linenomath} Eq.(\ref{Eq:NSWEVarLoc}) and Eq.(\ref{Eq:NSWEVarGlob}), along with the definition of $\mathbf q_b^*$, form a system of equations that is used to solve for an approximate solution $\boldsymbol q_h \in \mathbf V_h^{p,d+3}$. The boundary condition operator $\boldsymbol B_h$ is defined as \begin{linenomath} \begin{equation} \boldsymbol B_h = \boldsymbol A^+ \boldsymbol r_h - |\boldsymbol A| \widehat{\boldsymbol r}_h - \boldsymbol A^- {\boldsymbol r}_\infty, \end{equation} \end{linenomath} where $\boldsymbol A^{\pm} = \frac{1}{2}(\boldsymbol A \pm |\boldsymbol A|)$, and ${\boldsymbol r}_\infty$ is the weakly imposed boundary state \cite{nguyen_peraire_2012}. For a slip wall boundary condition, $\boldsymbol B_h$ is defined as \begin{linenomath} \begin{equation} \boldsymbol B_h = \widehat{\boldsymbol r}_h - {\boldsymbol r}_{\text{slip}}, \end{equation} \end{linenomath} where ${\boldsymbol r}_{\text{slip}} = \{(h)_h \quad (h\mathbf u)_h - ((h\mathbf u)_h \cdot \textbf{n}) \textbf{n} \quad (hc)_h\}^\mathbf{T}$ is a state with its normal velocity component truncated \cite{nguyen_peraire_2012}.

In order to generate $\mathcal S_2$, a numerical solution operator for the dispersive correction part of the presented hydro-sediment-morphodynamic model, Eq.(\ref{Eq:w1}) is written as a system of first-order equations using the definition of the operator $\mathcal T$ \cite{samii_and_dawson_2018}: \begin{linenomath} \begin{empheq}[left=\empheqlbrace]{equation}\label{Eq:w1w2} \begin{split} &\nabla \cdot (h^{-1} \mathbf w_1) - h^{-3} w_2 = 0\\ &\mathbf w_1- \tfrac 1 3 \nabla w_2 - \tfrac {1}{2} h^{-1} w_2 \nabla b + \tfrac 1 2 \nabla (h \nabla b \cdot \mathbf w_1) + \mathbf w_1 \nabla b \otimes \nabla b = \mathbf{s}(\boldsymbol{q}) \end{split}, \end{empheq} \end{linenomath} where $\textbf{s}(\boldsymbol{q}) = \alpha^{-1}gh \nabla \zeta + h \mathcal Q_1(\mathbf u)$.
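At the semi-discrete level, the dispersive correction step amounts to a linear solve followed by a momentum update. The Python sketch below conveys this structure only, under the assumption that the elliptic operator of Eq.(\ref{Eq:w1}) and its right-hand side $\mathbf s(\boldsymbol q)$ have already been assembled into a sparse matrix and vector; all names are illustrative, and the actual discretization is the hybridized DG method described next.

\begin{verbatim}
from scipy.sparse.linalg import spsolve

def dispersive_correction(hu, h, grad_zeta, W1_op, s_rhs, alpha, g, dt):
    """Solve (I + alpha*h*T*h^{-1}) w1 = s(q) for w1, then apply the
    dispersive source -(w1 - alpha^{-1}*g*h*grad(zeta)) to h*u."""
    w1 = spsolve(W1_op, s_rhs)              # assembled elliptic system
    D = w1 - (g / alpha) * (h * grad_zeta)  # dispersive term of D(q)
    return hu - dt * D                      # explicit momentum update
\end{verbatim}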
A discontinuous Galerkin finite element discretization of Eq.(\ref{Eq:w1w2}) forms a global system of equations; a hybridized discontinuous Galerkin formulation can be used to reduce the dimension of this global system. Therefore, the hybridized discontinuous Galerkin method developed by Samii and Dawson in \cite{samii_and_dawson_2018} is employed to treat Eq.(\ref{Eq:w1w2}) numerically and obtain an approximate solution $\mathbf{w}_{1h}\in\textbf{V}_h^{p,d}$. The result is then used in the dispersive correction to seek an approximate solution $\boldsymbol{q}_h\in\textbf{V}_h^{p,d+3}$ that satisfies the variational formulation \begin{linenomath} \begin{equation} (\partial_t \boldsymbol{q}_h,\boldsymbol{v})_{\mathcal{T}_h}+\left(\boldsymbol D_h,\boldsymbol{v}\right)_{\mathcal{T}_h}=0\quad \forall \boldsymbol{v}\in\textbf{V}_h^{p,d+3}, \end{equation} \end{linenomath} where $\boldsymbol D_h = \boldsymbol D(\boldsymbol{q}_h)$. High-order derivatives of $\mathbf{u}_h$, present in $\mathcal Q_1(\mathbf{u}_h)$, are computed weakly using a discontinuous Galerkin method with centered numerical fluxes.

In the developed depth-averaged hydro-sediment-morphodynamic model, it is assumed that the water depth $h$ is bounded from below by a positive value. This assumption is enforced by a wetting-drying algorithm which ensures that the water depth remains positive. The numerical solution operator $\mathcal S_2$ does not affect the water depth; therefore, the wetting-drying algorithm works in conjunction with the numerical solution operator for the SHSM equations, $\mathcal S_1$. In the presented work, the wetting-drying algorithm developed for the nonlinear shallow water equations by Bunya \emph{et al.} in \cite{bunya_etal_2009} is adapted to the SHSM equations. In the adapted version of the Bunya \emph{et al.} wetting-drying algorithm, the sediment term $hc$ in the SHSM equations is treated the same way as the momentum term $h\mathbf{u}$, and the rest of the algorithm remains the same. The bed update part of the equations does not affect the water depth and, therefore, does not require the wetting-drying algorithm. Finally, in the dispersive correction part of the equations, the wet-dry front is modeled as a slip wall boundary.

Using the Green-Naghdi equations as the hydrodynamic part of the presented model allows capturing wave dispersion effects; however, the Green-Naghdi equations are limited to parts of the problem domain that are free from discontinuities in numerical solutions \cite{duran_and_marche_2017}. This poses certain limitations on the application of the Green-Naghdi equations, e.g. wave breaking phenomena in surf zones present themselves as a water depth discontinuity in numerical solutions. While the Green-Naghdi equations cannot accurately resolve wave breaking, the nonlinear shallow water equations are more suitable for such areas \cite{duran_and_marche_2017}. Using the Strang operator splitting allows switching from the Green-Naghdi equations to the nonlinear shallow water equations by setting $\mathcal{S}_2=1$ in regions with discontinuities in numerical solutions. Thus, a discontinuity detection criterion is required to dynamically determine where to set $\mathcal{S}_2=1$. In the presented work, the numerical solution algorithm is augmented with the water depth discontinuity detection criterion adopted by Duran and Marche in \cite{duran_and_marche_2017} from Krivodonova \emph{et al.} \cite{krivodonova_etal_2004}.
A water depth discontinuity is identified over an element $K$ if the parameter \cite{krivodonova_etal_2004,duran_and_marche_2017} \begin{linenomath} \begin{equation} \mathbb I_K = \frac{\sum_{F\in\partial K_{\text{in}}}\vert\int_{F}(h^+-h^-)\dd X\vert}{\mathfrak{h}_K^{\frac{p+1}{2}}\,\vert \partial K_{\text{in}}\vert\,\Vert h\Vert_{L^{\infty}(K) }} \end{equation} \end{linenomath} is greater than a specified threshold that is typically $O(1)$. In this definition of the parameter $\mathbb I_K$, $\mathfrak{h}_K$ is the element diameter, $\partial K_{\text{in}}$ is the set of inflow faces of the element, where $\mathbf{u}\cdot\mathbf{n}<0$, and $\vert \partial K_{\text{in}}\vert$ is the total length of the inflow faces.

Since $\mathcal{S}_2$ is not applied in regions with discontinuities in the numerical solutions, slope limiting is not needed for the dispersive correction part of the presented model. However, whenever discontinuities occur in the numerical solutions to the SHSM equations, a slope limiting algorithm is required in order to remove spurious oscillations at sharp discontinuities and to preserve numerical stability. Thus, the Cockburn-Shu limiter \cite{cockburn_shu_2001} is incorporated into the numerical solution algorithm and applied in conjunction with the operator $\mathcal S_1$. The details of the limiter are not presented here; readers are encouraged to consult the original source.
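A per-element evaluation of this indicator is straightforward; the Python sketch below assumes the face integrals of $(h^+-h^-)$ over the inflow faces have already been computed, and all names are illustrative.

\begin{verbatim}
import numpy as np

def discontinuity_indicator(h_jumps, face_lengths, inflow, h_K, diam_K, p):
    """I_K = sum over inflow faces of |int_F (h+ - h-) dX|, normalized by
    diam_K^((p+1)/2) * |dK_in| * max|h| on the element."""
    numer = np.sum(np.abs(h_jumps[inflow]))
    denom = diam_K ** ((p + 1) / 2) * np.sum(face_lengths[inflow]) \
            * np.max(np.abs(h_K))
    return numer / denom  # flag the element if this exceeds an O(1) threshold
\end{verbatim}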
\section{Numerical experiments and discussion}

The developed numerical model has been implemented in a software framework written in the C++ programming language with the use of open source scientific computing libraries, such as Eigen \cite{eigen}, Blaze \cite{blaze}, and PETSc \cite{petsc}. The software has been parallelized for shared and distributed memory systems with the use of hybrid MPI+OpenMP programming and HPX \cite{hpx}. A performance comparison between the hybrid programming and HPX has been carried out by Bremer \emph{et al.} in \cite{bremer_etal_2019}. The presented numerical model is validated in five numerical examples. The first four examples validate only the numerical solution operator for the SHSM equations against four dam break experiments; in these experiments the dispersive wave effects are negligible, and therefore $\mathcal S_2=1$ in the simulations. The last example uses the full dispersive wave hydro-sediment-morphodynamic model to simulate water waves, sediment transport, and bed morphodynamics caused by solitary wave runs over a sloping beach. The first-order Dubiner polynomials from \cite{dubiner_1991} are used for the approximating space $\mathbf V_h$, and the first-order Legendre polynomials are used for the approximating space $\mathbf M_h$. In all presented examples, numerical solutions are computed using two different definitions of the numerical flux $\boldsymbol{G}^*_h$: (1) the Harten–Lax–van Leer discontinuous Galerkin scheme (HLL DG), and (2) the Nguyen-Peraire hybridized discontinuous Galerkin scheme (NP HDG). The numerical results obtained using these two definitions of the numerical flux are then compared against each other.

\subsection{1D dam break}

\begin{figure}[!t] \center \includegraphics[trim=0.75in 0.25in 0.75in 0.25in,clip,width=4.75in]{EX1.eps} \caption{Free surface elevation and bathymetry from the 1D dam break simulation compared with the Cao \emph{et al.} experiment \cite{cao_etal_2004}.} \label{Fig:EX1} \end{figure}

In this numerical experiment the SHSM equations are used to simulate a 1D dam break over a mobile bed. Initial conditions for this experiment are clear ($c_0(x)=0$), still ($\mathbf{u}_0(x)=0$) water with its depth distributed as \begin{linenomath} \begin{equation} h_0(x) = \begin{cases} 40&\text{if}\,\,\,\,x\leq0\\ 2&\text{if}\,\,\,\,x>0 \end{cases}, \end{equation} \end{linenomath} and the bathymetry set to $b_0(x)=0$. The mobile bed in this experiment has the sediment density $\rho_s=2650\,\text{kg}/\text{m}^3$, the bed porosity $p=0.4$, the critical Shields parameter $\theta_c=0.045$, and the mean sediment particle size $d_{50}$ set as 4mm and 8mm for two separate simulation runs. For the sediment entrainment rate model, the calibration parameter is set as $\phi=0.015$. The bed load transport is not considered in this numerical experiment, i.e. $\mathbf{q}_b=0$. The bottom friction force is introduced into the model through the source term $\boldsymbol S(\boldsymbol q)$ by setting \begin{linenomath} \begin{equation} \mathbf f = \frac{gn^2}{h^{1/3}}{\vert\mathbf{u}\vert\mathbf{u}}, \label{Eq:f} \end{equation} \end{linenomath} with Manning's roughness coefficient $n=0.03$.
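The friction closure of Eq.(\ref{Eq:f}) is simple to evaluate pointwise; the Python sketch below is illustrative only, with the sign convention left to match how $\mathbf f$ enters the source term in the solver.

\begin{verbatim}
import numpy as np

def manning_friction(u, h, n=0.03, g=9.81):
    """Manning bottom friction per Eq. (f): f = g*n^2*|u|*u / h^(1/3)."""
    speed = np.linalg.norm(u, axis=-1, keepdims=True)
    return g * n ** 2 * speed * u / h ** (1.0 / 3.0)
\end{verbatim}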
The problem domain $\Omega = (-5000,5000)\times(-10,10)\,\text{m}^2$ is partitioned into a finite element mesh with $500\times1$ square cells, each containing two triangular elements. The explicit Euler time stepping scheme is employed with the time step $\Delta t = 0.1$s. Two simulations with varying mean sediment particle sizes are run for 2 minutes, and their results are compared to the numerical experiments carried out for the same 1D dam break problem by Cao \emph{et al.} in \cite{cao_etal_2004}. The results of the numerical simulations at $t=\{60, 120\}$s for $d_{50}=\{4, 8\}$mm are presented in Fig.\ref{Fig:EX1}. Smaller sediment particle sizes imply a larger sediment entrainment rate $E$, which manifests as larger bed erosion for $d_{50}=4\text{mm}$. The numerical results for both the free surface elevation, $\zeta$, and the bathymetry, $b$, are in good agreement with the results obtained by Cao \emph{et al.} The numerical results obtained with the HLL DG and NP HDG schemes closely match each other except in the area of the hydraulic jump, where the NP HDG scheme provides a smoother solution for the free surface elevation.

\subsection{1D dam break with wetting-drying}

\begin{figure}[!t] \center \includegraphics[trim=0.75in 0.5in 0.75in 0.5in,clip,width=4.75in]{EX2.eps} \caption{Free surface elevation and bathymetry from the 1D dam break with wetting-drying simulations compared with the Louvain \cite{fraccarollo_capart_2002} and Taipei \cite{capart_young_1998} experiments.} \label{Fig:EX2} \end{figure}

This example simulates a 1D dam break over a mobile dry bed and is used to validate the wetting-drying algorithm employed in the presented numerical model. Numerical simulations for this experiment are performed with the SHSM equations, where the water is initially clear and still, the water depth is set to \begin{linenomath} \begin{equation} h_0(x) = \begin{cases} 0.1&\text{if}\,\,\,\,x\leq0\\ 0&\text{if}\,\,\,\,x>0 \end{cases}, \end{equation} \end{linenomath} and the initial bathymetry is $b_0(x)=0$. Two physical experiments have been performed for this setup: (1) the Louvain experiment by Fraccarollo and Capart \cite{fraccarollo_capart_2002}, and (2) the Taipei experiment by Capart and Young \cite{capart_young_1998}. These experiments are set up similarly except for the sediment properties. In the Louvain experiment the sediment density $\rho_s=1540\,\text{kg}/\text{m}^3$, the bed porosity $p=0.3$, the critical Shields parameter $\theta_c=0.05$, and the mean sediment particle size $d_{50}=3.5$mm. In the Taipei experiment, on the other hand, the sediment density $\rho_s=1048\,\text{kg}/\text{m}^3$, the bed porosity $p=0.28$, the critical Shields parameter $\theta_c=0.05$, and the mean sediment particle size $d_{50}=6.1$mm. The calibration parameter for the sediment entrainment rate model, $\phi$, is set as 4.0 for the Louvain experiment and 2.5 for the Taipei experiment. In both experiments, the bed load transport is disregarded by setting $\mathbf{q}_b=0$, and the Manning's friction model from Eq.(\ref{Eq:f}) is used for the bottom friction force with $n=0.025$. The problem domain $\Omega = (-1,1)\times(-2\cdot10^{-3},2\cdot10^{-3})\,\text{m}^2$ is partitioned into a finite element mesh with $500\times1$ square cells, each containing two triangular elements. The explicit Euler time stepping scheme with the time step $\Delta t = 5\cdot10^{-4}$s is used to propagate the simulations in time for 1s. The simulations of the 1D dam break over a mobile dry bed are carried out with the parameters from the Louvain and Taipei experiments. The results are compared with the Louvain experiment at $t=\{5t_0,7t_0,10t_0\}$ and with the Taipei experiment at $t=\{3t_0,4t_0,5t_0\}$, where $t_0=\sqrt{h_0/g}\approx0.101$s ($h_0=0.1$m), in Fig.\ref{Fig:EX2}. The numerical solution algorithm successfully models the wetting-drying process while providing sufficiently accurate numerical results for the free surface elevation, $\zeta$, and the bathymetry, $b$. Similar to the previous example, the HLL DG and NP HDG results closely match each other everywhere other than the hydraulic jump area.

\subsection{2D flume with abrupt widening}

\begin{figure}[t!] \center \includegraphics[trim=0.25in 0.25in 0.25in 0.25in,clip,width=4.75in]{EX3.eps} \caption{Sediment erosion/deposition measurements from the 2D flume with abrupt widening experiment compared with the Goutiere \emph{et al.} results \cite{goutiere_etal_2011}.} \label{Fig:EX3} \end{figure}

A 2D dam break is simulated in an ``L-shaped'' flume which is 0.25m wide in its initial 4m and abruptly widens on one side to 0.5m for the remaining 2m. The flume bed is covered with 0.1m of sediment ($b_0(x)=0.1$) with the following properties: the sediment density $\rho_s=2630\,\text{kg}/\text{m}^3$, the bed porosity $p=0.39$, the critical Shields parameter $\theta_c=0.047$, and the mean sediment particle size $d_{50}=1.72$mm. In this experiment, only the suspended load is taken into account, with the calibration parameter for the sediment entrainment rate model, $\phi$, set to 0.35. Initial conditions for the SHSM simulations are clear still water with the initial depth \begin{linenomath} \begin{equation} h_0(x) = \begin{cases} 0.25&\text{if}\,\,\,\,x\leq3\\ 0&\text{if}\,\,\,\,x>3 \end{cases}, \end{equation} \end{linenomath} which implies that the abrupt expansion of the flume is located 1m downstream from the dam break location. The Manning's friction model from Eq.(\ref{Eq:f}) is used for the bottom friction force with $n=0.0165$. The ``L-shaped'' problem domain $\Omega$ for this simulation is partitioned into nearly $4\cdot10^4$ triangular elements. The explicit Euler time integration scheme is used for this numerical simulation with the time step $\Delta t = 2\cdot10^{-4}$s.
The simulation is run for 20s, after which the sediment erosion/deposition measurements are taken at 4 lateral sections located at $x=\{4.1(\text{S1}), 4.2(\text{S2}), 4.3(\text{S3}), 4.4(\text{S4})\}$m. These measurements are compared with the results of the physical experiment performed by Goutiere \emph{et al.} in \cite{goutiere_etal_2011} in Fig.\ref{Fig:EX3}. The results of the numerical simulation generally agree with the results of the physical experiment. A general tendency for sediment erosion on the left side and sediment deposition on the right side of the flume is captured in the numerical simulation. The model is also able to capture the large sediment deposition on the right side at Sections 3 and 4, where the water flow experiences sudden deceleration due to an impact with the side wall \cite{goutiere_etal_2011}. No significant differences can be observed between the HLL DG and NP HDG schemes in this example.

\subsection{2D partial dam break}

\begin{figure}[t!] \center \includegraphics[trim=0.25in 0.25in 0.25in 0.25in,clip,width=4.75in]{EX4.eps} \caption{Sediment erosion/deposition measurements from the 2D partial dam break experiment compared with the Soares-Fraz\~ao \emph{et al.} results \cite{soares_etal_2012}.} \label{Fig:EX4} \end{figure}

A partial 2D dam break is simulated in a flume that consists of two 3.6m wide reservoirs connected by a 1m long and 1m wide channel with a gate in the middle, which is removed at the beginning of the experiment to simulate a partial dam break. The channel connects the reservoirs along their longitudinal axes. The wet reservoir that holds water is 10m long, and the dry reservoir is 15m long. The bed of the dry reservoir is covered by 0.085m of sediment with the sediment density $\rho_s=2630\,\text{kg}/\text{m}^3$, the bed porosity $p=0.42$, the critical Shields parameter $\theta_c=0.047$, and the mean sediment particle size $d_{50}=1.61$mm. The bed load transport is not taken into account in this experiment, and the calibration parameter for the sediment entrainment rate model is $\phi=0.05$. Initially, the water in the wet reservoir is clear, still, and 0.47m deep. The bottom friction force is modeled with the Manning's friction model from Eq.(\ref{Eq:f}) with $n=0.0165$. The problem domain $\Omega$ for this numerical experiment is partitioned into over $10^5$ triangular elements. The numerical simulation is propagated in time with the explicit Euler time stepping scheme with the time step $\Delta t = 5\cdot10^{-4}$s. After 20s of the numerical simulation, the sediment erosion/deposition measurements are taken at 3 longitudinal sections of the dry reservoir located $y=\{0.2(\text{S1}), 0.7(\text{S2}), 1.45(\text{S3})\}$m away from the longitudinal axis of the reservoir. Fig.\ref{Fig:EX4} presents the measurements and compares them with the results of the physical experiment performed by Soares-Fraz\~ao \emph{et al.} in \cite{soares_etal_2012}. The results of the numerical simulation are in good agreement with the results of the physical experiment. The sediment is mostly eroded near the channel, where the bed is nearly completely scoured away, and is deposited downstream by the water flow from the dam break, as is evident from the measurements at Section 1. In this example, the HLL DG and NP HDG schemes did not lead to significantly different numerical solutions.

\subsection{Solitary wave over a sloping beach}

\begin{figure}[t!]
\center \includegraphics[trim=0.5in 0.5in 0.5in 0.5in,clip,width=4.75in]{EX5_1.eps} \caption{Free surface elevation measurements at 5 measuring stations compared to the experimental results by Sumer \emph{et al.} \cite{sumer_etal_2011}.} \label{Fig:Wave} \end{figure}

In this experiment, the full dispersive wave hydro-sediment-morphodynamic model is used to simulate water waves, and the subsequent sediment transport and bed evolution, during the run-up and run-down of a solitary wave over a linearly sloping beach. This experiment showcases a number of features of the presented model: (1) the use of the Green-Naghdi equations as the hydrodynamic component of the model, since wave dispersion effects play a significant role during the run-up of a solitary wave over a sloping beach; (2) switching to the nonlinear shallow water equations as the hydrodynamic model in swash zones, since solitary waves in this experiment have a sufficiently high amplitude to experience wave breaking; (3) solitary waves that run over a sloping beach in this experiment cause significant erosion/deposition of the beach bed; thus, the ability of the model to estimate sediment transport and bed morphology can be evaluated. Initial conditions for solitary waves in this experiment are characterized by the equations \begin{linenomath} \begin{equation} h_0(x) = H_0 + a_0\sech^2\left(\kappa(x-x_0)\right), \quad (h\mathbf u)_0(x) = c_0 h_0(x) - c_0 H_0, \end{equation} \end{linenomath} where $a_0$ is the solitary wave height, $x_0$ the initial wave position, and \begin{linenomath} \begin{equation} \kappa = \frac {\sqrt{3 a_0}} {2H_0 \sqrt{H_0+a_0}}, \quad c_0 = \sqrt{g (H_0+a_0)}. \end{equation} \end{linenomath}
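These initial conditions are easy to evaluate on a grid; the Python sketch below mirrors the formulas above, with illustrative names, and is parameterized with the values of the rigid-bed run described next ($H_0=0.4$m, $a_0=0.071$m, $x_0=-5$m).

\begin{verbatim}
import numpy as np

def solitary_wave_ic(x, H0, a0, x0, g=9.81):
    """h0 = H0 + a0*sech^2(kappa*(x - x0)), (h*u)0 = c0*(h0 - H0)."""
    kappa = np.sqrt(3.0 * a0) / (2.0 * H0 * np.sqrt(H0 + a0))
    c0 = np.sqrt(g * (H0 + a0))
    h0 = H0 + a0 / np.cosh(kappa * (x - x0)) ** 2
    return h0, c0 * (h0 - H0)

x = np.linspace(-10.0, 10.0, 401)
h0, hu0 = solitary_wave_ic(x, H0=0.4, a0=0.071, x0=-5.0)
\end{verbatim}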
Initially, a simulation has been performed over a rigid bed to validate the dispersive wave hydrodynamic model. To carry out this numerical simulation, the problem domain $\Omega = (-10,10)\times(-2.5\cdot10^{-2},2.5\cdot10^{-2})\,\text{m}^2$ is partitioned into a finite element mesh composed of $400\times1$ square cells, each containing two triangular elements. A two-stage second-order Runge-Kutta method is used to perform time integration with the time step $\Delta t=5\cdot10^{-3}$s. Manning's roughness coefficient $n=0.03$ is used for the bottom friction force. The toe of the sloping beach for this simulation is located at $x=0$, where an initially flat bed starts rising linearly at a 1:14 slope. The parameters for the solitary wave in this simulation are: $H_0=0.4$m, $a_0=0.071$m, and $x_0=-5$m. This simulation setup corresponds to the solitary wave run over a sloping beach experiment performed by Sumer \emph{et al.} \cite{sumer_etal_2011}. Fig.\ref{Fig:Wave} presents numerical solutions for the free surface elevations recorded at 5 measuring stations located at $x=\{0.0(\text{Toe}), 4.63(\text{S1}), 4.87(\text{S3}), 5.35(\text{S5}), 5.85(\text{S8})\}$m during 20s of the simulation and compares them to the experimental results provided by Sumer \emph{et al.} The experimental results suggest that wave breaking occurs somewhere between Sections 3 and 5; this is accurately captured with the dispersive wave hydrodynamic model. However, the free surface elevation measurements at the onshore Section 8 show that the hydrodynamic model is less precise in resolving water waves in the swash zone. Consequently, the hydrodynamic model is unable to accurately simulate the water motion during the run-down stage. Nevertheless, considering the complexities associated with modeling water motion induced by solitary waves over a sloping beach, the results of the simulation can be regarded as satisfactory.

\begin{figure}[t!] \center \includegraphics[trim=0.5in 0.0in 0.5in 0.0in,clip,width=4.75in]{suspended.eps} \caption{Sediment erosion/deposition measurements for a simulation with the suspended load transport compared with the results by Young \emph{et al.} \cite{young_etal_2010}.} \label{Fig:EX5_suspended} \end{figure}

To validate the sediment transport and bed morphodynamic part of the model, solitary wave run simulations have been performed over the problem domain $\Omega = (-8,42)\times(-5\cdot10^{-2},5\cdot10^{-2})\,\text{m}^2$. The problem domain is partitioned into $500\times1$ square cells, each containing two triangular elements, and a two-stage second-order Runge-Kutta method with the time step $\Delta t=2.5\cdot10^{-3}$s is used for temporal discretization. The toe of the sloping beach in the simulation is located at $x=12$m, where the flat rigid bed starts rising at a 1:15 slope. The sloping part of the beach is covered with mobile sediment with the sediment density $\rho_s=2650\,\text{kg}/\text{m}^3$, the bed porosity $p=0.4$, the critical Shields parameter $\theta_c=0.045$, and the mean sediment particle size $d_{50}=0.2$mm. Manning's roughness coefficient $n=0.008$ is used for the bottom friction force. The solitary wave in this simulation is parametrized with $H_0=1$m, $a_0=0.6$m, and $x_0=2$m. A physical experiment with the same setup has been performed by Young \emph{et al.} in \cite{young_etal_2010}, where a number of solitary waves have been run over a sloping beach and the subsequent sediment erosion/deposition has been recorded. Two simulations are performed: (1) a simulation where only the suspended load transport is taken into account, with its results presented in Fig.\ref{Fig:EX5_suspended}, and (2) a simulation where both the suspended and bed load transport are considered, with its results presented in Fig.\ref{Fig:EX5_full}. For the suspended load, the calibration parameter for the sediment entrainment rate model, $\phi$, is set to 0.35, and the Grass model with $A=2\cdot10^{-4}$ is used for the bed load flux $\mathbf{q}_b$. In both of these simulations, sediment erosion/deposition measurements are taken after 3 solitary waves have been run over the sloping beach for 2 minutes each, which is a sufficient time for the water to substantially settle. The results of these measurements are compared with the experimental results by Young \emph{et al.}, and they are in good agreement. The experimental results indicate that \cite{young_etal_2010}: (1) during the initial run-up, sediment is entrained in water and deposited onshore at the maximum excursion point where the water flow stalls; (2) during the run-down process, a shallow high-velocity flow causes net sediment erosion in the region between $x=24$m and $x=35$m; (3) this entrained sediment is then deposited offshore in the vicinity of the hydraulic jump, which is formed by the retreating water, due to the sudden deceleration of the sediment-rich flow. The numerical model is able to accurately capture the sediment transport and bed morphodynamic features observed in the experiment.

\begin{figure}[t!]
\begin{figure}[t!] \center \includegraphics[trim=0.5in 0.0in 0.5in 0.0in,clip,width=4.75in]{full.eps} \caption{Sediment erosion/deposition measurements for a simulation with the suspended and bed load transport compared with the results by Young \emph{et al.} \cite{young_etal_2010}.} \label{Fig:EX5_full} \end{figure} \section{Conclusions} A dispersive wave hydro-sediment-morphodynamic model has been developed by introducing the dispersive term of a single-parameter variation of the Green-Naghdi equations into the SHSM equations. The model can be used to simulate water waves, and the resulting sediment transport and bed morphodynamic processes, in areas where wave dispersion effects are prevalent. A numerical solution operator that employs the second-order Strang operator splitting technique has been developed for the model. To employ this technique, the dispersive term has been singled out for a separate numerical treatment with the hybridized discontinuous Galerkin method developed by Samii and Dawson in \cite{samii_and_dawson_2018}, while Harten–Lax–van Leer discontinuous Galerkin and Nguyen–Peraire hybridized discontinuous Galerkin schemes have been developed for the remaining SHSM equations. The splitting technique makes it possible to select regions where the dispersive term is not applied, e.g., wave-breaking regions where the dispersive wave model is no longer valid. The numerical model is augmented with a wave breaking detection mechanism that can dynamically determine the regions where the dispersive term is not applied. To facilitate the use of the developed model in problems where water may completely recede from parts of the problem domain, the wetting-drying algorithm by Bunya \emph{et al.} \cite{bunya_etal_2009} has been incorporated into the numerical model. The numerical model has been validated against a number of numerical examples. Dam break simulations have been performed to validate the numerical solution schemes developed for the SHSM equations. The results of the simulations indicate that the developed schemes are able to capture hydro-sediment-morphodynamic processes with sufficient accuracy. Since empirical models are used for the suspended and bed load transport, careful calibration of the empirical models' parameters may be required to improve the accuracy of the presented model. Simulations of a solitary wave run-up over a sloping beach have been performed to validate the full dispersive wave hydro-sediment-morphodynamic model. The results of the simulations indicate that the use of the presented model is justified for flows where wave dispersion effects are prevalent, and that for such flows the model accurately captures the sediment transport and bed morphodynamic processes they drive. \section{Acknowledgments} This work has been supported by funding from the National Science Foundation Grant 1854986, and by the Portuguese government through Funda\c{c}\~ao para a Ci\^encia e a Tecnologia (FCT), I.P., under the project DGCOAST (UTAP-EXPL/MAT/0017/2017). The authors would like to acknowledge the support of the Texas Advanced Computing Center through the allocation TG-DMS080016N used in the parallel computations of this work.
{ "timestamp": "2020-10-14T02:11:15", "yymm": "2010", "arxiv_id": "2010.06167", "language": "en", "url": "https://arxiv.org/abs/2010.06167" }
\section{Introduction} Entity linking (EL) is the task of grounding entity mentions by linking them to entries in a given database or dictionary of entities. Traditional EL approaches often assume that the entities linked at test time are present in the training set. However, many real-world applications call for the zero-shot setting, where there is no external knowledge and a short text description provides the only information we have for each entity \cite{sil2012linking,wang2015language}. For zero-shot entity linking \cite{logeswaran2019zero}, it is crucial to consider the context of the entity description and the mention, so that the system can generalize to unseen entities. However, most BERT-based models are restricted to a context window of 512 tokens, limiting their ability to capture long-range context. This paper defines a model's \textbf{Effective-Reading-Length (ERLength)} as the total length of the mention contexts and entity description that it can read. Figure \ref{example} demonstrates an example where a long ERLength is preferable to a short one. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figure/example.png} \caption{Only models with a large ERLength can solve this entity linking problem, because only they can access the critical information in the mention contexts and entity description.} \label{example} \end{figure} \indent Many existing methods can be used to expand the ERLength \cite{sohoni2019low, dai2019transformer}; however, they often require completely re-doing pre-training with the masked language modeling objective on a vast general corpus (such as Wikipedia), which is not only very expensive but also infeasible in many scenarios.\\ \indent This paper proposes a practical method, Embeddings-repeat, to expand BERT's ERLength by initializing larger position embeddings, allowing the model to read all the information in the context. Note that our method differs from previous work in that the larger position embeddings, initialized from BERT-Base, can be used directly for fine-tuning on downstream tasks without any retraining. Extensive experiments are conducted to compare different ways of expanding the ERLength, and the results show that Embeddings-repeat robustly improves performance. Most importantly, we improve the accuracy from 76.06\% to 79.08\% on Wikia's zero-shot EL dataset, and from 74.57\% to 82.14\% on its long data. Since our method is effective and easy to implement, we expect it to be useful for other downstream NLP tasks. \section{Related work} \paragraph{Zero-shot Entity Linking} Most state-of-the-art entity linking methods are composed of two steps: candidate generation \cite{sil2012linking, vilnis2018hierarchical, radford2018improving} and candidate ranking \cite{he2013learning, sun2015modeling, yamada2016joint}. \citet{logeswaran2019zero} proposed the zero-shot entity linking task, where mentions must be linked to unseen entities without in-domain labeled data. For each mention, the model first uses BM25 \cite{robertson2009probabilistic} to generate 64 candidates. For each candidate, BERT \cite{devlin2018bert} reads a sequence pair combining the mention contexts and the entity description and produces a vector representation for it. The model then ranks the candidates based on these vectors. This paper discusses how to improve \citet{logeswaran2019zero} by efficiently expanding the ERLength. \paragraph{Modeling long documents} The simplest way to work around the 512-token limit is to truncate the document \cite{xie2019unsupervised, liu2019roberta}.
However, truncation suffers from severe information loss and fails to retain sufficient information for zero-shot entity linking. Recently, there has been an explosion of effort to improve long-range sequence modeling \cite{sukhbaatar2019adaptive, rae2019compressive, child2019generating, ye2019bp, qiu2019blockwise, lample2019large, lan2019albert}. However, these methods all need to initialize new position embeddings and perform expensive retraining on a general corpus (such as Wikipedia) to learn the positional relationships in longer documents before fine-tuning on downstream tasks. Moreover, the impact of long-range sequence modeling on entity linking remains unexplored. In this study, we therefore explore a different approach, which initializes larger position embeddings from the existing small ones in BERT-Base and can be used directly in fine-tuning without expensive retraining. \begin{table*} \centering \begin{tabular}{l|c|cccccc|c} \hline \multirow{2}*{Method} & \multirow{2}*{ERLength} & \multicolumn{6}{|c|}{eval} & \multirow{2}*{test} \\ & & $set_1$ & $set_2$ & $set_3$ & $set_4$ & avg & long & \\ \hline \citet{logeswaran2019zero} & 256 & \small{83.40} & \small{79.00} & \small{73.03} & \small{68.82} & 76.06 & 74.57 & 75.06\\ BERT & 512 & \small{83.45} & \small{80.03} & \small{71.88} & \small{72.53} & 76.97 & 78.54 & -\\ \citet{logeswaran2019zero} (DAP) & 256 & \small{82.82} & \small{81.59} & \small{75.34} & \small{72.52} & 78.07 & 76.89 & 77.05\\ $E_{repeat}$ & 1024 & \small{87.02} & \small{81.52} & \small{73.48} & \small{74.37} & 79.08 & 82.14 & 77.58\\ $E_{repeat}$ + DAP & 1024 & \small\textbf{89.67} & \small\textbf{83.53} & \small\textbf{75.37} & \small\textbf{74.96} & \textbf{80.88} & \textbf{82.14} & \textbf{79.64}\\ \hline \end{tabular} \caption{Our methods with a long ERLength outperform the state of the art. In particular, the accuracy on the long data increases from 74.57\% to 82.14\% compared with the benchmark. Here, we refer to all data whose DLength exceeds 512 (the maximum number of tokens BERT can read) as \textbf{long} data. If we also use DAP, the best accuracy is 80.88\% on the validation data and 79.64\% on the test data. Note: $set_1$: Coronation street, $set_2$: Muppets, $set_3$: Ice hockey, $set_4$: Elder scrolls, DAP: Domain Adaptive Pre-training \cite{logeswaran2019zero}.} \label{baseline} \end{table*} \section{Method} \subsection{Overview} \begin{figure}[ht] \centering \includegraphics[width=\linewidth, trim = {0 0 0 0}, clip]{figure/bert_model.png} \caption{BERT performing entity linking with larger position embeddings.} \label{bert_model} \end{figure} Figure \ref{bert_model} describes how BERT is used for the zero-shot entity linking task with larger position embeddings. Following \citet{logeswaran2019zero}, we adopt a two-stage pipeline consisting of a fast candidate generation stage, followed by a more expensive but powerful candidate ranking stage \cite{ganea2017deep, kolitsas2018end, wu2019zero}. We use BM25 for the candidate generation stage and obtain 64 candidate entities for every mention. For the candidate ranking stage, as in BERT, the mention contexts $m$ and the candidate entity description $e$ are concatenated as a sequence pair together with special start and separator tokens: ([CLS] $m$ [SEP] $e$ [SEP]). The Transformer \cite{vaswani2017attention} encodes this sequence pair, and the position embeddings inside it capture the position information of individual words. At the last hidden layer, the Transformer produces a vector representation $h_{m,e}$ of the input pair through the special pooling token [CLS]. Entities in a given candidate set are then scored as $\mathrm{softmax}(\omega^\top h_{m,e})$, where $\omega$ is a learned parameter vector.
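This ranking step can be illustrated with a short sketch. The code below uses the HuggingFace \texttt{transformers} API in PyTorch as a stand-in for the authors' TensorFlow implementation; the name \texttt{score\_head} (the vector $\omega$) is our own:

\begin{verbatim}
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")
score_head = torch.nn.Linear(encoder.config.hidden_size, 1,
                             bias=False)  # omega

def rank_candidates(mention_ctx, candidate_descs):
    """Score each candidate description against the mention context."""
    scores = []
    for desc in candidate_descs:
        # [CLS] mention [SEP] description [SEP]
        enc = tokenizer(mention_ctx, desc, return_tensors="pt",
                        truncation=True, max_length=512)
        h_cls = encoder(**enc).last_hidden_state[:, 0]  # h_{m,e}
        scores.append(score_head(h_cls).squeeze())
    # softmax over the candidate set, as in the scoring rule above
    return torch.softmax(torch.stack(scores), dim=0)
\end{verbatim}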
\indent Since the size of the position embeddings is limited to 512 in BERT, capturing position information beyond this size is what we aim to improve. In general, new and larger position embeddings must be re-initialized at the larger size and then retrained on a general corpus such as Wikipedia to learn the positional relationships in longer documents. However, we find that the relationships between positions in text are transferable: we can initialize larger position embeddings from the small ones in BERT-Base and then, without any expensive retraining, use them directly for fine-tuning on downstream tasks. \subsection{Position embeddings initialization} \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figure/diff_init_method.png} \caption{BERT model with larger position embeddings initialized with different methods.} \label{model} \end{figure} It is reasonable to assume that the first 512 values of the larger position embeddings should be similar to those of the small ones, since both express the relationships between tokens when the input length is less than 512. For positions beyond 512, we introduce a method, \textbf{Embeddings-repeat ($E_{repeat}$)}, that initializes the larger position embeddings by repeating the small ones from BERT-Base, since analysis of BERT's attention heads shows a strong learned bias to attend to the local context, including the previous or next token \cite{clark2019does}. We assume that using $E_{repeat}$ preserves this local structure everywhere except at the partition boundaries. For example, for a model with 1024 position embeddings, we initialize the first 512 positions and the last 512 positions, respectively, from BERT-Base. \\ \indent To verify the rationale behind $E_{repeat}$, we also propose two other methods for comparison. $E_{head}$ assumes that only the first 512 positions in the larger position embeddings are similar to those in the small ones, so it initializes the first 512 positions from BERT-Base and randomly initializes those beyond 512. $E_{constant}$ also uses the position embeddings of BERT-Base to initialize its first 512 positions. However, it uses the value of position 512 to initialize all positions beyond 512, since it assumes that the relationship between two tokens over a long distance tends to be constant. In the experimental section below, we show that, at least for this task, using $E_{repeat}$ to expand BERT's ERLength is the most effective.
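The three initialization schemes amount to a few tensor operations. The sketch below (Python/PyTorch) is our own rendering of the schemes described above, operating on the raw position-embedding matrix; the attribute path in the final comment is an assumption about the model object:

\begin{verbatim}
import torch

def expand_position_embeddings(pe, new_len, method="repeat"):
    """Expand a (512, hidden) position-embedding matrix to (new_len, hidden).

    "repeat"   -- E_repeat:   tile the 512 learned rows until new_len
    "head"     -- E_head:     keep the first 512 rows, random tail
    "constant" -- E_constant: fill the tail with the row of position 512
    """
    old_len, hidden = pe.shape  # (512, 768) for BERT-Base
    if method == "repeat":
        reps = -(-new_len // old_len)                 # ceil division
        return pe.repeat(reps, 1)[:new_len]
    if method == "head":
        tail = torch.randn(new_len - old_len, hidden) * 0.02
        return torch.cat([pe, tail], dim=0)
    if method == "constant":
        tail = pe[-1:].expand(new_len - old_len, hidden)
        return torch.cat([pe, tail], dim=0)
    raise ValueError(method)

# pe = model.embeddings.position_embeddings.weight.data  (path assumed)
# new_pe = expand_position_embeddings(pe, 1024, method="repeat")
\end{verbatim}

For \texttt{new\_len}=1024, \texttt{"repeat"} initializes the first and last 512 positions from BERT-Base, exactly as described for $E_{repeat}$.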
\section{Experiments} \begin{table*}[!htbp] \centering \resizebox{\textwidth}{24mm}{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{12}{|c|}{Proportion of data in different DLength intervals}\\ \hline \cline{3-12} \multicolumn{2}{|c|}{\multirow{2}*{DLength}}&\multirow{2}*{(0,200)}&\multirow{2}*{[200,300)}&\multirow{2}*{[300,400)}&\multirow{2}*{[400,500)}&\multirow{2}*{[500,600)}&\multirow{2}*{[600,700)}&\multirow{2}*{[700,800)}&\multirow{2}*{[800,900)}&\multirow{2}*{[900,1000)}&\multirow{2}*{[1000,$+\infty$)}\\ \multicolumn{2}{|c|}{}&&&&&&&&&&\\ \hline \cline{3-12} \multicolumn{2}{|c|}{$\%$ of total}& 10.62 & 14.62 & 11.92 & 9.96 & 15.18 & 14.11 & 8.56 & 4.91 & 3.39 & 6.83 \\ \hline \multicolumn{12}{|c|}{Accuracy of models with different ERLength on each DLength interval}\\ \hline \multirow{7}*{\rotatebox{90}{ERLength}} &64 & 60.76 & 62.10 & 62.20 & 58.61 & 64.18 & 61.50 & 62.45 & 60.52 & 65.41 & 62.57 \\ \cline{2-12} &128 & 73.57 & 70.57 & 71.31 & 67.64 & 72.34 & 69.34 & 70.87 & 70.12 & 68.49 & 71.69 \\ \cline{2-12} &256 & \textcolor{red}{75.52} & 74.16 & 75.50 & 73.34 & 75.35 & 75.47 & 75.11 & 73.71 & 77.00 & 74.34 \\ \cline{2-12} &384 & \textcolor{red}{75.72} & \textcolor{red}{77.11} & \textcolor{red}{78.44} & \textcolor{red}{74.03} & \textcolor{red}{78.00} & 75.84 & 77.07 & 78.44 & 75.51 & 78.96 \\ \cline{2-12} &512 & \textcolor{red}{76.64} & \textcolor{red}{75.81} & \textcolor{red}{78.00} & \textcolor{red}{74.68} & \textcolor{red}{77.99} & \textcolor{red}{77.33} & \textcolor{red}{78.97} & \textcolor{red}{81.34} & 75.48 & 79.49 \\ \cline{2-12} &768 & \textcolor{red}{75.56} & \textcolor{red}{75.66} & \textcolor{red}{79.08} & \textcolor{red}{75.15} & \textcolor{red}{77.87} & \textcolor{red}{78.54} & \textcolor{red}{78.54} & \textcolor{red}{82.11} & 78.22 & 79.96 \\ \cline{2-12} &1024 & \textcolor{red}{75.80} & \textcolor{red}{76.40} & \textcolor{red}{80.54} & \textcolor{red}{75.47} & \textcolor{red}{77.70} & \textcolor{red}{77.51} & \textcolor{red}{79.65} & \textcolor{red}{81.60} & \textcolor{red}{81.27} & \textcolor{red}{83.58} \\ \hline \end{tabular}} \caption{The proportion of data in each DLength interval, and the accuracy of models with different ERLength on data in each interval. Red marks accuracies in the leading echelon for a given DLength interval. The red region shows a cascading downward trend: for data with larger DLength, only models with larger ERLength perform well, and even when the ERLength far exceeds the DLength, accuracy does not decline.} \label{impactOfERLength} \end{table*} \subsection{Dataset and experiment setup} We use Wikia's zero-shot EL dataset constructed by \citet{logeswaran2019zero}, which, to our knowledge, is the best zero-shot EL benchmark. To show the importance of long-range sequence modeling, we define the data's \textbf{DLength} as the total length of the mention contexts and entity description, and examine the distribution of DLength over the dataset. As shown in Table \ref{impactOfERLength}, we find that about half of the data have a DLength exceeding 512 tokens, and $93\%$ of the data have a DLength below 1024. We therefore vary the model's ERLength from 0 to 1024 and explore how continuously expanding it affects performance on Wikia's zero-shot EL dataset.
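The DLength statistics above can be reproduced with a short script. The sketch below (Python, with the HuggingFace tokenizer as a stand-in for the original WordPiece tokenization; the record format of the examples is an assumption) bins the data as in Table \ref{impactOfERLength}:

\begin{verbatim}
from collections import Counter
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
BINS = [0, 200, 300, 400, 500, 600, 700, 800, 900, 1000]

def dlength(mention_context, entity_description):
    """DLength: total token length of mention contexts + description."""
    return (len(tokenizer.tokenize(mention_context))
            + len(tokenizer.tokenize(entity_description)))

def bin_label(n):
    for lo, hi in zip(BINS, BINS[1:]):
        if lo <= n < hi:
            return "[{},{})".format(lo, hi)
    return "[1000,inf)"

def dlength_histogram(examples):
    """examples: iterable of (mention_context, description) pairs."""
    counts = Counter(bin_label(dlength(m, d)) for m, d in examples)
    total = sum(counts.values())
    return {k: 100.0 * v / total for k, v in counts.items()}
\end{verbatim}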
When we increase the ERLength, we assign the same amount of growth to the mention contexts and the entity description, which our related experiments indicate is the most reasonable choice.\\ \indent For all experiments, we follow the most recent work in studying zero-shot entity linking. We use the BERT-Base model architecture in all our experiments. The Masked LM objective \cite{devlin2018bert} is used for unsupervised pre-training. For fine-tuning language models (in the case of multi-stage pre-training) and fine-tuning on the entity linking task, we use a small learning rate of 2e-5, following the recommendations of \citet{devlin2018bert}. All models are implemented in Tensorflow and optimized with Adam. All experiments were conducted with v3-8 TPUs on Google Cloud.\\ \indent Like \citet{logeswaran2019zero}, we evaluate entity linking performance on the subset of test instances for which the gold entity is among the top-k candidates retrieved during candidate generation. Our IR-based candidate generation has a top-64 recall of 76\% and 68\% on the validation and test sets, respectively. Strengthening the candidate generation stage would improve the final performance, but this is outside the scope of our work. Average performance across a set of domains is computed by macro-averaging. Performance is defined as the accuracy of the single best identified entity (top-1 accuracy).\\ \subsection{Comparison of different initializations} \begin{figure}[ht] \centering \includegraphics[width=\linewidth, trim = {0 0 0 0}, clip]{figure/diff_init_acc.png} \caption{The accuracy of models with different position-embedding initialization methods on long and short data. Note: we call all data whose DLength exceeds 512 \textbf{long} data, and the rest \textbf{short} data.} \label{initialization method} \end{figure} \indent The results of the different position-embedding initialization methods are shown in Figure \ref{initialization method}. For both long and short data, $E_{repeat}$ achieves the best results, and its performance on long data is especially impressive. When the model's ERLength exceeds 512, only $E_{head}$ produces worse results, which shows the importance of using the information of the first 512 positions to initialize the latter part. The model with $E_{constant}$ starts to degrade after its ERLength reaches about 768, which shows that its assumption is reasonable only when the model's ERLength is less than 768. Only with $E_{repeat}$ do we see a stable and continuous improvement, which shows that only its ``local structure" assumption applies to almost all lengths considered here (from 0 to about 1024). This also makes it an ideal method for exploring the impact of increasing the ERLength.\\ \indent Table~\ref{baseline} shows that our method improves the state of the art on Wikia's zero-shot EL dataset. Compared to \citet{logeswaran2019zero}, using $E_{repeat}$ to increase the model's ERLength to 1024 improves the accuracy from 76.06\% to 79.08\%, and for the long data the improvement is from 74.57\% to 82.14\%. Moreover, we also try the Domain Adaptive Pre-training (DAP) method of \citet{logeswaran2019zero}. The combination of DAP and a 1024 ERLength raises the result to $80.88\%$. \subsection{Impact of increasing ERLength} We further explore the impact of BERT's ERLength on the zero-shot EL task. The red entries in Table \ref{impactOfERLength} represent the accuracies in the first echelon of each column (i.e., for data within a specific DLength interval).
The red region shows a clear step-down trend, which means that data with a larger DLength often require a model with a larger ERLength. Moreover, for any column, as we continue to increase the model's ERLength, the accuracy stabilizes within a narrow range once the ERLength exceeds the DLength of most of the data. The last row of the table is therefore entirely red, which means that the model with the largest ERLength achieves the best level of accuracy on data of all DLengths. \begin{figure}[ht] \centering \includegraphics[width=\linewidth, trim = {0 0 0 0}, clip]{figure/win_case_VS_fail_case_with_more_information.png} \caption{The proportion of win/fail cases as the ERLength increases. We define a \textbf{win} case as an instance that was initially wrong but becomes correct after increasing the ERLength, and a \textbf{fail} case as an instance that was initially correct but becomes wrong after increasing the ERLength.} \label{win&fail case} \end{figure} \indent Figure \ref{win&fail case} shows the changes in win and fail cases as BERT's ERLength is expanded. Generally speaking, when the model can read more content, its accuracy increases where the additional content carries valuable information (win cases) and decreases where it carries noise (fail cases). The results illustrate that BERT consistently exploits the additional useful information while being comparatively undisturbed by the noise. This once again demonstrates the power of BERT's full-attention mechanism, and it is the basis on which we can continuously expand BERT's ERLength and continue to benefit. Therefore, for a particular dataset, setting BERT's ERLength so that it exceeds the DLength of more of the data always brings further improvements. \begin{figure}[ht] \centering \includegraphics[width=\linewidth, trim = {0 0 0 0}, clip]{figure/importance_of_mc_and_ed.png} \caption{Importance of mention contexts and entity description.} \label{important_of_mc_ed} \end{figure} \indent In Figure \ref{important_of_mc_ed}, we also explore the relative importance of the mention contexts and the entity description. On Wikia's zero-shot EL dataset, in our settings for BERT with a 1024 ERLength, the mention contexts and the entity description account for 512 tokens each. Figure \ref{important_of_mc_ed} shows the change in accuracy when we unilaterally reduce either the mention contexts or the entity description from 512 to 50 tokens. The two are essentially equally important: no matter which side is reduced, the accuracy gradually decreases. Therefore, when increasing BERT's ERLength here, the best strategy is to increase the mention contexts and the entity description at the same time. \section{Conclusions and future work} We propose an efficient position-embedding initialization method called Embeddings-repeat, which initializes larger position embeddings from those of existing BERT models. For the zero-shot entity linking task, our method improves the SOTA from 76.06\% to 79.08\% on its dataset. Our experiments suggest the effectiveness of increasing the ERLength as much as possible (e.g., to the length of the longest data in the EL experiments). Our future work will be to extend our methods to other NLP tasks. \section*{Acknowledgments} This research was supported with Cloud GPUs and TPUs from Google's TensorFlow Research Cloud (TFRC).
{ "timestamp": "2020-10-14T02:06:55", "yymm": "2010", "arxiv_id": "2010.06065", "language": "en", "url": "https://arxiv.org/abs/2010.06065" }
\section{Adding context}\label{sec:context} Can we improve GTs by conditioning their generation on more context? To evaluate this hypothesis, we considered two context variations: one in which we frame the topic and another in which we frame the claim. \textbf{Framing the topic}. We prepend to the topic the first sentence of the Wikipedia page describing the topic, to explore whether this added knowledge can guide models to generate more relevant and meaningful GTs. The motivation for selecting the first sentence from Wikipedia is to provide the model with concise guidance toward the respective topic via the main terms it may relate to, which usually appear in the first Wikipedia sentence (a sketch of this prompt construction is given at the end of this section). The relevant Wikipedia page is found by Wikifying the topic, as described in \sectionRef{sec:data}. \textbf{Framing the claim}. We also tried appending to the topic a short sentence describing an aspect relevant to discussing it, hypothesizing that adding a concrete aspect would guide the generation process in that direction. Unfortunately, this did not work well, and details are deferred to the appendix. \paragraph{Evaluation:} We fine-tune GPT-2 from scratch on the modified training data of Rank-30k{} and LN55k{} and refer to the new models as \textit{GPT-Rank{}-FWS} and \textit{GPT-LN{}-FWS} (First Wikipedia Sentence, when framing the topic). We generate a sample of 5 (\textit{GPT-Rank{}-FWS}) or 10 (\textit{GPT-LN{}-FWS}) GTs per dev topic. \input{added_context_results} \paragraph{Results:} Table \ref{table:resultsContext} presents the results for the FWS models. For both FWS models, perplexity improves, as does the plausibility of the GTs, presumably because the added context helps avoid some illogical phrases. For example, the GT \textit{The human condition is the greatest human achievement} for the topic \textit{We should subsidize the human mission to Mars}, which was generated by GPT-LN{}, was considered implausible, whereas all GTs for this topic generated by GPT-LN{}-FWS were considered plausible. After stance labeling, the advantage of GPT-LN{}-FWS remains, while GPT-Rank{}-FWS performs slightly worse; GPT-Rank{}-FWS is also slightly worse in predicted quality and stance. Thus, for further experiments, we chose the GPT-LN{}-FWS and GPT-Rank{} models.
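To make the topic framing concrete, the prompt assembly can be sketched as follows (Python; the delimiter string and the naive first-sentence extraction are illustrative assumptions, as the paper does not specify them):

\begin{verbatim}
DELIM = " <|claim|> "  # delimiter between prompt and claim (form assumed)

def first_wiki_sentence(wiki_page_text):
    """Naive extraction of the first sentence of the Wikified page."""
    return wiki_page_text.split(". ")[0].strip() + "."

def build_fws_prompt(topic, wiki_page_text):
    """Frame the topic with the First Wikipedia Sentence (FWS)."""
    return first_wiki_sentence(wiki_page_text) + " " + topic + DELIM

# build_fws_prompt("We should subsidize the human mission to Mars",
#                  mars_mission_page_text)
\end{verbatim}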
\section{Using Claim Detection to Rank GTs} When constructing our pipeline, we examined $3$ models for ranking GTs according to their coherence and relevance, concluding that the Claim Detection (CD) model is the most successful. This model is obtained by fine-tuning BERT on a dataset similar to the one used to fine-tune GPT-LN{} (the main difference being that the data used to fine-tune BERT also included negative examples from the same corpus), and is thus reminiscent of bootstrapping. Indeed, this method of using a classifier fine-tuned on the same data as GPT-2 to filter generated samples has already proven effective in the context of augmenting low-resource datasets with generated texts \cite{anabytavor2019data}. \section{Examples of Labeled GTs} Tables \ref{table:examplesLxnxTest} and \ref{table:examplesSbcTest} detail GTs generated over the test set by the two models. Each table details two GTs that were labeled implausible, two that were labeled plausible but lacking stance toward the topic prompt, and two that are coherent GTs for the topic. \begin{table*}[th] \scriptsize \begin{center} \begin{tabular}{ |p{2.5cm}|p{4cm}|p{1.3cm}|p{5.5cm}|} \hline \textbf{Topic} & \textbf{GT} & \textbf{Label} & \textbf{Comment} \\ \hline We should close landfills & landfills are not a sustainable way to fund the nation's health& implausible& The text is coherent. However, landfills were never intended to provide revenue that would somehow contribute to health funds, which makes this claim implausible.\\ \hline Criticism of the Iraq War is justified& the Iraq War is a costly and costly mistake & implausible& The odd repetition of the term ``costly" is why this claim is considered implausible.\\ \hline We should nationalize the health care industry& The health care sector is one of the fastest growth sectors of the economy.& no stance& The claim praises the effects of the health care industry on the economy, thus suggesting to keep things as they are. However, it could also be interpreted as something that governments could profit from, thus suggesting support for the topic. \\ \hline We should abandon Valentine's day & The Valentine's Day holiday is one of the most misunderstood and misused holidays. & no stance& The attitude towards Valentine's Day can be understood as negative, as it is misunderstood, or as positive, as something that should actually be getting better respect from the public. In addition, the verb ``misused" is itself rather \textit{misused} in this context. Either way, there is no clear stance towards abandoning the holiday. \\ \hline We should disband the United Nations Security Council & the United Nations Security Council is an essential forum for maintaining the international community's collective efforts to fight terrorism& plausible and has stance &\\ \hline We should subsidize transportation network companies& the introduction of regional mass transit networks in the country will help boost economic growth, provide enhanced fuel efficiency and reduce the use of private vehicles & plausible and has stance &\\ \hline \end{tabular} \end{center} \caption{Examples of GTs generated by GPT-LN{}-FWS on the test set.} \label{table:examplesLxnxTest} \end{table*} \section{Conclusions}\label{sec:conclusions} We suggest a claim-generation pipeline, based on a fine-tuned GPT-2 model augmented by framing the topic, and filtered using Claim Detection tools. Results on a diverse set of $96$ new topics demonstrate the merit of our approach. As expected, fine-tuning on a larger dataset of claims leads to more accurate generation. Yet, the coherency of the dataset also matters: simply merging datasets of different flavors does not improve generation, and may even hamper it. To evaluate the generation models we examined several measures, which roughly estimate how ``good" the generated text is. But since they do so from different perspectives, they are often not consistent with one another \cite{wachsmuth-etal-2017-argumentation}. Here they were combined heuristically, but future work should explore this more rigorously. Our work highlights some of the relations between Claim Generation, Claim Retrieval, and Claim Detection. In our pipeline, Claim Detection is used to weed out poorly generated claims. Further, we show that Claim Retrieval is a sufficient basis -- alongside a powerful language model -- for building a claim generation pipeline, and that Claim Generation may augment Claim Retrieval with additional novel claims. Here, GPT-2 was used with a ``default" setting.
However, there is clearly an interesting trade-off between creativity and coherence, and balancing the two to fit an intended use case -- perhaps even interactively -- is something we intend to explore in future research. Finally, the claims generated by our pipeline display both subjective opinions and factual assertions. In the latter case, our initial analysis indicates that generated claims of a factual nature are often, but certainly not always, factually true. Thus, our work highlights a new emerging front in the rapidly expanding area of fact verification -- that of distinguishing valid factual statements from non-valid ones, on top of automatically generated texts. \section{Ethical note} Argument generation has the potential of being misused \cite{solaiman2019release}, as it can potentially enable the automatic generation of a variety of false assertions regarding a topic of interest. In addition, GPT-2 text generations have been shown to exhibit different levels of bias towards different demographics \cite{DBLP:conf/emnlp/ShengCNP19}. Nonetheless, the way to address these dangers is for the community to recognize and better understand the properties of such generated texts, and we hope this work provides a step forward in this direction. Since, to the best of our knowledge, this is the first work leveraging GPT-2 in the context of argumentation, it can be used to advance research in the argument generation community by surfacing issues of such systems. Furthermore, our setting allows arguments to be generated on both sides of the topic; thus, if one side is misrepresented, this would be easily uncovered. \section{Experimental Details}\label{sec:experimental_details} \subsection{Data} \label{sec:data} We compare the performance of fine-tuning GPT-2 on three argument datasets, two publicly available and one proprietary. \textbf{Rank-30k{}}. This dataset includes $30k$ arguments for $71$ topics, labeled for their quality \cite{gretz2019largescale}. For fine-tuning GPT-2 we consider all arguments with a quality score (denoted there as WA-score) $> 0.9$, resulting in $10{,}669$ arguments. These arguments are typically $1$-$2$ sentences long. \textbf{CE2.3k{}}. This dataset consists of $2.3k$ manually curated claims extracted from Wikipedia for $58$ topics \cite{rinott-etal-2015-show}. These claims are usually sub-sentence, concise phrases. We exclude claims for topics which are part of our dev set (see below). Further, we ``wikify" each topic, i.e., automatically map each topic to a corresponding Wikipedia title \cite{shnayderman2019fast}, and remove topics for which no such mapping is found. After this filtering, we are left with $1{,}489$ claims for $29$ topics. \textbf{LN55k{}}. This proprietary dataset consists of $55{,}024$ manually curated claims for the $192$ topics in the train set of \citet{EinDor2019CorpusWA}. These claims were extracted from a corpus of some $400$ million newspaper articles provided by LexisNexis,\footnote{\url{https://www.lexisnexis.com/en-us/home.page}} as done in \citet{EinDor2019CorpusWA} for evidence rather than claims. Whereas fine-tuning is done on varied data sources, for evaluation we focus on the dev and test topics from \citet{EinDor2019CorpusWA}. We exclude from both sets topics that are present in the Rank-30k{} dataset, resulting in a dev set of $35$ topics and a test set of $96$ topics (see Appendix).
Throughout this work, we consider debatable topics which correspond to a single Wikipedia title, phrased either as a suggestion for a policy -- e.g., \textit{We should increase the use of telemedicine} -- or as a valuation analysis -- e.g., \textit{telemedicine brings more harm than good}. \subsection{Model} For all experiments we fine-tune the medium-size GPT-2-355M model \cite{radford2019language}, utilizing the gpt-2-simple library.\footnote{\url{https://github.com/minimaxir/gpt-2-simple}} In order for the model to condition on topics, we represent each (\textit{topic}, \textit{claim}) pair from the training data as a single sequence, separated by a delimiter. In generation, the model is provided with a prompt in the form of a topic followed by a delimiter. We used \textit{top-k} truncation with $k=40$ and a conservative temperature of $0.7$, to accommodate more readable, coherent output while maintaining a level of creativity. We leave the exploration of other sampling techniques (e.g., \citet{holtzman2019curious}) to future work. We restricted the length of each generated text to $50$ BPE tokens, as preliminary experiments showed that very few GTs were longer. In addition, GTs were cleaned by removing non-ASCII characters, parentheses, single quotation marks, and some other erroneous symbols.
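A condensed sketch of this generation setup follows, using the gpt-2-simple library cited above (parameter names follow our reading of that library's API; the delimiter and the cleaning regex are our own approximations of the described post-processing):

\begin{verbatim}
import re
import gpt_2_simple as gpt2

sess = gpt2.start_tf_sess()
gpt2.load_gpt2(sess, run_name="claims_355M")  # a fine-tuned run (name assumed)

def clean_gt(text):
    """Drop non-ASCII characters, parentheses, single quotes."""
    text = text.encode("ascii", errors="ignore").decode()
    return re.sub(r"[()']", "", text).strip()

def generate_claims(topic, n=5):
    gts = gpt2.generate(sess, prefix=topic + " <|claim|> ",
                        length=50, temperature=0.7, top_k=40,
                        nsamples=n, return_as_list=True)
    return [clean_gt(g) for g in gts]
\end{verbatim}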
\subsection{Automatic Evaluation} \label{sec:automaticEvaluation} For evaluation, we consider perplexity and prefix ranking accuracy \cite{fan-etal-2018-hierarchical}, using the claims extracted by \citet{ajjour-etal-2019-modeling} alongside their listed topics.\footnote{This dataset contains $12{,}326$ claims from $465$ topics extracted from \url{debatepedia.org}. We rephrase the topics therein to fit our phrasing by adding the text ``We should support" before the listed topic.} For prefix ranking accuracy, we condition each such claim on its real topic, as well as on $9$ other random topics, and compute the fraction of times that conditioning on the real topic yields the highest probability under the fine-tuned model. For both evaluation measures, we report statistics for $10$ samples of $100$ claims sampled uniformly. Importantly, this dataset is independent of all the ones examined here, and so is presumably not biased in favor of any of them. Due to the difference in style and topics from the training sets, the fine-tuned models may exhibit high perplexity, so it should be taken as a comparative measure rather than an absolute one. In addition, we evaluate the GTs by their \textit{quality} and \textit{stance} scores. For obtaining a quality score, we fine-tune BERT \cite{devlin2018bert} on Rank-30k{}, as in \citet{gretz2019largescale}. This score aims to capture how well the output is written, giving preference to grammar, clarity, and correct spelling. For obtaining a stance score, we utilize a proprietary internal service, based on a BERT model fine-tuned over the LN55k{} claims, which were manually labeled for stance \cite{bar-haim-etal-2017-stance}. A positive score indicates that a claim supports the topic, a negative score that it contests it, and a score close to zero suggests no clear stance. Since we are only interested in whether or not a sentence has a clear stance, we take the absolute value of the score. For both scores, we report statistics for $10$ samples of $100$ GTs sampled uniformly from the respective set. \subsection{Annotation Tasks} \label{sec:annotationTasks} To further assess the quality of GTs, we annotate their \textit{plausibility} and \textit{stance}. We do this in a cascade -- only GTs considered plausible are subsequently annotated for their stance. The motivation for these two tasks is that together they enable us to assess the ``claimness" of the GTs, i.e., to determine to what extent the GTs represent coherent claims relevant to the given topic. We used the Appen crowd-sourcing platform,\footnote{\url{www.appen.com}} with $7$ annotators annotating each GT. To control for annotation quality, we included hidden test questions, comprised of previously annotated rows with high confidence. Annotations by annotators with low accuracy on the test questions were removed (below $75\%$ for plausibility and $80\%$ for stance). Further, we relied on a channel of annotators who performed well on previous related tasks. For each task, we report inter-annotator agreement, defined as the average Cohen's Kappa of annotators who have at least $50$ common judgements with at least $5$ other annotators. \textbf{Plausibility}. In this task, given the GT only, without the context of its respective topic, the annotator should determine whether it is plausible that a human would make this claim, considering grammar, coherence, and general ``common sense". This task can be considered an extension of the \textit{readability} task that is usually used to evaluate the quality of generated text (e.g., \citet{beers2009syntactic}), while further asking the annotator to use common knowledge to judge whether the content itself makes sense. For example, in the GT \textit{making blood donation free will help promote primary care}, the notion of \textit{making blood donation free} does not make sense, as it is a voluntary act; hence the GT should be deemed implausible. A GT is considered plausible if $\geq 70\%$ of the annotators considered it as such. The average inter-annotator Cohen's Kappa obtained in this task is $0.37$, which is common for such a subjective task (see, e.g., \citet{EinDor2019CorpusWA} and \citet{Boltuzic2014BackUY}). \textbf{Stance}. In this task we presented the annotators with GTs that were considered plausible, together with their respective topics. Annotators were asked to determine whether the text \textit{supports} the topic, \textit{contests} it, or does not have a stance towards it. The label of the GT is determined by majority vote, and if there is no majority label, it is considered as having no stance. As in the automatic measure of stance, we are mainly interested in evaluating whether a GT bears \textit{any} stance towards the topic; thus we consider both \textit{supports} and \textit{contests} labels as positives when reporting stance. The average inter-annotator Cohen's Kappa obtained in this task is $0.81$. Table \ref{table:exampleClaims} shows examples of the three types of labeled GTs -- plausible and stance-bearing, plausible with no stance, and implausible. The results of these annotation tasks are made available as part of this work.\footnote{\url{https://www.research.ibm.com/haifa/dept/vst/debating\_data.shtml}} The complete annotation guidelines are shared in the Appendix. \section{Factual, Opinion, and Generic Claims} An interesting facet when considering argumentative claims is whether they attempt to convey facts, or rather personal opinions. Thus, we explored whether GTs generated by our two models are characterized as more factual or more opinionated. Further, given growing concern over the misuse of language models such as GPT-2 to spread fake news and misinformation \cite{NIPS2019_9106,solaiman2019release}, we assessed the truth value of GTs deemed factual.
For this purpose, we first sampled $200$ plausible and stance-bearing GTs generated by each of GPT-LN{}-FWS and GPT-Rank{}, and annotated all $400$ GTs for being an opinion or (ostensibly) factual, using the Appen platform and relying on annotation controls similar to those described in \sectionRef{sec:annotationTasks}. The results of this annotation task are made available as part of this work, and the annotation guidelines are shared in the Appendix. The average inter-annotator agreement was $0.25$. When considering labels with a majority vote of at least $70\%$, $70$ of the GTs generated by GPT-Rank{} are considered factual and $63$ opinion, as opposed to $46$ and $105$, respectively, of those generated by GPT-LN{}-FWS. A possible explanation is that Rank-30k{} claims -- on which GPT-Rank{} was fine-tuned -- tend to be more elaborate and explanatory, describing a cause and effect, which correspondingly yields more factual GTs; e.g., the GT \textit{genetic engineering can help further scientific developments in cancer treatment, as well as improve the long term prognosis of such diseases as help maintain a safe and effective regulatory regime for their development}, for the topic \textit{We should further exploit genetic engineering}. By contrast, LN55k{} claims are often short and concise, and perhaps more prone to express the journalist's opinion; hence, training on these data yields more opinionated GTs, e.g., \textit{the ``sex" revolution has failed} or \textit{the gender pay gap is unfair}. Indeed, the average number of tokens in factual GTs is $17.3$, compared to $14.2$ for opinion GTs. Next, we aimed to assess whether the factual GTs are indeed true. A random sample of $23$ and $40$ factual GTs generated by GPT-LN{}-FWS and GPT-Rank{}, respectively, were labeled for their truth value by a professional debater experienced in this task, who was also asked to assess whether the ``fake facts" were nonetheless common in contemporary discourse. Of the $23$ GPT-LN{}-FWS GTs, $13$ were considered true, the others being a mix of false or non-factual GTs. The true GTs include some simple, almost trivial statements, such as \textit{Speed limits are designed to help reduce road fatalities}, as well as more evidence-based facts, such as \textit{rat poisons have been linked to the development of Parkinson's disease, Alzheimer's disease and migraines}. Among the $4$ false GTs, it is interesting, albeit perhaps unsurprising, to find that $2$ were marked as common in discourse: \textit{Flu vaccinations are associated with higher rates of adverse drug reactions and serious health complications}, and \textit{poly-amorous relationships are linked to higher levels of sexual risk}. Of the $40$ GPT-Rank{} factual GTs, $21$ were deemed true. Overall, the ratio of true GTs is similar to that of the GPT-LN{}-FWS GTs. It seems that some of the other GTs are mixed, characterized by opening with an opinionated statement which is followed by a factual claim, e.g., \textit{we should not abandon chain stores} (opinion) \textit{as they provide a steady supply of goods and services to the community} (true fact). One of the $3$ false GTs could be considered common in discourse: \textit{the alternative vote would cause voters to be disenfranchised}. The aforementioned short GTs suggest that GTs tend to be rather generic, in the sense that stating that something ``has failed" or ``is unfair" can be done (coherently) in a great variety of contexts.
Indeed, such GTs are reminiscent of those generated by \citet{bilu-slonim-2016-claim}. To assess to what extent such GTs are generic, we sampled $100$ of them, and annotated them ourselves. In this sample, $54$ of the GTs were deemed generic, suggesting that such GTs are prevalent, but by no means the only types of texts being generated. \section{Framing Claims} \label{sec_framing} In an attempt to frame the GTs, we append to the topic a short sentence describing an aspect related to the claim, hypothesizing that adding a concrete aspect will guide the generation process in that direction. We consider the aspects (or frames) appearing $\geq 100$ times in the dataset of \citet{ajjour-etal-2019-modeling}, and manually map each aspect to a related list of Wikipedia pages. Using Wikification, we keep in the training set only claims that reference at least one of these Wikipedia pages. Finally, we manually phrase each aspect as a framing sentence, e.g., \textit{Consider how this relates to the economy} for the \textit{Economy} aspect, and append it to the topic separated by a delimiter. For evaluation, we generated $15$ GTs per aspect per topic. We compared the results to the GPT-LN{} and GPT-Rank{} models, using the same measures as described in the main text. Doing an internal manual assessment of a sample of $40$ GTs for each model, we found that adding aspect context did not improve the plausibility and relevance of GTs, not even when introducing heuristics to detect aspects that are more relevant to the topic. A possible explanation for this is that the selection of appropriate aspects should be handled more carefully (e.g., as in \citet{schiller2020aspect}). Such an approach is beyond the scope of this work, and we leave it for future work. \section{Further observations}\label{sec:further_obs} \textbf{What characterizes implausible GTs?} We considered the $51$ GPT-LN{}-FWS test-set GTs which were deemed implausible. More than half seem to contradict common sense, often by connecting pairs of unrelated terms as in the titular \textit{the workweek is the best time to start a family}, for the topic \textit{We should increase the workweek}; or via connecting related terms in an odd manner as in \textit{LGBT adoption is a critical component of a child's life} for the topic \textit{We should legalize LGBT adoption}. Other reasons for implausibility include weird phrasings (e.g., \textit{the housing in public housing is disastrously unaffordable}) and bad grammar (e.g., \textit{that the benefits of the MRT network outweigh its costs}). \noindent \textbf{COVID-19 debates.} Our pipeline relies heavily on the massive pre-training of GPT-2, that naturally included sentences pertaining -- at least to some extent -- to topics in our dev and test sets. It is therefore interesting to examine the GTs obtained for topics which were presumably less abundant in the pre-training data. Hence, while sheltering at home, we have generated $20$ GTs for each of the following two topics: \textit{We should subsidize the COVID-19 drug development} and \textit{Coronavirus face masks should be mandatory} using the GPT-LN{}-FWS model. For the first topic, only $4$ of the $20$ GTs were coherent and relevant, while many of the others talked about HIV, alluded to the opioid crisis, or were outright absurd -- \textit{the use of artificial sweeteners in food should be a crime}. 
The four ``good" ones were of generic form, yet some showed an ability to extrapolate to relevant terms without those terms being mentioned explicitly in the prefix. For example, in the GT \textit{the COVID-19 vaccine will be a very effective vaccine as compared to other vaccines}, while ``COVID-19" and ``vaccine" are mentioned separately in the prefix (i.e., in the first sentence of the Wikipedia page \textit{COVID-19 drug development}), the term ``COVID-19 vaccine" is not. For the second topic, $12$ of the GTs are coherent and relevant, presumably because the use of face masks to prevent disease is more general and may have been discussed in the pre-training data. It has probably been true of previous airborne viruses that, for example, \textit{the use of face masks is the best way to keep people safe}. Among the irrelevant GTs there is mention of other medical conditions, such as Ebola, diarrhoea and mosquito bites. The full list of GTs for these two topics, as well as $3$ additional ones, is made available as part of this work. \section{Claim Generation vs. Claim Retrieval} Given a controversial topic, Claim Generation and Claim Retrieval both aim to provide claims pertaining to it. It is therefore interesting to understand the interplay between the two tasks. Specifically, thinking of Claim Generation as a means to augment the output of Claim Retrieval, we ask whether GTs tend to be novel or a repetition of retrieved claims, and how the quality of the two compares. In addition, we explore how Claim Retrieval can facilitate the training of the Claim Generation pipeline suggested in this work. \textbf{How novel are the generated claims?} Similar to the manually curated claims of the LN55k{} dataset, we also had access to such claims pertaining to $34/35$ topics in the dev set (henceforth, the LN claims). For comparison we used $169$ GTs ($5$ per topic, one duplicate removed) from the GTs generated by GPT-LN{} for these $34$ topics (see \sectionRef{sec:initial_gen}). To measure the similarity between GTs and LN claims, we fine-tuned BERT on a Semantic Text Similarity benchmark \cite{cer2017semeval}. The resultant model was used to find, for each GT, the top matching LN claim. Manual examination suggests that a score of $0.75$ roughly differentiates between pairs of semantically similar claims and those which are not (Table \ref{table:matchedClaims}). \input{matched_sentences_table} Note that semantically similar claims may still have opposing stance, but in this case we also consider the GT as appearing in the corpus (in its negated form). Taking all pairs with score $\geq 0.75$, we find that only $20/169$ of the GTs have a semantically similar counterpart among the LN claims, suggesting that GTs tend to be novel. Moreover, the match score is well correlated with the number of annotators who labeled a GT as plausible (Pearson's $\rho=0.31$) or as having a stance ($\rho=0.47$). Similarly, overall, $127/169$ GTs were determined by human annotators to be plausible and $114/169$ as having a stance; in comparison, $19/20$ of the GTs with a match score $\geq 0.75$ were deemed both plausible and as having a stance. This suggests, as may be expected, that GTs are more likely to represent valid claims if they already appear in some phrasing within a human-authored corpus. Future work might use this to validate GTs, or, conversely, to guide claim retrieval.
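The matching procedure just described reduces to a nearest-neighbor search in embedding space. The sketch below uses the sentence-transformers library as a stand-in for the BERT model we fine-tuned on the STS benchmark; only the $0.75$ threshold is taken from the text:

\begin{verbatim}
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in encoder

def top_match(gt, ln_claims):
    """Return the most similar LN claim to a generated text, with score."""
    emb_gt = model.encode(gt, convert_to_tensor=True)
    emb_ln = model.encode(ln_claims, convert_to_tensor=True)
    sims = util.cos_sim(emb_gt, emb_ln)[0]
    best = int(sims.argmax())
    return ln_claims[best], float(sims[best])

def is_novel(gt, ln_claims, threshold=0.75):
    _, score = top_match(gt, ln_claims)
    return score < threshold
\end{verbatim}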
\textbf{How good are the generated claims?} Having matched GTs to ``real" claims allows us to compare not only their novelty, but also their quality. Namely, for each of the $169$ pairs we asked crowd annotators which of the two claims ``would have been preferred by most people to discuss the topic?", using the same process as in \sectionRef{sec:experimental_details}. Among these pairs, in $41$ cases both claims appeared to be similarly good (a $3$:$4$ split); in $57$ the GT is preferred; and in $71$ the LN claim is considered better. Among the $20$ pairs which are highly similar, in $4$ both claims are equally good, in $13$ the GT is better, and in $4$ the LN claim is preferred. Thus, at least in this small sample, when the two claims convey a similar message, human annotators seem to prefer the GPT-2 version over the human-authored one. \textbf{Can claim retrieval facilitate generation?}\label{sec:retrieved} The suggested pipeline assumes access to a dataset of actual claims with which to fine-tune GPT-2. However, initial analysis suggests that even with no \textit{a priori} labeled data, having access to a high-quality Claim Retrieval engine can be enough to facilitate Claim Generation. Using a proprietary Claim Retrieval server, we first query Wikipedia to retrieve sentence candidates, in a process similar to that described in \citet{EinDor2019CorpusWA} for retrieving Evidence candidates. We then rank them according to the Claim Detection model described in \sectionRef{sec:ranking}. Overall, we obtain $4427$ (ostensible) claims from Wikipedia for the $192$ train topics. We fine-tuned GPT-2 on them and evaluated the results as done for the other datasets (\sectionRef{sec:initial_gen}). Since these data are not manually curated, some of the texts used for fine-tuning are not actual claims. Nonetheless, human annotators deemed $124/175$ GTs plausible; the average perplexity is $264$, the mean prefix ranking accuracy is $0.61$, and the average argument quality is $0.75$. These results are comparable to those obtained over the much larger Rank-30k{} dataset, suggesting that a good solution to the Claim Retrieval task embodies a good solution to the Claim Generation task. \section{Initial Generation}\label{sec:initial_gen} Our first aim was to examine the impact of the data used for fine-tuning GPT-2, seeking an effective model that relies on publicly available data, and a presumably superior one that further relies on proprietary data of a much larger size. \noindent \textbf{Publicly available data}. We considered Rank-30k{} alone, and combined with CE2.3k{}. We fine-tuned GPT-2 for $2k$ steps on the former, and $4k$ steps on the latter. We denote the obtained models GPT-Rank{} and GPT-Rank-CE{}, respectively. \noindent \textbf{Proprietary data}. We considered LN55k{} alone, as well as combined with all the publicly available data. We fine-tuned GPT-2 for $8k$ steps on both. We denote the obtained models GPT-LN{} and GPT-ALL{}, respectively.\footnote{In Section \ref{sec:retrieved}, we describe the retrieval of $4.5k$ (ostensible) claims from Wikipedia using a proprietary Claim Retrieval server. These claims are included in GPT-ALL{}.} For each of the $4$ models we generated a total of $175$ GTs, $5$ conditioned on each of the $35$ dev topics. Note that the models are fine-tuned on datasets containing both supporting and contesting arguments; thus they may generate GTs of both stances as well. The manual and automatic evaluation of these GTs is described next.
\input{intial_generation_results} As seen in Table \ref{table:resultsInitial}, both proprietary models -- fine-tuned on much larger datasets -- yield more plausible and stance-bearing GTs than their counterparts. Among the proprietary-based models, while GPT-ALL{} has an advantage in plausibility, perplexity, and prefix ranking accuracy, GPT-LN{} is better when considering the ratio of GTs which are both plausible and stance-bearing -- with $68\%$ ($119/175$) such GTs, compared to $62.3\%$ ($109/175$) for GPT-ALL{}. It seems that adding more data, varied in type and style, can negatively impact the relevance and usefulness of the GTs. Thus, we choose GPT-LN{} as the model for subsequent experiments. As for the publicly-based models, GPT-Rank-CE{} has a small advantage in plausible and stance-bearing GTs compared to GPT-Rank{}. However, the performance of the latter is typically much better on the automatic measures. In particular, we note the advantage in predicted quality -- as expected, generated arguments from the GPT-Rank{} model have higher quality, as both this model and the argument quality model were trained on a similar type of data. However, when adding the CE2.3k{} dataset to the training set, the quality of the GTs declines. Thus, even though the differences between the two models are overall not substantial, we choose GPT-Rank{} for subsequent experiments. \input{examples} It should be noted that there is a clear difference between the GTs of GPT-LN{} and GPT-Rank{}, as evident in Table \ref{table:exampleClaims}. The former are short ($12.4$ tokens on average), and may contain utterances with as few as $3$-$4$ tokens (as in the GT in row 3). By contrast, GTs generated by GPT-Rank{} contain $23$ tokens on average, and $22/175$ of them contain at least two sentences (as in the GT in row 4). In addition, shorter GTs tend to be more plausible -- on average, plausible GTs from GPT-LN{} have $12.1$ tokens, compared to $15.4$ tokens for implausible GTs. Likewise, plausible GTs from GPT-Rank{} contain $20.5$ tokens on average, compared to $26$ tokens for implausible GTs. We note that for all models, the predicted quality and stance strength are only slightly lower than their counterpart measures on the training set, suggesting that generation tends to maintain these values. \section{Introduction}\label{sec:intro} Argument Mining has traditionally focused on the detection and retrieval of arguments, and on the classification of their types and of the relations among them. Recently, there has been growing interest in argument synthesis. Here we suggest a pipeline for addressing this task relying on the GPT-2 language model \cite{radford2019language}, examine how it can be enhanced to provide better arguments, and analyze the types of arguments being produced. Specifically, we are interested in \textit{Claim Generation}, where the input is a debate topic, phrased as a proposed policy, and the output is a concise assertion with a clear stance on this topic. We start by fine-tuning GPT-2 on a collection of topics and associated claims. Since several such datasets are available, we examine which of them tend to yield better claims, and observe that merging all such sources together does not necessarily yield better results.
In addition, we explore two ways in which context can be added to the generation process, beyond providing the topic itself: (i) framing the topic with the first sentence from its corresponding Wikipedia page; and (ii) framing the claim by directing it to consider a specific aspect. We find that the former can improve the generated output, but the latter does not -- at least in the way it is done here. Following \citet{bilu-slonim-2016-claim}, we also examine a post-generation ranking step that aims to select the correctly generated claims. We find that existing \textit{Claim Detection} tools can serve as a filter to significantly enhance generation quality. Our evaluation incorporates automatic measures and manual labeling. Specifically, we introduce an annotation task aiming to assess the \textit{plausibility} of generated claims, i.e., to what degree it is plausible that a human would make the claim. We report results on a test set of $96$ topics, demonstrating the validity of our approach on topics not seen in training or development. In addition, we manually annotate the generated claims for whether they are factual claims or opinion-based, and further aim to assess whether the former represent true facts. Finally, we observe that manually labeled datasets used to fine-tune GPT-2 are not essential, and that relying on the output of a \textit{Claim Retrieval}\footnote{Given a topic of interest, Claim Retrieval is the task of retrieving relevant claims from a corpus; Claim Detection is the task of determining whether a given text is a relevant claim.} engine for this fine-tuning may suffice. In addition, we compare the generated claims to an existing large-scale collection of claims for the same topics, and conclude that the generated claims tend to be novel, and hence may augment traditional Argument Mining techniques in automatically providing claims for a given topic. Henceforth, we denote the initial output of GPT-2 for a given prompt as \textit{generated text (GT)}. Thus, our task is to define a process by which as many of the GTs as possible represent claims that are relevant to the provided prompt. \section{The Complete Pipeline}\label{sec:classify} \subsection{Ranking Generated Claims} \label{sec:ranking} So far we have assessed the overall ability of the models to generate relevant claims. A natural question is whether one can efficiently rank the obtained GTs, retaining only the most attractive ones for downstream tasks. This is somewhat analogous to Claim Retrieval tasks, where a large set of argument candidates is first retrieved and then ranked according to relevance (e.g., \citet{levy-etal-2014-context,stab-etal-2018-cross,EinDor2019CorpusWA}). We considered three existing models for ranking GTs -- the argument quality and stance models described in \sectionRef{sec:automaticEvaluation}, and a proprietary Claim Detection (CD) service, obtained by training a BERT model on LN55k{}. The data for training the model is augmented with negative samples from the same corpus -- sub-sentential fragments which were labeled as non-claims. The objective of the model is to differentiate between claims and non-claims, and is similar to that described in \citet{EinDor2019CorpusWA} for Evidence detection. For evaluation we considered GTs generated on the dev set by GPT-Rank{} and GPT-LN{}-FWS for which we had a definite label for relevance to the topic.
Specifically, GTs which were annotated as ``implausible'' by a majority of annotators were assigned a label of $0$. GTs which were annotated as plausible, and then annotated for stance, were labeled according to the latter annotation: $1$ if they were annotated as \textit{Pro} or \textit{Con}, and $0$ otherwise. In total, we considered $211$ positive and $120$ negative GTs. Overall, the CD score is best correlated with the labels -- Pearson's $\rho=0.41$, compared to $0.12$ for (absolute) stance, and $0.01$ for argument quality. In addition, we ranked the GTs within each topic with respect to each score, and calculated the ratio between the number of positives in the top $3$ and bottom $3$. As before, CD is preferred, with $81/40$ positives in the top/bottom, compared to $70/56$ (stance) and $71/67$ (argument quality). See a short discussion of this result in the Appendix. Accordingly, we defined the generation pipeline as follows: (i) Fine-tune GPT-2 to obtain GPT-Rank{} (Model-1) or GPT-LN{}-FWS (Model-2); (ii) Generate with the topic as a prompt (Model-1), or prepend the -- automatically extracted -- first sentence of the associated Wikipedia article to the topic and use the resultant text as a prompt (Model-2); (iii) Rank the obtained GTs according to their CD score. In principle, one could set a strict threshold on the CD score, and generate a large number of texts until a sufficient number pass this threshold. We plan to investigate this direction in future work. \subsection{Test Set Results} With the above pipeline, we now proceed to generate $20$ GTs for each of the $96$ topics in the test set, using the GPT-LN{}-FWS and GPT-Rank{} models. We then take the top $7$ GTs according to the CD score, per topic, resulting in $672$ GTs overall for each model. As done for the dev set, we label these GTs for plausibility and stance, as well as calculate their predicted quality and stance. \input{test_results} Results are presented in Table \ref{table:resultsTest}. The overall ratios of GTs perceived as both plausible and carrying stance for the GPT-LN{}-FWS model and the GPT-Rank{} model are $79.5\%$ and $57\%$, respectively, conveying the advantage of fine-tuning on much larger data (see the appendix for examples). In addition, our test set results echo those obtained on the dev set, suggesting that our analysis on the dev set is relevant for the test set as well, and that our models generalize well to unseen topics. \section{Related Work}\label{sec:related} In classical Natural Language Generation (NLG) tasks -- Machine Translation, Summarization, and Question Answering -- the semantic content of the output strongly depends on the input. Argument Generation, alongside Story Generation \cite{fan-etal-2018-hierarchical}, occupies a parallel venue, where the output should satisfy stylistic and rhetorical constraints -- yet no well-defined semantic goal -- with much room and desire for innovation. Approaches to argument generation have included traditional NLG architectures \cite{zukerman1998bayesian, carenini2006generating}; assembling arguments from given, smaller argumentative units \cite{walton2012carneades, reisert2015computational, wachsmuth-etal-2018-argumentation, el-baff-etal-2019-computational}; welding the topic of the debate to appropriate predicates \cite{bilu-slonim-2016-claim}; and using predefined argument templates \cite{bilu2019argument}.
Of particular interest is the generation of counter-arguments, for which solutions include an encoder-decoder architecture \cite{hidey-mckeown-2019-fixed}, which may be augmented by a retrieval system \cite{hua-etal-2019-argument-generation, hua-wang-2018-neural}, or alternatively offering a ``general purpose'' rebuttal based on similarity to predefined claims \cite{orbach-etal-2019-dataset}. Concurrent with our work, and most similar to it, is \citet{schiller2020aspect}, who frame the Aspect-Controlled Argument Generation problem as follows -- given a topic, a stance and an aspect, generate an argument with the given stance towards the topic, which discusses the given aspect. They fine-tune CTRL \cite{keskar2019ctrl} over claims from $8$ controversial topics, and mostly use automatic measures to assess claim generation over the same $8$ topics. By contrast, here we are interested in a less restricted setting and explore the properties of the generated claims. Specifically, we fine-tune GPT-2 on claims coming from diverse sets of $71$-$192$ topics, and evaluate claims generated for $96$ novel topics. In this work, we assess the contribution of context to the quality of generated claims. In \citet{durmus-etal-2019-role}, context is defined as the path from a thesis (topic) node to a leaf (claim) node in an argument tree. Here, however, we consider only arguments of depth 1, directly addressing the topic, and leave context of larger depth to future work. Additionally, for development and evaluation we use human annotations alongside automatic measures, aiming to answer nuanced questions -- is it plausible that the claims would be asserted by a human? Do the generated claims tend to be opinions or factual statements? And, when they are the latter, do they tend to be factually true? \subsection{Annotation Task Guidelines} Figures \ref{fig:plausibility}-\ref{fig:factualExample} present the guidelines for the plausibility, stance and factual vs. opinion annotation tasks, as appearing in the Appen crowd-sourcing platform. \begin{figure*}[t] \caption{Guidelines for the plausibility annotation task.} \centering \includegraphics[scale=0.7]{plausibility_guidelines.png} \label{fig:plausibility} \end{figure*} \begin{figure*}[t] \caption{Example of a plausibility annotation.} \centering \includegraphics[]{plausibility_q_examples.png} \end{figure*} \begin{figure*}[t] \caption{Guidelines for the stance annotation task.} \centering \includegraphics[scale=0.7]{stance_guidelines.png} \end{figure*} \begin{figure*}[t] \caption{Example of a stance annotation.} \centering \includegraphics[]{stance_q_examples.png} \end{figure*} \begin{figure*}[t] \caption{Guidelines for the factual vs.
opinion annotation task.} \centering \includegraphics[scale=0.5]{factual_guidelines.png} \includegraphics[scale=0.5]{factual_guidelines_2.png} \end{figure*} \begin{figure*}[t] \caption{Example of a factual annotation.} \centering \includegraphics[]{factual_q_examples.png} \label{fig:factualExample} \end{figure*} \section{Lists of topics} \subsection{List of dev set topics} We should legalize doping in sport \\ We should protect endangered species \\ We should legalize insider trading \\ We should lower the drinking age \\ We should abolish temporary employment \\ We should ban free newspapers \\ We should abolish the US Electoral College \\ We should ban lotteries \\ We should legalize ivory trade \\ We should further exploit green technology \\ We should ban abortions \\ We should further exploit geothermal energy \\ We should raise the retirement age \\ We should ban alternative medicine \\ We should subsidize public service broadcasters \\ We should abolish term limits \\ We should abandon Gmail \\ We should not subsidize single parents \\ We should introduce school vouchers \\ Prenatal diagnosis should be mandatory \\ We should prohibit tower blocks \\ We should increase airport racial profiling in the United States \\ We should increase international volunteering \\ We should subsidize the human mission to Mars \\ The use of AI should be abandoned \\ We should fight for Palestinian independence \\ We should further exploit natural gas \\ We should abandon democracy \\ We should ban fishing \\ We should ban gratuities \\ We should increase government regulation \\ Community service should be mandatory \\ We should further exploit solar energy \\ Tattoos should be banned \\ We should support a phase-out of lightweight plastic bags \\ \subsection{List of test set topics} We should end the use of solitary confinement \\ We should disband the United Nations Security Council \\ We should end the use of mass surveillance \\ Child labor should be legalized \\ We should cancel the pledge of allegiance to the flag \\ We should ban multi-level marketing \\ We should adopt environmental justice \\ We should ban media conglomerates \\ We should end the use of traffic enforcement cameras \\ We should introduce a national identity card \\ We should subsidize transportation network companies \\ We should ban burqas \\ We should ban conversion therapy \\ We should introduce the alternative vote \\ Force-feeding should be banned \\ We should abandon tabloid journalism \\ We should legalize LGBT adoption \\ We should abandon Twitter \\ We should abandon chain stores \\ We should further exploit mixed-use development \\ We should subsidize open access journals \\ We should end child benefits \\ We should increase the use of telemedicine \\ We should abandon the sexual revolution \\ We should adopt polyamory \\ We should end the use of bailouts \\ Begging should be banned \\ We should adopt catholicism \\ We should abolish credit scores \\ We should fight environmental degradation \\ We should increase environmental protection \\ Flu vaccination should be mandatory \\ We should close landfills \\ We should further exploit filibusters \\ Minority groups should be protected \\
{ "timestamp": "2020-10-14T02:12:11", "yymm": "2010", "arxiv_id": "2010.06185", "language": "en", "url": "https://arxiv.org/abs/2010.06185" }
\section*{Introduction}\label{sec:intro} The overarching goal of ultra-reliable and low-latency communication (URLLC) lies in satisfying the stringent reliability and latency requirements of mission- and safety-critical applications. In order to achieve these stringent requirements, current 5G URLLC solutions come at the cost of low spectral efficiency due to channel probing and estimation. In addition, 5G URLLC presumes a static channel model that fails to capture non-stationary channel dynamics and exogenous uncertainties (e.g., out-of-distribution or other under-modeled rare events), which are germane to uncontrolled environments~\cite{park2020}. To overcome these fundamental limitations, and driven by the recent advances in machine learning (ML) and computer vision, one key enabler for beyond-5G URLLC is leveraging visual data (e.g.,{} RGB depth (RGB-D) camera imagery, LiDAR point clouds, etc.) generated from a variety of vision sensors that are prevalent in intelligent machines such as robots, drones, and autonomous vehicles. From a wireless standpoint, these visual data enable a more accurate prediction of wireless channel dynamics such as future received power and channel blockages, as well as the construction of high-definition 3D environmental maps for improved indoor positioning and navigation \cite{Nandakumar2018}. This line of work is referred to as \emph{view to communicate} (V2C). \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{Figures/Fig1_teaser_fig_v3.pdf} \caption{An illustration of vision-aided wireless communication, i.e., \emph{view to communicate (V2C)}, for millimeter-wave (mmWave) channel prediction and predictive handover, and RF-signal-assisted imaging, i.e., \emph{communicate to view (C2V)}, for image inpainting.} \label{fig:teaser} \end{figure} On the other hand, in some scenarios computer vision is vulnerable to occlusions of visible light by walls, human bodies, and other environmental artifacts such as lighting. This can be addressed by leveraging radio frequency (RF) sensing, such as using Wi-Fi signals, which, unlike visible light, diffract around and detour blockages, thereby precisely tracking user locations even behind walls \cite{Alahi:2015aa}. More recently, millimeter-wave (mmWave) and terahertz (THz) signals have been exploited to provide even higher-resolution sensing capabilities that can penetrate body tissues for non-invasive medical imaging \cite{Doddalla2018}. This research direction is referred to as \emph{communicate to view} (C2V). Motivated by the aforementioned confluence of computer vision and RF-based wireless transmission, this article sheds light on the synergies and complementarities of integrating both visual and RF modalities for enabling URLLC in 5G and beyond. To this end, we discuss the challenges and research opportunities in V2C and C2V. Then, their feasibility is demonstrated using selected use cases, ranging from vision-aided mmWave channel prediction and proactive handover decisions, to image reconstruction of lost visual parts caused by packet loss. Finally, we conclude this article by laying down future research directions. \section*{V2C: Vision-Aided Wireless Systems} \label{sec:VAW} A new paradigm in beyond-5G wireless systems is to leverage non-RF data, among which visual images complement traditional RF-based systems \cite{park2020}.
For instance, one can predict future mmWave channel conditions using a sequence of camera images containing mobile blockage patterns \cite{Nishio2019}, thereby enabling proactive decision-making (e.g.,{} handover, beamforming, multi-path transmission, etc.). In what follows, the rationale, related works, and future research opportunities of V2C are elaborated. \subsection*{Vision-Based RF Channel Prediction} \vspace{5pt}\noindent\textbf{Motivation.}\quad In beyond-5G systems, mmWave and THz signals are envisaged to play an important role thanks to their abundant bandwidth. However, these signals are highly directional and vulnerable to blockages, such as moving pedestrians, vehicles, and so forth. Hence, predicting the occurrence of blocked and non-blocked channels, that is, line-of-sight (LOS) and non-LOS (NLOS) conditions, is crucial in ensuring reliable connectivity, notably for mission-critical applications. Predicting such events using past RF signals is extremely challenging and consumes spectral resources. To obviate this problem, visual data such as RGB-D images and 3D point clouds that capture a variety of hidden features of wireless environments (e.g., object locations, shapes, materials, and mobility patterns) can be exploited. In so doing, one can accurately predict future mmWave and THz channel conditions without consuming RF resources to probe and estimate the channels. \vspace{5pt}\noindent\textbf{Related Works.}\quad RGB-D images are useful for accurately predicting the future received power at mmWave (i.e., above 6 GHz) and sub-6 GHz carrier frequencies. In~\cite{Nishio2019}, the future mmWave received power is predicted by feeding past RGB-D images into a deep neural network (DNN), in an indoor experiment where two randomly moving people block the communication link. Similarly, in~\cite{Ayva2019}, future 2.4 GHz channel states in an indoor experiment are accurately predicted using RGB-D images fed into a DNN. As demonstrated by these prior experiments, vision-based solutions can achieve accurate RF channel prediction without consuming any RF resources. This is in stark contrast to traditional channel prediction methods that frequently exchange RF pilot signals for high prediction reliability, which is not feasible in URLLC due to the stringent latency requirements. \vspace{5pt}\noindent\textbf{Opportunities.}\quad Beyond the aforementioned received power prediction, V2C has far more potential in predicting packet error rates, the number of reflection paths, and optimal beam directions, to mention a few. Furthermore, in addition to indoor environments, it is worth studying the effectiveness of V2C in urban outdoor environments, wherein channel prediction becomes more challenging due to highly dynamic mobile blockage patterns and a higher number of blockers and reflection paths. Last but not least, it is important to develop sample-efficient prediction techniques, since conventional DNN training frameworks often require a large number of data samples. Alternatively, by exploiting meta learning and transfer learning, one can pre-train a DNN using easily accessible data (e.g.,{} data collected from public repositories, ray-tracing simulations, etc.), and then fine-tune the DNN with only a few on-site data samples.
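To make the pre-train-then-fine-tune idea concrete, the following toy sketch -- our own illustration; the architecture, feature dimensions, and hyperparameters are assumptions rather than details of the cited works -- pre-trains a small received-power predictor on plentiful simulated data and then adapts only its head to a few on-site samples:

\begin{verbatim}
# Toy received-power predictor: pre-train on plentiful simulated data,
# then freeze the early layers and fine-tune the head on a few on-site
# samples. All shapes and values are illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),   # 64 input features (assumed)
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),               # predicted received power
)

def train(model, x, y, epochs, lr):
    opt = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

# 1) Pre-train on simulated (e.g., ray-tracing) data.
x_sim, y_sim = torch.randn(10000, 64), torch.randn(10000, 1)
train(model, x_sim, y_sim, epochs=50, lr=1e-3)

# 2) Freeze all but the last layer; fine-tune on a few on-site samples.
for layer in list(model.children())[:-1]:
    for p in layer.parameters():
        p.requires_grad = False
x_site, y_site = torch.randn(32, 64), torch.randn(32, 1)
train(model, x_site, y_site, epochs=20, lr=1e-4)
\end{verbatim}

Freezing the early layers keeps the number of on-site-trainable parameters small, which is one simple way to remain sample-efficient; meta-learning approaches would instead replace step (1) with an explicitly adaptation-aware pre-training objective.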
\subsection*{Hetero-Modal Vision-Based RF Channel Prediction}\label{sec:hetero-modal} \vspace{5pt}\noindent\textbf{Motivation.}\quad Fusing visual data with other modalities can enrich the useful features of wireless environments while complementing the features missing from the visual modality. Because vision is vulnerable to object occlusion and has restricted fields of view (FoVs), audio data can partly complement such limitations; for example, by hearing the Doppler effect, one can predict a vehicle's moving direction and speed. Another example is inertial measurement unit (IMU) data, which tracks user movements during blockages and can also be used to estimate relative velocities with respect to blockages. \vspace{5pt}\noindent\textbf{Related Works.}\quad DNNs are capable of fusing heterogeneous data modalities. In \cite{Ding2015}, 2D images and 3D face renderings are vectorized and concatenated at the input layer of a convolutional neural network (CNN) for learning facial representations. In \cite{Koda:2019ac}, to predict future channel conditions, received RF signal power and RGB-D images are fused using a multi-modal split learning architecture, while RGB-D images captured from different FoVs are integrated via an average pooling layer. Such fusion can be achieved immediately, without incurring any extra latency, as opposed to traditional data fusion algorithms that consume non-negligible computing time. \vspace{5pt}\noindent\textbf{Opportunities.}\quad Understanding the pros and cons of each data modality is crucial. As an example, for user localization, Wi-Fi signals (sub-6 GHz) are useful to cope with blockages \cite{Adib2013}, yet can hardly achieve high precision, since the blockages result in NLOS communication. By contrast, mmWave signals (above 6 GHz) are vulnerable to blockages and require denser deployments; as a result, communication is mostly LOS, which enables high-precision localization. This highlights the importance of selecting and matching useful data types. Furthermore, the accuracy and cost of channel prediction hinge on how the hetero-modal data are fused. For instance, compared to average pooling, concatenation consumes more energy due to the increase in model size, in return for achieving higher prediction accuracy. Hence, it is important to optimize the fusion framework, consisting of DNN architectures, training algorithms, and data pre/post-processing, subject to energy requirements. To this end, split learning is a promising framework, in which a DNN is split into multiple subnetworks that are individually stored by each device \cite{Koda:2019ac}. By adjusting the cut layer, one can reliably satisfy each device's energy constraint. \subsection*{Vision-Based Proactive Decision-Making}\label{sec:decision_making} \vspace{5pt}\noindent\textbf{Motivation.}\quad Based on predicted future channel information, one promising approach is to carry out proactive decision-making in wireless systems. For example, by predicting future blockage occurrences at each base station (BS), one can seamlessly hand over users so that they do not experience NLOS channels. In a similar vein, content can be proactively cached at the user prior to a blockage. \vspace{5pt}\noindent\textbf{Related Works.}\quad The effectiveness of predictive beamforming has been demonstrated in \cite{Iimori2020}. Therein, blockage patterns are learned by a statistical learning method, and a proactive beamforming algorithm is applied to reduce the link outage probability.
Moreover, vision-based handover methods have been investigated in \cite{Koda:2019ab}, in which a reinforcement learning (RL) framework learns an optimal mapping from visual data onto handover strategies. This work is elaborated as a selected use case in a later section. \vspace{5pt}\noindent\textbf{Opportunities.}\quad In URLLC, a single proactive decision based on a wrong prediction may have catastrophic consequences. This calls for designing robustness against prediction failures and for increasing prediction accuracy. For example, prediction errors due to video packet loss, dead camera pixels, or stained vision sensors can be reduced by using RF data to reconstruct the distorted visual data, emphasizing the importance of the C2V research direction, to be discussed in the following section. \section*{C2V: Wireless-Aided Computer Vision} \label{sec:RF-CV} Traditional computer vision is based on imagery captured using visible light, and is thus limited to line-of-sight (LOS) settings. Compared to visible light, RF signals are more diffractive, thereby enabling NLOS imaging (e.g., see-through-walls \cite{Adib2013, Doddalla2018}), which is necessary for non-intrusive inspection in mission-critical and time-sensitive applications. Furthermore, by fusing visible imagery and RF signals, one can improve the imagery resolution while reconstructing distorted or occluded objects. From the perspective of this C2V research direction, the rationale, related works, and future opportunities are elaborated next. \subsection*{RF-Based Imaging} \vspace{5pt}\noindent\textbf{Motivation.}\quad Ultra-high frequencies such as the mmWave and THz bands are expected to be a key enabler for high-resolution NLOS imaging. Building walls and floors typically behave, to a first order, as mirrors and reflect high-frequency signals, especially THz signals, which enables seeing behind walls and around corners, assuming sufficient reflection or scattering paths \cite{rappaport2019}. In addition to NLOS imaging, mmWave- and THz-based imaging is less impacted by weather and ambient light than optical cameras. Another advantage is short exposure time: the typical exposure time of an optical camera is several to a few tens of milliseconds, while that of RF imaging is on the order of microseconds, which enables high-speed RF cameras to track fast movements. \vspace{5pt}\noindent\textbf{Related Works.}\quad The feasibility of THz-based NLOS imaging was demonstrated through imaging examples in the 220--330 GHz band using common building materials \cite{Doddalla2018}. A mmWave-based gait recognition method was studied for recognizing persons from their walking postures, which is expected to remain effective under NLOS scenarios \cite{meng2020}. \vspace{5pt}\noindent\textbf{Opportunities.}\quad The severe attenuation of mmWave and THz signals induced by pathloss and blockage limits the coverage of RF-based imaging. In 5G/6G networks based on the mmWave and THz bands, highly directional antennas and dense deployments are exploited to compensate for the signal attenuation, and hence these solutions could be utilized in mmWave/THz-based imaging. However, interference between RF-based imaging and mmWave/THz communication systems remains a critical issue.
For enabling the co-existence of RF-based imaging and communication systems, one can exploit wireless resource scheduling and multiple access mechanisms such as time division multiple access (TDMA), carrier sense multiple access/collision avoidance (CSMA/CA), and non-orthogonal multiple access (NOMA). Another way to mitigate the interference is interference cancellation, which processes the known transmitted imaging or communication signal to generate a negative that, when added to the composite signal, cancels the effect of the interference. \subsection*{Multi-Band RF-Based Imaging} \vspace{5pt}\noindent\textbf{Motivation.}\quad Improving resolution and accuracy is an important challenge in RF-based imaging. To this end, the joint use of multiple signals on different frequency bands is a promising approach. Recent wireless networks can leverage multiple frequency bands. For example, Wi-Fi devices will be able to utilize the sub-GHz, 6 GHz, and mmWave (60 GHz) bands in addition to 2.4 and 5 GHz. Such different frequencies have different propagation characteristics, resulting in different imaging resolutions and FoVs. Thus, cooperatively using multiple signals could improve image resolution and sensing accuracy. \vspace{5pt}\noindent\textbf{Related Works.}\quad Super-resolution of multi-band radar data in the 3--12\,GHz bands and decimeter-level localization leveraging multi-band signals at 900\,MHz, 2.4\,GHz, and 5\,GHz have been studied in \cite{Zhang2014} and \cite{Nandakumar2018}, respectively. These works demonstrated that the cooperative use of multiple signals on different frequency bands can improve imaging resolution or localization accuracy. \vspace{5pt}\noindent\textbf{Opportunities.}\quad As the range of utilized frequency bands increases (e.g.,{} joint use of sub-6\,GHz and mmWave signals), we need to consider the resolution-coverage trade-off. MmWave and THz signals enable high-resolution imaging, but their high attenuation limits the coverage of RF-based imaging. On the other hand, lower-frequency (sub-GHz) signals generate lower-resolution images than mmWave/THz imaging, but their coverage is wider. Therefore, the adaptive use of multiple frequency bands is expected to achieve a better trade-off between resolution and coverage. Moreover, utilizing multiple channels and wider bandwidths could cause severe interference with multiple communication systems and make interference management more difficult. Thus, co-existence mechanisms for communication and imaging systems become even more important. \subsection*{Hetero-modal RF-Based Imaging}\label{sec:multi_modal} \vspace{5pt}\noindent\textbf{Motivation.}\quad Leveraging heterogeneous modalities could be another solution to improve the resolution and reliability of imaging. Smart devices such as smartphones, vehicles, and drones have multiple imaging sensors (e.g.,{} camera and LiDAR) and RF modules (e.g.,{} Wi-Fi, Bluetooth, 4G/5G, and WiGig). These modalities can be exploited cooperatively for imaging and sensing. However, how to integrate the heterogeneous modalities remains an open issue. \vspace{5pt}\noindent\textbf{Related Works.}\quad In computer vision, deep-learning-based multi-modal image fusion has been studied for improving image quality \cite{LIU2018}. Multi-modal images (e.g.,{} visual, IR, CT, and MRI images) are fused based on their pixel values via some fusion rule, which is called pixel-level fusion. There are other fusion approaches: feature-level fusion and decision-level fusion.
In feature-level fusion, prominent features (e.g.,{} edges, corner points, and shapes) are extracted from the different images and combined into a feature map. In decision-level fusion, the different images are pre-processed and leveraged for decision-making separately; the individual decisions are then integrated to provide a more accurate decision. \vspace{5pt}\noindent\textbf{Opportunities.}\quad Although pixel-level fusion generally requires heavier computation than the other fusion levels, it is still widely used in many fields, such as remote sensing, because of its higher accuracy. A major issue in multi-modal imaging is the spatial and temporal misregistration induced by the different image scales, resolutions, and deployment angles and locations of the sensors. Moreover, in RF-based imaging, there could be a new fusion level, namely signal-level fusion, in which RF signals are directly fused and new features are generated for more accurate imaging. The next section details a case of signal-level fusion for predicting a missing part of an image. \section*{Selected Use Cases}\label{sec:use_case} \subsection*{Hetero-Modal mmWave Received Power Prediction}\label{sec:use_case1} As discussed in an earlier section, past image sequences are informative for forecasting sudden LOS and NLOS transitions, which are hardly observable from RF received power sequences. Conversely, past RF received power sequences are informative for predicting future received powers that are highly correlated with the past ones under LOS conditions. To benefit from both modalities and thereby achieve better accuracy, a prediction method fusing the two is studied as follows. \vspace{3pt}\noindent\textbf{Scenario.}\quad Consider a depth camera with a 30\,Hz frame rate monitoring a mmWave link that is intermittently blocked by two moving pedestrians. Our objective is to predict future received powers with a look-ahead horizon of 120\,ms based on a past depth image sequence and a received power sequence. To this end, a split NN architecture is designed to integrate the depth image and received power sequences, thereby performing mmWave received power prediction with the two types of modalities. Specifically, the split NN comprises convolutional layers that extract image features and a recurrent layer that concatenates the sequence of image features and RF received powers and performs time-series prediction of the mmWave received power\,\cite{Koda:2019ab}. \vspace{3pt}\noindent\textbf{Results.}\quad In Fig.~\ref{fig:use_case_non_rf}, showing the prediction accuracy in root-mean-square error (RMSE) under different channel conditions, we demonstrate that the prediction using both images and RF received powers (\textsf{Img+RF}) achieves higher prediction accuracy than the prediction using either one (\textsf{Img} and \textsf{RF}). \textsf{Img+RF} not only predicts LOS/NLOS transitions as well as \textsf{Img} does, but also predicts received powers correlated with the input received power sequence under given LOS and NLOS conditions better than \textsf{Img}. This result demonstrates the benefit of integrating image and RF modalities. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{Figures/Fig4_RGB-D_v2.pdf} \caption{Hetero-modal mmWave received power prediction. The error between the actual \gls{mmw} \gls{rss} and its predicted value by fusing past \gls{rss} data and \gls{rgbd} images.
} \label{fig:use_case_non_rf} \end{figure} \subsection*{Multi-Vision Based Predictive mmWave BS Handover} This section introduces the use case of handover management in mmWave communications to illustrate the importance of vision-based proactive decision-making. \vspace{3pt} \noindent \textbf{Scenario.} \quad Handover can result in disruptions of the communication link, and hence, in determining handover timings, one should be aware not only of current RF conditions, but also of how well each BS performs in the long run, so as to prevent myopic decisions. Traditionally, a handover strategy is formed based on current RF conditions (e.g., channel state or received power); however, RF conditions are not necessarily informative for forecasting sudden transitions between LOS and NLOS conditions. This is where the image modality comes to the rescue, wherein one can form a handover strategy that is aware of the mobility of obstacles and thereby predicts future LOS and NLOS transitions \cite{Koda:2019ac}. Moreover, the recent advancement of RL, namely deep RL, helps achieve this objective by feasibly handling the high dimensionality of the image modality. Consider, for example, a simple scenario with two static BSs and a single static STA. The BS providing higher received power under LOS conditions (termed BS~1) initially associates with the STA. The other BS, termed BS~2, is a candidate to which a handover is performed, thereby compensating for the degraded data rates provided by BS~1 due to LOS blockages. Each BS is equipped with a depth camera providing depth images. The associated BS runs deep RL to learn the optimal \textit{action value} associated with each depth image, which quantifies how well each BS performs over a long time horizon. Thereby, the optimal timing to perform a handover is determined. \begin{figure} \centering \includegraphics[width=\columnwidth]{Figures/DRL_part_v3.pdf} \caption{RL-based predictive handover. Action value for selecting each BS learned in \textsf{Img-RL} using depth images and in \textsf{RF-RL} using RF received powers.} \label{fig:DRL_HO} \end{figure} \vspace{3pt} \noindent \textbf{Results.}\quad Fig.~\ref{fig:DRL_HO} shows the action values learned using images (\textsf{Img-RL}) and RF received powers (\textsf{RF-RL}). The action value for selecting BS~1 learned in \textsf{Img-RL} decreases as a pedestrian approaches the LOS path, while that in \textsf{RF-RL} does not. This result indicates that \textsf{Img-RL} forms a handover policy that is aware of future blockage events. Therein, \textsf{Img-RL} triggers a handover earlier than \textsf{RF-RL}, and thereby avoids the blockage event. Thus, \textsf{Img-RL} exhibits a higher throughput (118\,Mbit/s) than \textsf{RF-RL} (113\,Mbit/s). \subsection*{Hetero-Modal Image Reconstruction}\label{sec:use_case3} This section presents the setting of mmWave-signal-aided image reconstruction as a use case of hetero-modal RF-based imaging. The goal of this work is image inpainting with hetero-modal information, in which a missing part of an image is reconstructed from the defective image and a sequence of mmWave received power values. \vspace{3pt} \noindent \textbf{Scenario.} \quad Consider a depth camera monitoring a mmWave link that is intermittently blocked by two moving pedestrians, where a part of the image is missing due to occlusion or a failure of the camera. The objective is to reconstruct the missing part of the image based on the signal attenuation on the mmWave link.
As shown in the previous use cases, the mmWave signals are strongly attenuated when an obstacle blocks the LOS path, and the timing and intensity of the attenuation suggest where the obstacle moved. A deep auto-encoder is leveraged for the hetero-modal inpainting; a toy sketch of such an architecture is given at the end of this article. The encoder has two input layers: one for the occluded image and one for the sequence of RF received power values. The image input is followed by convolution layers, and the RF input by fully connected layers. The outputs of these layers are concatenated at the end of the encoder part and fed into the decoder part, consisting of convolution layers, which outputs an image including the missing parts. \vspace{3pt} \noindent \textbf{Results.}\quad Fig.~\ref{fig:NLOS_image} depicts samples of the depth-camera image without missing parts (ground truth), the defective image (input image), and images reconstructed from both the input image and the RF signal (\textsf{Img+RF}), or from the RF signal only (\textsf{RF only}). Even though the reconstruction uses limited features of the RF signal -- namely, 32 points of mmWave received power sampled at 66\,ms intervals -- \textsf{Img+RF} renders the missing part of the input images close to the ground-truth images, both when the pedestrian is inside the missing part and when it is not. Such inpainting of imagery with a large missing part is difficult for conventional image inpainting, which leverages only the imagery, because it has no information about the missing part. In contrast, the \textsf{Img+RF} reconstruction can obtain information about the missing part from the RF signals and reconstruct it. Moreover, the \textsf{Img+RF} reconstruction depicts images more accurately than \textsf{RF only}; one can even identify the direction of the pedestrian in the image reconstructed by \textsf{Img+RF}. These results demonstrate the feasibility of image inpainting with RF signals and the benefit of integrating image and RF modalities. \begin{figure} \centering \includegraphics[width=\columnwidth]{Figures/RF_assisted_image_reconstruction_v2.pdf} \caption{Hetero-modal image reconstruction. A missing part of the image is reconstructed from the received power of RF signals and a defective image by a deep auto-encoder.} \label{fig:NLOS_image} \end{figure} \section*{Conclusions}\label{sec:conclusions} This article outlined the vision of fusing computer vision and wireless communication to spearhead the next generation of URLLC toward beyond-5G/6G mission-critical applications. This convergence opens up untapped research directions that go beyond the scope of this paper. An interesting direction is to investigate how much visual information is contained in the RF signals of wireless communications. As demonstrated in the selected use cases, mmWave communication signals contain visual information about obstacles and help image inpainting. However, the current inpainted images are mimicries of the training data, and it is still unclear what information (e.g.,{} shape and location of obstacles) can and cannot be retrieved from RF signals, calling for a novel visual information capacity analysis for a given task. Another interesting direction is the creation and update of a real-time digital replica of the physical space around wireless access points using hetero-modal sensing that combines RGB-D, LiDAR, and RADAR. Such a replica can keep track of and predict the movement of people and objects through space.
The resulting 3D model can also be utilized to perform ray-tracing simulations to predict RF link quality. Moreover, RGB-D cameras can capture the faces and behaviors of mobile users, through which their quality of experience (QoE) can be accurately predicted. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
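As promised above, the following is a toy sketch of the two-branch auto-encoder from the hetero-modal image reconstruction use case -- our own illustration for exposition, not the authors' implementation: the $64\times 64$ image resolution, all layer sizes, and the use of transposed convolutions in the decoder are assumptions.

\begin{verbatim}
# Toy hetero-modal inpainting auto-encoder: a defective depth image and
# 32 mmWave received-power samples are encoded separately, concatenated,
# and decoded into a full image. All layer sizes are illustrative.
import torch
import torch.nn as nn

class HeteroModalInpainter(nn.Module):
    def __init__(self):
        super().__init__()
        self.img_enc = nn.Sequential(            # 1x64x64 -> 64x8x8
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.rf_enc = nn.Sequential(             # 32 power samples -> 128
            nn.Linear(32, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        self.dec = nn.Sequential(                # fused code -> 1x64x64
            nn.ConvTranspose2d(64 + 2, 32, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, img, rf):
        z_img = self.img_enc(img)                # (B, 64, 8, 8)
        z_rf = self.rf_enc(rf).view(-1, 2, 8, 8) # reshape to feature map
        z = torch.cat([z_img, z_rf], dim=1)      # channel-wise concat
        return self.dec(z)

model = HeteroModalInpainter()
out = model(torch.randn(4, 1, 64, 64), torch.randn(4, 32))
print(out.shape)  # torch.Size([4, 1, 64, 64])
\end{verbatim}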
{ "timestamp": "2020-10-14T02:12:19", "yymm": "2010", "arxiv_id": "2010.06188", "language": "en", "url": "https://arxiv.org/abs/2010.06188" }
\section{Introduction} Recall that an $\omega_1$-tree is said to be Souslin if it has no uncountable chain or antichain. In \cite{Fuchs} and \cite{FuchsHamkins}, Hamkins and Fuchs considered various notions of rigidity on Souslin trees and studied the effect of their rigidity notions on the following question: how many branches can Souslin trees, satisfying certain rigidity requirements, add to themselves? In \cite{Fuchs}, Fuchs asks a few questions which motivate this work. In this paper we show that it is possible to have a Souslin tree $S$ which has the highest level of rigidity but adds $\aleph_2$ many branches to itself, where $2^{\omega_1} = \omega_2$. \begin{thm} It is consistent with $\mathrm{GCH}$ that there is a Souslin tree $S$ such that: \begin{itemize}\label{main} \item $\Vdash_S$ ``$S$ is Kurepa'', \item $\Vdash_S$ ``$S \upharpoonright C$ is rigid for all club $C \subset \omega_1$.'' \end{itemize} \end{thm} \noindent Theorem \ref{main} answers all questions in \cite{Fuchs}. We refer the reader to \cite{FuchsHamkins} and \cite{Fuchs} for motivation and history. Our work is also related to a question due to Baumgartner. In \cite{baumgartner_Other}, he proves that under $\diamondsuit^+$ there is a Souslin tree with a lexicographic order which is minimal as a tree and as a linear order. At the end of his construction, he asks whether or not it is possible to construct a minimal Aronszajn line from $\diamondsuit$. Motivated by Baumgartner's question, we show the following proposition. \begin{prop} It is consistent with $\diamondsuit$ that if $S$ is a Souslin tree then there is a dense $X \subset S$ which does not have a copy of $S$. Here $X$ is considered with the inherited tree order from $S$. \end{prop} This proposition shows that it is impossible to follow the same strategy as Baumgartner's in \cite{baumgartner_Other} in order to show that $\diamondsuit$ implies the existence of a minimal Aronszajn line. More precisely, it is impossible to find a lexicographically ordered Souslin tree which is minimal as a tree and as an uncountable linear order. Note that this does not answer Baumgartner's question. \section{Minimality of Souslin trees and $\diamondsuit$} In this section we will show that it is consistent with $\diamondsuit$ that no Souslin tree is minimal. Here a Souslin tree $U$ is said to be minimal if for every uncountable subset $X \subset U$ there is a tree embedding from $U$ into $X$. Note that if we require that $X$ is downward closed then $X$ contains a cone of $U$. Moreover, under $\diamondsuit$ there is a Souslin tree which embeds into all of its cones. Here $\mathbb{P}$ is the countable support iteration $\Seq{P_i, \dot{Q}_j : i \leq \omega_2, j < \omega_2}$, where $Q_j$ is the poset for adding a Cohen subset of $\omega_1$ by countable conditions. Also, $N$ is the set of all successor ordinals between $\omega$ and $\omega_1$. If $U$ is a Souslin tree, without loss of generality we can assume that $Q_0$ adds a generic subset of $U \upharpoonright N$. Let $\dot{X}$ be the canonical $Q_0$-name for the generic subset of $U \upharpoonright N$. Note that if $f: U \longrightarrow U\upharpoonright N$ is a tree embedding then $\operatorname{ht}(t)< \operatorname{ht}(f(t))$. For $p \in \mathbb{P}$, $t \in U$, and $\dot{f}$ a $\mathbb{P}$-name for an embedding from $U$ to $\dot{X}$, let $\varphi(p,t) = \{ s \in U : \exists \bar{p} \leq p \textrm{ } \bar{p} \Vdash \dot{f}(t) = s \}.$ Obviously, if $s \leq t$ then $\varphi(p,s)$ is a subset of the downward closure of $\varphi(p,t)$.
In particular, if $\varphi(p,t)$ is a chain then $\varphi(p,s)$ is a chain. \begin{lem} \label{nonchain} Assume $U$ is a Souslin tree, $p \in \mathbb{P}$ and $\dot{f}$ is a $\mathbb{P}$-name for an embedding from $U$ to $\dot{X}$. Then there is $\alpha \in \omega_1$ such that for all $t \in U \setminus U_{< \alpha} $, $\varphi(p,t)$ is not a chain. \end{lem} \begin{proof} Let $Y_p = \{ y \in U : \varphi(p,y) \textrm{ is a chain} \}.$ Note that $Y_p$ is downward closed, and if it is countable we are done. So assume for a contradiction that $Y_p$ is uncountable. Define $A_p$ to be $ \{ t \in U : p \Vdash t \in \dot{X} \textrm{ or } p\Vdash t \notin \dot{X} \}.$ Obviously, $A_p$ is countable. Fix $\alpha$ greater than $ \sup \{ \operatorname{ht}(a): a \in A_p \}$ and $y \in Y_p \setminus U_{\leq \alpha}$. Since $U$ is an Aronszajn tree and $\varphi(p,y) $ is a chain, we can choose $\beta \in \omega_1 \setminus \sup \{ \operatorname{ht}(s): s \in \varphi(p,y) \}$. Note that $\mathbb{1} \Vdash \operatorname{ht}(y) < \operatorname{ht}(\dot{f}(y))$, so for all $s \in \varphi(p,y)$, $\alpha < \operatorname{ht}(s) < \beta$. Now, extend $p$ to $q$ so that $q \Vdash \dot{X} \cap (U_{\leq \beta} \setminus U_{< \alpha}) = \emptyset.$ But this contradicts the fact that $p \Vdash \dot{f}(y) \in \varphi(p,y)$. \end{proof} \begin{lem}\label{Silver1} Assume $U \in \textsc{V}$ is a Souslin tree and $\mathbb{P}$ is as above. In $\textsc{V}^{\mathbb{P}}$, there is a dense $X \subset U$ which does not have a copy of $U$. \end{lem} \begin{proof} Let $N$ and $\dot{X}$ be as above. It is obvious that $\dot{X}$ is forced to be dense in $U$. We will show that $\mathbb{P}$ forces that $\dot{X}$ has no copy of $U$. Assume for a contradiction that $p \in \mathbb{P}$ forces that $\dot{f}$ is an embedding from $U$ to $\dot{X}$, where $\dot{f}$ is a $\mathbb{P}$-name. Assume $M$ is a suitable model for $\mathbb{P}$ which contains $U, p, \dot{f}$. Also let $\Seq{D_n : n \in \omega}$ be an enumeration of all dense open sets in $M$. Fix $t \in U_{\delta}$, where $\delta = M \cap \omega_1$. For each $\sigma \in 2^{< \omega}$, we inductively define $p_\sigma$, $s_\sigma$ and $t_{|\sigma|}$, such that: \begin{enumerate} \item $t_{|\sigma|}< t $, \item if $\sigma \sqsubset \tau$ then $p_\tau \leq p_\sigma$ and $s_\sigma \leq s_\tau$, \item if $\sigma \perp \tau$ then $s_\sigma \perp s_\tau$, \item $p_\sigma \in D_{|\sigma|} \cap M$, \item $p_\sigma \Vdash \dot{f}(t_{|\sigma|}) = s_\sigma$. \end{enumerate} In order to see how these sequences are constructed, let $t_0 < t$ be arbitrary and $p_{\emptyset}, s_\emptyset$ be such that $p_\emptyset \Vdash \dot{f}(t_0)= s_\emptyset$ and $p_\emptyset \in D_0 \cap M$. Assuming these sequences are given for all $\sigma \in 2^n$, use Lemma \ref{nonchain} to find $ t_{n+1} < t$ such that $\varphi(p_\sigma, t_{n+1})$ is not a chain, for all $\sigma \in 2^n$. Let $s_{\sigma \frown 0}, s_{\sigma \frown 1}$ be in $\varphi(p_\sigma , t_{n+1}) \cap M$ such that $s_{\sigma \frown 0} \perp s_{\sigma \frown 1}$. Now find $p_{\sigma \frown 0} , p_{\sigma \frown 1}$ in $M \cap D_{n+1}$ which are extensions of $p_\sigma$ such that $$p_{\sigma \frown i} \Vdash \dot{f}(t_{n + 1}) = s_{\sigma \frown i}, \textrm{ for } i=0,1.$$ For each real $r \in 2^\omega$, let $p_r$ be a lower bound for $\{ p_\sigma : \sigma \sqsubset r \}$ and let $b_r \subset U \cap M$ be a downward closed chain such that $p_r \Vdash \dot{f}[\{ s \in U : s < t \}] \subset b_r$.
Note that $b_r$ intersects all the levels of $U$ below $\delta$. It is obvious that $p_r$ is an $(M,\mathbb{P})$-generic condition below $p$. Moreover, if $r,r'$ are two distinct reals then $b_r \neq b_{r'}$. Let $r \in 2^\omega$ be such that $U$ has no element on top of $b_r$. Then $p_r$ forces that $\dot{f}(t)$ is not defined, which is a contradiction. \end{proof} Now the following theorem is obvious. \begin{thm} It is consistent with $\diamondsuit$ that for every Souslin tree $U$ there is a dense $X \subset U$ which does not have a copy of $U$. \end{thm} \section{A highly homogeneous Souslin tree which adds a lot of branches to itself} \begin{defn} Let $\Lambda$ be a set of size $\aleph_1$. The poset $Q$ is the set of all conditions $p=(T_p, \Pi_p)$ such that the following holds. \begin{enumerate} \item $T_p \subset \Lambda$ is a countable tree of height $\alpha_p$ such that for all $t \in T_p$ and for all $\beta \in \alpha_p$ there is $s \in (T_p)_\beta$ which is comparable with $t$. \item $\Pi_p = \Seq{\pi_\xi^p: \xi \in D_p}$ such that: \begin{enumerate} \item $D_p$ is a countable subset of $\omega_2$, \item for each $\xi \in D_p$ there are $x,y$ of the same height in $T_p$ such that $\pi_\xi^p : (T_p)_x \longrightarrow (T_p)_y$ is a tree isomorphism. \end{enumerate} \end{enumerate} We let $q \leq p $ if the following holds. \begin{enumerate} \item $T_q$ end-extends $T_p$ and $D_p \subset D_q$. \item For all $\xi \in D_p$, $\pi_\xi^q \upharpoonright T_p = \pi_\xi^p $. \end{enumerate} \end{defn} It is easy to see that the poset $Q$ is $\sigma$-closed. Moreover, for all $\alpha \in \omega_1$, the set of all conditions $q$ with $\alpha_q > \alpha$ is dense in $Q$. Therefore, the generic tree produced by the poset $Q$ is an $\omega_1$-tree. \begin{lem} Assume $\mathrm{CH}$. Then $Q$ has the $\aleph_2$-cc. \end{lem} \begin{proof} Assume $A \subset Q$ has size $\aleph_2$. By thinning $A$ out, if necessary, we can assume that for all $p,q$ in $A$, $T_p = T_q$. Also, the set $\{ D_p : p \in A \}$ forms a $\Delta$-system with root $R$. Moreover, we can assume that $|\{ \Seq{\pi_\xi^p : \xi \in R}: p \in A \}|=1$. Now any two conditions $p,q$ in $A$ are compatible. \end{proof} \begin{lem} Assume $G \subset Q$ is $\textsc{V}$-generic, and $T = \bigcup\limits_{p \in G}T_p$. Then $T$ is a Souslin tree. \end{lem} \begin{proof} Assume $\tau$ is a $Q$-name and $p \in Q$ forces that $\tau$ is a maximal antichain in the generic tree $T$. Assume $M \prec H_\theta$ is countable, $\theta$ is regular, and $\mathcal{P}(Q), \tau$ are in $ M$. Let $p_n = (T_n , \Pi_n)$, $n \in \omega$, be a descending $(M,Q)$-generic sequence such that $p_0 = p$. Let $\pi_\xi^{p_n}= \pi_\xi^n$, $\delta = M \cap \omega_1$, and $R = \bigcup\limits_{n \in \omega} T_n$. Note that $\operatorname{ht}(R) = \delta$ and $M \cap \omega_2 = \bigcup\limits_{n \in \omega}D_{p_n}$. Let $\mathcal{F}$ be the set of all finite compositions $g_0 \circ g_1 \circ \dots \circ g_n$ such that for each $i \leq n$ there is $\xi \in M \cap \omega_2 $ such that either $g_i$ or $g_i^{-1}$ is equal to $\bigcup\limits_{n \in \omega} \pi_\xi^n$. Note that for every $\alpha < \delta$ there is $n \in \omega$ such that $p_n$ decides $\tau \cap R_{< \alpha}$. Let $A$ be the set of all $t \in R$ such that for some $n \in \omega$, $p_n$ forces that $t \in \tau$. Let $\Seq{f_n : n \in \omega}$ be an enumeration of $\mathcal{F}$ with infinite repetition.
Let $\Seq{t_m : m \in \omega}$ be an increasing sequence in $R$ such that whenever $t_m \in \operatorname{dom}(f_n)$, $f_n(t_m)$ is above some element of $A$. Let $q$ be the lower bound for $\Seq{p_n: n \in \omega}$ described as follows. $T_q = R \cup T_\delta$, and for each cofinal branch $c \subset R$ there is a unique $t \in T_\delta$ above $c$ if and only if $\{f_n(t_m): m \in \omega \}$ is cofinal in $c$ for some $n \in \omega$. For each $\xi \in M \cap \omega_2$, let $\pi_\xi^q \upharpoonright R = \bigcup\limits_{n \in \omega} \pi_\xi^n$. Note that this determines $\pi_\xi^q$ on $T_\delta$ as well. It is obvious that $\pi_\xi^q (t)$ is defined for all $t \in T_\delta$. \end{proof} Assume $G \subset Q$ is $\textsc{V}$-generic. We use the following notation in this paper: $T = \bigcup\limits_{p \in G}T_p$, and $\pi_\xi = \bigcup\limits_{p \in G}\pi_\xi^p$. Note that by standard density arguments, if $x \in \operatorname{dom}(\pi_\xi) \cap \operatorname{dom}(\pi_\eta)$ then there is $\alpha > \operatorname{ht}(x)$ such that for all $y \in T_\alpha \cap T_x$, $\pi_\xi (y) \neq \pi_\eta(y)$. This makes the following lemma obvious. \begin{lem} Forcing with $T$ makes $T$ a Kurepa tree. \end{lem} \section{Highly rigid dense subsets of $T$} In this section, $P$ is the poset consisting of all countable partial functions from $\omega_1$ to $2$, ordered by reverse inclusion. If $U$ is an $\omega_1$-tree, we can assume that $P$ is the poset consisting of all countable partial functions from $U$ to $2$. \begin{lem} Assume $U$ is a Souslin tree and $G \subset P$ is $\textsc{V}$-generic. Then $G$ is a dense subset of $U$, and it is a Souslin tree when considered with the tree order inherited from $U$. Moreover, for all clubs $C \subset \omega_1$, $G \upharpoonright C$ is rigid. \end{lem} \begin{proof} Assume for a contradiction that $p \in P$ forces that $\dot{f}: \dot{G} \upharpoonright \dot{C} \longrightarrow \dot{G} \upharpoonright \dot{C}$ is a nontrivial tree embedding. Let $\Seq{M_\xi : \xi \in \omega +1}$ be a continuous $\in$-chain of countable elementary submodels of $H_\theta$ such that $\theta$ is regular and $p, \dot{f},\mathcal{P}(U) $ are in $ M_0$. For each $\xi \leq \omega$, let $\delta_\xi = M_\xi \cap \omega_1$, and fix $t \in U_{\delta_\omega}$. Let $t_n = t \upharpoonright \delta_n$. For each $\sigma \in 2^{< \omega}$ we find $q_\sigma$ and $s_\sigma$ such that: \begin{enumerate} \item $q_\emptyset \leq p$, and $q_\sigma \in M_{|\sigma|+1}$, \item if $\sigma \subset \tau$ then $q_\tau \leq q_\sigma$, \item $q_\sigma$ is $(M_{|\sigma|}, P)$-generic and $q_\sigma \subset M_{|\sigma|}$, \item $q_\sigma$ forces that $\dot{f}(t_{|\sigma|-1}) = s_\sigma$, \item if $\sigma \perp \tau$ then $s_\sigma \perp s_\tau$, \item if $\sigma \subset \tau$ then $q_\tau$ forces that $t_{|\sigma|} \in \dot{G} \upharpoonright \dot{C}$. \end{enumerate} First assume that $q_\sigma$ and $s_\sigma$ are given for all $\sigma \in 2^{n}$ satisfying the conditions above. We find $q_{\sigma \frown 0}, q_{\sigma \frown 1}, s_{\sigma \frown 0}, s_{\sigma \frown 1}$, for each $\sigma \in 2^n$. Let $\bar{q}_\sigma = q_\sigma \cup \{( t_n,1) \}$. Note that $\bar{q}_\sigma \Vdash t_n \in \dot{G} \upharpoonright \dot{C}$. It is easy to see that for all $\sigma \in 2^n$, the set $\{ s \in U : \exists r \leq \bar{q}_\sigma$ $r \Vdash \dot{f}(t_n)=s \}$ is uncountable.
In $M_{n+1}$, find $r_0,r_1$ below $\bar{q}_\sigma$ and $s_{\sigma \frown 0}, s_{\sigma \frown 1}$ such that $s_{\sigma \frown 0} \perp s_{\sigma \frown 1}$ and $r_i$ forces that $\dot{f}(t_n) = s_{\sigma \frown i}$. Now let $q_{\sigma \frown i} < r_i$ be an $(M_{n+1}, P)$-generic condition which is a subset of $M_{n +1}$, and which is in $M_{n + 2}$. Let $r \in 2^\omega$ be such that $\{ s_\sigma : \sigma \subset r \}$ does not have an upper bound in $U$. Let $q_r$ be a lower bound for $\{q_\sigma : \sigma \subset r \}$. Then $q_r$ forces that $\dot{f}(t)$ is not defined, which is a contradiction. \end{proof} The same strategy can be used to prove the following lemma. \begin{lem} Assume $U$ is a Souslin tree and $G \subset P$ is $\textsc{V}$-generic. Then $G$ is a dense subset of $U$, and it is a Souslin tree when considered with the tree order inherited from $U$. Moreover, for all clubs $C \subset \omega_1$ and $x,y \in G$, $G_x \upharpoonright C$ does not embed into $G_y \upharpoonright C$.\footnote{Here $G_x$ is the set of all $t \in G$ which are comparable with $x$.} \end{lem} Recall that $T$ is the homogeneous Souslin tree that we introduced in the previous section. We assume that $P$ adds a generic subset $S \subset T$; in other words, we identify $P$ with the set of all countable partial functions from $T$ to $2$. This is possible because we could take $\Lambda = \omega_1$ in the definition of $Q$. We consider the poset $Q * P * \dot{T}$ over $\textsc{V}$. We now work in $\textsc{V}^Q$. Let $\dot{f}$ be a $P*T$-name for a one-to-one tree-order-preserving map from $\dot{S}_x$ to $\dot{S}_y$, where $x,y$ are distinct elements of $T$. For each $t \in T, u \in T_x$ and $p \in P$ let $\psi (t,u,p)= \{ s \in T:$ for some $t' > t$ and $\bar{p} \leq p$, $(\bar{p},t')$ forces that $u \in \dot{S}$ and $\dot{f}(u) = s \}$. \begin{lem} Let $\dot{f}$ be a $P*T$-name in $\textsc{V}^Q$ for a one-to-one tree-order-preserving map from $\dot{S}_x$ to $\dot{S}_y$, where $x,y$ are distinct elements of $T$. Assume $t \in T$, $u \in T_x$ and $p \in P$. Then there are $t' > t$, $u'>u$ in $T$ such that $\psi(t',u',p)$ is not a chain. \end{lem} \begin{proof} Assume for a contradiction that for all $t' > t$, $u'>u$ in $T$, $\psi(t',u',p)$ is a chain. Without loss of generality we can assume that there is no $v$ above $t$ or $u$ such that $p$ decides $v \in \dot{S}$. For each $t',u'$ above $t,u$ let $\alpha_{t',u'} = \sup \{ \operatorname{ht}_T(s): s \in \psi(t',u',p) \}$. Let $M_0 \in M_1$ be countable elementary submodels of $H_\theta$ such that $\theta$ is a regular cardinal and $p,t',u', \dot{f}$ are in $M_0$. Let $\delta_i = M_i \cap \omega_1$, and choose $\bar{p} \leq p$ such that: \begin{itemize} \item $\bar{p}$ forces that for all $v> u$ of height less than $\delta_1$, $v \in \dot{S}$, but \item for all $v >y$ in $M_1 \setminus M_0$, $\bar{p}$ forces that $v \notin \dot{S}$. \end{itemize} Now let $t_0> t,u_0>u$ be elements in $T \cap (M_1 \setminus M_0)$. Then $$\alpha_{t_0,u_0} \geq \sup \{ \operatorname{ht}_T(s): s \in \psi(t_0,u_0,\bar{p}) \} \geq \delta_1.$$ But this is a contradiction. \end{proof} Now we are ready for the Silver argument. It is obvious that the next lemma finishes the proof of Theorem \ref{main}. \begin{lem} Assume $S$ is the generic subset of $T$ that is added by $P$. Let $x, y$ be two distinct elements of $T$ and $C \subset \omega_1$ be a club. Then $\Vdash_T$ ``$S_x \upharpoonright C$ does not embed into $S_y \upharpoonright C$.''
\end{lem} \begin{proof} Assume $(q,p,t)$ is a condition in $Q *P * \dot{T}$ which forces that $\dot{f}: \dot{S}_x \upharpoonright \dot{C}\longrightarrow \dot{S}_y \upharpoonright \dot{C}$ is a tree embedding. Let $\Seq{M_\xi : \xi \in \omega +1}$ be a continuous $\in$-chain of countable elementary submodels of $H_\theta$ such that $\theta$ is regular and $(q,p,t), \dot{f},\mathcal{P}(T) $ are in $ M_0$. Let $q' \leq q$ be an $(M_\omega, Q)$-generic condition such that $q' \cap M_\xi \in M_{\xi+1}$ is $( M_{\xi }, Q) $-generic and $q' \subset M_\omega$. Let $R= T_{q'}$. By using the previous lemma, for each $n \in \omega$ and $\sigma \in 2^{< \omega}$ we find sequences $t_n \in R$, $u_n \in R$, $p_\sigma \in P$ and $s_\sigma \in R$ such that: \begin{enumerate} \item $p_\emptyset \leq p$, and $p_\sigma \in M_{|\sigma|+1}$, \item if $\sigma \subset \tau$ then $p_\tau \leq p_\sigma$, \item $(q',p_\sigma)$ is $(M_{|\sigma|}, Q*P)$-generic and $p_\sigma \subset M_{|\sigma|}$, \item $(q', p_\sigma, t_{|\sigma|})$ forces that $\dot{f}(u_{|\sigma|-1}) = s_\sigma$, \item if $\sigma \perp \tau$ then $s_\sigma \perp s_\tau$, \item if $\sigma \subset \tau$ then $(q',p_\tau)$ forces that $u_{|\sigma|} \in \dot{S} \upharpoonright \dot{C}$. \end{enumerate} Now let $q''< q'$ be a condition in $Q$ which decides $T_{\delta_\omega}$ and which forces that $u_n,t_n$ have upper bounds in the generic tree $T$. Let $t,u$ be the corresponding upper bounds. Let $r \in 2^{\omega}$ be such that $\{ s_\sigma : \sigma \subset r \}$ does not have an upper bound in $T_{q''} $. Then $(q'',\bigcup\limits_{\sigma \subset r} p_\sigma \cup \{(u,1) \} , t)$ forces that $u \in \dot{S} \upharpoonright \dot{C}$ but $\dot{f}(u)$ is not defined, which is a contradiction. \end{proof}
{ "timestamp": "2020-10-14T02:09:53", "yymm": "2010", "arxiv_id": "2010.06125", "language": "en", "url": "https://arxiv.org/abs/2010.06125" }
"\\section{Introduction}\n\nLanguage models (LMs; \\cite{church-1988-stochastic,kneser1995improved,b(...TRUNCATED)
{"timestamp":"2020-10-28T01:26:50","yymm":"2010","arxiv_id":"2010.06189","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\\label{intro}\n\n\nIsospin symmetry breaking (ISB) in finite nuclei reflect(...TRUNCATED)
{"timestamp":"2021-05-11T02:22:24","yymm":"2010","arxiv_id":"2010.06204","language":"en","url":"http(...TRUNCATED)
"\\section{Background}\n\\label{sec:introduction}\nCooperatively controlling and optimizing multiple(...TRUNCATED)
{"timestamp":"2020-10-14T02:11:05","yymm":"2010","arxiv_id":"2010.06157","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\n \n Scattering methods, using a source of photons, electrons, X-rays, n(...TRUNCATED)
{"timestamp":"2021-04-02T02:16:35","yymm":"2010","arxiv_id":"2010.06126","language":"en","url":"http(...TRUNCATED)
"\n\\section{Introduction}\n\nAs the number of papers in our field increases exponentially, the revi(...TRUNCATED)
{"timestamp":"2020-12-07T02:05:25","yymm":"2010","arxiv_id":"2010.06119","language":"en","url":"http(...TRUNCATED)
End of preview.
