\section{Introduction}
\label{intro}
The dragging of inertial frames, or frame-dragging, is a fundamental and intriguing prediction of Einstein's theory of General Relativity. It has a key role in a number of astrophysical phenomena, including the orientation of jets from active galactic nuclei and quasars and the emission of gravitational waves from colliding black holes (\cite{tho,gw}). In General Relativity, the angular momentum of a central body causes a secular shift of the nodes of a satellite (the intersections of its orbit with the equatorial plane of the central body), and of its periastron (the closest point of its orbit to the central body), around that central body. This is called the Lense-Thirring effect (\cite{lent}). In a number of papers \cite{ciu84,ciu86,ciu89,csr89,asi89,rie89,pet,ciu96}, we described how, by combining the orbital elements of a number of satellites with suitable coefficients, it would be possible to test frame-dragging and the Lense-Thirring effect with an accuracy depending on the number of the satellites' orbital observables used in the analysis and on their accuracy. The technique is described in detail in \cite{ciu96}; here we simply note that the major systematic errors arise from errors in the Earth's even zonal harmonics (the Earth's deviations from spherical symmetry that are symmetric both with respect to the Earth's equatorial plane and about its symmetry axis). In particular, the largest source of systematic error is due to the largest deviation of the Earth from spherical symmetry, its oblateness, described by the even zonal harmonic of degree two, the Earth's quadrupole moment. Indeed, each even zonal harmonic generates a {\itshape classical} (i.e., not General Relativistic) shift of the node of a satellite, and these shifts are dominated by the lowest degree even zonal harmonics and especially by the Earth's quadrupole moment. An idea \cite{ciu86} was to use two laser-ranged satellites with supplementary inclinations to eliminate the error due to the uncertainties of all the even zonal harmonics (this technique will be achieved by the forthcoming LARES 2, Laser Relativity Satellite 2, of ASI - the Italian Space Agency). Another idea was then to use $n$ observables, and in particular the $n$ nodes of $n$ satellites, to both measure the Lense-Thirring effect and eliminate the uncertainties due to the largest $n - 1$ even zonal harmonics: ``Another solution would be to orbit several high-altitude, laser-ranged satellites, similar to LAGEOS, to measure $J_2, J_4, J_6$ etc., and one satellite to measure $\dot{\Omega}^{Lense-Thirring}$'' (p. 3102 of \cite{ciu89}).
A number of tests \cite{ciupav,ciupavper,ciuetal10,ciuetal16} with ever increasing accuracy were then carried out using this last technique, first using the two satellites LAGEOS (1976) of NASA and LAGEOS 2 of ASI and NASA (1992 \cite{coen}), both originally dedicated to space geodesy, and then also including LARES (Laser Relativity Satellite), launched in 2012 by ASI and dedicated to relativity and space geodesy. In 2016, we published \cite{ciuetal16} a test of the Lense-Thirring effect using about 3.5 years of data of LARES, LAGEOS and LAGEOS 2. This test used their three nodal observables to eliminate the error due to the two largest even zonal harmonics, i.e., the Earth quadrupole moment $J_2$, of degree two, and the even zonal of degree four, $J_4$, and to test the Lense-Thirring effect. The formal error, or precision, of our test was about 0.2\% of frame-dragging, whereas the systematic error was estimated to be about 5\%. This systematic error was mainly due to the even zonal harmonics of degree strictly higher than four and was calculated by using the calibrated errors (i.e., including the systematic errors) of the Earth gravity model GGM05S \cite{GGM05S,reisEtAl2016}, which we use to specify the moderately low angular components of the Earth's gravity field. (In our analysis the Earth model GGM05S provided the even zonal harmonics of degree $2n = 6, 8, \dots, 90$. The large-degree harmonics have very little effect on the results.) GGM05S is a state-of-the-art determination of the Earth gravity field, obtained using the space mission GRACE (Gravity Recovery and Climate Experiment), launched in 2002 \cite{gra}. GRACE determined the Earth gravity field and its variations using two spacecraft in polar orbit at an altitude of about 400 kilometers. The pair extracted variations in the gravitational field by accurate ranging to each other.
A recent paper, ``A comment on ``A test of general relativity using the LARES and LAGEOS satellites and a GRACE Earth gravity model'' by I. Ciufolini et al.'', by L. Iorio \cite{ior} (called I2017 in the following), claims, based on a comparison among different Earth gravity field models, that the systematic errors of our 2016 test, due to the Earth's even zonal harmonics of degree 6, 8 and 10, can be as large as 15\%, 6\% and 36\%, respectively. We show below that Iorio is incorrect in these claims. In fact, I2017 mentions the Earth gravity model we use (GGM05S) only three times: once in the abstract, once in section 2.2 and once in the comment: ``It can be noted that Eq. (31) yields a realistic uncertainty for $C_{6,0}$ very close to the simple difference $C_{6,0}$ between the estimated coefficients of ITU\textunderscore GRACE16 and GGM05S''. ITU\textunderscore GRACE16 is another Earth model. Eq. (31) in I2017 calculates a coefficient by differencing ITU\textunderscore GRACE16 and yet another Earth model, GOCO05S. So, on its face, I2017 says nothing directly about the accuracy of GGM05S but uses an arbitrary selection of models to infer the accuracy of the degree 6, 8, and 10 zonal harmonics.
In section 2.1 we show that the systematic errors reported in the paper I2017 are incorrect by some substantial factors. In section 2.2 we show that, with regard to the accuracy of the lowest even zonal harmonics, at least two of the Earth gravity models used in I2017, i.e., JYY\textunderscore GOCE04S and ITU\textunderscore GRACE16, are not comparable in accuracy with the Earth gravity model GGM05S we use, obtained with GRACE. In particular, the lowest harmonics of the model JYY\textunderscore GOCE04S, obtained using data from the space mission Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) \cite{GOCE} $only$, cannot be compared in accuracy with the lowest harmonics of the GRACE and Satellite Laser Ranging (SLR) model GGM05S. GOCE was designed to generate gravity field models with increased accuracy for the higher degree harmonics of the Earth's gravitational field, but it is not comparable in accuracy to GRACE (about an order of magnitude worse) for the lowest harmonics, the ones that dominate the errors in the Lense-Thirring analysis.
I2017 contains other incorrect claims: about the number of significant decimal digits of the coefficients used in our test (claimed to be nine), necessary to eliminate the largest uncertainties in the even zonals of degree 2 and 4, about the non-repeatability of our test, and other minor claims. In section 3, we show that the claim of I2017 that nine significant decimal digits in the coefficients are necessary for the cancellation of the error due to $J_2$ and $J_4$ is not correct; in fact, for a 1\% test of frame-dragging, we only need two or three significant decimal digits. Finally, in Section 3.1, we address the claim of I2017 about the non-repeatability of our test of frame-dragging, and other minor claims.
\section{Erroneous claims of the errors induced by the gravity field uncertainties}
In I2017 the even zonal harmonics $\bar{C}_{6,0}$, $\bar{C}_{8,0}$ and $\bar{C}_{10,0}$ of the gravity field models ITU\textunderscore GRACE16, ITSG\textunderscore Grace2014S, GOCO05S and JYY\textunderscore GOCE04S are compared. (The $\bar{C}_{2n,0}$ are related by a normalization to the even zonal harmonics $J_{2n}$. The explicit relation is given in section 2.1 below.) The difference between each normalized even zonal harmonic, $\bar{C}_{6,0}$, $\bar{C}_{8,0}$ and $\bar{C}_{10,0}$, of each pair of these gravity models is then calculated (see tables 3, 5 and 7 of I2017), and these differences are then propagated into the combination of the nodes of LAGEOS, LAGEOS 2 and LARES to produce a claimed percent error in the measurement of the frame-dragging of their nodes, i.e., of the Lense-Thirring effect (see tables 4, 6, 8 and 9 of I2017).
However, the findings of I2017 are affected by erroneous claims, both numerical and conceptual, as we now show.
\subsection{Numerical miscalculations in I2017}
In I2017 Iorio claims that the errors induced in the test of frame-dragging by the differences in the coefficients $\bar{C}_{6,0}$, $\bar{C}_{8,0}$ and $\bar{C}_{10,0}$ of the four above models are quite large and, for example, the errors induced by the differences in $\bar{C}_{6,0}$ may be as large as 15\% of frame-dragging. Similar claims are made for $\bar{C}_{8,0}$ and $\bar{C}_{10,0}$.
Let us concentrate on the errors due to $\bar{C}_{6,0}$. We use the treatment of the standard text of space geodesy by Kaula \cite{kau}; we have also checked the results with the orbital estimator GEODYN. We find that the secular rate of the node of a satellite due to $\bar{C}_{6,0}$ can be easily calculated as follows.
The Lagrange equation for the rate of change of the node $\Omega$ of a satellite as a function of a disturbing function $R$ is \cite{kau,bert}:
\begin{equation}
\frac{d\Omega}{dt}=\frac{1}{na^{2}(1-e^{2})^{1/2}\sin i}\frac{\partial F}{\partial i}
\end{equation}
where the force function $F$ is given by $F = \frac{G \, M_\oplus}{2a}+R$, $G$ is the gravitational constant, $M_\oplus$ is the Earth mass, and $n, a, e$ and $i$ are respectively the mean motion, semimajor axis, orbital eccentricity and inclination of an Earth satellite.
The disturbing function $R$ depends on the Earth potential $V$ (not including the central term).
The Earth's potential $V$, a solution of the Laplace equation, can be written \cite{kau}:
\begin{equation*}
V=\frac{G \, M_\oplus}{r}\sum _{l=0}^{\infty} \sum _{m=0}^{l} \left(\frac{R_\oplus}{r}\right)^{l} P_{lm}(\sin\phi )[C_{lm}\cos m\lambda + S_{lm}\sin m\lambda]
\end{equation*}
where $P_{lm}(\sin\phi)$ are the Legendre associated functions, $r$, $\phi$ and $\lambda$ are respectively radial coordinate, latitude and longitude measured eastward, $l$ and $m$ are degree and order of the spherical harmonic, and $C_{lm}$ and $S_{lm}$ are respectively the cosine and sine coefficients of the spherical harmonic potential term.
The term of the Earth potential of degree 6 and order 0, $V_{60}$, due to the even zonal harmonic $\bar{C}_{6,0}$ can be written \cite{kau,bert,geo}:
\begin{equation}
V_{60}=\frac{G \, M_{\oplus} \, R_{\oplus}^{6}}{a^{6+1}} \sum_{p=0}^{6}F_{60p}(i) \sum_{q=-\infty}^{\infty} G_{6pq}(e) S_{60pq}(\omega , M, \Omega)
\end{equation}
where:
\begin{dmath}
S_{60pq}= \sqrt{13}\, \bar{C}_{6,0} \cos [(6 - 2p) \omega + (6 - 2p + q) M]
\end{dmath}
\noindent and $R_{\oplus}$, $\omega$ and $M$ are respectively the Earth radius, the satellite's argument of perigee and its mean anomaly.
$\bar{C}_{6,0}$ is the normalized even zonal harmonic coefficient of degree 6 and order 0.
The normalized even zonal harmonic coefficients, $\bar{C}_{2n, \, 0}$, the ones usually provided in the Earth gravity field models, are related to the denormalized coefficients $C_{2n, \,0}$ by the simple relation $C_{2n, \,0} \equiv \sqrt{4n + 1}\, \bar{C}_{2n, \, 0}$.
For example $C_{20} = -1.0826 \cdot 10^{-3}$ and $\bar{C}_{20} = -4.8417 \cdot 10^{-4}$, and
$C_{6,0} = -5.40743 \cdot 10^{-7}$ and
$\bar{C}_{6,0} = -1.49975 \cdot 10^{-7}$, i.e.,
for the degree six even zonal harmonic: $C_{6,0} \equiv \sqrt{13}\, \bar{C}_{6,0}$
(the non-normalized even zonal harmonic coefficients, usually written with the notation $J_{2n}$,
are equal to the $C_{2n, \, 0}$ coefficients with a minus sign, e.g., the quadrupole coefficient
$J_{2}$ is $J_2 = 1.0826 \cdot 10^{-3}$).
By considering only the secular rate of the node of a satellite due to the even zonal harmonic of degree 6, $\bar{C}_{6,0}$, we then have:
\begin{equation}
V_{60} = \frac{G M_{\oplus} \sqrt{13}\bar{C}_{6,0} }{a}\left( \frac{R_{\oplus}}{a} \right)^{6} F_{603}(i) G_{630}(e)
\end{equation}
The functions $F_{603}(i)$ and $G_{630}(e)$ can be easily calculated using the recursive formulae of Kaula and are given by $F_{603}=-\frac{5}{16}+\frac{105\sin^{2} i}{32}-\frac{945\sin^{4} i}{128}+\frac{1155\sin^{6} i}{256}$ and $G_{630}=\frac{1+5e^{2}+\frac{15e^{4}}{8}}{(1-e^{2})^{11/2}}$.
Finally inserting $F_{603}(i)$ and $G_{630}(e)$ in Eqs. 1 and 4, we have the secular nodal rate due to $\bar{C}_{6,0}$:
\begin{dmath}
\frac{d\Omega_{6,0}}{dt} = \frac{105 \left(1+5e^{2}+\frac{15 e^{4}}{8}\right) n R_{\oplus}^{6}\sqrt{13}\, \bar{C}_{6,0}}{16\,a^{6}(1-e^{2})^{6}} \cdot \cos i \left(1-\frac{9\sin^{2} i}{2}+\frac{33\sin^{4} i}{8}\right)
\end{dmath}
By inserting in the nodal rate the orbital parameters, semimajor axis, $a$, eccentricity, $e$, and inclination, $i$, of the three satellites:
$ a_{LARES} \cong$ 7820 km, $ e_{LARES} \cong$ 0.0008, and $ i_{LARES} \cong$ 69.5$^{\circ}$;
$ a_{LAGEOS} \cong$ 12,270 km, $ e_{LAGEOS} \cong$ 0.0045, and $ i_{LAGEOS} \cong$ 109.84$^{\circ}$, and
$ a_{LAGEOS \, 2} \cong$ 12,163 km, $ e_{LAGEOS \, 2} \cong$ 0.0135, and $ i_{LAGEOS \, 2} \cong$ 52.64$^{\circ}$; we have:
\begin{eqnarray*}
\frac{d\Omega_{LAGEOS}}{dt} = -1.18019 \cdot 10^{11} \cdot \bar{C}_{6,0}\; mas/yr\\
\frac{d\Omega_{LAGEOS \, 2}}{dt} = -1.78652 \cdot 10^{11} \cdot \bar{C}_{6,0}\; mas/yr\\
\frac{d\Omega_{LARES}}{dt} = 3.27064 \cdot 10^{12} \cdot \bar{C}_{6,0}\; mas/yr\\
\end{eqnarray*}
where mas stands for milliarcseconds. Combining the nodal rates of LAGEOS, LAGEOS 2 and LARES due to $\bar{C}_{6,0}$, using the formula that eliminates the $\bar{C}_{2,0}$ and $\bar{C}_{4,0}$ contributions to the combined nodal rates (see formula (9) of section 3 below), we have:\\
\begin{strip}
\begin{dmath}
{ \Omega}^{6,0}_{LAGEOS} + c_1 { \Omega}^{6,0}_{LAGEOS 2} +
c_2 { \Omega\/}^{6,0}_{LARES} = (-1.18019 \cdot 10^{11} - c_1 \cdot 1.78652 \cdot 10^{11} + c_2 \cdot 3.27064 \cdot 10^{12}) \cdot \bar{C}_{6,0}\; mas/yr =
5.91029 \cdot 10^{10} \cdot \bar{C}_{6,0} \; mas/yr
\end{dmath}
\end{strip}
\noindent where $c_1 = 0.345$ and $c_2 = 0.073$.
Finally, the largest $\bar{C}_{6,0}$ difference in Iorio's Table 3 (I2017) is GOCO05S - ITU\textunderscore GRACE16: $\Delta \bar{C}_{6,0} = 3.197 \times 10^{-11}$ in magnitude. Using this difference, the error in the combined nodal rates of LAGEOS, LAGEOS 2 and LARES due to the difference between the $\bar{C}_{6,0}$ coefficients of GOCO05S and ITU\textunderscore GRACE16 is 1.89 mas/yr.
Since the combined frame-dragging effect is about ${ \Omega}^{Lense-Thirring}_{combination} = 30.657 + c_1 \cdot 31.481 + c_2 \cdot 118.421 \; mas/yr = 50.16 \; mas/yr$, the final relative percent error is just:
\begin{equation}
\frac{1.89 \; mas/yr} {50.16 \; mas/yr} = 3.75 \% \; { \Omega}^{Lense-Thirring}_{combination},
\end{equation}
\noindent an error about four times smaller than the 15\% erroneously claimed in I2017, and within our 5\% estimated systematic error. Other entries in I2017 Table 3 are smaller (or much smaller) than the GOCO05S - ITU\textunderscore GRACE16 difference; the effect on the error is linear in the differences, so this result bounds the Lense-Thirring error estimate derived from the $\bar{C}_{6,0}$ differences.
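These figures are easy to verify numerically. The following Python sketch is a minimal check, not our analysis pipeline (which uses the orbital estimator GEODYN); it assumes the standard values $GM_{\oplus} = 3.986004418 \cdot 10^{14} \; m^3/s^2$ and $R_{\oplus} = 6378.137$ km, which are not specified in the text. It evaluates the $\bar{C}_{6,0}$ nodal-rate coefficients of Eq. (5) for the three satellites, combines them with $c_1$ and $c_2$, and propagates the largest $\bar{C}_{6,0}$ difference of Table 3 of I2017:
\begin{verbatim}
# Minimal check of the C_{6,0} error budget; GM_earth and R_earth are
# standard assumed values, not taken from the paper.
import math

GM, R = 3.986004418e14, 6378137.0            # [m^3/s^2], [m]
MAS_PER_RAD = 180 / math.pi * 3600 * 1000    # milliarcseconds per radian
SEC_PER_YR = 365.25 * 86400.0

def node_rate_C60(a, e, i_deg):
    """Coefficient of C_{6,0} in the nodal rate of Eq. (5), in mas/yr."""
    i = math.radians(i_deg)
    n = math.sqrt(GM / a**3)                 # mean motion [rad/s]
    rate = (105.0 / 16 * (1 + 5*e**2 + 15*e**4/8) * n * (R/a)**6
            * math.sqrt(13) / (1 - e**2)**6
            * math.cos(i) * (1 - 4.5*math.sin(i)**2 + 33/8*math.sin(i)**4))
    return rate * MAS_PER_RAD * SEC_PER_YR

lageos  = node_rate_C60(12270e3, 0.0045, 109.84)  # ~ -1.18e11
lageos2 = node_rate_C60(12163e3, 0.0135,  52.64)  # ~ -1.79e11
lares   = node_rate_C60( 7820e3, 0.0008,  69.5)   # ~ +3.27e12

c1, c2 = 0.345, 0.073
combined = lageos + c1*lageos2 + c2*lares     # ~ 5.91e10 mas/yr per unit C_{6,0}
err = abs(combined * 3.197e-11)               # largest difference in I2017 Table 3
print(err, 100 * err / 50.16)                 # ~1.89 mas/yr, ~3.8% of frame-dragging
\end{verbatim}
The printed values reproduce, to within rounding, the 1.89 mas/yr and the percent error derived above.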
Similar calculational/numerical errors affect the other values listed in tables 4, 6, 8 and 9 of I2017. Continuing our analysis of the differences, we find the percentage uncertainty arising from the difference in $C_{8,0}$ to be $3\times 10^{-3}\%$, compared to $2\times 10^{-2}\%$ in I2017. For the percentage uncertainty arising from the $C_{10,0}$ difference, we find, in agreement with I2017, $\approx 3\%$. Obviously, adding the uncertainties arising from $C_{6,0}$, $C_{8,0}$, and $C_{10,0}$ would lead to $\approx 6.75\%$ added in absolute value, and about $4.8\%$ added in quadrature. The discussion just above concerns the models GOCO05S and ITU\textunderscore GRACE16. Neither is the model GGM05S that we use, but GOCO05S is very similar to GGM05S and has similarly good low-multipole accuracy. ITU\textunderscore GRACE16 has much poorer low-multipole accuracy and, as we have just seen, this leads to an estimated frame-dragging uncertainty in the $5\%$ to $7\%$ range arising from differencing $C_{6,0}$, $C_{8,0}$, and $C_{10,0}$ between GOCO05S and ITU\textunderscore GRACE16.
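A one-line check of the two combined figures just quoted, using the three percent errors above:
\begin{verbatim}
# Combined uncertainty from the C_{6,0}, C_{8,0}, C_{10,0} percent errors.
errs = [3.75, 3e-3, 3.0]
print(sum(errs))                        # ~6.75% (sum of absolute values)
print(sum(e**2 for e in errs) ** 0.5)   # ~4.8%  (sum in quadrature)
\end{verbatim}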
The strongest claim made in I2017 involves the differences in $C_{10,0}$ between the model JYY\textunderscore GOCE04S and the other three models considered in I2017. These $C_{10,0}$ differences would lead to frame-dragging uncertainties of order $30\%$. (However, JYY\textunderscore GOCE04S is about an order of magnitude less accurate than state-of-the-art models in the low multipoles; see Fig. 1. I2017's calculations are also erroneous here: I2017's Table 8, last column (JYY\textunderscore GOCE04S), should read $32\%$, $29\%$, $32\%$.) Once these and other computational errors in I2017 are corrected, these $\approx 30\%$ differences dominate Iorio's claims for large ``uncertainties''. But reviewing I2017's Tables 6 and 8 most clearly shows that the model JYY\textunderscore GOCE04S is an outlier; the fault lies with JYY\textunderscore GOCE04S (see Fig. 1). I2017's claims based on this outlier are not credible.
It is worth mentioning that in the comparison of ITU\textunderscore GRACE16 and GGM05S, the effective epoch of the zonals can be different, which is relevant if they have a linear time dependence (seasonal and tidal variations do not have a significant impact on the results). GGM05S was determined with GRACE data spanning April 2003 to May 2013 (making the effective epoch $\sim$ 2008), while ITU\textunderscore GRACE16 used GRACE data from April 2009 to October 2013 (making the effective epoch $\sim$ 2011). Taking into account the linear drift (as determined from the full GRACE time series currently available) over the 3-year epoch difference in $C_{6,0}$, $C_{8,0}$, and $C_{10,0}$, we find that the differences between the two geopotential models are in fact reduced by a factor of 3 or more, suggesting an even closer level of agreement than simply differencing the coefficients as published. \footnote{A minor point is also that the absolute values of the differences of $\bar{C}_{6,0}$, for example for GOCO05S and JYY\textunderscore GOCE04S, and for ITU\textunderscore GRACE16 and JYY\textunderscore GOCE04S, provided respectively with three and four significant digits in I2017, are erroneous. For example, the value of $\bar{C}_{6,0}$ for model GOCO05S is evaluated at epoch January 1, 2008, neglecting annual variations. For model ITU\textunderscore GRACE16 the $\bar{C}_{6,0}$ value represents a mean for the period April 2009 to October 2013, and for model JYY\textunderscore GOCE04S a mean for the period November 2009 to October 2013: $\bar{C}_{60_{GOCO05S}} = -1.499663394539 \cdot 10^{-7}$, $\bar{C}_{60_{ITU\textunderscore GRACE16}} = -0.149998273044598 \cdot 10^{-6}$ and $\bar{C}_{60_{JYY\textunderscore GOCE04S}} = -0.1499850880263456 \cdot 10^{-6}$, so that their differences can be computed as in I2017:
$\bar{C}_{60_{GOCO05S}} - \bar{C}_{60_{JYY\textunderscore GOCE04S}} = 1.87486 \cdot 10^{-11}$ and
$\bar{C}_{60_{ITU\textunderscore GRACE16}} - \bar{C}_{60_{JYY\textunderscore GOCE04S}}= 1.3185 \cdot 10^{-11} $.
However, in table (3) of I2017, these differences are respectively incorrectly quoted as $1.37 \cdot 10^{-11}$ and $1.827 \cdot 10^{-11}$. These are not major errors but do influence the results to some degree.}
\subsubsection{\bfseries{Other inconsistent results in the publications by Iorio}}
It is curious that Iorio, in similar past papers \cite{ior1,ior2,ior3}, has produced results quite at variance with the present one in I2017, and with each other.
For example, in 2005 \cite{ior1}, Iorio used the same technique that we applied to get a 5\% test of frame-dragging \cite{ciuetal16} to predict a ``{\itshape reliable}'' 1\% test of frame-dragging: ``{\itshape . . . by inserting the new spacecraft in a relatively low, and cheaper, orbit ($a$ = 7500 - 8000 km, $i \simeq 70$ deg) and
suitably combining its node with those of LAGEOS and LAGEOS II in order to
cancel out the first even zonal harmonic coefficients of the multipolar expansion of
the terrestrial gravitational potential $J_2, J_4$ along with their temporal variations.
The total systematic error due to the mismodelling in the remaining even zonal
harmonics would amount to $1\%$ and would be insensitive to departures of the
inclination from the originally proposed value of many degrees}'' \cite{ior1}.
But in a 2009 paper
\cite{ior2} he claimed that the total measurement uncertainty of frame-dragging
including the LARES satellite, could range from $1000\%$ to $100\%$: ``{\itshape The low altitude of LARES, $1450 km$
with respect to about $6000 km$ of LAGEOS and LAGEOS II,
will make its node sensitive to much more even zonals than its two already orbiting
twins; it turns out that, by using the sigmas of the covariance matrices of some of
the latest global Earth's gravity solutions based on long data sets of the dedicated
GRACE mission, the systematic bias due to the mismodeled even zonal harmonics up to $l = 70$ will amount to $100 - 1000\%$}'' \cite{ior2}.
Later on, in 2011 \cite{ior3}, for the same orbit of the LARES satellite: ``{\itshape If, instead, one assumes $J_l$,
$l = 2, 4, 6, \dots$, i.e., the standard deviations of the sets of all the best estimates of $J_l$
for the models considered here the systematic bias, up to $l = 60$, amounts to $12\%$
(SAV) [sum of absolute values] and $6\% (RSS)$ [root sum squared]. Again, also this
result may turn out to be optimistic for the same reasons as before.}''
Other similar papers reported an uncertainty of 29\% for the LARES experiment \cite{ior4}.
Similar contradictory statements, with huge differences in the claimed uncertainty of the test of frame-dragging with the
LAGEOS and LAGEOS 2 satellites, published between 2003 and 2011, can be found
in other papers by the same author. In summary, over about a decade the author of I2017 has published
error budgets for the same LARES experiment that range from 1000\% to 1\%, with a number of figures in between.
\subsection{Conceptual shortcomings of differencing the lowest even zonals of different Earth gravity field models}
In I2017 the differences between the even zonals of different Earth gravity field models are calculated, and these differences are then propagated into the nodal rates to find the total uncertainty in the measurement of frame-dragging.
However, as we remarked in a number of papers \cite{ciuetal10}, it makes $no$ sense to compare Earth gravity models obtained with different techniques that have different intrinsic accuracies (that is, including systematic errors and not simply formal errors), and especially models that have different accuracies in the lowest harmonics. Indeed, the accuracy of the lowest even zonal harmonics of an Earth gravity field model obtained with data of GOCE $only$, such as JYY\textunderscore GOCE04S, cannot be compared to the accuracy of the lowest harmonics of models obtained with GRACE and SLR. Furthermore, the accuracy of the lowest harmonics of a model obtained with an energy integral method, such as ITU\textunderscore GRACE16, should not be compared to that of GGM05S; energy integral methods incorporate only instantaneous position determinations, without equations of motion to interpolate between subsequent measurements. For this reason, of the four Earth models (ITSG\textunderscore Grace2014S, GOCO05S, ITU\textunderscore GRACE16, JYY\textunderscore GOCE04S) used in I2017, only the lowest harmonics of ITSG\textunderscore Grace2014S and GOCO05S are comparable in accuracy to those of GGM05S. (We reiterate that I2017 does not carry out this comparison.)
Let us explain this point in detail. Satellite gravity gradiometry (SGG) is a very powerful technique for the direct observation of higher-order functionals of the gravitational potential, rather than inferring them from their perturbing effects on satellite orbits. This is very nicely discussed in several articles, e.g., \cite{rum}. One of the drawbacks of SGG, however, is the fact that the observations are primarily sensitive to a range of frequencies of the geopotential, those that correspond to the measurement band of the specific instrument used. In the case of the GOCE mission, because of restrictions on the development of the gradiometer, the useful bandwidth was from $5 \cdot 10^{-3}$ Hz to $0.1$ Hz. In the end, the very long wavelength components of the field cancel out in the measurement process as ``common mode'' effects that cannot rise above the noise of the instrument.
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{0-90_mod.eps}
\caption{We compare two GOCE-only gravity models (GOSG01S \cite{GOS} and JYY\textunderscore GOCE04S) as well as GGM05S with EIGEN6C4 \cite{EIGEN6} (a ``combination model'' which incorporates SST and SLR input to obtain highly accurate low-degree Earth gravity determinations - see text) \cite{reisEtAl2016,xuEtAl2017,yiEtAl2013,forsteEtAl2014}. The square-root variance (or RMS) is plotted as a function of the geopotential degree (the value of $2n$ in the symbol $J_{2n}$ of a multipole). The lower degrees represent longer wavelength features of the gravity field. Included on the plot are the estimated errors assigned to JYY\textunderscore GOCE04S and to GGM05S, which appear to be consistent with the actual errors as realized by their differences with EIGEN6C4. At the higher degrees, the GOCE-based models perform slightly better than GRACE models, but for the purpose of the Lense-Thirring analysis, only the lowest degrees are relevant. It is clear that for the GOCE-only models, the lower degree terms are about an order of magnitude less accurate. They obviously perform even worse for degrees 10 to 16.}
\label{fig:1}
\end{figure*}
This results in SGG requiring some external information for the long wavelength (low degree) part of the field. This is the reason why those who use SGG data resort to adding-in Satellite-to-Satellite Tracking (SST) data, from which they can obtain the required information for the complete recovery of the field, from the lowest to the highest degree possible. In most cases the SST part comes from ``high-low'' Global Navigation Satellite System (GNSS) observations between the spacecraft carrying the gradiometer and GNSS spacecraft, and in such cases, the orbits are usually done in Precise Kinematic mode, which means that there are no equations of motion involved and the positions are determined independently at each observation point. This causes further degradation of the information contained in the very long wavelength part of these models. In other cases the information is derived from ``low-low'' SST, e.g. between the two GRACE spacecraft, using the ultra-precise K/Ka-Band Ranging (KBR) system, the same one used to produce the GRACE models. In that case of course the resulting model is a mix of GRACE and GOCE products, where the long wavelength info comes from the GRACE data and the higher degree part from GOCE, with the intermediate wavelengths being a region where both systems contribute.
Over the past two decades it has also been recognized that, due to the mass redistribution of the Earth system, the geopotential field is not static; rather, it exhibits variations at all frequencies, spatial and temporal. Because of this, it is now customary that when one develops a model, these variations are either estimated simultaneously or forward-modeled on the basis of the best available models. For the longest wavelength components, represented by the very low degree zonals, these are the estimates that we obtain from the analysis of several SLR missions covering several years, and these are part of the GRACE mission models. Obviously, models that are based on kinematic orbits (e.g. ITU\textunderscore GRACE, JYY\textunderscore GOCE) and use data over a short period of time are not able to determine these temporal variations; even worse, in most cases they do not even account for them, making it impossible to reference their coefficients to a specific date for comparison with models that are derived for a specific date (e.g. the GRACE mission models). Because of the high precision of the new techniques and the increase in modeling accuracy, temporal variations are now clearly visible up to high degrees and orders, so that comparison of models without careful consideration of these variations does not make any sense. GRACE has dealt with this issue by carefully developing a ``de-aliasing'' product that accounts for atmospheric, oceanic and similar variations, so that the recovered variations can be ascribed to hydrological sources. Due to this specificity, it is no longer meaningful to use a single value and a linear rate to model even the very long wavelength components of the field (e.g. $J_2$). We now use a time-series of 15-day averaged values (sometimes even weekly estimates), in order to capture the effect of high frequency modulations caused by mass redistribution. One needs to be careful that these time series are derived using the same higher order model as in the case of the GRACE mission products, so that the ensemble represents the same potential field at all times (including the tidal part of course).
It is in the nature of the gravity gradiometer data from GOCE that the measurement errors dominate at the longer wavelength (lower degree) components of the gravity field. In Fig. 1, we compare two GOCE-only gravity models (GOSG01S and JYY\textunderscore GOCE04S) as well as GGM05S with the combination model EIGEN6C4. The square-root variance (or RMS) is plotted as a function of the geopotential degree (the value of $2n$ in the symbol $J_{2n}$ of a multipole). The lower degrees represent longer wavelengths and higher degrees reflect the shorter wavelength features of the gravity field. Included on the plot are the estimated errors assigned to JYY\textunderscore GOCE04S and to GGM05S, which appear to be consistent with the actual errors as realized by their differences with EIGEN6C4. At the higher degrees, the GOCE-based models perform slightly better than GRACE-only models, but for the purpose of the Lense-Thirring analysis, only the lowest degrees are relevant. It is clear that for the GOCE-only models, the lower degree terms are about an order of magnitude less accurate and cannot rationally be used to judge the accuracy of gravity models that are based on GRACE data \footnote{All models mentioned are available at: http://icgem.gfz-potsdam.de/tom, along with the related documentation} \cite{reisEtAl2016,xuEtAl2017,yiEtAl2013,forsteEtAl2014}.
\begin{table*}[h!]
\centering
\begin{tabular}{p{7.5cm}|p{2.5cm}p{2.5cm}p{2.5cm}}
& $\bar{C}_{6,0}$ & $\bar{C}_{8,0}$ & $\bar{C}_{10,0}$ \\\hline
Difference (absolute value) of GGM05S with ITSG\textunderscore Grace2014S & $5.72392 \cdot 10^{-13}$ & $9.35295 \cdot 10^{-13} $ & $2.80392 \cdot 10^{-12}$ \\\hline
Difference (absolute value) of GGM05S with GOCO05S & $8.84729 \cdot 10^{-12}$ & $2.74188 \cdot 10^{-12}$ & $2.28925 \cdot 10^{-12}$ \\
\end{tabular}
\caption{Difference of the even zonal harmonics $\bar{C}_{6,0}$, $\bar{C}_{8,0}$ and $\bar{C}_{10,0}$ between GGM05S, and ITSG\textunderscore Grace2014S and GOCO05S}
\end{table*}
\begin{table*}[h]
\centering
\begin{tabular}{p{7.5cm}|p{2.5cm}p{2.5cm}p{2.5cm}}
& $\bar{C}_{6,0}$ & $\bar{C}_{8,0}$ & $\bar{C}_{10,0}$ \\\hline
Absolute value of the error propagated into the combination of the nodes of LAGEOS, LAGEOS 2 and LARES due to the difference between GGM05S and ITSG\textunderscore Grace2014S, in units of mas/yr & 0.0339082 & 0.00296451 & 0.258112 \\\hline
Absolute value of the error propagated into the combination of the nodes of LAGEOS, LAGEOS 2 and LARES due to the difference between GGM05S and GOCO05S, in units of mas/yr & 0.524109 & 0.00869065 & 0.210734 \\
\end{tabular}
\caption{Error propagated into the nodes of LAGEOS, LAGEOS 2 and LARES due to the differences between GGM05S and ITSG\textunderscore Grace2014S and GOCO05S for each coefficient $\bar{C}_{6,0}$, $\bar{C}_{8,0}$ and $\bar{C}_{10,0}$.}
\end{table*}
\begin{table*}[h]
\centering
\begin{tabular}{p{3.5cm}|p{10.5cm}}
& Total percent error relative to the combined frame-dragging effect \\\hline
ITSG\textunderscore Grace2014S & 0.588\% \\
GOCO05S & 1.48\% \\
\end{tabular}
\caption{Total error (sum of each absolute value) propagated into the combination of the nodes of LAGEOS, LAGEOS 2 and LARES relative to the combined frame-dragging effect of LAGEOS, LAGEOS 2 and LARES (about 50.465 mas/yr)}
\end{table*}
Naturally, the approach adopted in deriving a model and the amount of proper accounting of other-than-gravity variations of the ``observed'' field affect the accuracy of the derived model. The ``formal'' covariance that comes out as a product of a least squares estimation has very little to do with the true accuracy of the model. Calibrating this covariance matrix is usually the most time-consuming effort for most of the highest accuracy models and the developers make sure to report that process in detail when delivering their models. There are very few models that provide all the information required to judge them in a relative comparison to other models with similar information. Unfortunately, a blindly executed direct comparison ignoring all the details behind the development of two models, the reference epoch of the harmonic coefficients, the background models used, etc., most certainly leads to incorrect and unacceptable conclusions. Even models that are seemingly derived from similar data and using even the same technique, if they are based on data collected over two different time periods (even if of equal length), will be significantly different if the temporally varying parts are not appropriately handled in both cases. This reason alone ought to be enough to force a very strict approach in making comparisons between models.
A simple difference of the corresponding coefficients is definitely the wrong approach; in particular, one should not compare the lowest harmonics of ITSG\textunderscore Grace2014S and GOCO05S with those of ITU\textunderscore GRACE16 and JYY\textunderscore GOCE04S (this last gravity model being obtained with GOCE $only$), and one should not propagate these differences into the nodal rates to evaluate the uncertainty in the test of frame-dragging, as done in I2017. ITSG\textunderscore Grace2014S and GOCO05S are models designed to be accurate for the low-order harmonics, so for completeness, in the next section we report the errors obtained by differencing the lowest harmonics of ITSG\textunderscore Grace2014S and GOCO05S against the model GGM05S we use and then propagating these differences into the nodal rates. This approach fully confirms the error budget of our test of frame-dragging. (To reiterate, I2017 did not consider comparisons to GGM05S.)
\subsection{Errors induced by the gravity field uncertainties}
We wish to compare the gravity field models ITSG\textunderscore Grace2014S and GOCO05S with GGM05S. Therefore, we took the differences between each of the harmonics $\bar{C}_{6,0}$, $\bar{C}_{8,0}$ and $\bar{C}_{10,0}$ of GGM05S and the corresponding harmonics of the gravity field models ITSG\textunderscore Grace2014S and GOCO05S (the differences are reported in Table 1). We then propagated these differences into the combination of the nodal rates (Table 2), and we finally added the absolute values of the errors due to each difference of each coefficient of these two gravity models and compared the result to the frame-dragging effect.
The results, shown in Table 3, estimate the uncertainty of the measurement of frame-dragging with GGM05S by modeling the errors as (schematically) GGM05S - ITSG\textunderscore Grace2014S and GGM05S - GOCO05S.
The results shown in Table 3 are fully consistent with the systematic error budget of about 5\%, or less, for our test of frame-dragging \cite{ciuetal16}; in fact they are substantially smaller than that $5\%$ estimate.
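The $\bar{C}_{6,0}$ column of Table 2 can be checked directly using the combined nodal-rate coefficient $5.91029 \cdot 10^{10}$ mas/yr derived in section 2.1; a minimal sketch follows (the small residual discrepancies with Table 2 reflect rounding of the combined coefficient):
\begin{verbatim}
# Check of the C_{6,0} column of Table 2 from the differences of Table 1.
coeff = 5.91029e10   # combined coefficient [mas/yr per unit of C_{6,0}]
for model, dC60 in [("ITSG_Grace2014S", 5.72392e-13),
                    ("GOCO05S",         8.84729e-12)]:
    print(model, abs(coeff * dC60))   # ~0.0338 and ~0.523 mas/yr (cf. Table 2)
\end{verbatim}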
\section{The number of decimal digits of the coefficients $c_1$ and $c_2$ erroneously claimed in I2017 to be necessary}
In I2017, it is claimed that ``the numerical values of $c_1$, $c_2$ in Eqs. (14), (15) are quoted with nine decimal digits in order to assure a cancelation of $J_2$ accurate to better than 1\% level.''
Let us first explain why these coefficients are needed and how they are calculated. Our analysis is performed in the following way.
(1) We first obtain the residuals of the nodes of LAGEOS, LAGEOS 2 and LARES by using the experimental data, i.e., the Satellite Laser Ranging (SLR) observations of these satellites, and by using, independently, the orbital estimators GEODYN (NASA), EPOS-OC (GFZ) and UTOPIA (CSR-UT). (The three estimators give consistent results.) The orbital residuals are the difference between the $observed$ orbital elements of a satellite, obtained by fitting the SLR observations using the three independent orbital estimators, and the $calculated$ orbital elements, obtained by propagating their orbits using the three orbital estimators, which contain a full set of physical models, among which is an Earth gravity field model such as GGM05S. The orbital residuals are mainly due to errors in the modelling of the orbital perturbations, such as errors in the spherical harmonic expansion of the Earth's gravity field, or to any perturbation not included at all in the orbital estimators, such as the Lense-Thirring effect. The main sources of error in the measurement of frame-dragging (see sections 1 and 2.1, and \cite{ciu96,ciuetal10}), which produce non-zero orbital residuals, are due to the lowest order even zonal harmonics of the Earth gravity field, and in particular to the Earth quadrupole moment $C_{2,0}$ and to $C_{4,0}$.
(2) We then consider the system containing the three equations of the measured nodal residuals of
LAGEOS, LAGEOS 2 and LARES, $\delta \Omega$, in the three unknowns $\delta \bar{C}_{2,0}$, $\delta \bar{C}_{4,0}$ and Lense-Thirring
effect, parametrized by a parameter $\mu$, where $\mu$ is equal to unity in General Relativity. The three equations for LAGEOS, LAGEOS 2 and LARES are:
\begin{dmath}
\footnotesize
\delta \dot \Omega_{SAT} \, = \frac{3}{2} \, n_{SAT} \, \left ( \, \frac{R_{\oplus}}{a_{SAT}} \right )^2 \, \frac{\cos I_{SAT}}{\left ( \, 1 - e_{SAT}^2 \,\right )^2 } \,\\
\Biggl\{ \,\sqrt{5}\, \delta \bar{C}_{20} + \sqrt{9}\, \delta \bar{C}_{40} \, \Biggl [ \, \frac{5} {8} \, \left ( \,
\frac{R_{\oplus}} {a_{SAT}} \, \right )^{2} \, \times ( \, 7 \sin^2 I_{SAT} - 4 \, ) \,
\frac{( \, 1 + \frac{3}{2} \, e_{SAT}^2 )} {\left ( \, 1 - e_{SAT}^2 \, \right )^2} \, \Biggr ] + \sum_{2n > 4} N_{2n \; SAT} \, \bar{C}_{2n \, 0} \, \Biggr \}+\mu \dot { \Omega\/}^{Lense-Thirring}_{SAT}
\end{dmath}
\noindent where SAT stands for LAGEOS or LAGEOS 2 or LARES, $n_{SAT}$ is their mean motion, $N_{2n \, SAT}$ are the coefficients (in the equation for the nodal rate) of the $\bar{C}_{2n, 0}$ for $2n > 4$, and the $\bar{C}_{2n,0}$ are the normalized even zonal harmonic coefficients.
(3) We then solve for the frame-dragging effect, one of the three unknowns, together with $\delta \bar{C}_{20}$ and $\delta \bar{C}_{40}$,
and we get the frame-dragging effect as a function of the three residuals of the nodes of LAGEOS, LAGEOS 2 and LARES.
The result for frame-dragging, is:
\begin{equation}
\mu = \frac {\delta \Omega_{LAGEOS} + c_1 \delta \Omega_{LAGEOS \, 2} + c_2 \delta \Omega_{LARES}}
{{ \Omega}^{Lense-Thirring}_{LAGEOS} + c_1 { \Omega}^{Lense-Thirring}_{LAGEOS 2} +
c_2 { \Omega\/}^{Lense-Thirring}_{LARES}}
\end{equation}
where the two coefficients $c_1$ and $c_2$ are $c_1 = 0.345$ and $c_2 = 0.073$.
The precise values of these two coefficients were not provided in \cite{ciuetal16} since they are updated for every 15-day arc as a function of the changes in the orbital parameters. Nevertheless, in \cite{ciupav} the values of these coefficients, in the case of the LAGEOS and LAGEOS 2 test of frame-dragging, were explicitly given.
Now, I2017 provides in its Eqs. (14) and (15) these coefficients with a large number of unnecessary decimal digits, claiming that at least nine significant decimal digits are needed for our test of frame-dragging. However, I2017 missed the main point of the technique that we used, as explained here and in a number of previous papers (see, e.g., \cite{ciupav,ciuetal10}). Indeed, the typical average size of the nodal residuals of LAGEOS and LAGEOS 2, using the most recent determinations of the Earth gravity field, is of the order of about 150 mas/yr. Since the frame-dragging effect on the nodes of LAGEOS and LAGEOS 2 has a size of about 31 mas/yr, a 5\% measurement of frame-dragging corresponds to an error of about $\pm 1.5$ mas/yr, and the coefficient $c_1$ of LAGEOS 2 must then only be accurate at the level of about 1\%, i.e., two significant decimal digits of $c_1$ are enough for a 5\% test; similarly, two or three significant decimal digits of the LARES coefficient $c_2$ are enough for a 5\% test. Thus, contrary to what is claimed in I2017, the two coefficients $c_1$ and $c_2$ are only needed at the level of two or three significant decimal digits. I2017 misunderstood the analysis technique and also missed this basic point.
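The arithmetic behind this point fits in a few lines; the following sketch is a back-of-the-envelope check using the approximate figures quoted above, not output of our estimators:
\begin{verbatim}
# Accuracy required of c1 for a 5% test of frame-dragging.
residual_l2 = 150.0           # typical LAGEOS 2 nodal residual [mas/yr]
lt_nodes = 31.0               # Lense-Thirring nodal rate, LAGEOS/LAGEOS 2 [mas/yr]
allowed = 0.05 * lt_nodes     # tolerated error, ~1.5 mas/yr
print(allowed / residual_l2)  # ~0.01: an absolute accuracy of ~0.01 in
                              # c1 = 0.345, i.e., two decimal digits, suffices
\end{verbatim}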
Nevertheless, we determined these two coefficients with many more significant digits, thanks to the SLR technique, which measures all the orbital elements of LAGEOS, LAGEOS 2 and LARES.
\section{Brief review of the methods to combine the orbital elements and results by other groups confirming our test}
The use of two passive laser-ranged satellites of LAGEOS type, with supplementary inclinations, to test frame-dragging
was proposed in \cite{ciu84,ciu86,ciu89,csr89,asi89,rie89,pet,ciu96}.
The combination of the nodes of a number of satellites, used in \cite{ciuetal16}, was first proposed in \cite{ciu89} (see page 3102).
The precise combination of the orbital elements of LAGEOS and LAGEOS 2 was then first calculated in \cite{ciu96}. In
\cite{ciupav} the combination of the nodes of LAGEOS and LAGEOS 2 was displayed and used to provide a
test of frame-dragging. In \cite{ciu06} the use of the nodes of LAGEOS and LAGEOS 2, and of a similar satellite at
a lower altitude (LARES), was proposed; the uncertainty in the measurement of frame-dragging using these three satellites was
then calculated as a function of the inclination and of the semimajor axis of LARES (see Fig. 2). These calculations, coupled with the
capabilities of the first qualification launch of VEGA, led to the precise orbit of LARES, successfully launched
in 2012 by VEGA.
\begin{figure}
\includegraphics[width=0.9\textwidth]{fig2_iorio_bis.eps}
\caption{Percent error in the measurement of the Lense-Thirring effect, due
to the even zonal harmonics uncertainties, as a function of the inclination
and of the semimajor axis of LARES, using LARES, LAGEOS and LAGEOS 2.
The range of the semimajor axis of LARES is between 7400 km and 8300 km
and that of the inclination between 0 and $2\pi$ [adapted from \cite{ciu06}].}
\label{fig:2}
\end{figure}
Furthermore in I2017 it is claimed ``Finally, it is remarkable that, after about twenty years since
the first reported tests with LAGEOS and LAGEOS II and
four years since the launch of LARES, nobody has yet published
any genuinely independent test of the Lense–Thirring
effect with such geodetic satellites in the peer-reviewed literature,
especially in view of how many researchers around the
world constitute the global satellite laser ranging community.''
I2017 seems to be unaware of the fact that the three
$independent$ orbital estimators GEODYN, EPOS-OC and UTOPIA have been
$independently$ run by three groups: (a) the Universities of Salento (Lecce) and Sapienza
(Rome), and Maryland BC/JCET (Joint Center for Earth Systems Technology);
(b) the Center for Space Research (CSR) of the University
of Texas (UT) at Austin \cite{rie08,rie09}; and (c) GFZ (German Research Centre for Geosciences, Helmholtz Centre, Potsdam) \cite{koe12,koe16};
all leading to the same results.
Furthermore, the test published in 2016 in \cite{ciuetal16} was fully confirmed by another completely
independent team and presented at an international conference \cite{bas}.
A similar test of frame-dragging, the 19\% test by Gravity Probe B \cite{GPB}, was indeed
published by one team only.
\section{Conclusions}
All the claims of I2017 are groundless: they are either numerically and conceptually incorrect
or are based on erroneous assumptions and claims. In section (2.1) we have shown that the numerical figures
of I2017 are erroneous by some large factor; in section (2.2) we have explained that the lowest harmonics of
different Earth gravity field models, e.g., those obtained with GOCE only, such as JYY\textunderscore GOCE04S, and those obtained with GRACE and SLR,
such as GGM05S, cannot be compared, and thus that I2017 is flawed by the incorrect procedure of comparing the
lowest harmonics of different, noncomparable, Earth gravity models.
We also reported that by comparison of low degree harmonics of suitable, comparable, gravity field models, the 5\% systematic error estimate
of our Lense-Thirring analysis is confirmed.
In section 3 we showed that it is incorrect to
claim that the coefficients used in the combination of the satellites’ residual nodal rates
must be known with nine significant decimal digits, indeed three significant decimal
digits are enough for a 1\% test of frame-dragging.
Finally, in section 4, we showed that the LARES test of frame-dragging was indeed
repeated by independent and different teams, contrary to the claims in I2017.
\section*{Acknowledgments}
We gratefully acknowledge the Italian Space Agency for the support of the LARES and LARES 2 space missions
under agreements No. 2017-23-H.0 and No. 2015-021-R.O. We are also grateful to the International Laser Ranging Service,
ESA, AVIO and ELV. ECP acknowledges the support of NASA Grants NNX09AU86G and NNX14AN50G. RM
acknowledges NASA Grant NNX09AU86G and NSF Grant PHY-1620610 and JCR the support of NASA Contract
NNG17V105C. We thank the anonymous referee for useful comments to improve the paper.
\section{Introduction}
\label{sec:introduction}
Sequence-to-sequence models that use an attention mechanism to align the input and output sequences~\cite{Graves:2013ua,Bahdanau:2014vz} are currently the predominant paradigm in end-to-end TTS.
Approaches based on the seminal Tacotron system~\cite{Wang:2017uz} have demonstrated naturalness that rivals that of human speech for certain domains~\cite{Shen:2017vg}.
Despite these successes, there are sometimes complaints of a lack of robustness in the alignment procedure that leads to missing or repeating words, incomplete synthesis, or an inability to generalize to longer utterances~\cite{Zhang:2018is,He:2019tg,Liu:2019us}.
The original Tacotron system~\cite{Wang:2017uz}
used the content-based attention mechanism introduced in \cite{Bahdanau:2014vz} to align the target text with the output spectrogram.
This mechanism is purely content-based and does not exploit the monotonicity and locality properties of TTS alignment, making it one of the least stable choices.
The Tacotron 2 system~\cite{Shen:2017vg} used the improved hybrid location-sensitive mechanism from \cite{Chorowski:2015uh} that combines content-based and location-based features, allowing generalization to utterances longer than those seen during training.
The hybrid mechanism still has occasional alignment issues, which has led a number of authors to develop attention mechanisms that directly exploit monotonicity~\cite{Raffel:2017vl,Zhang:2018is,He:2019tg}.
These monotonic alignment mechanisms have demonstrated properties like increased alignment speed during training, improved stability, enhanced naturalness, and a virtual elimination of synthesis errors.
Downsides of these methods include decreased efficiency due to a reliance on recursion to marginalize over possible alignments, the necessity of training hacks to ensure learning doesn't stall or become unstable, and decreased quality when operating in a more efficient hard alignment mode during inference.
Separately, some authors~\cite{SkerryRyan:2018ub} have moved back toward the purely location-based GMM attention introduced by Graves in \cite{Graves:2013ua}, and some have proposed stabilizing GMM attention by using softplus nonlinearities in place of the exponential function~\cite{Kastner:2018wg,Battenberg:2019tc}.
However, there has been no systematic comparison of these design choices.
In this paper, we compare the content-based and location-sensitive mechanisms used in Tacotron 1 and 2 with a variety of simple location-relative mechanisms in terms of alignment speed and consistency, naturalness of the synthesized speech, and ability to generalize to long utterances.
We show that GMM-based mechanisms are able to generalize to very long (potentially infinite-length) utterances,
and we introduce simple modifications that result in improved speed and consistency of alignment during training.
We also introduce a new location-relative mechanism called Dynamic Convolution Attention that modifies the hybrid location-sensitive mechanism from Tacotron 2 to be purely location-based, allowing it to generalize to very long utterances as well.
\section{Two Families of Attention Mechanisms}
\label{sec:attention_mechanisms}
\subsection{Basic Setup}
\label{subsec:basic_setup}
The system that we use in this paper is based on the original Tacotron system~\cite{Wang:2017uz} with architectural modifications from the baseline model detailed in the appendix of \cite{Battenberg:2019tc}.
We use the CBHG encoder from \cite{Wang:2017uz} to produce a sequence of encoder outputs, $\{\bm{h}_j\}_{j=1}^L$, from a length-$L$ input sequence of target phonemes, $\{\bm{x}_j\}_{j=1}^L$.
Then an attention RNN, \eqref{eq:attention_rnn}, produces a sequence of states, $\{\bm{s}_i\}_{i=1}^T$, that the attention mechanism uses to compute $\bm{\alpha}_i$, the alignment at decoder step $i$.
Additional arguments to the attention function in \eqref{eq:attention_fn_context_vector} depend on the specific attention mechanism (e.g., whether it is content-based, location-based, or both).
The context vector, $\c_i$, that is fed to the decoder RNN is computed using the alignment, $\bm{\alpha}_i$, to produce a weighted average of encoder states.
The decoder is fed both the context vector and the current attention RNN state, and an output function produces the decoder output, $\bm{y}_i$, from the decoder RNN state, $\d_i$.
\begin{align}
\{\bm{h}_j\}_{j=1}^L &= \textrm{Encoder}(\{\bm{x}_j\}_{j=1}^L) \\
\bm{s}_i &= \textrm{RNN}_\textrm{Att}(\bm{s}_{i-1}, \c_{i-1}, \bm{y}_{i-1}) \label{eq:attention_rnn} \\
\bm{\alpha}_i &= \textrm{Attention}(\bm{s}_i, ~\dots)
&\c_i &= \sum_j \alpha_{i,j} \bm{h}_j \label{eq:attention_fn_context_vector}\\
\d_i &= \textrm{RNN}_\textrm{Dec}(\d_{i-1}, \c_i, \bm{s}_i)
&\bm{y}_i &= f_\textrm{o}(\d_i)
\end{align}
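For concreteness, one decoder step under this setup can be sketched as follows; this is a schematic Python rendering, and the four callables stand in for the actual networks (they are assumptions of the sketch, not our implementation):
\begin{verbatim}
# Schematic sketch of one decoder step; attention_rnn, attention_fn,
# decoder_rnn, and output_fn are placeholder callables for the networks.
def decoder_step(s_prev, c_prev, y_prev, d_prev, H,
                 attention_rnn, attention_fn, decoder_rnn, output_fn):
    s = attention_rnn(s_prev, c_prev, y_prev)  # attention RNN state s_i
    alpha = attention_fn(s)                    # alignment over L encoder steps
    c = alpha @ H                              # context: weighted average of
                                               # encoder states H (L x dim)
    d = decoder_rnn(d_prev, c, s)              # decoder RNN state d_i
    y = output_fn(d)                           # decoder output y_i
    return s, alpha, c, d, y
\end{verbatim}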
\subsection{GMM-Based Mechanisms}
\label{subsec:gmm_mechanisms}
An early sequence-to-sequence attention mechanism was proposed by Graves in \cite{Graves:2013ua}.
This approach is a purely location-based mechanism that uses an unnormalized mixture of $K$ Gaussians to produce the attention weights, $\bm{\alpha}_i$, for each encoder state.
The general form of this type of attention is shown in \eqref{eq:gmm_attention}, where $\bm{w}_i$, $\bm{Z}_i$, $\bm{\Delta}_i$, and $\bm{\sigma}_i$ are computed from the attention RNN state.
The mean of each Gaussian component is computed using the recurrence relation in \eqref{eq:monotonic_gmm}, which makes the mechanism location-relative and potentially monotonic if $\bm{\Delta}_i$ is constrained to be positive.
\begin{align}
\alpha_{i,j} &= \sum_{k=1}^K \frac{w_{i,k}}{Z_{i,k}}
\exp \left(-\frac{(j-\mu_{i,k})^2}{2(\sigma_{i,k})^2}\right) \label{eq:gmm_attention} \\
\bm{\mu}_i &= \bm{\mu}_{i-1} + \bm{\Delta}_i \label{eq:monotonic_gmm}
\end{align}
In order to compute the mixture parameters, intermediate parameters ($\hat{\wb}_i,\hat{\Db}_i,\hat{\sigmab}_i$) are first computed using the MLP in \eqref{eq:gmm_mlp} and then converted to the final parameters using the expressions in Table~\ref{tab:gmm_attention}.
\begin{align}
(\hat{\wb}_i,\hat{\Db}_i,\hat{\sigmab}_i) &= V \tanh(W\bm{s}_i + b) \label{eq:gmm_mlp}
\end{align}
The version 0 (V0) row in Table~\ref{tab:gmm_attention} corresponds to the original mechanism proposed in \cite{Graves:2013ua}.
V1 adds normalization of the mixture weights and components and uses the exponential function to compute the mean offset and variance.
V2 uses the softplus function to compute the mean offset and standard deviation.
Another modification we test is the addition of initial biases to the intermediate parameters $\hat{\Db}_i$ and $\hat{\sigmab}_i$ in order to encourage the final parameters $\bm{\Delta}_i$ and $\bm{\sigma}_i$ to take on useful values at initialization.
In our experiments, we test versions of V1 and V2 GMM attention that use biases that target a value of $\bm{\Delta}_i=1$ for the initial forward movement and $\bm{\sigma}_i=10$ for the initial standard deviation (taking into account the different nonlinearities used to compute the parameters).
\begin{table}[htb]
\caption{Conversion of intermediate parameters computed in \eqref{eq:gmm_mlp} to final mixture parameters for the three tested GMM-based attention mechanisms. $\textrm{S}_\textrm{max}(\cdot)$ is the softmax function, while $\textrm{S}_+(\cdot)$ is the softplus function.}
\label{tab:gmm_attention}
\begin{tabular}{lllll}
\toprule
& $\bm{Z}_i$ & $\bm{w}_i$ & $\bm{\Delta}_i$ & $\bm{\sigma}_i$\\
\midrule
V0~\cite{Graves:2013ua} & $\bm{1}$ & $\e{\hat{\wb}_i}$ & $\e{\hat{\Db}_i}$ & $\sqrt{\e{-\hat{\sigmab}_i} / 2}$\\
V1 & $\sqrt{2\pi\bm{\sigma}_i^2}$ & $\textrm{S}_\textrm{max}(\hat{\wb}_i)$ & $\e{\hat{\Db}_i}$ & $\sqrt{\e{\hat{\sigmab}_i}}$\\
V2 & $\sqrt{2\pi\bm{\sigma}_i^2}$ & $\textrm{S}_\textrm{max}(\hat{\wb}_i)$ & $\textrm{S}_+(\hat{\Db}_i)$ & $\textrm{S}_+(\hat{\sigmab}_i)$\\
\bottomrule
\end{tabular}
\end{table}
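As an illustration, the sketch below (hypothetical NumPy code, not the implementation used in our experiments) computes one step of the V2 variant, including the inverse-softplus biases that target $\bm{\Delta}_i=1$ and $\bm{\sigma}_i=10$ at initialization:
\begin{verbatim}
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def inv_softplus(y):
    # Inverse of softplus, used to set the initial parameter biases.
    return np.log(np.expm1(y))

def gmm_v2_step(w_hat, delta_hat, sigma_hat, mu_prev, L):
    """V2 GMM attention weights for one decoder step.

    w_hat, delta_hat, sigma_hat: (K,) intermediate MLP outputs.
    mu_prev: (K,) component means from the previous step.
    L: encoder sequence length.
    """
    w = np.exp(w_hat - w_hat.max())
    w /= w.sum()                         # softmax mixture weights
    delta = softplus(delta_hat)          # positive mean offsets
    sigma = softplus(sigma_hat)          # positive standard deviations
    mu = mu_prev + delta                 # monotonic mean recurrence
    Z = np.sqrt(2.0 * np.pi * sigma**2)  # per-component normalizers
    j = np.arange(L)[:, None]            # (L, 1) encoder positions
    alpha = (w / Z * np.exp(-(j - mu)**2 / (2.0 * sigma**2))).sum(axis=1)
    return alpha, mu

# Biases targeting Delta = 1 and sigma = 10 at initialization:
delta_bias = inv_softplus(1.0)   # ~0.54
sigma_bias = inv_softplus(10.0)  # ~10.0 (softplus is near-identity here)
\end{verbatim}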
\subsection{Additive Energy-Based Mechanisms}
\label{subsec:energy_based_mechanisms}
A separate family of attention mechanisms uses an MLP to compute attention energies, $\bm{e}_i$, that are converted to attention weights, $\bm{\alpha}_i$, using the softmax function.
This family includes the content-based mechanism introduced in \cite{Bahdanau:2014vz} and the hybrid location-sensitive mechanism from \cite{Chorowski:2015uh}.
A generalized formulation of this family is shown in \eqref{eq:energy_based_mlp}.
\begin{align}
e_{i,j} &= \bm{v}^\intercal \tanh(W\bm{s}_i + V\bm{h}_j + U\bm{f}_{i,j} + T\bm{g}_{i,j} + \b) + p_{i,j} \label{eq:energy_based_mlp} \\
\bm{\alpha}_i &= \textrm{S}_\textrm{max}(\bm{e}_i) \\
\bm{f}_i &= \mathcal{F} * \bm{\alpha}_{i-1} \label{eq:static_filters} \\
\bm{g}_i &= \mathcal{G}(\bm{s}_i) * \bm{\alpha}_{i-1}, \quad
\mathcal{G}(\bm{s}_i) = V_\mathcal{G} \tanh(W_\mathcal{G} \bm{s}_i + \b_\mathcal{G}) \label{eq:dynamic_filters} \\
\bm{p}_i &= \log(\mathcal{P} * \bm{\alpha}_{i-1}) \label{eq:prior_filter}
\end{align}
Here we see the content-based terms, $W\bm{s}_i$ and $V\bm{h}_j$, that represent query/key comparisons and the location-sensitive term, $U\bm{f}_{i,j}$, that uses convolutional features computed from the previous attention weights as in \eqref{eq:static_filters}~\cite{Chorowski:2015uh}.
Also present are two new terms, $T\bm{g}_{i,j}$ and $p_{i,j}$, that are unique to our proposed Dynamic Convolution Attention.
The $T\bm{g}_{i,j}$ term is very similar to $U\bm{f}_{i,j}$ except that it uses dynamic filters that are computed from the current attention RNN state as in \eqref{eq:dynamic_filters}.
The $p_{i,j}$ term is the output of a fixed prior filter that biases the mechanism to favor certain types of alignment.
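A rough sketch of how these terms combine for DCA is given below (hypothetical NumPy code; the parameter shapes and the \texttt{'same'}-mode convolutions are our assumptions, and the content terms are dropped as described later):
\begin{verbatim}
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def dca_alignment(alpha_prev, static_filters, dynamic_filters,
                  prior_taps, U, T, v, b):
    """DCA attention weights computed from the previous alignment.

    alpha_prev:      (L,) previous attention weights.
    static_filters:  (n_f, len_f) learned static filters.
    dynamic_filters: (n_g, len_g) filters predicted from the RNN state.
    prior_taps:      (len_p,) causal prior filter.
    """
    L = alpha_prev.size
    f = np.stack([np.convolve(alpha_prev, k, mode='same')
                  for k in static_filters], axis=1)    # (L, n_f)
    g = np.stack([np.convolve(alpha_prev, k, mode='same')
                  for k in dynamic_filters], axis=1)   # (L, n_g)
    # Causal prior: mass can only move forward along the encoder axis.
    p = np.convolve(alpha_prev, prior_taps, mode='full')[:L]
    p = np.maximum(np.log(np.maximum(p, 1e-300)), -1e6)  # floored logits
    e = np.tanh(f @ U.T + g @ T.T + b) @ v + p
    return softmax(e)
\end{verbatim}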
Table~\ref{tab:energy_based_attention} shows which of the terms are present in the three energy-based mechanisms we compare in this paper.
\begin{table}[htb]
\caption{The terms from \eqref{eq:energy_based_mlp} that are present in each of the three energy-based attention mechanisms we test.}
\label{tab:energy_based_attention}
\begin{tabular}{lccccc}
\toprule
& $W\bm{s}_i$ & $V\bm{h}_j$ & $U\bm{f}_{i,j}$ & $T\bm{g}_{i,j}$ & $p_{i,j}$ \\
\midrule
Content-Based~\cite{Bahdanau:2014vz} & \ding{51} & \ding{51} & - & - & - \\
Location-Sensitive~\cite{Chorowski:2015uh} & \ding{51} & \ding{51} & \ding{51} & - & - \\
Dynamic Convolution & - & - & \ding{51} & \ding{51} & \ding{51} \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Dynamic Convolution Attention}
\label{subsec:DCA}
In designing Dynamic Convolution Attention (DCA), we were motivated by location-relative mechanisms like GMM attention, but desired fully normalized attention weights.
Despite the fact that GMM attention V1 and V2 use normalized mixture weights and components, the attention weights still end up unnormalized because they are sampled from a continuous probability density function.
This can lead to occasional spikes or dropouts in the alignment, and
attempting to directly normalize GMM attention weights results in unstable training.
Attention normalization isn't a significant problem in fine-grained output-to-text alignment, but becomes more of an issue for coarser-grained alignment tasks where the attention window needs to gradually move to the next index (for example in variable-length prosody transfer applications~\cite{Lee:2018uo}).
Because DCA is in the energy-based attention family, it is normalized by default and should work well for a variety of monotonic alignment tasks.
Another issue with GMM attention is that because it uses a mixture of distributions with infinite support, it isn't necessarily monotonic.
At any time, the mechanism could choose to emphasize a component whose mean is at an earlier point in the sequence, or it could expand the variance of a component to look backward in time, potentially hurting alignment stability.
To address monotonicity issues, we make modifications to the hybrid location-sensitive mechanism.
First we remove the content-based terms, $W\bm{s}_i$ and $V\bm{h}_j$, which prevents the alignment from moving backward due to a query/key match at a past timestep.
However, doing this also prevents the mechanism from adjusting its alignment trajectory, since it is left with only a set of static filters, $U\bm{f}_{i,j}$, that learn to bias the alignment to move forward by a certain fixed amount.
To remedy this, we add a set of learned \emph{dynamic} filters, $T\bm{g}_{i,j}$, that are computed from the attention RNN state as in \eqref{eq:dynamic_filters}.
These filters serve to dynamically adjust the alignment relative to the alignment at the previous step.
In order to prevent the dynamic filters from moving things backward, we use a single fixed prior filter to bias the alignment toward short forward steps.
Unlike the static and dynamic filters, the prior filter is a causal filter that only allows forward progression of the alignment.
In order to enforce the monotonicity constraint, the output of the filter is converted to the logit domain via the log function before being added to the energy function in \eqref{eq:energy_based_mlp} (we also floor the prior logits at $-10^6$ to prevent underflow).
We set the taps of the prior filter using values from the beta-binomial distribution, which is a two-parameter discrete distribution with finite support.
\begin{align}
p(k) &= \binom{n}{k} \frac{\textrm{B}(k+\alpha, n-k+\beta)}{\textrm{B}(\alpha,\beta)}, \quad
k \in \{0,\ldots,n\}
\end{align}
where $\textrm{B}(\cdot)$ is the beta function.
For our experiments we use the parameters $\alpha=0.1$ and $\beta=0.9$ to set the taps on a length-11 prior filter ($n=10$).
Repeated application of the prior filter encourages an average forward movement of 1 encoder step per decoder step ($\mathbb{E}[k] = \alpha n/(\alpha+\beta)$) with the uncertainty in the prior alignment increasing after each step.
The prior parameters could be tailored to reflect the phonemic rate of each dataset in order to optimize alignment speed during training, but for simplicity we use the same values for all experiments.
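For instance, the prior taps can be computed directly with SciPy (assuming \texttt{scipy.stats.betabinom}, available since SciPy 1.4):
\begin{verbatim}
import numpy as np
from scipy.stats import betabinom

n, alpha, beta = 10, 0.1, 0.9             # length-11 causal prior filter
taps = betabinom(n, alpha, beta).pmf(np.arange(n + 1))
print(taps.round(3))                      # mass concentrated on small steps
print(alpha * n / (alpha + beta))         # expected forward movement: 1.0
\end{verbatim}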
Figure~\ref{fig:prior_alignment} shows the prior filter along with the alignment weights every 20 decoder steps when ignoring the contribution from other terms in \eqref{eq:energy_based_mlp}.
\begin{figure}[htb]
\centerline{\includegraphics[width=1.0\linewidth]{figures/prior_alignment.pdf}}
\caption{
Initial alignment encouraged by the prior filter (ignoring the contribution of the other terms in \eqref{eq:energy_based_mlp}).
The attention weights are shown every 20 decoder steps, with the prior filter itself shown at the top.
}
\label{fig:prior_alignment}
\end{figure}
\section{Experiments}
\label{sec:experiments}
\subsection{Experiment Setup}
\label{subsec:experiment_setup}
In our experiments we compare the GMM and additive energy-based families of attention mechanisms enumerated in Tables \ref{tab:gmm_attention} and \ref{tab:energy_based_attention}.
We use the Tacotron architecture described in Section~\ref{subsec:basic_setup}
and only vary the attention function used to compute the attention weights, $\bm{\alpha}_i$.
The decoder produces two 128-bin, 12.5ms-hop mel spectrogram frames per step.
We train each model using the Adam optimizer for 300,000 steps with a gradient clipping threshold of 5 and a batch size of 256, spread across 32 Google Cloud TPU cores.
We use an initial learning rate of $10^{-3}$ that is reduced to $5\times 10^{-4}$, $3\times 10^{-4}$, $10^{-4}$, and $5\times 10^{-5}$ at 50k, 100k, 150k, and 200k steps, respectively.
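Expressed as a function of the global step, this schedule is simply piecewise constant (a sketch; the actual training code presumably implements it differently):
\begin{verbatim}
def learning_rate(step):
    """Stepwise learning-rate schedule for the 300k-step run."""
    for boundary, rate in [(200_000, 5e-5), (150_000, 1e-4),
                           (100_000, 3e-4), (50_000, 5e-4)]:
        if step >= boundary:
            return rate
    return 1e-3
\end{verbatim}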
To convert the mel spectrograms produced by the models into audio samples, we use a separately-trained WaveRNN~\cite{Kalchbrenner:2018wr} for each speaker.
For all attention mechanisms, we use a size of 128 for all tanh hidden layers.
For the GMM mechanisms, we use $K=5$ mixture components.
For location-sensitive attention (LSA), we use 32 static filters, each of length 31.
For DCA, we use 8 static filters and 8 dynamic filters (all of length 21), and a length-11 causal prior filter as described in Section~\ref{subsec:DCA}.
We run experiments using two different single-speaker datasets.
The first (which we refer to as the \emph{Lessac} dataset) comprises audiobook recordings from Catherine Byers, the speaker from the 2013 Blizzard Challenge.
For this dataset, we train on a 49,852-utterance (37-hour) subset, consisting of utterances up to 5 seconds long, and evaluate on a separate 935-utterance subset.
The second is the LJ Speech dataset~\cite{ljspeech17}, a public dataset consisting of audiobook recordings that are segmented into utterances of up to 10 seconds. We train on a 12,764-utterance subset (23 hours) and evaluate on a separate 130-utterance subset.
\subsection{Alignment Speed and Consistency}
\label{subsec:alignment_speed}
To test the alignment speed and consistency of the various mechanisms, we run 10 identical trials of 10,000 training steps and plot the MCD-DTW between a ground truth holdout set and the output of the model during training.
The MCD-DTW is an objective similarity metric that uses dynamic time warping (DTW) to find the minimum mel cepstral distortion (MCD)~\cite{kubichek1993mel} between two sequences.
The faster a model is able to align with the text, the faster it will start producing reasonable spectrograms that produce a lower MCD-DTW.
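As a reference, a minimal MCD-DTW can be computed as below (a sketch; the exact cepstral order, constants, and path normalization used in our evaluation may differ):
\begin{verbatim}
import numpy as np

def mcd_dtw(X, Y):
    """Minimum mel cepstral distortion under a DTW alignment.

    X: (Tx, n_cep), Y: (Ty, n_cep) mel-cepstral frame sequences.
    """
    K = 10.0 * np.sqrt(2.0) / np.log(10.0)   # common MCD scaling constant
    # Pairwise frame-to-frame distortion.
    dist = K * np.sqrt(((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1))
    Tx, Ty = dist.shape
    D = np.full((Tx + 1, Ty + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, Tx + 1):               # standard DTW recursion
        for j in range(1, Ty + 1):
            D[i, j] = dist[i - 1, j - 1] + min(
                D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[Tx, Ty] / max(Tx, Ty)           # crude path normalization
\end{verbatim}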
\begin{figure}[htb]
%
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[width=1.0\linewidth]{figures/les-align.pdf}}
\end{minipage}
%
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[width=1.0\linewidth]{figures/lj-align.pdf}}
\end{minipage}
%
\caption{
Alignment trials for 8 different mechanisms (10 runs each) trained on the Lessac (top) and LJ (bottom) datasets.
The validation set MCD-DTW drops down after alignment has occurred.
}
\label{fig:alignment_trials}
\end{figure}
Figure~\ref{fig:alignment_trials} shows these trials for 8 different mechanisms for both the Lessac and LJ datasets.
Content-based (CBA), location-sensitive (LSA), and DCA are the three energy-based mechanisms from Table~\ref{tab:energy_based_attention}, and the 3 GMM varieties are shown in Table~\ref{tab:gmm_attention}.
We also test the V1 and V2 GMM mechanisms with an initial parameter bias as described in Section~\ref{subsec:gmm_mechanisms} (abbreviated as GMMv1b and GMMv2b).
Looking at the plots for the Lessac dataset (top of Figure~\ref{fig:alignment_trials}), we see that the mechanisms on the top row (the energy-based family and GMMv2b) all align consistently, with DCA and GMMv2b aligning the fastest.
The GMM mechanisms on the bottom row don't fare as well, and while they typically align more often than not, there are a significant number of failures or cases of delayed alignment.
It's interesting to note that adding a bias to the GMMv1 mechanism actually hurts its consistency while adding a bias to GMMv2 helps it.
Looking at the plots for the LJ dataset at the bottom of Figure~\ref{fig:alignment_trials}, we first see that the dataset is more difficult in terms of alignment.
This is likely due to the higher maximum and average length of the utterances in the training data (most utterances in the LJ dataset are longer than 5 seconds) but could also be caused by an increased presence of intra-utterance pauses and overall lower audio quality.
Here, the top row doesn't fare as well: CBA has trouble aligning within the first 10k steps, while DCA and GMMv2b each fail to align once.
LSA succeeds on all 10 trials but tends to align more slowly than DCA and GMMv2b when they succeed.
With these consistency results in mind, we will only be testing the top row of mechanisms in subsequent evaluations.
\subsection{In-Domain Naturalness}
\label{subsec:naturalness}
We evaluate CBA, LSA, DCA, and GMMv2b using mean opinion score (MOS) naturalness judgments produced by a crowd-sourced pool of raters.
Scores range from 1 to 5, with 5 representing ``completely natural speech''.
The Lessac and LJ models are evaluated on their respective test sets (hence in-domain), and the results are shown in Table~\ref{tab:mos}.
We see that for these utterances, the LSA, DCA, and GMMv2b mechanisms all produce equivalent scores around 4.3, while the content-based mechanism scores a bit lower due to occasional catastrophic attention failures.
\begin{table}[htb]
\caption{MOS naturalness results along with 95\% confidence intervals for the Lessac and LJ datasets.}
\label{tab:mos}
\centering
\begin{tabular}{lcc}
\toprule
& Lessac & LJ \\
\midrule
Content-Based & 4.07 $\pm$\xspace 0.08 & 4.19 $\pm$\xspace 0.06 \\
Location-Sensitive & 4.31 $\pm$\xspace 0.06 & 4.34 $\pm$\xspace 0.06 \\
GMMv2b & 4.32 $\pm$\xspace 0.06 & 4.29 $\pm$\xspace 0.06 \\
DCA & 4.31 $\pm$\xspace 0.06 & 4.33 $\pm$\xspace 0.06 \\
Ground Truth & 4.64 $\pm$\xspace 0.04 & 4.55 $\pm$\xspace 0.04 \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Generalization to Long Utterances}
\label{subsec:generalization_to_long_utterances}
Now we evaluate our models on long utterances taken from two chapters of the Harry Potter novels.
We use 1034 utterances that vary between 58 and 1648 characters (10 and 299 words).
Google Cloud Speech-To-Text\footnote{\url{https://cloud.google.com/speech-to-text}} is used to produce transcripts of the resulting audio output, and we compute the character error rate (CER) between the produced transcripts and the target transcripts.
\begin{figure}[htb]
%
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[width=1.0\linewidth]{figures/les-asr.pdf}}
\end{minipage}
%
\begin{minipage}[b]{1.0\linewidth}
\centering
\centerline{\includegraphics[width=1.0\linewidth]{figures/lj-asr.pdf}}
\end{minipage}
%
\caption{
Utterance length robustness for models trained on the Lessac (top) and LJ (bottom) datasets.
}
\label{fig:asr}
%
\end{figure}
Figure~\ref{fig:asr} shows the CER results as a function of utterance length for the Lessac models (trained on up to 5 second utterances) and LJ models (trained on up to 10 second utterances).
The plots show that CBA fares the worst, with the CER shooting up when the test length exceeds the max training length.
LSA shoots up soon after at around 3x the max training length, while
the two location-relative mechanisms, DCA and GMMv2b, are both able to generalize to the whole range of utterance lengths tested.
\section{Discussion}
\label{sec:discussion}
We have shown that Dynamic Convolution Attention (DCA) and V2 GMM attention with initial bias (GMMv2b) are able to generalize to utterances much longer than those seen during training, while preserving naturalness on shorter utterances.
This opens the door for synthesis of entire paragraphs or long sentences (e.g., for book or news reading applications), which can improve naturalness and continuity compared to synthesizing each sentence or clause separately and then stitching them together.
These two location-relative mechanisms are simple to implement and do not rely on dynamic programming to marginalize over alignments.
They also tend to align very quickly during training, which makes the occasional alignment failure easy to detect so training can be restarted.
In our alignment trials, despite being slower to align on average, LSA seemed to have an edge in terms of alignment consistency; however, we have noticed that slower alignment can sometimes lead to worse quality models, probably because the other model components are being optimized in an unaligned state for longer.
Compared to GMMv2b, DCA can more easily bound its receptive field (because its prior filter numerically disallows backward or excessive forward movement), which makes it easier to incorporate hard windowing optimizations in production.
Another advantage of DCA over GMM attention is that its attention weights are normalized, which helps to stabilize the alignment, especially for coarse-grained alignment tasks.
For monotonic alignment tasks like TTS and speech recognition,
location-relative attention mechanisms have many advantages and warrant increased consideration and further study.
Supplemental materials, including audio examples, are available on the web\footnote{\url{https://google.github.io/tacotron/publications/location_relative_attention}}.
\vfill\pagebreak
\bibliographystyle{IEEEbib}
\section{Acknowledgments}
We thank the MIT NLP group for their helpful discussion and comments.
This work is supported by DSO grant DSOCL18002.
\section{Introduction}
\begin{figure}[h!]
\includegraphics[width=1\linewidth]{./p1.pdf}
\caption{Examples (lower-cased) where multi-sentence context is required to ask the correct questions. Sentences containing answers are in green, while answers are underlined. The red phrases indicate additional background used by a human to generate the question. 1-stage and 2-stage attention QG are results generated by our model with different numbers of attention stages.}
\label{fig:example}
\vspace{-3mm}
\end{figure}
The tremendous popularity of reading comprehension through datasets like SQuAD \cite{rajpurkar2016}, MS MARCO \cite{nguyen2016} and NewsQA \cite{trischler2016} has led to a surge in machine reading and reasoning techniques. These datasets are typically constructed using crowdsourcing, which provides high quality questions but at a high cost of manual labor.
There is an urgent need for automated methods to generate quality question-answer pairs from textual corpora.
Our goal is to generate a suitable question for a given target answer -- a span of text in a provided document. To this end, we must be able to identify the relevant context for the question-answer pair from the document. Modeling long documents, however, is formidable, and our task involves understanding the relation between the answer and the encompassing paragraphs before asking the relevant question. Most existing methods simplify the task by looking at just the sentence containing the answer.
However, this does not represent the human process of generating questions from a document. For instance, crowd workers for the SQuAD dataset, as illustrated in Figure \ref{fig:example}, used multiple sentences to ask a relevant question. In fact, as pointed out by \cite{Du2017}, around 30\% of the human-generated questions in SQuAD rely on information beyond a single sentence.
To accommodate this phenomenon, we propose a novel approach for document-level question generation that explicitly models the context using a multi-stage attention mechanism.
As the first step, our method captures the immediate context by attending the entire document with the answer, highlighting phrases, e.g. \textit{``the unit was dissolved in''} from example 1 in Figure \ref{fig:example}, that have a direct relationship with the answer, i.e. \textit{``1985''}. In an iterative step thereafter, we attend the original document representation with the attended document computed in the previous step, expanding the context to include more phrases, e.g. \textit{``abc motion pictures''}, that have an indirect relationship with the answer. We can repeat this process multiple times to increase the linkage level of the answer-related background.
The final document representation contains relevant answer context cues by means of attention weights. Through a copy-generate decoding mechanism, where at each step a word is either copied from the input or generated from the vocabulary, the attention weights guide the generation of the context words to produce high quality questions. The entire framework, from context collection to copy-generate style generation, is trained end-to-end.
Our framework for document context representation, strengthened by more attention stages, leads to better question generation quality. Specifically, on SQuAD we get an absolute jump of 5.79 Rouge points by using a second-stage answer-attended representation of the document, compared to directly using the representation from the first stage. We evaluate our hypothesis of using a controllable context to generate questions on three different QA datasets --- SQuAD, MS MARCO, and NewsQA. Our method strongly outperforms existing state-of-the-art models, with an average absolute increase of 1.56 Rouge, 0.97 Meteor and 0.81 Bleu points over the previous best reported results on the three datasets.
\section{Related Work}
Question generation has been extensively studied in the past with broadly two main approaches, rule-based and learning-based.
\textbf{Rule-based techniques} These approaches usually rely on rules and templates based on sentences' linguistic structures, and apply heuristics to generate questions \cite{chali2015,Heilman2011,Lindberg2013,labutov2015}. This requires human effort and expert knowledge, making the approach very difficult to scale. Neural methods tend to outperform and generalize better than these techniques.
\textbf{Neural-based models} Since \citet{serban2016,Du2017}, many neural sequence-to-sequence models have been proposed for question generation tasks. These models are trained in an end-to-end manner and exploit the corpora of question answering datasets to outperform rule-based methods on many benchmarks. However, in these initial approaches, there is no indication of which parts of the document the decoder should focus on in order to generate the question.
To generate a question for a given answer, \cite{subramanian2017,kim2018,zhou2017,sun2018} applied various techniques to encode answer location information into an annotation vector corresponding to the word positions, thus allowing for better quality answer-focused questions. \cite{yuan2017} combined supervised and reinforcement learning in training to maximize rewards that measure question quality. \cite{liu2019} presented a method based on syntactic features to represent words in the document, in order to decide which words to focus on while generating the question.
The above studies only consider sentence-level question generation, i.e. looking at one document sentence at a time. Recently, \cite{du2018} proposed a method that incorporates coreference knowledge into the neural networks to better encode this linguistically driven connection across entities for document-level question generation. Unfortunately, this work does not capture other relationships like semantic similarity. As in example 2 of Figure \ref{fig:example}, the two semantically related phrases ``lower wages'' and ``lower incomes'' need to be linked together to generate the desired question. \cite{zhao2018} proposed another document-level question generation method in which they apply a gated self-attention mechanism to encode contextual information. However, their self-attention over the entire document is noisy and redundant, encoding many dependencies that are irrelevant.
\section{Problem Definition}
In this section, we define the task of question generation. Given the document $D$ and the answer $A$, we are interested in generating the question $\overline{Q}$ that satisfies:
\[\overline{Q} = \argmax_{Q}~\textrm{Prob}(Q|D,A)\]
\noindent where the document $D$ is a sequence of $l_D$ words:
$D = {\{x_i\}}^{l_D}_{i=1}$
, the answer $A$ of length $l_A$ must be a sub-span of $D$: $A = {\{x_j\}}^{n}_{j=m}$, where $1 \leq m < n \leq l_D $, and the question $\overline{Q}$ is a well-formed sequence of $l_Q$ words: $\overline{Q} = \{y_k\}^{l_Q}_{k=1}$ that can be answered from $D$ using $A$. The generated words $y_k$ can be derived from the document words ${\{x_i\}}^{l_D}_{i=1}$ or from a vocabulary $V$.
\section{Model Architecture}
In this section, we describe our proposed model for question generation. The key idea of our model is to use a multi-stage attention mechanism to attend to the important parts of the document that are related to the answer, and use them to generate the question. Figure \ref{fig:architecture} shows the high level architecture of the proposed model.
\subsection{Input and Context Encoding}
The input representation for the document and its interaction with the answer are described as follows.
\label{sec:enc}
\paragraph{Input Encoding}
Our model accepts two inputs, an answer $A$ and the document $D$ that the answer belongs to, each of which is a sequence of words. The two sequences are indexed into a word embedding layer $W_{emb}$ and then passed into a shared Bidirectional LSTM layer \cite{sak2014long}:
\begin{align}
H^A = \text{BiLSTM}(\mathbf{W_{emb}}(A))\\
H^D = \text{BiLSTM}(\mathbf{W_{emb}}(D))
\end{align}
where $H^A \in \mathbb{R}^{l_A \times d}$ and $H^D \in \mathbb{R}^{l_D \times d}$ are the hidden representations of $A$ and $D$ respectively, and $d$ is the hidden size of the Bidirectional LSTM.
\paragraph{Context Encoding}
The answer's context in the document is identified using our multi-stage attention mechanism, as described below.
\begin{figure}[t!]
\includegraphics[width=1\linewidth]{./architecture.pdf}
\caption{The architecture of our model (with two-stage attention). For simplicity we assume that the document has 4 words and the answer has 3 words.}
\label{fig:architecture}
\end{figure}
\noindent \textbf{Initial Stage} (context with direct relation to answer):
We pass $H^D,H^A$ into an alignment layer. Firstly, we compute a soft attention affinity matrix between $H^D$ and $H^A$ as follows:
\begin{equation}
M_{ij}^{(1)} = \textbf{F}(h_{i}^{D})\:\textbf{F}(h_{j}^{A})^{\top} \label{align1}
\end{equation}
where $h_{i}^{D}$ is the representation of the $i^{th}$ word in the document and $h_{j}^{A}$ that of the $j^{th}$ word in the answer. $\textbf{F}(\cdot)$ is a standard nonlinear transformation function (i.e., $\textbf{F}(x) = \sigma(\textbf{W}x + \textbf{b})$, where $\sigma$ denotes the sigmoid function), and is shared between the document and answer in this stage. $M^{(1)} \in \mathbb{R}^{l_D \times l_A}$ is the soft matching matrix. Next, we apply a column-wise max pooling of $M^{(1)}$.
The key idea is to generate an attention vector:
\begin{align}
a^{(1)} = \text{softmax}(\max_{col}(M^{(1)}))
\end{align}
\noindent where $a^{(1)} \in \mathbb{R}^{l_D}$. Intuitively, each element $a_i^{(1)} \in a^{(1)}$ captures the degree of relatedness of the $i^{th}$ word in document $D$ to answer $A$ based on its maximum relevance to each word of the answer. To learn context-sensitive importance weights over the document, we then apply the attention vector to $H^D$:
\begin{align}
C^{(1)} = H^D \odot a^{(1)}
\end{align}
\noindent where $\odot$ denotes element-wise multiplication. $C^{(1)} \in \mathbb{R}^{l_D \times d}$ can be considered as the first attended contextual representation of the document, where the words directly related to the answer are amplified with high attention scores whilst the unrelated words are filtered out with low attention scores.\\
\noindent \textbf{Iterative Stage} (enhance the context with indirect relations): In this stage, we expand the context by collecting more words from the document that are related to the \textit{direct context} computed in the first stage. We achieve this by attending the contextual attention representation of the document obtained in stage 1 with the original document representation as follows:
\begin{align}
&M_{ij}^{(2)} = \textbf{F}(h_{i}^{D})\:\textbf{F}(C^{(1)}_{j})^{\top} \\
&a^{(2)} = \text{softmax}(\max_{\text{col}}(M^{(2)}))\\
&C^{(2)} = H^D \odot a^{(2)}
\end{align}
We can repeat the steps in this stage to enhance the context to the answer-related linkage level $k$. We denote the answer-focused context representation after $k$ stages as $C^{(k)}$. In our experiments, we train our models with a fixed value of $k$, which is tuned on the validation set.
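The two stages can be summarized in a few lines of hypothetical NumPy code; \texttt{W} and \texttt{b} below are the parameters of the shared transformation $\textbf{F}$, and the sketch omits all training-time details:
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multi_stage_context(H_D, H_A, W, b, k=2):
    """k-stage attended context representation C^(k).

    H_D: (l_D, d) document states; H_A: (l_A, d) answer states.
    F(x) = sigmoid(W x + b) is shared within a stage.
    """
    F = lambda H: sigmoid(H @ W.T + b)
    C = H_A                                    # stage 1 attends D against A
    for _ in range(k):
        M = F(H_D) @ F(C).T                    # soft affinity matrix
        m = M.max(axis=1)                      # column-wise max pooling
        a = np.exp(m - m.max()); a /= a.sum()  # softmax over document words
        C = H_D * a[:, None]                   # attended document rep.
        # later stages attend D against the previous attended representation
    return C, a
\end{verbatim}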
\paragraph{Answer Masking} Due to the enriched information in the context representation, it is essential for the model to know the position of the answer so that: (1) it can generate a question that is coherent with the answer, and (2) it does not include the exact answer in the question. We achieve this by masking the word representation at the position of the answer in the context representation $C^{(k)}$ with a special masking vector:
\begin{align}
C^{\text{final}} = Mask(C^{(k)})
\end{align}
\noindent $C^{\text{final}} \in \mathbb{R}^{l_D \times d}$ can be considered as the final contextual attention representation of the document and will be used as the input to the decoder.
\subsection{Decoding with Pointer Generator Network}
Using our context-rich input representation $C^{\text{final}}$ computed previously, we move forward to the question generation. Our decoding framework is inspired by the pointer-generator network \cite{pointer-generator}. The decoder is a BiLSTM which, at time-step $t$, takes as input the word embedding of the previous time-step's output, $\mathbf{W_e}(y^{t-1})$, and the attended input representation $r^{t}$ (described later in Equation \eqref{eq:individual_context}) to get the decoder state $h^t$:
\begin{align}
h^t = BiLSTM([r^t, \mathbf{W_e}(y^{t-1})], h^{t-1})
\label{eq:decoder}
\end{align}
The decoder state is then used to generate the next word, where words can either be copied from the input or generated by selecting from a fixed vocabulary:
\begin{align}
P_\text{vocab} = \text{softmax}(\mathbf{V}^{\top}[h^t,r^t])
\label{eq:fixed_vocabulary}
\end{align}
The \textit{generation probability} $p_\text{gen} \in [0,1]$ at time-step $t$ depends on the context vector $r^t$, the decoder state $h^t$ and the decoder input $x^t = [r^t, \mathbf{W_e}(y^{t-1})]$:
\begin{align}
p_\text{gen} = \sigma(\mathbf{w_{r}}r^{t} + \mathbf{w_{x}}x^{t} + \mathbf{w_{h}}h^{t})
\label{eq:generate_copy}
\end{align}
\noindent where $\sigma$ is the sigmoid function. This gating probability $p_\text{gen}$ is used to evaluate the probability of emitting a word $w$ as follows:
\begin{align}
P(w) &= p_\text{gen}P_\text{vocab}(w) + (1-p_\text{gen})\sum_{i:w_{i}=w} a_{i}^{t}
\label{eq:vocabulary}
\end{align}
\noindent where $\sum_{i:w_{i}=w} a_{i}^{t}$ denotes the probability of word $w$ being copied from the input by the decoder:
\begin{align}
e_i^{t} &= \mathbf{u}^{\top}\tanh(C^{\text{final}}_i + h^{t-1}) \\
a^t &= \text{softmax}(e^t)
\end{align}
Unlike traditional sequence-to-sequence models, our input $C^{\text{final}}$ is already weighted via the answer-level attention. This weighting is reflected directly in the final generation via the copy mechanism through $a^t$, and is also used to evaluate the input context representation $r^t$:
\begin{equation}
r^t = \sum_{i}a_i^{t}{C^{\text{final}}_i}
\label{eq:individual_context}
\end{equation}
Finally, the word output at time step $t$, $y^t$ is identified as:
\begin{align}
y^{t} &= \argmax_w P(w)
\end{align}
\noindent
The model is trained in an end-to-end framework to maximize the probability of generating the target sequence $y^1,...,y^{l_Q}$. At each time step $t$, the probability of predicting $y^t$ is optimized using cross-entropy over the probability of words from the entire vocabulary (fixed and document words). Once the model is trained, we use beam search for inference during decoding. The beam search is parameterised by the beam width, i.e. the number of candidate paths kept at each step.
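The copy-generate step can be sketched as follows (hypothetical NumPy code; parameter names follow the equations above, \texttt{rnn\_dec} is a stand-in callable, and \texttt{doc\_ids} maps document positions to vocabulary ids):
\begin{verbatim}
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def copy_generate_step(C_final, h_prev, y_prev_emb, doc_ids,
                       rnn_dec, u, V_out, w_r, w_x, w_h):
    """Distribution over the next word (Eqs. for a^t, p_gen, P(w)).

    C_final: (l_D, d) final contextual document representation.
    doc_ids: (l_D,) integer vocabulary index of each document word.
    """
    e = np.tanh(C_final + h_prev) @ u          # copy attention energies e^t
    a = softmax(e)                             # attention over positions a^t
    r = a @ C_final                            # context vector r^t
    x = np.concatenate([r, y_prev_emb])        # decoder input x^t
    h = rnn_dec(x, h_prev)                     # decoder state h^t
    P_vocab = softmax(V_out @ np.concatenate([h, r]))
    p_gen = sigmoid(w_r @ r + w_x @ x + w_h @ h)
    P = p_gen * P_vocab                        # "generate" part
    np.add.at(P, doc_ids, (1.0 - p_gen) * a)   # scatter-add the "copy" part
    return P, h                                # argmax or beam search over P
\end{verbatim}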
\section{Experimental Setup}
In this section we describe the experimental setting to study the proficiency of our proposed model.
\subsection{Datasets}
We evaluate our model on 3 question answering datasets: SQuAD \cite{rajpurkar2016}, MS MARCO \cite{nguyen2016} and NewsQA \cite{trischler2016}. These form a comprehensive set of datasets to evaluate question generation.
\vspace{-3mm}
\bigskip
\noindent
\textbf{SQuAD.} SQuAD is a large scale reading comprehension dataset containing close to 100k questions posed by crowd-workers on a set of Wikipedia articles, where the answer is a span in the article. The dataset for our question generation task is constructed from the training and development set of the accessible parts of SQuAD. To be able to directly compare with other reported results, we consider the two following splits:
\begin{itemize}
\item Split1: similar to \cite{zhao2018}, we keep the SQuAD train set and randomly split the SQuAD dev set into our dev and test set with the ratio 1:1. The split is done at sentence level.
\item Split2: similar to \cite{Du2017}, we randomly split the original SQuAD train set into train and dev sets with the ratio 9:1, and keep the SQuAD dev set as our test set. The split is done at article level.
\end{itemize}
\noindent
\textbf{MS MARCO.} MS MARCO is a human-developed question answering dataset derived from a million Bing search queries. Each query is associated with paragraphs from multiple documents returned by Bing, and the dataset provides the list of ground truth answers from these paragraphs. Similar to \cite{zhao2018}, we extract a subset of MS MARCO where the answers are sub-spans within the paragraphs, and then randomly split the original train set into train (51k) and dev (6k) sets. We use the 7k questions from the original dev set as our test set.
\vspace{-2mm}
\bigskip
\noindent
\textbf{NewsQA.} NewsQA is a human-generated dataset based on CNN news articles. Crowd-workers are prompted to ask questions from the headlines of the articles, and the answers are found by other workers in the article contents.
In our experiment, we select the questions in NewsQA where the answers are sub-spans within the articles. As a result, we obtain a dataset with 76k questions for the train set, and 4k questions for each of the dev and test sets.
\vspace{-2mm}
\bigskip
\noindent
Table \ref{tab:datasets} gives the details of the three datasets used in our experiments.
\vspace{-2mm}
\begin{table}[h]
\centering
\begin{small}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Dataset & Train & Dev & Test & $l_D$ & $l_Q$ & $l_A$ \\ \hline
SQuAD-1 & 87,488 & 5,267 & 5,272 & 126 & 11 & 3\\
SQuAD-2 & 77,739 & 9,749 & 10,540 & 127 & 11 & 3 \\
MS MARCO & 51,000 & 6,000 & 7,000 & 60 & 6 & 15\\
NewsQA & 76,560 & 4,341 & 4,292 & 583 & 8 & 5\\
\hline
\end{tabular}
\vspace{-1mm}
\end{small}
\caption{Description of the evaluation datasets. $l_D$, $l_Q$ and $l_A$ stand for the average length of the document, question and answer respectively.}
\vspace{-2mm}
\label{tab:datasets}
\end{table}
\begin{table*}[ht!]
\centering
\begin{tabular}{|l||c|c|c|c||c|c|}
\hline
Model & Bleu-1 & Bleu-2 & Bleu-3 & Bleu-4 & Meteor & Rouge-L \\ \hline
PCFG-Trans & 28.77 & 17.81 & 12.64 & 9.47 & 18.97 & 31.68 \\
SeqCopyNet & - & - & - & 13.02 & - & 44.00 \\
seq2seq+z+c+GAN & 44.42 & 26.03 & 17.60 & 13.36 & 17.70 & 40.42 \\
NQG++ & 42.36 & 26.33 & 18.46 & 13.51 & 18.18 & 41.60 \\
MPQG & - & - & - & 13.91 & - & - \\
APM & 43.02 & 28.14 & 20.51 & 15.64 & - & - \\
ASs2s & - & - & - & 16.17 & - & - \\
S2sa-at-mp-gsa & 45.69 & 30.25 & 22.16 & 16.85 & 20.62 & 44.99 \\
CGC-QG & 46.58 & 30.90 & 22.82 & 17.55 & 21.24 & 44.53 \\ \hline
Our model & \textbf{46.60} & \textbf{31.94} & \textbf{23.44} & \textbf{17.76} & \textbf{21.56} & \textbf{45.89} \\ \hline
\end{tabular}
\vspace{-2mm}
\caption{Results in question generation on SQuAD split1}
\label{tab:split1}
\end{table*}
\begin{table*}[ht!]
\centering
\begin{tabular}{|l||c|c|c|c||c|c|}
\hline
Model & Bleu-1 & Bleu-2 & Bleu-3 & Bleu-4 & Meteor & Rouge-L \\ \hline
LTA & 43.09 & 25.96 & 17.50 & 12.28 & 16.62 & 39.75 \\
MPQG & - & - & - & 13.98 & 18.77 & 42.72 \\
CorefNQG & - & - & 20.90 & 15.16 & 19.12 & - \\
ASs2s & - & - & - & 16.20 & 19.92 & 43.96 \\
S2sa-at-mp-gsa & 45.07 & 29.58 & 21.60 & 16.38 & 20.25 & 44.48 \\ \hline
Our model & \textbf{45.13} & \textbf{30.44} & \textbf{23.40} & \textbf{17.09} & \textbf{21.25} & \textbf{45.81} \\ \hline
\end{tabular}
\vspace{-2mm}
\caption{Results in question generation on SQuAD split2}
\label{tab:split2}
\vspace{-4mm}
\end{table*}
\subsection{Implementation Details}
We use a one-layer Bidirectional LSTM with hidden dimension size of 512 for the encoder and decoder. Our entire model is trained end-to-end, with batch size 64, a maximum of 200k steps, and the Adam optimizer with a learning rate of 0.001 and L2 regularization set to $10^{-6}$. We initialize our word embeddings with frozen pre-trained GloVe vectors \cite{Pennington2014}. Text is lowercased and tokenized with NLTK. We tune the number of attention stages used in the encoder over \{1, 2, 3\} on the development set. During decoding, we use beam search with a beam size of 10, and stop decoding when every beam in the stack generates the $\langle$EOS$\rangle$ token.
\subsection{Evaluation}
Most of the prior studies evaluate model performance against target questions using automatic metrics. In order to have an empirical comparison, we too use Bleu-1, Bleu-2, Bleu-3, Bleu-4 \cite{Papineni2002}, METEOR \cite{Denkowski2014} and ROUGE-L \cite{Lin2004} to evaluate the question generation methods. Bleu measures the average n-gram precision against a set of reference sentences. METEOR is a recall-oriented metric used to calculate the similarity between generations and references. ROUGE-L evaluates the longest-common-subsequence recall of the generated sentences compared to references. A question structurally and syntactically similar to the human question would score high on these metrics, indicating relevance to the document and answer.
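For example, cumulative Bleu-$n$ can be computed with NLTK as below (an illustrative toy pair, not drawn from our data; very short sentences typically require smoothing in practice):
\begin{verbatim}
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "what year was the unit dissolved ?".split()
candidate = "when was the unit dissolved ?".split()
smooth = SmoothingFunction().method1
for n in range(1, 5):                     # Bleu-1 through Bleu-4
    weights = tuple([1.0 / n] * n)        # cumulative n-gram weights
    score = sentence_bleu([reference], candidate,
                          weights=weights, smoothing_function=smooth)
    print(f"Bleu-{n}: {score:.3f}")
\end{verbatim}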
In order to have a more complete evaluation, we also report human evaluation results, where annotators evaluate the quality of questions generated on two important parameters: naturalness (grammar) and difficulty (in answering the question) (Section 6.2).
\begin{table*}[ht!]
\centering
\begin{tabular}{|l||c|c|c|c||c|c|}
\hline
Model & Bleu-1 & Bleu-2 & Bleu-3 & Bleu-4 & Meteor & Rouge-L \\ \hline
LTA & - & - & - &10.46 & - & - \\
QG+QA & - & - & - & 11.46 & - & - \\
S2sa-at-mp-gsa & - & - & - & 17.24 & - & - \\ \hline
Our model & \textbf{41.43} & \textbf{29.97} & \textbf{23.01} & \textbf{18.25} & \textbf{19.43} & \textbf{42.77} \\ \hline
\end{tabular}
\caption{Results in question generation on MS MARCO}
\label{tab:ms_macro}
\end{table*}
\begin{table*}[ht!]
\centering
\begin{tabular}{|l||c|c|c|c||c|c|}
\hline
Model & Bleu-1 & Bleu-2 & Bleu-3 & Bleu-4 & Meteor & Rouge-L \\ \hline
PCFG-Trans & 16.90 & 7.94 & 4.72 & 3.08 & 13.74 & 23.78 \\
MPQG & 35.70 & 17.16 & 9.64 & 5.65 & 14.13 & 39.85 \\
NQG++ & 40.33 & 22.47 & 14.83 & 9.94 & 16.72 & 42.25 \\
CGC-QG & 40.45 & 23.52 & 15.68 & 11.06 & 17.43 & 43.16 \\ \hline
Our model & \textbf{42.54} & \textbf{26.14} & \textbf{17.30} & \textbf{12.36} & \textbf{19.04} & \textbf{44.05} \\ \hline
\end{tabular}
\caption{Results in question generation on NewsQA}
\label{tab:news_qa}
\end{table*}
\subsection{Baselines}
As baselines, we compare our proposed model against several prior works on question generation. These include:\vspace{-3mm}
\begin{itemize}
\itemsep-0.2em
\item \textbf{PCFG-Trans} \cite{Heilman2011}: a rule-based system that generates a question based on a given answer word span.
\item \textbf{LTA} \cite{Du2017}: the seminal Seq2seq model for question generation.
\item \textbf{ASs2s} \cite{kim2018}: a Seq2Seq model that learns to identify which interrogative word should be used by replacing the answer in the original passage with a special token.
\item \textbf{MPQG} \cite{Song2018}: a Seq2Seq model that matches the answer with the passage before generating the question.
\item \textbf{QG+QA} \cite{duan2017}: a model that combines supervised and reinforcement learning for question generation.
\item \textbf{NQG++} \cite{zhou2017}: a Seq2Seq model with a feature-rich encoder to encode answer position, POS and NER tag information.
\item \textbf{APM} \cite{sun2018}: a model that incorporates the relative distance between the context words and answer when generating the question.
\item \textbf{S2sa-at-mp-gsa} \cite{zhao2018}: a Seq2Seq model that uses gated self-attention and a maxout-pointer mechanism to encode the context of the question.
\item \textbf{SeqCopyNet} \cite{zhou2018_seq}: a Seq2Seq model that uses the copying mechanism to copy not only a single word but a sequence from the input sentence.
\item \textbf{Seq2seq+z+c+GAN} \cite{yao2018}: a GAN-based model that captures diversity and learns representations using the observed variables.
\item \textbf{CorefNQG} \cite{du2018}: a Seq2Seq model that utilizes the coreference information to link the contexts.
\item \textbf{CGC-QG} \cite{liu2019}: a Seq2Seq model that learns to make decisions on which words to generate and to copy using rich syntactic features.
\end{itemize}
\begin{comment}
\bigskip
\noindent
\textbf{Du}: This is the first large scale data driven model. Here the paragraph and question are passed separately to the generate the question. There is no explicit mechanism to focus on this large input while generating a question.
\bigskip
\noindent
\textbf{Yao}:Here they first compute answer encoded document representation by encoding the answer in the paragraph. Then, using self attention, a self-matched representation is presented to the encoder along with the original answer encoded paragraph representation. This is fed to the decoder for final output generation.
\bigskip
\noindent
\textbf{Liu}: Construct a clue word predictor, by computing a dependency tree over the document and running it through a graph convolution. This clue-word predictor, along with other syntactic features is fed in to the model to generate a question by deciding to either generate or copy a word from the passage, guided by these features.
\bigskip
\noindent
\textbf{Masking}: We study the impact of including the masking the answer in the paragraph representation.
\bigskip
\noindent
\textbf{Depth}: We study the impact of varying the depth of the biattention recursion on the generation output.
\end{comment}
\section{Results and Analysis}
In this section, we discuss the experimental results and some ablation studies of our proposed model.
\subsection{Comparison with Baseline Models}
We present the question generation performance of the baseline models and our model on the three QA datasets in Tables \ref{tab:split1}, \ref{tab:split2}, \ref{tab:ms_macro} and \ref{tab:news_qa}\footnote{For most baselines, we do not have access to their implementations. Hence, we present results only for the datasets that they report on in their papers.}. We find that our model consistently outperforms all other baselines and sets a new state-of-the-art on all datasets and across different splits.
For SQuAD split-1, we achieve an average absolute improvement of 0.2 in Bleu-4, 0.3 in Meteor and 1.3 points in Rouge-L score compared to the best previously reported result.\footnote{We take 5 random splits and report the average across the splits. The lowest performance of the 5 runs also exceeds the state-of-the-art in this setting. Previous methods take an equal random split of the development set into dev/test sets, which can lead to inconsistencies in comparisons.} For SQuAD split-2, we achieve an even higher average absolute improvement of 0.7, 1.0 and 1.4 points in Bleu-4, Meteor and Rouge-L scores respectively, compared to \textit{S2sa-at-mp-gsa}, the best previous model on the dataset and also a document-level question generation model. This shows that our model can identify better answer-related context for question generation than other document-level methods. On the MS MARCO dataset, where the ground truth questions are more natural, we achieve an absolute improvement of 1.0 in Bleu-4 score compared to the best previously reported result.
On the NewsQA dataset, which is the hardest dataset since the input documents are very long, our overall performance is still promising. Our model outperforms the CGC-QG model by average absolute scores of 1.3 Bleu-4, 1.6 Meteor, and 0.9 Rouge-L, again demonstrating that exploiting the broader context can help the question generation system better match humans at the task.
\subsection{Human Evaluation}
To measure the quality of questions generated by our system, we conduct a human evaluation. Most of the previous work, except the LTA system \cite{Du2017}, does not conduct any human evaluation, and for most of the competing methods, we do not have the code to reproduce the outputs. Hence, we conduct human evaluation using the exact same settings and metrics as \cite{Du2017} for a fair comparison. Specifically, we consider two criteria in human evaluation: (1) Naturalness, which indicates grammaticality and fluency; and (2) Difficulty, which measures the syntactic divergence and the reasoning needed to answer the question. We randomly sample 100 sentence-question pairs from our SQuAD experimental outputs. We then ask four professional English speakers to rate the pairs in terms of the above criteria on a 1$-$5 scale (5 for the best). The experimental results are given in Table \ref{tab:human_evaluation}.
\begin{table}[h]
\centering
\begin{small}
\begin{tabular}{|p{3.5cm}|p{1.5cm}|p{1.5cm}|}
\hline
& Naturalness & Difficulty \\ \hline
LTA & 3.36 & 3.03\\
Our model & 3.68 & \textbf{3.27} \\ \hline
Human generated questions & \textbf{4.06} & 2.65\\
\hline
\end{tabular}
\end{small}
\caption{Human evaluation results for question
generation. Naturalness and difficulty are rated
on a 1$-$5 scale (5 for the best).}
\label{tab:human_evaluation}
\end{table}
The inter-rater agreement of Krippendorff's Alpha between human evaluations is 0.21. The results imply that our model can generate questions of better quality than the LTA system. Our system tends to generate difficult questions owing to the fact that it gathers context from the whole document rather than from just one or two sentences.
\subsection{Ablation Study}
In this section, we study the impact of (1) The proposed attention mechanism in the encoder; (2) The number of attention stages used in that mechanism; and (3) The masking technique used for the encoder.
\begin{table*}[h!]
\centering
\begin{tabular}{|p{5.0cm}||p{1.1cm}|p{1.1cm}|p{1.5cm}|}
\hline
Model & Bleu-4 & Meteor & Rouge-L \\ \hline
Original (2-stage attention) & \textbf{17.76} & \textbf{21.56} & \textbf{45.89} \\
~~~ - without attention & 3.06 & 10.83 & 28.75\\
~~~ - without masking & 5.19 & 13.08 & 31.14\\
~~~ - with 1-stage attention & 14.52 & 18.28 & 40.10 \\
~~~ - with 3-stage attention & 12.87 & 16.05 & 38.33 \\
\hline
\end{tabular}
\caption{Ablation study on SQuAD split 1.}
\label{tab:ablation}
\end{table*}
\bigskip
\noindent
\textbf{Impact of using encoder attention~} In this ablation, we remove the attention mechanism in the encoder and just pass the vanilla document representation to the decoder. As shown in Table \ref{tab:ablation}, without the attention mechanism, the performance drops significantly (more than 14 Bleu points). We hypothesize that without attention, the model lacks the capability to identify the important parts of the document and hence generates questions unrelated to the target answer.
\bigskip
\noindent
\textbf{Impact of number of attention stages~} As shown in Table \ref{tab:ablation}, with an increase in the number of attention stages from 1 to 2, the performance of the model improves significantly, with an increment of more than 3 Bleu-4 points.
To gain a deeper understanding of the impact of the number of attention stages, we calculate, for the words in the document that occur in the ground truth question, their total attention score at the end of the input attention layer, as shown in Figure \ref{fig:score}. For 1-stage and 2-stage attention, the total attention scores of the question words to be copied from the document are 0.43 and 0.52 respectively, demonstrating that on the SQuAD dataset, the 2-stage attention covers more of the question words in a focused manner. An example of this effect can be seen in Figure \ref{fig:density}. The extra stage clearly helps in gathering more relevant context to generate a question closer to the ground truth.
However, on further increasing the number of attention stages to 3, we observe that the quality of the generated questions deteriorates. This can be attributed to the fact that for most of the questions in SQuAD, 3-stage attention leads to a very cloudy context, where several words get covered but with diluted attention. The coverage of 3-stage attention in Figure \ref{fig:score} shows this clearly: its coverage of the ground-truth question words is lower than even that of 1-stage attention, which explains its poor question generation quality.
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\linewidth]{./attention_scores.png}
\caption{Average total attention score of words in the document that occurred in the ground truth question when using different attention stages (SQuAD split 1).}
\label{fig:score}
\vspace{2mm}
\end{figure}
\begin{figure}[t!]
\includegraphics[width=1.0\linewidth]{./attention3.png}
\caption{Qualitative analysis of attention vector. The intensity of
the color (red) denotes the strength of the attention weights.}
\label{fig:density}
\end{figure}
\bigskip
\noindent
\textbf{Impact of masking~} While attending to the answer's context and the related sentences is crucial, we find that it is imperative to mask out the answer before computing the input representation. This is demonstrated by the experimental results in Table \ref{tab:ablation}, where applying the masking increases the Bleu-4 score by more than 12 points.
\subsection{Case Study}
In Figure \ref{fig:example}, we present examples where the document-level information obtained from our proposed multi-stage attention mechanism is needed to generate the correct questions.
In example 1, the two-stage attention model is able to identify the phrase \textit{``this unit''} as referring to \textit{``abc motion pictures''}, which lies outside the sentence containing the answer.
In example 2, the two semantically related phrases \textit{``lower incomes''} and \textit{``lower wages''} in two different sentences are successfully linked by our two-stage attention model to generate the correct question.
In example 3, the two-stage attention model is able to link two different sentences containing the same word (\textit{``french''}) and semantic-related words (\textit{``bible''} and \textit{``scriptures''}), forming relevant context for generating the expected question.
\section{Conclusion}
In this paper, we proposed a novel document-level approach for question generation that uses a multi-stage attention mechanism on the document and answer representations to extend the relevant context. We demonstrate that taking additional attention steps helps learn a more relevant context, leading to better quality generated questions. We evaluate our method on three QA datasets -- SQuAD, MS MARCO and NewsQA -- and set new state-of-the-art results in question generation on all of them.
\section{Introduction}
Demographic, spatial or genetic structures affect genetic diversity because they determine gene flow between lineages, relationships between individuals, and coalescence rates \citep{charlesworthetal03}. In turn, genetic polymorphism within and between taxa is commonly used for estimating population structures \citep{goldsteinandchikhi, mulleretal} or demographic changes \citep{beichmanetal}, to infer population history and migration patterns, or to search for genes under selection \citep{stephan}. These methods are mostly based either on the site frequency spectrum, identity by state or descent, or on summary statistics in an Approximate Bayesian Computation (ABC) framework \citep{beaumontzhangbalding}.\\
Statistical testing and model selection are generally performed under simplifying assumptions which allow computation of quantities such as the likelihood of a model, in particular under neutrality. For instance, under the Wright-Fisher model, the population size is assumed deterministic: it is known at any given time and independent of the composition of the population, \textit{i.e.} it is assumed that the mechanisms underlying the variations of the population size are extrinsic and without noise. Individuals thus compete for space, but the carrying capacity of the environment does not change because of the evolution of the population itself, or because of extrinsic or intrinsic stochasticity. In birth-death models, population size can vary, but populations can grow indefinitely because individuals do not interact. In addition, the Wright-Fisher and birth-death models are most often supposed neutral when used for demographic inference, \textit{i.e.} the reproduction and survival rates do not depend on the genetic lineage \cite[but see a recent birth-death model without interactions where rates can depend on mutations,][]{rasmussenandstadler}. \\
Yet, the assumptions of neutrality, extrinsic control of population size or non-interacting individuals are certainly often violated. For instance, genealogies of the seasonal influenza virus show important departures from neutrality, which might suggest that selection and interactions between lineages are important enough to significantly affect evolution and the shapes of phylogenetic trees \citep{bedfordetal,strelkowalassing}. Reproduction rates and carrying capacities have also been shown to depend on strains in the domesticated yeasts \citep{sporetal}, and the ecological literature contains many cases where competitive interactions vary among strains or species \citep{gallieni2017}. Finally, not explicitly including competition in spatially structured populations leads to biological inconsistencies in population genetics models \citep{felsenstein75}. Developing models and inference methods which relax such hypotheses is thus a contemporaneous challenge, in order to improve our knowledge of the history and ecological features of species and populations. As emphasized by \cite{frostpybusgogviboudbonhoefferbedford}, this challenge is particularly important for the analysis of phylodynamics in clonal species such as viruses.\\
Some of these assumptions have already been relaxed. For instance, \cite{rasmussenandstadler} developed a model where reproductive and death rates can differ between lineages, which can emerge because of spontaneous mutations. They applied their method to Ebola and influenza viruses in order to estimate the fitness effects of mutations from phylodynamics. Indeed, variation of death and birth rates between lineages can affect virus phylogenies, which can be detected and used to infer the effect of mutations. However, they supposed no interaction between lineages, discarding a possible effect of competition between virus strains. \\
In this paper, we present a model and an inference method which allow the relaxation of several of these assumptions. First, in Section \ref{sec:micro}, we recall the stochastic process describing the eco-evolution of a structured population with ecological feedbacks \cite[introduced in][]{billiardferrieremeleardtran}. This model takes into account: i) A trait structure that can affect birth, death and competitive rates. The traits, which evolve because of mutations and selection, are seen as proxies for the species, taxa or strains; ii) Explicit competitive interactions between and within lineages; iii) Varying population sizes depending on the genetic composition of the population, \textit{i.e.} the carrying capacity depends on the ecological properties of existing strains (their birth, death and competitive rates). The model assumes that reproduction is asexual, that mutations affecting fitness are rare, and that neutral mutations follow an intermediate timescale between reproduction and death rates (the ecological timescale) and the rate at which mutations affecting fitness appear (the evolutionary timescale). Second, in Section \ref{sec:def-FBcoal}, a new forward-backward coalescent process is proposed to describe the phylogenies in such a population. The forward step accounts for interactions, demography and evolution of trait structures, defining the skeleton on which the phylogenies of sampled individuals can be reconstructed in the backward step. Phylogenies of structured populations have already been modeled in nested coalescent models, \textit{e.g.} \citep{blancasbenitezduchampslambertsirijegousse,blancasbenitezguflerkliemtranwakolbinger,duchamps,verduausterlitzetal}, but in our case interactions within and between lineages, ecological feedbacks between selection and population size, and multiple coalescence mergers are taken into account. Contrary to the $\Lambda$-coalescent models proposed in the literature \citep{donnellykurtz_99,pitman,sagitov}, multiple mergers here are not due to sweepstakes reproductive successes but appear as a consequence of natural selection via mutation-competition and timescales. Third, in Section \ref{sec:ABC}, we develop an ABC framework in order to estimate the parameters of the model from genetic diversity data. We show how ecological parameters such as individual birth and death rates, and competitive abilities, can be estimated. Finally, we apply our inferential procedure, on the one hand, to simulated data from an eco-evolutionary toy model and, on the other hand, to genetic data from Y-chromosomes sampled in Central Asian human populations \citep{chaixetal,heyeretal} in order to test whether different social organizations can be associated with differences in fertility.
\section{The forward-backward coalescent model}\label{sec:fundations}
In the current work, we extend the population model developed in \cite{billiardferrieremeleardtran} \citep[following][]{metzgeritzmeszenajacobsheerwaarden,champagnat06,champagnatmeleard} to include phylogenies, and we develop a statistical ABC procedure that we apply to simulated and real datasets. The eco-evolution of a structured population with ecological feedbacks is described by a stochastic process. The population is structured by traits, considered as proxies for species, taxa or strains. These traits can affect birth, death and competition rates, and new traits are generated by mutations. Explicit competitive interactions are modeled between individuals of the population, with intensities depending on the traits, inducing population sizes that vary with the genetic composition of the population. Also, a marker structure is added. Markers are assumed neutral in the sense that they have no impact on fecundity, survival or competition. They are introduced in the model to measure the neutral diversity and to allow the reconstruction of the phylogenies. The model assumes asexual reproduction and complete linkage between traits and markers, and that the population evolves following three timescales. First, the ecological timescale: birth and death events occur at a fast rate. Second, marker mutations arise at a slightly slower rate than the ecological timescale. Finally, mutations on the trait under selection occur at the slowest timescale. This reflects, for instance, that a large proportion of a genome is not composed of traits under selection. This happens for example in the influenza virus, which shows a large diversity within seasons despite a very rapid evolution and adaptation \citep{neherbedford}.
\medskip\noindent Before precisely describing the application of the model to infer demographic and genetic parameters within an ABC framework, we summarize hereafter the main features and outcomes of the model.
\subsection{Genetic diversity in an eco-evolutionary dynamics with three timescales: The substitution Fleming-Viot process (SFVP)}\label{sec:micro}
We assume a population of clonal individuals characterized, on the one hand, by a trait $x\in {\cal X}\subset \mathbb{R}^d$, which affects the demographic processes such as birth, death and competitive interactions between individuals and, on the other hand, by a vector of genetic markers $u\in {\cal U}\subset \mathbb{R}^q$, supposed neutral (\emph{i.e.} $u$ does not affect the demographic processes). Individuals with trait $x$ give birth at rate $b(x)$ and $d(x)$ is their intrinsic death rate. The competitive interactions between individuals with traits $x$ and $y$ add an effect $C(x,y)$ on the individual death rate. When the population is large, the evolution of the population can be decomposed into a succession of invasions of favorable mutations on the trait $x$: because ecological processes are very fast, the population jumps from one state to another. The neutral marker also evolves between each adaptive jump, at a faster timescale that is compensated by mutations of small effect. Since the ecological parameters change after each adaptive jump on trait $x$ (the birth rate, death rate and the population size change), the evolution of the neutral marker also changes. Hence, even if the marker is neutral, its own evolution depends on the state of the population at a given time, especially on the competitive interactions $C(x,y)$ between individuals with traits $x$ and $y$. Overall, the joint eco-evolutionary dynamics of the neutral marker and the selected trait can be approximated by the so-called Substitution Fleming-Viot Process \cite[SFVP,][see Appendix \ref{sec:maths} for details]{billiardferrieremeleardtran}.\\
\textit{Distribution of the trait $x$ between two adaptive jumps.} At the ecological timescale, when the population is large, $p$ strains with traits $x_1,\dots x_p$ can coexist. Between two adaptive jumps, the trait distribution in the population remains almost constant. Indeed, the sizes of the subpopulations can vary but are expected to stay close to their equilibria $\widehat{n}(x_1; x_1,\dots, x_p),\dots \widehat{n}(x_p; x_1,\dots, x_p),$ given by the following competitive Lotka-Volterra system of ordinary differential equations (ODE) that approximates the evolution at the ecological timescale:
\begin{align}
\frac{dn_t(x_j)}{dt}=\Big(b(x_j)-d(x_j)-\sum_{\ell=1}^p C(x_j,x_\ell)n_t(x_\ell)\Big)n_t(x_j),\ j\in \{1,\dots , p\}, \label{eq:lotka-volterra}
\end{align}
where $n_t(x)$ can be seen as the density of individuals of the strain with trait $x$.
The equilibrium $\widehat{n}(x_i ; x_1,\dots ,x_p)$ of the population of the strain with trait $x_i$ depends on the whole trait structure of the population which is in turn defined entirely by the set of traits present in the population (the arguments of $\widehat{n}$ given after the semicolon).\\
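To fix ideas, the following minimal sketch (in Python; not part of the original material) numerically integrates the competitive Lotka-Volterra system of Eq. \ref{eq:lotka-volterra} to approximate the equilibrium densities $\widehat{n}$. The rate functions and trait values are illustrative placeholders, not the parameters used in the paper.
\begin{verbatim}
# Sketch: equilibria of the competitive Lotka-Volterra ODE (Eq. lotka-volterra).
# All rates below are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp

b = lambda x: np.exp(-x**2 / 2.0)            # hypothetical birth rate
d = lambda x: 0.1                            # hypothetical intrinsic death rate
C = lambda x, y: 0.02 * np.exp(-(x - y)**2)  # hypothetical competition kernel

traits = np.array([-0.3, 0.0, 0.4])          # p = 3 coexisting strains

def lotka_volterra(t, n):
    growth = np.array([b(x) - d(x) for x in traits])
    comp = np.array([[C(xj, xl) for xl in traits] for xj in traits])
    return (growth - comp @ n) * n

sol = solve_ivp(lotka_volterra, (0.0, 500.0), y0=np.full(3, 1.0), rtol=1e-8)
n_hat = sol.y[:, -1]  # approximate equilibria \hat{n}(x_i; x_1, ..., x_p)
\end{verbatim}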
\textit{Change of the distribution of the trait $x$ during an adaptive jump.} At the timescale of trait mutations, in a population composed of $p$ strains with traits $x_1,\dots x_p$ and respective sizes $\widehat{n}(x_1 ; x_1,\dots, x_p), \dots, \widehat{n}(x_p ; x_1,\dots, x_p)$, when a mutation on trait $x_i$ occurs at time $t$, a new strain is introduced with trait $x_i+h$, where $h$ is drawn from a distribution $m(x_i,h)dh$ (mutations on trait $x$ are not necessarily small, \textit{i.e.} selection can be strong). Whether or not the mutant strain invades the population depends on its invasion fitness, defined by
\begin{equation}
f(y ; x_1,\dots, x_p)= b(y)-d(y)- \sum_{j=1}^p \widehat{n}(x_j ; x_1,\dots, x_p) C(y,x_j) \label{def:fitness}
\end{equation}
\citep{metzgeritzmeszenajacobsheerwaarden,champagnat06,champagnatferrieremeleard}. The mutant strain invades with probability $\frac{[f(x_i+h ; x_1,\dots, x_p)]_+}{b(x_i+h)}$, in which case the population jumps to a new state given by the solution of the Lotka-Volterra ODE system (Eq. \ref{eq:lotka-volterra}) updated with the introduction of the mutant strain, $(\widehat{n}(x_1 ; x_1,\dots, x_p, x_i+h),\dots \widehat{n}(x_i+h ; x_1, \dots, x_{p}, x_i+h))$. In the new equilibrium, some former traits $x_1,\dots ,x_p$ may be lost. The evolution of the trait can thus be described by a Polymorphic Evolution Sequence (PES), \textit{i.e.} the succession of the adaptive jumps of the population from one state to another \citep{champagnatmeleard2011}. For a visual abstract of the PES, see Fig. \ref{Fig:PES} in Appendix.
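As an illustration, and keeping the placeholder rate functions of the previous sketch, the invasion fitness of Eq. \ref{def:fitness} and the invasion probability $[f]_+/b$ can be computed as follows.
\begin{verbatim}
# Sketch: invasion fitness (Eq. def:fitness) and invasion probability of a
# mutant with trait y, given resident traits and equilibrium densities n_hat.
def invasion_fitness(y, traits, n_hat):
    return b(y) - d(y) - sum(nj * C(y, xj) for xj, nj in zip(traits, n_hat))

def invasion_probability(y, traits, n_hat):
    f = invasion_fitness(y, traits, n_hat)
    return max(f, 0.0) / b(y)  # [f]_+ / b(y)
\end{verbatim}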
\bigskip
\textit{Evolution of the neutral marker.}
When the mutant strain with trait $x=x_i+h$ invades the population, say at time 0, an adaptive jump occurs. Let us denote by $u$ the marker of the first mutant individual $(x,u)$. Initially, the distribution of the neutral marker within the strain with trait $x$ is thus concentrated on a single individual with marker $u$.
The evolution of the marker distribution within this strain is given by $F_{t}^{u}(x,dv)$, the distribution at time $t$ of the marker values within the strain with trait $x$ given the initial value $u$. This distribution changes with time depending on the assumed mutation kernel on the marker, on the birth and death rates of individuals with trait $x$, and on the competitive interactions $C(x,y)$ with all the other individuals of any trait value $y \in \{x_1,\dots, x_p, x_i+h\}$. Between two adaptive jumps, assuming small marker mutations but not necessarily small trait mutations, the evolution of the distribution $F_{t}^{u}(x,dv)$ with time is given by the following stochastic differential equation \cite[see][]{billiardferrieremeleardtran} (derivation details and a more general form are given in Appendix \ref{sec:maths})
\begin{equation}\label{eq:FV}
\int_{\mathcal{U}} \phi(v) F^{u}_{t}(x,dv) = \phi(u) + b(x) \int_{0}^t \bigg(\int_{\mathcal{U}} \Delta \phi(v) F^{u}_{s}(x,dv)\bigg) ds+M^x_{t}(\phi).
\end{equation}
The left side of the equation can be seen as the expectation of the distribution of the marker value at time $t$, where $\phi$ is a test function (supposed twice differentiable on $\mathcal{U}$). Different choices of the function $\phi$ provide descriptors of the distribution $F_{t}^u$ (for example, $\phi(v)=v$ gives the mean of the distribution). The right side of the equation describes the expected form of the distribution. The first term on the right side gives the initial condition: the first mutant with trait $x$ has marker value $u$, hence the initial condition for the distribution is $\phi(u)$. The second term on the right side integrates the changes of the distribution which are due only to mutations on the marker between time $0$ (the invasion time of $x$) and $t$. Since mutation occurs only at birth, the rate at which $F$ changes with mutation is proportional to the birth rate $b(x)$. Within the integral, $\Delta \phi(v)$ is the Laplacian of the function $\phi$, which gives the rate of change of $F$ in all the dimensions of the marker values (this depends on the assumptions made on the mutation kernel and can be generalized, see Appendix \ref{sec:maths}). The last term $M_t^x(\phi)$ on the right side gives the changes of $F$ which are due to the ecological processes, \textit{i.e.} the fluctuations due to the births and deaths of the individuals with trait $x$. $M_t^x(\phi)$ is a martingale: at each time $t$, it is a square integrable random variable with mean 0 and variance
\begin{align}\label{eq:crochet-PMB-FV}
\mbox{Var}(M^x_t(\phi))= \frac{2b(x)} {\widehat{n}(x ; x_1,\dots, x_p, x_i+h)}
\int_{0}^t \mathbb{E}\bigg[\bigg(\int_{\mathcal{U}} \phi^2(v) F^{u}_{s}(x,dv) - \Big(\int_{\mathcal{U}} \phi(v) F^{u}_{s}(x,dv)\Big)^2\bigg) \bigg] \ ds.
\end{align}
The fraction in the right hand side (r.h.s.) of Eq. \ref{eq:crochet-PMB-FV} corresponds to the demographic variance $\,2b(x)\,$ divided by the effective population size
\begin{equation}
\label{def:Ne}
N_e(x)=\widehat{n}(x ; x_1,\dots, x_p, x_i+h).
\end{equation}The effective population size, which partially governs the evolution of the diversity at the neutral marker, depends on the trait value $x$, but also on the whole trait distribution $x_1,\dots, x_p, x_i+h$. In particular, it means that the variance of the neutral diversity within the strain with trait $x$ depends on the competitive interactions of the latter with all the other strains.
\subsection{Genealogies in a forward-backward coalescent with competitive interactions}\label{sec:def-FBcoal}
Genealogies are piecewise-defined and constructed by dividing time into the intervals separating the adaptive jumps of the PES, following a forward-backward coalescent process. Since the evolution of trait $x$ depends on the current distribution of the traits in the population, the PES tree is constructed forward in time, where the successive adaptive jump times are denoted by $(T_k)_{k\in \{1,\dots J\}}$, with $T_0=0$ and $J$ the number of jumps that occurred before time $t$. \medskip\noindent During the PES, a subpopulation with trait $x_i$ has its own coalescence rate on the markers, which depends on its reproductive rate $b(x_i)$ and on the distribution of the traits in the whole population (Eq. \ref{def:Ne}). Genealogies are thus expected to differ among strains and between the adaptive jumps of the PES. Between adaptive jumps, since under our assumptions the distribution of trait $x$ and the population size are fixed, within-strain genealogies can be constructed backward in time. Given the PES during the time interval $[T_k,T_{k+1})$ ($k\in \{0,\dots J-1\}$) and the trait distribution $\{x_1,\dots x_p\}$, the genealogy of the individuals within the strain with trait $x_i$ is obtained by simulating a Kingman coalescent with coalescence rate $\frac{2b(x_i)}{\widehat{n}\big(x_i ; x_1,\dots, x_p\big)}$ (Eq. \ref{eq:crochet-PMB-FV}). When an adaptive jump occurs at time $T_k$, all lineages in the subpopulation of the strain with trait $x_i$ instantaneously coalesce, because a single mutant is always at the origin of a new strain during the PES. Note that coalescence is instantaneous under the assumptions underlying the PES, \textit{i.e.} at the timescale governing the evolution of the trait, the time to fixation of the mutant trait is negligible. The allelic state at the marker is determined given the previously constructed genealogy, depending on the mutational model considered.
A more formal definition of the coalescent and associated proofs are given in App. \ref{sec:phylo}. A simulation algorithm for the construction of genealogies under our model is given in App. \ref{sec:simulations}.\\
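As a complement, the following sketch (with placeholder rate values; the actual algorithm is the one of App. \ref{sec:simulations}) illustrates the backward step within one strain: a Kingman coalescent at pairwise rate $2b(x_i)/\widehat{n}(x_i; x_1,\dots,x_p)$ run over one inter-jump interval, the lineages remaining at the jump time coalescing instantaneously there.
\begin{verbatim}
# Sketch: Kingman coalescent within one strain over [t_stop, t_start],
# run backward in time; `rate` stands for 2 b(x_i) / n_hat(x_i).
import numpy as np

rng = np.random.default_rng(0)

def coalesce_within_strain(n_lineages, rate, t_start, t_stop):
    t, k, merger_times = t_start, n_lineages, []
    while k > 1:
        total_rate = rate * k * (k - 1) / 2.0  # k choose 2 lineage pairs
        t -= rng.exponential(1.0 / total_rate)
        if t <= t_stop:             # adaptive jump reached: the k survivors
            return merger_times, k  # coalesce instantaneously at t_stop
        merger_times.append(t)      # a binary merger inside the interval
        k -= 1
    return merger_times, k

mergers, survivors = coalesce_within_strain(8, rate=0.003,
                                            t_start=100.0, t_stop=60.0)
\end{verbatim}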
\section{ABC inference in an eco-evolutionary framework}\label{sec:ABC}
We showed in the previous sections that the genetic structure of a sample of $n$ individuals can be related to the parameters of our eco-evolutionary model. We now aim at using this framework to infer genealogies and ecological and genetic parameters from genetic and/or phenotypic data sampled in a population at time $t$. In other words, given a dataset containing the genotype at the marker $u$ and the genotype or phenotype at the trait $x$ for the $n$ sampled individuals, we want to infer the parameters of the model: birth, death and competitive interaction rates, mutation rates, etc. Since we have only partial information on the population ($n$ individuals are sampled and possibly extinct lineages are unobserved), the likelihood of a model given the data has no tractable form. Given a possible genealogy of the $n$ individuals, an infinite number of continuous genealogical trees could be obtained from the model. The likelihood of each tree depends on the number and the traits of the different subpopulations (strains) during the history of the population, including the unobserved and extinct ones. Because summing over all possible unobserved data (number of unobserved and extinct lineages with their traits and adaptive jump times) is not feasible in practice, we have to make inference without likelihood computations.\\
An alternative to likelihood-based inference methods is given by Approximate Bayesian Computation (ABC) \citep{beaumontzhangbalding,beaumontcornuetmarinrobert}, which relies on repeated simulations of the forward-backward coalescent trees (Section \ref{sec:def-FBcoal}). In the following, we briefly give a general presentation of the application of the ABC method to our model. We then apply the method to simulations of a toy model (the Dieckmann-Doebeli model) and to real data (genetic data on microsatellites on the Y chromosomes of human populations from Central Asia, with their social and geographic structures).
\subsection{ABC estimation of the ecological parameters based on the genealogical tree}
The dataset, denoted $\mathbf{z}$, contains the genotype and/or phenotype on the trait $x$ and the marker $u$ for each of the $n$ sampled individuals. The trait $x$ can be geographic location, species or strain identity, size, color, genotype or anything that affects the ecological parameters and fitness. The marker $u$ can also be a genotypic or phenotypic measure, discrete or continuous, qualitative or quantitative, but with no effect on fitness (the marker is supposed neutral). Our goal is to use the dataset $\mathbf{z}$ to estimate the parameters of the model, denoted $\theta$ (in our case, birth and death rates, competition kernel, mutation probabilities and kernel), using an ABC approach. To do so, the following procedure is repeated a large number of times:
\begin{enumerate}
\item[$1^{st}$ step.] A parameter set $\theta_i$ is drawn in a prior distribution $\pi(d\theta)$;
\item[$2^{nd}$ step.] A PES and the nested neutral genealogies of the $n$ sampled individuals are simulated under the model associated with the parameters $\theta_i$;
\item[$3^{rd}$ step.] A set of summary statistics $S_i$ is computed from the data simulated under $\theta_i$, for each $i$.
\end{enumerate}
The posterior distribution of the model is then approximated by comparing, for each simulation $i$, the simulated summary statistics $S_i$ to the ones from the real dataset, and by computing for each parameter $\theta_i$ a weight $W_i$ that defines the approximated posterior distribution (see Formula \ref{def:posterior} in Appendix; a minimal sketch of this weighting step is given after the list below). Three categories of summary statistics have been used, each associated with a different aspect of the genealogical tree (the complete list of summary statistics is given in Appendix \ref{sec:summarystatistics}):
\begin{itemize}
\item[-] The trait distribution, describing the strain diversity and abundances (\textit{e.g.} the number of strains, the mean and variance of strain abundances, ...);
\item[-] The marker distribution in the sampled population, describing the neutral diversity within each sampled strain (\textit{e.g.} the M-index, $F_{st}$, Nei genetic distances, ...);
\item[-] The shape of the genealogy (\textit{e.g.} most recent common ancestor, length of external branches, number of cherries, ...).
\end{itemize}
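As announced above, here is a minimal sketch of the weighting step. The paper's exact weights are those of Formula \ref{def:posterior}; the Epanechnikov-kernel acceptance below is only a standard stand-in \citep{beaumontzhangbalding}, shown for concreteness.
\begin{verbatim}
# Sketch: turning distances between simulated and observed summary statistics
# into ABC weights (the exact weights W_i are given in Formula def:posterior).
import numpy as np

def abc_weights(S_sim, S_obs, quantile=0.01):
    # S_sim: (N, k) array of simulated statistics; S_obs: (k,) observed ones.
    scale = S_sim.std(axis=0) + 1e-12            # normalize each statistic
    dist = np.linalg.norm((S_sim - S_obs) / scale, axis=1)
    eps = np.quantile(dist, quantile)            # tolerance from a quantile
    u = dist / eps
    return np.where(u <= 1.0, 1.0 - u**2, 0.0)   # Epanechnikov kernel
\end{verbatim}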
Depending on the dataset and the information that is available for a given population, four scenarios can be encountered:
\begin{itemize}
\item[Scenario 1.] {\bf Complete information}: The evolutionary history of the trait and the genealogies, the population and subpopulation abundances, and the values of the sampled individuals on the trait $x$ and the marker $u$. This situation certainly never occurs, but it is a reference which allows one to evaluate the expected ABC estimation in a perfect situation where all information is available. This situation can also include cases where independent information can be added, such as fossil records;
\item[Scenario 2.] {\bf Population information}: Total population abundance, values of the trait $x$ and marker $u$ of the sampled individuals. The estimations given with those statistics represent the estimations one could expect with a complete knowledge of the present population;
\item[Scenario 3.] {\bf Sample information}: The number of sampled sub-populations, the values of the trait $x$ and the marker $u$ of the sampled individuals;
\item[Scenario 4.] {\bf Partial sample information}: Only the number of sampled sub-populations and the values of the marker $u$ of the sampled individuals.
\end{itemize}
The four situations will be compared regarding the quality of the ABC estimations of the model parameters.
\subsection{Application 1: Inference of the parameters in the Dieckmann-Doebeli model}\label{sec:Dieckmann-Doebeli}
In this section, we applied the ABC statistical procedure to the trait distributions and the phylogenies generated by a simple eco-evolutionary model \citep{roughgarden, dieckmanndoebeli, champagnatferrieremeleard}. The birth rate of an individual with trait $x$ is $b(x)=\exp(-x^2/2\sigma_b^2)$, the individual natural death rate is constant, $d(x)=d_c$, and the competition between two individuals with traits $x$ and $y$ is $C(x,y)=\eta_c\,\exp(-(x-y)^2/2\sigma^2_c)$, $\sigma_c>0$. The trait space is chosen to be $\mathcal{X}=[-1,1]$. The effect of a mutation on the trait $x$ is randomly drawn from a Gaussian mutation kernel with mean 0 and variance $\sigma_m^2$ (values outside $\mathcal{X}$ are excluded). The probability of mutation is $p$. The markers are assumed to be a vector of 10 microsatellites, each of them mutating at the same rate $q$. When a microsatellite mutates, its value increases or decreases by 1 with equal probability.\\
The distribution of the phylogenies depends on the parameter $\theta=(p,q,\sigma_b,\sigma_c,\sigma_m,d_c,\eta_c,t_{\sc{sim}})$, where $t_{\sc{sim}}$ is the duration of the PES ($t_{\sc{sim}}$ is not known \textit{a priori} and must be considered as a nuisance parameter).
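For concreteness, the model ingredients above can be written down as in the following sketch; the numerical values are placeholders, the actual parameter sets being drawn from the priors of App. \ref{sec:simulations}.
\begin{verbatim}
# Sketch of the Dieckmann-Doebeli ingredients; parameter values are placeholders.
import numpy as np

rng = np.random.default_rng(1)
sigma_b, sigma_c, sigma_m, eta_c, d_c = 0.9, 0.5, 0.1, 0.02, 0.05

b = lambda x: np.exp(-x**2 / (2 * sigma_b**2))                  # birth rate
d = lambda x: d_c                                               # intrinsic death
C = lambda x, y: eta_c * np.exp(-(x - y)**2 / (2 * sigma_c**2)) # competition

def mutate_trait(x):
    # Gaussian kernel; draws outside the trait space [-1, 1] are excluded.
    while True:
        y = x + rng.normal(0.0, sigma_m)
        if -1.0 <= y <= 1.0:
            return y

def mutate_microsatellites(u, q):
    # Stepwise mutation: each of the 10 loci moves by +/- 1 with probability q.
    u = u.copy()
    hits = rng.random(u.shape) < q
    u[hits] += rng.choice([-1, 1], size=int(hits.sum()))
    return u
\end{verbatim}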
\subsubsection{Posterior distribution and parameters estimation}
We ran $N=400\ 000$ simulations with identical prior distributions and scaling parameter $K=1000$ (see details in App. \ref{sec:ABC-recap}). The chosen parameter sets and prior distributions are given in Appendix \ref{sec:simulations}. We randomly chose four simulation runs among the $N$ simulations as \textit{pseudo-data} sets (these sets are named $A$, $B$, $C$ and $D$; see App. Table \ref{Table:data_parameters} and Fig. \ref{Fig:PES_line3}). All other simulation runs were used for the parameter estimates. Fig. \ref{Fig:PriorPost_line3} shows the posterior distribution for one of the pseudo-datasets (see App. \ref{sec:posterior-distrib} for full results). Our results show that estimates based on all statistics (Scenario 1, blue distribution) are not always the most accurate, suggesting that some of the descriptive statistics introduce noise and worsen estimation accuracy. However, the descriptive statistics providing knowledge about how the population is trait-structured do not belong to this group and markedly improve estimation when available (compare the orange \textit{vs.} red posterior distributions).
The impact of the number of microsatellites on the quality of the estimation was tested for the first pseudo-dataset $A$ (see App. Tab. \ref{Table:data_parameters}), with the number of microsatellites varying from 10 to 100. A sensitivity analysis is shown in App. Fig. \ref{Fig:nb_microsat_all}: the results are quite robust to this number. For some parameters, such as $t_{sim}$, better precision is achieved with an increased number of microsatellites; for other parameters, such as $q$ or $p$, the impact of the number of microsatellites is more visible under Scenario 4, where inference must rely heavily on the information brought by the microsatellites.
\begin{figure}[ht!]
\begin{center}
\begin{tabular}{ccc}
A & \includegraphics[width=7.5cm]{L03_pes.pdf} & \includegraphics[width=7.5cm]{L03_coalescent.pdf}\\
B & \includegraphics[width=7.5cm]{L04_pes.pdf} & \includegraphics[width=7.5cm]{L04_coalescent.pdf}\\
C & \includegraphics[width=7.5cm]{L05_pes.pdf} & \includegraphics[width=7.5cm]{L05_coalescent.pdf}\\
D & \includegraphics[width=7.5cm]{L06_pes.pdf} & \includegraphics[width=7.5cm]{L06_coalescent.pdf}\\
& (a) & (b)
\end{tabular}
\caption{{\footnotesize \textit{Dynamics of (a) the trait $x$ and (b) the neutral marker $u$ of the four pseudo-data sets $A$, $B$, $C$ and $D$ randomly sampled among $N=400,000$ simulation runs of the Dieckmann-Doebeli model (parameter sets are given in App. Table \ref{Table:data_parameters}). Figures show the Substitution Fleming-Viot Process (SFVP) and the nested phylogenetic tree of $n$ individuals sampled at the final time of the simulation. (a): The trait $x$ follows a Polymorphic Evolution Sequence (PES) introduced in \cite{champagnatmeleard}. (b): The genealogies of the marker $u$ follow a forward-backward coalescent process nested in the PES tree, as described in Section \ref{sec:def-FBcoal}. The colors refer to the lineage, shown in (a), to which each individual belongs.}}}\label{Fig:PES_line3}
\end{center}\end{figure}
\clearpage
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=14cm]{L03_prior-posterior.pdf}
\caption{\textit{\footnotesize Prior and posterior distributions (pseudo-data set $A$ in Fig. \ref{Fig:PES_line3}). Black dashed curve: prior distribution; vertical red line: true value. The different colors correspond to different scenarios regarding which data are available: blue, Scenario 1 (all descriptive statistics are available); pink, Scenario 2 (data from the totality of the population); red, Scenario 3 (data from a sample of the population); orange, Scenario 4 (data from a sample of the population, the trait $x$ is not known). Results for the other pseudo-data sets are given in App. \ref{sec:posterior-distrib}.}}\label{Fig:PriorPost_line3}
\end{center}
\end{figure}
\clearpage
\subsubsection{Discrepancy with Kingman's coalescent}\label{sec:docoalescenttreessignificantlydifferfromKingman}
After a suitable renormalization, Kingman's coalescent is generally considered a good approximation of coalescent trees, even in structured populations. However, in our model, the population structure itself can evolve, demographic rates can vary with time, and subpopulations can interact with each other, all of which might strongly affect the topology of the coalescent trees and their branch lengths. In this section, our aim is to evaluate to what extent Kingman's coalescent is a good approximation of the genealogies generated by the Dieckmann-Doebeli model. In case of a significant discrepancy, we further determined the properties of the trees which show important differences between the two models, and then we identified and evaluated the type and extent of errors that one would expect when using Kingman's coalescents for inference without taking into account the evolution of population structure.\\
We considered statistics commonly used to test the neutrality of the phylogenies of $n$ sampled individuals \citep{fuli}: the number of cherries $C_n$, \textit{i.e.} the number of internal nodes of the tree having two leaves as descendants; the length of external branches $L_n$, \textit{i.e.} of the edges of the phylogenetic tree admitting one of the $n$ leaves as an extremity; and the time $T_n^{\mbox{{\scriptsize MRCA}}}$ to the most recent common ancestor (MRCA). The distributions of the normalized $C_n$ and $L_n$ and the distribution of $T_n^{\mbox{{\scriptsize MRCA}}}$ for the forward-backward Dieckmann-Doebeli coalescent and the Kingman coalescent are compared. For Kingman's coalescent, asymptotic normality has been established for $C_n$ and $L_n$ \cite[see][]{blumfrancois2005-Sackin,jansonkersting}. The distribution of $T_n^{\mbox{{\scriptsize MRCA}}}$ for the Kingman coalescent is computed by using the fact that the trees are binary, with exponential durations between coalescences. Neutrality tests conditional on the number of lineages $m$ at the time of sampling are performed using the behavior of these statistics under the null assumption $H_0$ that the phylogenies correspond to a Kingman coalescent. For each $m$, we chose as pseudo-data one of the simulations of our model with $m$ species at the final time, and we performed normality tests for $C_n$ and $L_n$, and a goodness-of-fit test against the expected distribution under Kingman for $T_n^{\mbox{{\scriptsize MRCA}}}$. This was repeated 100 times for each value of $m\in \{1,\dots 10\}$ (details given in App. \ref{sec:neutrality-test}). \\
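To make the tree statistics concrete, the number of cherries $C_n$ can be computed from a stored genealogy as in the following sketch (illustrative code, not the paper's implementation; trees with multiple mergers are handled by the same test on the children lists).
\begin{verbatim}
# Sketch: counting cherries C_n in a tree stored as {node: list of children};
# a cherry is an internal node whose two descendants are both leaves.
def count_cherries(children):
    return sum(1 for kids in children.values()
               if len(kids) == 2 and all(len(children[c]) == 0 for c in kids))

# Example: the tree ((a,b),c) contains exactly one cherry, (a,b).
tree = {"root": ["n1", "c"], "n1": ["a", "b"], "a": [], "b": [], "c": []}
assert count_cherries(tree) == 1
\end{verbatim}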
Fig. \ref{Fig:neutral-test} shows the distributions of the \textit{a posteriori} p-values for the normality tests for $L_n$ and $C_n$. The coalescent trees significantly differ from Kingman's coalescent trees regarding the external branch length $L_n$ (Fig. \ref{Fig:neutral-test}(a)), while the number of cherries $C_n$ is not always significantly different (the p-values have a median close to 0.05, Fig. \ref{Fig:neutral-test}(b)). Finally, Fig. \ref{Fig:neutral-test}(c) shows the distribution of the time to the MRCA depending on the number of lineages $m$. A mean comparison test shows that the mean of the $T_n^{\mbox{{\scriptsize MRCA}}}$ values obtained from the simulations of our forward-backward coalescent significantly differs from the expected MRCA time under a Kingman coalescent (see App. \eqref{test:comparaison-moyennes}). Hence, our results show that coalescent tree topologies generated under a Dieckmann-Doebeli model are expected to be significantly different from those of a Kingman coalescent. \\
\begin{figure}[ht!]
\begin{center}
\begin{tabular}{ccc}
\includegraphics[width=5cm,height=5cm, trim= 0 1cm 0cm 2cm, clip=true]{2019_10_16_Box-plot_Ln_nbPop_normKS.pdf}
&
\includegraphics[width=5cm,height=5cm, trim= 0 1cm 0cm 2cm, clip=true]{2019_10_16_Box-plot_Cn_nbPop_normKS.pdf}
&
\includegraphics[width=5cm,height=5cm, trim= 0 1cm 0cm 2cm, clip=true]{time_MRCA_nbPop.pdf} \\
(a) & (b) & (c)
\end{tabular}
\caption{\textit{{\small
(a): External branch length $L_n$: Box-plot of the p-values of the Kolmogorov-Smirnov test, for each value of the number of lineages $m$
at sampling time (in abscissa).
(b): Number of cherries $C_n$: Box-plot of the p-values of the Kolmogorov-Smirnov test, as a function of $m$.
For (a) and (b), $100$ ABC analyses were done for each value of $m$, and we tested whether the distribution of the normalized statistic follows a Gaussian distribution ($H_0$). The threshold value for rejection of $H_0$, $0.05$, is represented by the dashed red line. If the p-values are lower than this threshold, the distribution of the statistic ($L_n$ or $C_n$) of the forward-backward coalescent trees generated by a Dieckmann-Doebeli model is significantly different from the one under a Kingman coalescent. (c): Compared distributions of the age of the MRCA for the forward-backward coalescent (plain line) and for the Kingman coalescent (dotted line).}}}\label{Fig:neutral-test}
\end{center}
\end{figure}
Fig. \ref{Fig:Branch-cherries-lines3-6} shows a further comparison between Kingman's coalescent and the trees under our model. The distribution of external branch lengths under our model follows an asymmetrical leptokurtic distribution, and external branches tend to be much shorter than under a Kingman coalescent. The time to the MRCA is also much longer under our model than under the Kingman coalescent. The distribution of the number of cherries follows a symmetrical bell-shaped distribution flattened around the mode.
\begin{figure}[ht!]
\begin{center}
\begin{tabular}{ccc}
\includegraphics[width=6cm, trim= 0 0cm 0cm 2cm, clip=true]{2019_10_16_Hist_Ln_simulA.pdf}
&\includegraphics[width=6cm, trim= 0 0cm 0cm 2cm, clip=true]{2019_10_16_Hist_Cn_simulA.pdf}
& \includegraphics[width=6cm, trim= 0 0cm 0cm 2cm, clip=true]{2019_10_22_Hist_Tmrca_simulA.pdf}
\\
(a) & (b) & (c)
\end{tabular}
\caption{\textit{{\small Histograms of (a) the renormalized external branch lengths, (b) the renormalized number of cherries, (c) the time to the MRCA. The simulations are shown for $p=0.0076$, $q=0.7503$, $\sigma_b=1.186$, $\sigma_c=0.4951$, $\sigma_m=0.1448$, $\eta_c=0.0211$ and $t_{sim}=1025.619$ (parameter set $A$ in Table \ref{Table:data_parameters}; results for the three other `reference' sets are given in App. \ref{sec:neutrality-test}). The dashed line represents the distribution followed by a Kingman coalescent (Gaussian distribution for (a) and (b), simulations for (c)).}}}\label{Fig:Branch-cherries-lines3-6}
\end{center}
\end{figure}
Overall, we found that the coalescent trees generated by a Dieckmann-Doebeli model significantly differ from a Kingman coalescent. In particular, we found that using a Kingman coalescent model and ignoring the trait structure of a population tends to overestimate the recent coalescence times. The genealogies generated by the forward-backward coalescent under a Dieckmann-Doebeli model are expected to differ from a standard or renormalized Kingman coalescent for various reasons: i) there are multiple instantaneous coalescence events when a new lineage appears; ii) coalescence rates differ among lineages, creating asymmetries in the phylogenetic tree (trees can therefore be imbalanced); iii) coalescence rates vary in time, since they depend on the structure of the population and the traits present at a given time; and iv) eco-evolutionary feedbacks and competitive interactions between lineages affect coalescence rates in the whole population. \\
\subsection{Application 2: correlations between genetic and social structures in Central Asia}\label{sec:chaix}
In anthropology, a common question is whether or not socio-cultural changes can affect demographic parameters, such as fertility rates. For instance, it is hypothesized that agriculturalists have a higher fertility than foragers \cite[\textit{e.g.} ][]{sellenandmace}, which is supported by several studies \cite[\textit{e.g.} ][]{bentleyetal, rossetal}. In this section, we analyze genetic data in order to test whether populations with two different lifestyles and social organizations show different fertility rates. Nineteen human populations from Central Asia have been sampled in previous studies (Fig. \ref{Fig:carte}(a), \cite{chaixetal, heyeretal}). Two types of socio-cultural organization are encountered: Turkic populations are patrilineal, \textit{i.e.} mostly pastoral and organized into descent groups (tribes, clans, etc.); Indo-Iranian populations are cognatic, \textit{i.e.} mostly sedentary farmers organized into nuclear families. In total, 631 individuals have been sampled (310 from cognatic populations, 321 from patrilineal ones). Ten microsatellite loci have been genotyped on the Y-chromosome. Since most of the Y chromosome does not recombine in humans, it is appropriate to use our model, which assumes clonal reproduction. Hence, we perform the ABC analysis on the genetic diversity following the paternal lineages.
\begin{figure}[!ht]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=8cm,height=6cm, trim=0 1cm 0 0, clip=true]{Carte.png}
&
\includegraphics[width=8cm,height=6cm]{Regression_locations.png}\\
(a) & (b)
\end{tabular}
\caption{\textit{{\small
(a): Map of sampling locations from \cite{heyeretal}. Triangles correspond to cognatic Indo-Iranian populations, squares to patrilineal Turkic populations. (b): Regression of the data to a 1-dimensional problem.}}}\label{Fig:carte}
\end{center}
\end{figure}
We considered that the trait $x$ in the model is a vector containing the geographic location of the population and its social organization (cognatic or patrilineal). For the geographic positions, given Fig. \ref{Fig:carte}(a), we consider that the geographic location is 1-dimensional: we can fit a polynomial curve through the geographical positions of the populations:
\[P(x)=673.4-25.13 \ x+0.327\ x^2-1.39\times 10^{-3}\ x^3 \quad (R^2= 0.92).\] Hence, the location of each population is given by the coordinates $(x,P(x))$ (Fig. \ref{Fig:carte}(b)). The distance between populations is computed as the line integral (arc length) along the interpolated curve (see details in App. \ref{append:chaix-simul}). The neutral marker $u$ is a vector containing the genotypes at the ten microsatellites. Here we assume that the neutral marker is fully linked with the trait corresponding to the social organization.
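A sketch of this reduction is the following (the coordinates are made-up examples, the real sampling locations being those of Fig. \ref{Fig:carte}): fit the cubic $P$ and measure between-population distances by the arc length of the fitted curve.
\begin{verbatim}
# Sketch: 1-D reduction of the sampling map; coordinates are made-up examples.
import numpy as np
from numpy.polynomial import polynomial as P
from scipy.integrate import quad

xs = np.array([40.0, 48.0, 55.0, 62.0, 70.0])   # placeholder abscissae
ys = np.array([310.0, 260.0, 245.0, 250.0, 280.0])
coefs = P.polyfit(xs, ys, deg=3)                # cubic P(x), as in the text
dcoefs = P.polyder(coefs)

def curve_distance(x0, x1):
    # Line integral of sqrt(1 + P'(x)^2): arc length between (x0, P(x0))
    # and (x1, P(x1)) along the interpolated curve.
    arc, _ = quad(lambda x: np.sqrt(1.0 + P.polyval(x, dcoefs)**2),
                  min(x0, x1), max(x0, x1))
    return arc
\end{verbatim}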
Our aim is to use our ABC procedure on the genetic data to estimate the parameters $\theta=(p_{\mbox{{\scriptsize xb01}}},b_0,b_1,p_{\mbox{{\scriptsize loc}}},q, \sigma_{\mbox{{\scriptsize loc}}},\eta_0,\eta_1,\sigma_c,t_{\sc{sim}})$ of our model. The individual birth rate is assumed to depend on the social organization only, and not on the geographic location: $b_0$ for the patrilineal populations and $b_1$ for the cognatic ones. Deaths are supposed to be due to density-dependent competition only, for the sake of simplicity: the competitive effect of an individual located at coordinate $y$ on an individual in a patrilineal (resp. cognatic) population at location $y'$ is $C(y,y')=\eta_0 \exp\big(-(y-y')^2/2\sigma_c^2\big)$ (resp. $C(y,y')=\eta_1 \exp\big(-(y-y')^2/2\sigma_c^2\big)$). The individual death rate at location $y$ is given by the sum of the competitive effects of all individuals. We supposed that individuals can found new populations after dispersal (corresponding to a mutation on the trait $x$ at birth), with probability $p_{\mbox{{\scriptsize loc}}}$, and/or change social organization, with probability $p_{\mbox{{\scriptsize xb01}}}$. The location of a new population is randomly drawn from a centered Gaussian distribution with standard deviation $\sigma_{\mbox{{\scriptsize loc}}}$. Following anthropological data, we assumed that changes of social organization are unidirectional, only from patrilineal pastoralist to cognatic farmer populations \citep{chaixetal}. $t_{\sc{sim}}$ and $q$ are, respectively, the duration of the coalescent and the marker mutation probability.\\
Estimating the parameter $\theta$ and using the ABC procedure to select between alternative models will allow us to test whether the null hypothesis
\begin{equation}
H_0\ : \ b_0=b_1\label{test:H_0}
\end{equation} is acceptable, compared to the alternative hypothesis $H_a\ :\ b_0<b_1$ \cite[see \textit{e.g.} ][]{grelaudrobertmarinrodolphetaly,pranglefearnheadcoxbiggsfrench,stoehrpudlocucala}. We generated a set of data with the \textit{a priori} probability $1/2$ of having $b_0=b_1$ and the \textit{a priori} probability $1/2$ of having $b_0<b_1$ (see details in App. \ref{append:chaix-simul}). {To do this, we generated 10,000 datasets with $b_0=b_1$ and 10,000 datasets with $b_0<b_1$. The ABC estimation provides weights $W_i$ for each of these 20,000 simulations (see Eq. \ref{def:posterior}) yielding the posterior distribution of the parameters (see Fig. \ref{fig:chaix-posterior}). These weights $W_i$ also allow to compute the posterior probabilities of each hypothesis: $H_0\:\ \{b_0=b_1\}$ or $H_a\ :\ \{b_0<b_1\}$. When the estimated posterior probability for $\{b_0<b_1\}$ is larger than a certain threshold $\alpha$, the null hypothesis $H_0$ is rejected.} \\
We first checked the quality of the ABC estimation and of the test \eqref{test:H_0} on simulated data. Among the 20,000 simulations presented in the above paragraph, we chose 200 simulations to play in turn the role of the true dataset: 100 among those with $b_0=b_1$ and 100 among those with $b_0<b_1$. We obtained parameter estimates that were generally close to the true values (App. \ref{append:chaix-simul}). We then used these 200 datasets to perform 200 tests (using for each of them the 19,999 other simulations). Since we know, for each of these 200 tests, whether the data were obtained under $H_0\ :\ \{b_0=b_1\}$ or $H_a\ :\ \{b_0<b_1\}$, this provides insight into the power of our test and allows us to set the threshold defining its critical region. Here we can choose the threshold $\alpha=0.5$, which is very natural (see App. \ref{append:chaix-simul}). We can then conclude the test for the dataset from the Central Asian populations.\\
For the ABC test, we obtained an estimated posterior probability of $\{b_0<b_1\}$ equal to 0.4518, below the threshold $\alpha=0.5$, so that the null hypothesis $H_0$ \eqref{test:H_0} cannot be rejected. The p-value of the test, estimated as the proportion of the simulations for which $\widehat{\mathbb{P}}(H_a\ |\ S_{\sc{obs}})\geq 0.4518$, is about 47\%. Hence, there is no significantly higher fecundity in cognatic populations compared with patrilineal ones.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=18cm,height=10cm]{2020_07_12_prior_posterior_Heyer_20000_adj.pdf}
\end{center}
\caption{{\small \textit{Results of the ABC estimation for the dataset of Heyer et al. \cite{heyeretal} for Central Asia human populations. The prior distributions are plotted in dashed lines and the posterior densities in plain red lines.}}}\label{fig:chaix-posterior}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=12cm,angle=0,height=9cm]{birthrates.pdf} &
\includegraphics[width=12cm,angle=0,height=9cm]{diffb0b1.pdf}\\
(a) & (b)
\end{tabular}
\caption{{\small \textit{(a): Approximate posterior distributions for $b_0$ and $b_1$, obtained by ABC on the Central Asian database with 40,000 simulations. (b): Approximate posterior distribution of $b_1-b_0$. The posterior mean of $b_1-b_0$, equal to 0.25, is indicated as the vertical dashed red line.}}}\label{fig:chaixb0b1}
\end{center}
\end{figure}
\clearpage
\section{Discussion}
Inferences from genetic data are most often performed under three important assumptions in the existing literature. First, the population size and structure are known parameters: either they are fixed or they follow a deterministic evolution according to a given scenario (\textit{e.g.} expansion or bottleneck, or a fixed structure with known migration rates between sub-populations). Second, mutations are supposed not to affect the genealogical trees, \textit{i.e.} models are supposed neutral. Selection is rarely explicitly taken into account in inference methods (yet it is known, for instance, that background selection can bias the estimation of demographic variations \citep{charlesworthetal03, johrietal2020}). Third, there is no feedback between the evolution of the population and its demography: a selected mutation is supposed not to affect the population size or the population structure. The models most frequently used in inference, the Kingman coalescent and the Wright-Fisher model, make all three assumptions together. The goal of the present paper was to present a model and an inference method which allow all these assumptions to be relaxed. We showed that, by using an ABC procedure, it was possible to estimate ecological, demographic and genetic parameters from genotypic and phenotypic data. \\
Recently, \cite{rasmussenandstadler} proposed a birth-death model without interactions where mutations can affect the birth and death rates of individuals in a strain, which in turn affect the genealogies. They showed how phylogenies can be used to estimate the effect of mutations on fitness in some viruses. In our paper, we go a step further by allowing interactions between individuals, and a population structure and demography that depend on the evolution of the population. Our model assumes two genetic components: a selected trait which governs the structure of the population, and a marker, linked to the trait, which is neutral and used to infer the genealogy. We first showed how genetic diversity at the neutral marker is related to the evolution of the selected trait, and to the size and structure of the population. We then used this relationship by developing an ABC procedure which allows the estimation of ecological parameters based on the genetic diversity at the neutral marker and on partial or total knowledge of the population structure. We showed on simulated data that the ABC procedure gives accurate estimates of ecological parameters such as the birth, death and interaction rates, and of genetic parameters such as the mutation rate. Our results also showed that non-neutral genealogies can easily be detected under our framework.\\
The ABC procedure is well suited to deal with complex models, provided they can be simulated easily, which has become increasingly common for most ecological models \cite[\textit{e.g.} ][]{legendreclobert,hallermesser}. Here, we applied our model and its ABC procedure to reanalyze the genetic diversity of microsatellites on Y chromosomes in Central Asian human populations. The genetic diversity is compared between two social organizations and lifestyles: patrilineal vs. cognatic. Previous studies showed significantly different genetic diversities and coalescent tree topologies, which was interpreted as evidence of the effect of socio-cultural traits on biological reproduction, due to how wealth is transmitted within families \citep{chaixetal, heyeretal}. However, these conclusions were obtained under simplifying assumptions: genealogies followed a modified Wright-Fisher model, and the genetic diversities and coalescent tree topologies were compared independently, \textit{i.e.} there was no interaction between populations or between social organizations. Such assumptions dismissed the possibilities that socio-cultural traits and social organization could change, that new populations could be founded, and that competitive interactions between individuals within and between social organizations might affect demography and evolution. We relaxed all these limitations by applying our model. We supposed that the trait under selection can affect the birth rate. Contrary to \cite{heyeretal}, we did not test whether wealth transmission could explain differences in genetic diversity and coalescent tree topologies. Rather, we addressed a long-standing question in anthropology: can fertility be affected by a change in social organization, in particular by a change in the agricultural mode? We found no evidence of a fertility difference between the two kinds of social organization. Our findings then raise the question of why human populations adopt new socio-cultural traits without any strong evidence of a biological advantage. Further analyses and data would be necessary to confirm our results, especially regarding the number of children per female. In the data, this information is based on a few interviews that are not at all precise \citep[see Table S3 in the suppl. mat.,][]{chaixetal}. However, since the genetic diversity sampled in a contemporaneous population is due to a long historical process, it seems difficult to estimate fertility over several dozens or hundreds of generations. Our results only suggest that there is, on average, no evidence of an effect of a social trait on fertility along the history of Central Asian human populations. \\
Finally, our paper illustrates that it is actually possible to merge ecological and genetic data and models. Our model is based on classical competitive Lotka-Volterra equations, under the assumption of mutations that are rare relative to the ecological processes. The genealogies and genetic diversity produced under such a model are then used to infer ecological and demographic parameters. We showed that relaxing strong assumptions of genetic models is possible, and that doing so provides new analysis methods. Even though we applied our inferential procedure only to simulated genetic data or to microsatellite genetic diversity, our model is general enough to embrace any type of data: SNPs, phenotypic traits, etc. The development of stochastic birth and death models, with (this paper) or without \citep{rasmussenandstadler} interactions, opens the way to new methods for analyzing data. As highlighted by \cite{frostpybusgogviboudbonhoefferbedford}, this is particularly important for the study of epidemics and pathogen evolution. These authors give a list of current challenges which can be partly addressed thanks to the methods and models developed here: for instance, understanding the role of host structure in pathogen evolution and genetic diversity, accounting for the role of stochasticity, and providing more complex and realistic evolutionary models.\\
\textit{Acknowledgments:} The authors thank Laurent S\'eries and Sylvain Ferrand for their help with the CMAP compute servers. They also thank Frédéric Austerlitz and Rapha\"elle Chaix for discussions and for sharing anthropological data from Central Asia. This research has been supported by the Chair ``Modélisation Mathématique et Biodiversité" of Veolia Environnement-Ecole Polytechnique-Museum National d'Histoire Naturelle-Fondation X. V.C.T. also acknowledges support from Labex CEMPI (ANR-11-LABX-0007-01) and B\'ezout (ANR-10-LABX-58). \\
\textit{Competing interests:} The authors declare no competing financial interests in relation to the current work.\\
\textit{Data archiving:} The genetic data, simulation and the programs developed in the paper will be archived on Dryad and Github.\\